* [PATCH i-g-t 0/3] Unify slow/combinatorial test handling
@ 2015-10-23 11:42 David Weinehall
2015-10-23 11:42 ` [PATCH i-g-t 1/3] Rename gem_concurrent_all over gem_concurrent_blit David Weinehall
` (5 more replies)
0 siblings, 6 replies; 41+ messages in thread
From: David Weinehall @ 2015-10-23 11:42 UTC (permalink / raw)
To: intel-gfx
Until now we've had no unified way to handle slow/combinatorial tests.
Most of the time we don't want to run them, so skipping them should
remain the default; but when we do want to run such tests, each test
has so far handled it in its own way.
This patch adds a --with-slow-combinatorial command line option to
igt_core, changes gem_concurrent_blit and kms_frontbuffer_tracking
to use this instead of their own methods, and removes gem_concurrent_all
in the process, since it's now unnecessary.
The diffstat looks a bit scary, but that is due to the rename
of gem_concurrent_all to gem_concurrent_blit.
David Weinehall (3):
Rename gem_concurrent_all over gem_concurrent_blit
Unify handling of slow/combinatorial tests
Remove gem_concurrent_all, since it is now superfluous
lib/igt_core.c | 19 +
lib/igt_core.h | 1 +
tests/Makefile.sources | 1 -
tests/gem_concurrent_all.c | 1108 -------------------------------------
tests/gem_concurrent_blit.c | 1132 +++++++++++++++++++++++++++++++++++++-
tests/kms_frontbuffer_tracking.c | 135 +++--
6 files changed, 1238 insertions(+), 1158 deletions(-)
delete mode 100644 tests/gem_concurrent_all.c
--
2.6.1
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx
* [PATCH i-g-t 1/3] Rename gem_concurrent_all over gem_concurrent_blit
2015-10-23 11:42 [PATCH i-g-t 0/3] Unify slow/combinatorial test handling David Weinehall
@ 2015-10-23 11:42 ` David Weinehall
2015-10-23 14:32 ` Thomas Wood
2015-10-23 11:42 ` [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests David Weinehall
` (4 subsequent siblings)
5 siblings, 1 reply; 41+ messages in thread
From: David Weinehall @ 2015-10-23 11:42 UTC (permalink / raw)
To: intel-gfx
This changeset both renames gem_concurrent_all over gem_concurrent_blit
and changes gem_concurrent_blit. To make this easier to follow, we do
the rename first.
---
tests/gem_concurrent_blit.c | 1116 ++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 1108 insertions(+), 8 deletions(-)
diff --git a/tests/gem_concurrent_blit.c b/tests/gem_concurrent_blit.c
index 513de4a1b719..1d2d787202df 100644
--- a/tests/gem_concurrent_blit.c
+++ b/tests/gem_concurrent_blit.c
@@ -1,8 +1,1108 @@
-/* This test is just a duplicate of gem_concurrent_all. */
-/* However the executeable will be gem_concurrent_blit. */
-/* The main function examines argv[0] and, in the case */
-/* of gem_concurent_blit runs only a subset of the */
-/* available subtests. This avoids the use of */
-/* non-standard command line parameters which can cause */
-/* problems for automated testing */
-#include "gem_concurrent_all.c"
+/*
+ * Copyright © 2009,2012,2013 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ * Eric Anholt <eric@anholt.net>
+ * Chris Wilson <chris@chris-wilson.co.uk>
+ * Daniel Vetter <daniel.vetter@ffwll.ch>
+ *
+ */
+
+/** @file gem_concurrent_blit.c
+ *
+ * This is a test of pread/pwrite/mmap behavior when writing to active
+ * buffers.
+ *
+ * Based on gem_gtt_concurrent_blt.
+ */
+
+#include "igt.h"
+#include <stdlib.h>
+#include <stdio.h>
+#include <string.h>
+#include <fcntl.h>
+#include <inttypes.h>
+#include <errno.h>
+#include <sys/stat.h>
+#include <sys/time.h>
+#include <sys/wait.h>
+
+#include <drm.h>
+
+#include "intel_bufmgr.h"
+
+IGT_TEST_DESCRIPTION("Test of pread/pwrite/mmap behavior when writing to active"
+ " buffers.");
+
+int fd, devid, gen;
+struct intel_batchbuffer *batch;
+int all;
+
+static void
+nop_release_bo(drm_intel_bo *bo)
+{
+ drm_intel_bo_unreference(bo);
+}
+
+static void
+prw_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
+{
+ int size = width * height, i;
+ uint32_t *tmp;
+
+ tmp = malloc(4*size);
+ if (tmp) {
+ for (i = 0; i < size; i++)
+ tmp[i] = val;
+ drm_intel_bo_subdata(bo, 0, 4*size, tmp);
+ free(tmp);
+ } else {
+ for (i = 0; i < size; i++)
+ drm_intel_bo_subdata(bo, 4*i, 4, &val);
+ }
+}
+
+static void
+prw_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
+{
+ int size = width * height, i;
+ uint32_t *vaddr;
+
+ do_or_die(drm_intel_bo_map(tmp, true));
+ do_or_die(drm_intel_bo_get_subdata(bo, 0, 4*size, tmp->virtual));
+ vaddr = tmp->virtual;
+ for (i = 0; i < size; i++)
+ igt_assert_eq_u32(vaddr[i], val);
+ drm_intel_bo_unmap(tmp);
+}
+
+static drm_intel_bo *
+unmapped_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ drm_intel_bo *bo;
+
+ bo = drm_intel_bo_alloc(bufmgr, "bo", 4*width*height, 0);
+ igt_assert(bo);
+
+ return bo;
+}
+
+static drm_intel_bo *
+snoop_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ drm_intel_bo *bo;
+
+ igt_skip_on(gem_has_llc(fd));
+
+ bo = unmapped_create_bo(bufmgr, width, height);
+ gem_set_caching(fd, bo->handle, I915_CACHING_CACHED);
+ drm_intel_bo_disable_reuse(bo);
+
+ return bo;
+}
+
+static void
+gtt_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
+{
+ uint32_t *vaddr = bo->virtual;
+ int size = width * height;
+
+ drm_intel_gem_bo_start_gtt_access(bo, true);
+ while (size--)
+ *vaddr++ = val;
+}
+
+static void
+gtt_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
+{
+ uint32_t *vaddr = bo->virtual;
+ int y;
+
+ /* GTT access is slow, so we just compare a few points */
+ drm_intel_gem_bo_start_gtt_access(bo, false);
+ for (y = 0; y < height; y++)
+ igt_assert_eq_u32(vaddr[y*width+y], val);
+}
+
+static drm_intel_bo *
+map_bo(drm_intel_bo *bo)
+{
+ /* gtt map doesn't have a write parameter, so just keep the mapping
+ * around (to avoid the set_domain with the gtt write domain set) and
+ * manually tell the kernel when we start accessing the gtt. */
+ do_or_die(drm_intel_gem_bo_map_gtt(bo));
+
+ return bo;
+}
+
+static drm_intel_bo *
+tile_bo(drm_intel_bo *bo, int width)
+{
+ uint32_t tiling = I915_TILING_X;
+ uint32_t stride = width * 4;
+
+ do_or_die(drm_intel_bo_set_tiling(bo, &tiling, stride));
+
+ return bo;
+}
+
+static drm_intel_bo *
+gtt_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ return map_bo(unmapped_create_bo(bufmgr, width, height));
+}
+
+static drm_intel_bo *
+gttX_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ return tile_bo(gtt_create_bo(bufmgr, width, height), width);
+}
+
+static drm_intel_bo *
+wc_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ drm_intel_bo *bo;
+
+ gem_require_mmap_wc(fd);
+
+ bo = unmapped_create_bo(bufmgr, width, height);
+ bo->virtual = __gem_mmap__wc(fd, bo->handle, 0, bo->size, PROT_READ | PROT_WRITE);
+ return bo;
+}
+
+static void
+wc_release_bo(drm_intel_bo *bo)
+{
+ munmap(bo->virtual, bo->size);
+ bo->virtual = NULL;
+
+ nop_release_bo(bo);
+}
+
+static drm_intel_bo *
+gpu_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ return unmapped_create_bo(bufmgr, width, height);
+}
+
+
+static drm_intel_bo *
+gpuX_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ return tile_bo(gpu_create_bo(bufmgr, width, height), width);
+}
+
+static void
+cpu_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
+{
+ int size = width * height;
+ uint32_t *vaddr;
+
+ do_or_die(drm_intel_bo_map(bo, true));
+ vaddr = bo->virtual;
+ while (size--)
+ *vaddr++ = val;
+ drm_intel_bo_unmap(bo);
+}
+
+static void
+cpu_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
+{
+ int size = width * height;
+ uint32_t *vaddr;
+
+ do_or_die(drm_intel_bo_map(bo, false));
+ vaddr = bo->virtual;
+ while (size--)
+ igt_assert_eq_u32(*vaddr++, val);
+ drm_intel_bo_unmap(bo);
+}
+
+static void
+gpu_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
+{
+ struct drm_i915_gem_relocation_entry reloc[1];
+ struct drm_i915_gem_exec_object2 gem_exec[2];
+ struct drm_i915_gem_execbuffer2 execbuf;
+ struct drm_i915_gem_pwrite gem_pwrite;
+ struct drm_i915_gem_create create;
+ uint32_t buf[10], *b;
+ uint32_t tiling, swizzle;
+
+ drm_intel_bo_get_tiling(bo, &tiling, &swizzle);
+
+ memset(reloc, 0, sizeof(reloc));
+ memset(gem_exec, 0, sizeof(gem_exec));
+ memset(&execbuf, 0, sizeof(execbuf));
+
+ b = buf;
+ *b++ = XY_COLOR_BLT_CMD_NOLEN |
+ ((gen >= 8) ? 5 : 4) |
+ COLOR_BLT_WRITE_ALPHA | XY_COLOR_BLT_WRITE_RGB;
+ if (gen >= 4 && tiling) {
+ b[-1] |= XY_COLOR_BLT_TILED;
+ *b = width;
+ } else
+ *b = width << 2;
+ *b++ |= 0xf0 << 16 | 1 << 25 | 1 << 24;
+ *b++ = 0;
+ *b++ = height << 16 | width;
+ reloc[0].offset = (b - buf) * sizeof(uint32_t);
+ reloc[0].target_handle = bo->handle;
+ reloc[0].read_domains = I915_GEM_DOMAIN_RENDER;
+ reloc[0].write_domain = I915_GEM_DOMAIN_RENDER;
+ *b++ = 0;
+ if (gen >= 8)
+ *b++ = 0;
+ *b++ = val;
+ *b++ = MI_BATCH_BUFFER_END;
+ if ((b - buf) & 1)
+ *b++ = 0;
+
+ gem_exec[0].handle = bo->handle;
+ gem_exec[0].flags = EXEC_OBJECT_NEEDS_FENCE;
+
+ create.handle = 0;
+ create.size = 4096;
+ drmIoctl(fd, DRM_IOCTL_I915_GEM_CREATE, &create);
+ gem_exec[1].handle = create.handle;
+ gem_exec[1].relocation_count = 1;
+ gem_exec[1].relocs_ptr = (uintptr_t)reloc;
+
+ execbuf.buffers_ptr = (uintptr_t)gem_exec;
+ execbuf.buffer_count = 2;
+ execbuf.batch_len = (b - buf) * sizeof(buf[0]);
+ if (gen >= 6)
+ execbuf.flags = I915_EXEC_BLT;
+
+ gem_pwrite.handle = gem_exec[1].handle;
+ gem_pwrite.offset = 0;
+ gem_pwrite.size = execbuf.batch_len;
+ gem_pwrite.data_ptr = (uintptr_t)buf;
+ do_or_die(drmIoctl(fd, DRM_IOCTL_I915_GEM_PWRITE, &gem_pwrite));
+ do_or_die(drmIoctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, &execbuf));
+
+ drmIoctl(fd, DRM_IOCTL_GEM_CLOSE, &create.handle);
+}
+
+static void
+gpu_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
+{
+ intel_blt_copy(batch,
+ bo, 0, 0, 4*width,
+ tmp, 0, 0, 4*width,
+ width, height, 32);
+ cpu_cmp_bo(tmp, val, width, height, NULL);
+}
+
+const struct access_mode {
+ const char *name;
+ void (*set_bo)(drm_intel_bo *bo, uint32_t val, int w, int h);
+ void (*cmp_bo)(drm_intel_bo *bo, uint32_t val, int w, int h, drm_intel_bo *tmp);
+ drm_intel_bo *(*create_bo)(drm_intel_bufmgr *bufmgr, int width, int height);
+ void (*release_bo)(drm_intel_bo *bo);
+} access_modes[] = {
+ {
+ .name = "prw",
+ .set_bo = prw_set_bo,
+ .cmp_bo = prw_cmp_bo,
+ .create_bo = unmapped_create_bo,
+ .release_bo = nop_release_bo,
+ },
+ {
+ .name = "cpu",
+ .set_bo = cpu_set_bo,
+ .cmp_bo = cpu_cmp_bo,
+ .create_bo = unmapped_create_bo,
+ .release_bo = nop_release_bo,
+ },
+ {
+ .name = "snoop",
+ .set_bo = cpu_set_bo,
+ .cmp_bo = cpu_cmp_bo,
+ .create_bo = snoop_create_bo,
+ .release_bo = nop_release_bo,
+ },
+ {
+ .name = "gtt",
+ .set_bo = gtt_set_bo,
+ .cmp_bo = gtt_cmp_bo,
+ .create_bo = gtt_create_bo,
+ .release_bo = nop_release_bo,
+ },
+ {
+ .name = "gttX",
+ .set_bo = gtt_set_bo,
+ .cmp_bo = gtt_cmp_bo,
+ .create_bo = gttX_create_bo,
+ .release_bo = nop_release_bo,
+ },
+ {
+ .name = "wc",
+ .set_bo = gtt_set_bo,
+ .cmp_bo = gtt_cmp_bo,
+ .create_bo = wc_create_bo,
+ .release_bo = wc_release_bo,
+ },
+ {
+ .name = "gpu",
+ .set_bo = gpu_set_bo,
+ .cmp_bo = gpu_cmp_bo,
+ .create_bo = gpu_create_bo,
+ .release_bo = nop_release_bo,
+ },
+ {
+ .name = "gpuX",
+ .set_bo = gpu_set_bo,
+ .cmp_bo = gpu_cmp_bo,
+ .create_bo = gpuX_create_bo,
+ .release_bo = nop_release_bo,
+ },
+};
+
+#define MAX_NUM_BUFFERS 1024
+int num_buffers = MAX_NUM_BUFFERS;
+const int width = 512, height = 512;
+igt_render_copyfunc_t rendercopy;
+
+struct buffers {
+ const struct access_mode *mode;
+ drm_intel_bufmgr *bufmgr;
+ drm_intel_bo *src[MAX_NUM_BUFFERS], *dst[MAX_NUM_BUFFERS];
+ drm_intel_bo *dummy, *spare;
+ int count;
+};
+
+static void *buffers_init(struct buffers *data,
+ const struct access_mode *mode,
+ int _fd)
+{
+ data->mode = mode;
+ data->count = 0;
+
+ data->bufmgr = drm_intel_bufmgr_gem_init(_fd, 4096);
+ igt_assert(data->bufmgr);
+
+ drm_intel_bufmgr_gem_enable_reuse(data->bufmgr);
+ return intel_batchbuffer_alloc(data->bufmgr, devid);
+}
+
+static void buffers_destroy(struct buffers *data)
+{
+ if (data->count == 0)
+ return;
+
+ for (int i = 0; i < data->count; i++) {
+ data->mode->release_bo(data->src[i]);
+ data->mode->release_bo(data->dst[i]);
+ }
+ data->mode->release_bo(data->dummy);
+ data->mode->release_bo(data->spare);
+ data->count = 0;
+}
+
+static void buffers_create(struct buffers *data,
+ int count)
+{
+ igt_assert(data->bufmgr);
+
+ buffers_destroy(data);
+
+ for (int i = 0; i < count; i++) {
+ data->src[i] =
+ data->mode->create_bo(data->bufmgr, width, height);
+ data->dst[i] =
+ data->mode->create_bo(data->bufmgr, width, height);
+ }
+ data->dummy = data->mode->create_bo(data->bufmgr, width, height);
+ data->spare = data->mode->create_bo(data->bufmgr, width, height);
+ data->count = count;
+}
+
+static void buffers_fini(struct buffers *data)
+{
+ if (data->bufmgr == NULL)
+ return;
+
+ buffers_destroy(data);
+
+ intel_batchbuffer_free(batch);
+ drm_intel_bufmgr_destroy(data->bufmgr);
+ data->bufmgr = NULL;
+}
+
+typedef void (*do_copy)(drm_intel_bo *dst, drm_intel_bo *src);
+typedef struct igt_hang_ring (*do_hang)(void);
+
+static void render_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
+{
+ struct igt_buf d = {
+ .bo = dst,
+ .size = width * height * 4,
+ .num_tiles = width * height * 4,
+ .stride = width * 4,
+ }, s = {
+ .bo = src,
+ .size = width * height * 4,
+ .num_tiles = width * height * 4,
+ .stride = width * 4,
+ };
+ uint32_t swizzle;
+
+ drm_intel_bo_get_tiling(dst, &d.tiling, &swizzle);
+ drm_intel_bo_get_tiling(src, &s.tiling, &swizzle);
+
+ rendercopy(batch, NULL,
+ &s, 0, 0,
+ width, height,
+ &d, 0, 0);
+}
+
+static void blt_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
+{
+ intel_blt_copy(batch,
+ src, 0, 0, 4*width,
+ dst, 0, 0, 4*width,
+ width, height, 32);
+}
+
+static void cpu_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
+{
+ const int size = width * height * sizeof(uint32_t);
+ void *d, *s;
+
+ gem_set_domain(fd, src->handle, I915_GEM_DOMAIN_CPU, 0);
+ gem_set_domain(fd, dst->handle, I915_GEM_DOMAIN_CPU, I915_GEM_DOMAIN_CPU);
+ s = gem_mmap__cpu(fd, src->handle, 0, size, PROT_READ);
+ d = gem_mmap__cpu(fd, dst->handle, 0, size, PROT_WRITE);
+
+ memcpy(d, s, size);
+
+ munmap(d, size);
+ munmap(s, size);
+}
+
+static void gtt_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
+{
+ const int size = width * height * sizeof(uint32_t);
+ void *d, *s;
+
+ gem_set_domain(fd, src->handle, I915_GEM_DOMAIN_GTT, 0);
+ gem_set_domain(fd, dst->handle, I915_GEM_DOMAIN_GTT, I915_GEM_DOMAIN_GTT);
+
+ s = gem_mmap__gtt(fd, src->handle, size, PROT_READ);
+ d = gem_mmap__gtt(fd, dst->handle, size, PROT_WRITE);
+
+ memcpy(d, s, size);
+
+ munmap(d, size);
+ munmap(s, size);
+}
+
+static void wc_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
+{
+ const int size = width * height * sizeof(uint32_t);
+ void *d, *s;
+
+ gem_set_domain(fd, src->handle, I915_GEM_DOMAIN_GTT, 0);
+ gem_set_domain(fd, dst->handle, I915_GEM_DOMAIN_GTT, I915_GEM_DOMAIN_GTT);
+
+ s = gem_mmap__wc(fd, src->handle, 0, size, PROT_READ);
+ d = gem_mmap__wc(fd, dst->handle, 0, size, PROT_WRITE);
+
+ memcpy(d, s, size);
+
+ munmap(d, size);
+ munmap(s, size);
+}
+
+static struct igt_hang_ring no_hang(void)
+{
+ return (struct igt_hang_ring){0, 0};
+}
+
+static struct igt_hang_ring bcs_hang(void)
+{
+ return igt_hang_ring(fd, I915_EXEC_BLT);
+}
+
+static struct igt_hang_ring rcs_hang(void)
+{
+ return igt_hang_ring(fd, I915_EXEC_RENDER);
+}
+
+static void hang_require(void)
+{
+ igt_require_hang_ring(fd, -1);
+}
+
+static void do_overwrite_source(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = 0; i < buffers->count; i++) {
+ buffers->mode->set_bo(buffers->src[i], i, width, height);
+ buffers->mode->set_bo(buffers->dst[i], ~i, width, height);
+ }
+ for (i = 0; i < buffers->count; i++)
+ do_copy_func(buffers->dst[i], buffers->src[i]);
+ hang = do_hang_func();
+ for (i = buffers->count; i--; )
+ buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
+ for (i = 0; i < buffers->count; i++)
+ buffers->mode->cmp_bo(buffers->dst[i], i, width, height, buffers->dummy);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_overwrite_source_read(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func,
+ int do_rcs)
+{
+ const int half = buffers->count/2;
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = 0; i < half; i++) {
+ buffers->mode->set_bo(buffers->src[i], i, width, height);
+ buffers->mode->set_bo(buffers->dst[i], ~i, width, height);
+ buffers->mode->set_bo(buffers->dst[i+half], ~i, width, height);
+ }
+ for (i = 0; i < half; i++) {
+ do_copy_func(buffers->dst[i], buffers->src[i]);
+ if (do_rcs)
+ render_copy_bo(buffers->dst[i+half], buffers->src[i]);
+ else
+ blt_copy_bo(buffers->dst[i+half], buffers->src[i]);
+ }
+ hang = do_hang_func();
+ for (i = half; i--; )
+ buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
+ for (i = 0; i < half; i++) {
+ buffers->mode->cmp_bo(buffers->dst[i], i, width, height, buffers->dummy);
+ buffers->mode->cmp_bo(buffers->dst[i+half], i, width, height, buffers->dummy);
+ }
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_overwrite_source_read_bcs(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ do_overwrite_source_read(buffers, do_copy_func, do_hang_func, 0);
+}
+
+static void do_overwrite_source_read_rcs(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ do_overwrite_source_read(buffers, do_copy_func, do_hang_func, 1);
+}
+
+static void do_overwrite_source__rev(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = 0; i < buffers->count; i++) {
+ buffers->mode->set_bo(buffers->src[i], i, width, height);
+ buffers->mode->set_bo(buffers->dst[i], ~i, width, height);
+ }
+ for (i = 0; i < buffers->count; i++)
+ do_copy_func(buffers->dst[i], buffers->src[i]);
+ hang = do_hang_func();
+ for (i = 0; i < buffers->count; i++)
+ buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
+ for (i = buffers->count; i--; )
+ buffers->mode->cmp_bo(buffers->dst[i], i, width, height, buffers->dummy);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_overwrite_source__one(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+
+ gem_quiescent_gpu(fd);
+ buffers->mode->set_bo(buffers->src[0], 0, width, height);
+ buffers->mode->set_bo(buffers->dst[0], ~0, width, height);
+ do_copy_func(buffers->dst[0], buffers->src[0]);
+ hang = do_hang_func();
+ buffers->mode->set_bo(buffers->src[0], 0xdeadbeef, width, height);
+ buffers->mode->cmp_bo(buffers->dst[0], 0, width, height, buffers->dummy);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_intermix(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func,
+ int do_rcs)
+{
+ const int half = buffers->count/2;
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = 0; i < buffers->count; i++) {
+ buffers->mode->set_bo(buffers->src[i], 0xdeadbeef^~i, width, height);
+ buffers->mode->set_bo(buffers->dst[i], i, width, height);
+ }
+ for (i = 0; i < half; i++) {
+ if (do_rcs == 1 || (do_rcs == -1 && i & 1))
+ render_copy_bo(buffers->dst[i], buffers->src[i]);
+ else
+ blt_copy_bo(buffers->dst[i], buffers->src[i]);
+
+ do_copy_func(buffers->dst[i+half], buffers->src[i]);
+
+ if (do_rcs == 1 || (do_rcs == -1 && (i & 1) == 0))
+ render_copy_bo(buffers->dst[i], buffers->dst[i+half]);
+ else
+ blt_copy_bo(buffers->dst[i], buffers->dst[i+half]);
+
+ do_copy_func(buffers->dst[i+half], buffers->src[i+half]);
+ }
+ hang = do_hang_func();
+ for (i = 0; i < 2*half; i++)
+ buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef^~i, width, height, buffers->dummy);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_intermix_rcs(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ do_intermix(buffers, do_copy_func, do_hang_func, 1);
+}
+
+static void do_intermix_bcs(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ do_intermix(buffers, do_copy_func, do_hang_func, 0);
+}
+
+static void do_intermix_both(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ do_intermix(buffers, do_copy_func, do_hang_func, -1);
+}
+
+static void do_early_read(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = buffers->count; i--; )
+ buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
+ for (i = 0; i < buffers->count; i++)
+ do_copy_func(buffers->dst[i], buffers->src[i]);
+ hang = do_hang_func();
+ for (i = buffers->count; i--; )
+ buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef, width, height, buffers->dummy);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_read_read_bcs(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = buffers->count; i--; )
+ buffers->mode->set_bo(buffers->src[i], 0xdeadbeef ^ i, width, height);
+ for (i = 0; i < buffers->count; i++) {
+ do_copy_func(buffers->dst[i], buffers->src[i]);
+ blt_copy_bo(buffers->spare, buffers->src[i]);
+ }
+ cpu_cmp_bo(buffers->spare, 0xdeadbeef^(buffers->count-1), width, height, NULL);
+ hang = do_hang_func();
+ for (i = buffers->count; i--; )
+ buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef ^ i, width, height, buffers->dummy);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_read_read_rcs(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = buffers->count; i--; )
+ buffers->mode->set_bo(buffers->src[i], 0xdeadbeef ^ i, width, height);
+ for (i = 0; i < buffers->count; i++) {
+ do_copy_func(buffers->dst[i], buffers->src[i]);
+ render_copy_bo(buffers->spare, buffers->src[i]);
+ }
+ cpu_cmp_bo(buffers->spare, 0xdeadbeef^(buffers->count-1), width, height, NULL);
+ hang = do_hang_func();
+ for (i = buffers->count; i--; )
+ buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef ^ i, width, height, buffers->dummy);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_gpu_read_after_write(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = buffers->count; i--; )
+ buffers->mode->set_bo(buffers->src[i], 0xabcdabcd, width, height);
+ for (i = 0; i < buffers->count; i++)
+ do_copy_func(buffers->dst[i], buffers->src[i]);
+ for (i = buffers->count; i--; )
+ do_copy_func(buffers->dummy, buffers->dst[i]);
+ hang = do_hang_func();
+ for (i = buffers->count; i--; )
+ buffers->mode->cmp_bo(buffers->dst[i], 0xabcdabcd, width, height, buffers->dummy);
+ igt_post_hang_ring(fd, hang);
+}
+
+typedef void (*do_test)(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func);
+
+typedef void (*run_wrap)(struct buffers *buffers,
+ do_test do_test_func,
+ do_copy do_copy_func,
+ do_hang do_hang_func);
+
+static void run_single(struct buffers *buffers,
+ do_test do_test_func,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ do_test_func(buffers, do_copy_func, do_hang_func);
+}
+
+static void run_interruptible(struct buffers *buffers,
+ do_test do_test_func,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ int loop;
+
+ for (loop = 0; loop < 10; loop++)
+ do_test_func(buffers, do_copy_func, do_hang_func);
+}
+
+static void run_forked(struct buffers *buffers,
+ do_test do_test_func,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ const int old_num_buffers = num_buffers;
+
+ num_buffers /= 16;
+ num_buffers += 2;
+
+ igt_fork(child, 16) {
+ /* recreate process local variables */
+ buffers->count = 0;
+ fd = drm_open_driver(DRIVER_INTEL);
+
+ batch = buffers_init(buffers, buffers->mode, fd);
+
+ buffers_create(buffers, num_buffers);
+ for (int loop = 0; loop < 10; loop++)
+ do_test_func(buffers, do_copy_func, do_hang_func);
+
+ buffers_fini(buffers);
+ }
+
+ igt_waitchildren();
+
+ num_buffers = old_num_buffers;
+}
+
+static void bit17_require(void)
+{
+ struct drm_i915_gem_get_tiling2 {
+ uint32_t handle;
+ uint32_t tiling_mode;
+ uint32_t swizzle_mode;
+ uint32_t phys_swizzle_mode;
+ } arg;
+#define DRM_IOCTL_I915_GEM_GET_TILING2 DRM_IOWR (DRM_COMMAND_BASE + DRM_I915_GEM_GET_TILING, struct drm_i915_gem_get_tiling2)
+
+ memset(&arg, 0, sizeof(arg));
+ arg.handle = gem_create(fd, 4096);
+ gem_set_tiling(fd, arg.handle, I915_TILING_X, 512);
+
+ do_or_die(drmIoctl(fd, DRM_IOCTL_I915_GEM_GET_TILING2, &arg));
+ gem_close(fd, arg.handle);
+ igt_require(arg.phys_swizzle_mode == arg.swizzle_mode);
+}
+
+static void cpu_require(void)
+{
+ bit17_require();
+}
+
+static void gtt_require(void)
+{
+}
+
+static void wc_require(void)
+{
+ bit17_require();
+ gem_require_mmap_wc(fd);
+}
+
+static void bcs_require(void)
+{
+}
+
+static void rcs_require(void)
+{
+ igt_require(rendercopy);
+}
+
+static void no_require(void)
+{
+}
+
+static void
+run_basic_modes(const struct access_mode *mode,
+ const char *suffix,
+ run_wrap run_wrap_func)
+{
+ const struct {
+ const char *prefix;
+ do_copy copy;
+ void (*require)(void);
+ } pipelines[] = {
+ { "cpu", cpu_copy_bo, cpu_require },
+ { "gtt", gtt_copy_bo, gtt_require },
+ { "wc", wc_copy_bo, wc_require },
+ { "blt", blt_copy_bo, bcs_require },
+ { "render", render_copy_bo, rcs_require },
+ { NULL, NULL }
+ }, *pskip = pipelines + 3, *p;
+ const struct {
+ const char *suffix;
+ do_hang hang;
+ void (*require)(void);
+ } hangs[] = {
+ { "", no_hang, no_require },
+ { "-hang-blt", bcs_hang, hang_require },
+ { "-hang-render", rcs_hang, hang_require },
+ { NULL, NULL },
+ }, *h;
+ struct buffers buffers;
+
+ for (h = hangs; h->suffix; h++) {
+ if (!all && *h->suffix)
+ continue;
+
+ for (p = all ? pipelines : pskip; p->prefix; p++) {
+ igt_fixture {
+ batch = buffers_init(&buffers, mode, fd);
+ }
+
+ /* try to overwrite the source values */
+ igt_subtest_f("%s-%s-overwrite-source-one%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_overwrite_source__one,
+ p->copy, h->hang);
+ }
+
+ igt_subtest_f("%s-%s-overwrite-source%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_overwrite_source,
+ p->copy, h->hang);
+ }
+
+ igt_subtest_f("%s-%s-overwrite-source-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_overwrite_source_read_bcs,
+ p->copy, h->hang);
+ }
+
+ igt_subtest_f("%s-%s-overwrite-source-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ igt_require(rendercopy);
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_overwrite_source_read_rcs,
+ p->copy, h->hang);
+ }
+
+ igt_subtest_f("%s-%s-overwrite-source-rev%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_overwrite_source__rev,
+ p->copy, h->hang);
+ }
+
+ /* try to intermix copies with GPU copies */
+ igt_subtest_f("%s-%s-intermix-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ igt_require(rendercopy);
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_intermix_rcs,
+ p->copy, h->hang);
+ }
+ igt_subtest_f("%s-%s-intermix-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ igt_require(rendercopy);
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_intermix_bcs,
+ p->copy, h->hang);
+ }
+ igt_subtest_f("%s-%s-intermix-both%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ igt_require(rendercopy);
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_intermix_both,
+ p->copy, h->hang);
+ }
+
+ /* try to read the results before the copy completes */
+ igt_subtest_f("%s-%s-early-read%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_early_read,
+ p->copy, h->hang);
+ }
+
+ /* concurrent reads */
+ igt_subtest_f("%s-%s-read-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_read_read_bcs,
+ p->copy, h->hang);
+ }
+ igt_subtest_f("%s-%s-read-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ igt_require(rendercopy);
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_read_read_rcs,
+ p->copy, h->hang);
+ }
+
+ /* and finally try to trick the kernel into losing the pending write */
+ igt_subtest_f("%s-%s-gpu-read-after-write%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_gpu_read_after_write,
+ p->copy, h->hang);
+ }
+
+ igt_fixture {
+ buffers_fini(&buffers);
+ }
+ }
+ }
+}
+
+static void
+run_modes(const struct access_mode *mode)
+{
+ if (all) {
+ run_basic_modes(mode, "", run_single);
+
+ igt_fork_signal_helper();
+ run_basic_modes(mode, "-interruptible", run_interruptible);
+ igt_stop_signal_helper();
+ }
+
+ igt_fork_signal_helper();
+ run_basic_modes(mode, "-forked", run_forked);
+ igt_stop_signal_helper();
+}
+
+igt_main
+{
+ int max, i;
+
+ igt_skip_on_simulation();
+
+ if (strstr(igt_test_name(), "all"))
+ all = true;
+
+ igt_fixture {
+ fd = drm_open_driver(DRIVER_INTEL);
+ devid = intel_get_drm_devid(fd);
+ gen = intel_gen(devid);
+ rendercopy = igt_get_render_copyfunc(devid);
+
+ max = gem_aperture_size(fd) / (1024 * 1024) / 2;
+ if (num_buffers > max)
+ num_buffers = max;
+
+ max = intel_get_total_ram_mb() * 3 / 4;
+ if (num_buffers > max)
+ num_buffers = max;
+ num_buffers /= 2;
+ igt_info("using 2x%d buffers, each 1MiB\n", num_buffers);
+ }
+
+ for (i = 0; i < ARRAY_SIZE(access_modes); i++)
+ run_modes(&access_modes[i]);
+}
--
2.6.1
* [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests
2015-10-23 11:42 [PATCH i-g-t 0/3] Unify slow/combinatorial test handling David Weinehall
2015-10-23 11:42 ` [PATCH i-g-t 1/3] Rename gem_concurren_all over gem_concurrent_blit David Weinehall
@ 2015-10-23 11:42 ` David Weinehall
2015-10-23 11:56 ` Chris Wilson
` (2 more replies)
2015-10-23 11:42 ` [PATCH i-g-t 3/3] Remove gem_concurrent_all, since it is now superfluous David Weinehall
` (3 subsequent siblings)
5 siblings, 3 replies; 41+ messages in thread
From: David Weinehall @ 2015-10-23 11:42 UTC (permalink / raw)
To: intel-gfx
Some tests should not be run by default, due to their slow,
and sometimes superfluous, nature. In some cases we still want
to be able to run them, though, and until now there has been no
unified way of handling this. Remedy this by introducing the
--with-slow-combinatorial option to igt_core, and use it in
gem_concurrent_blit and kms_frontbuffer_tracking.
---
lib/igt_core.c | 19 ++++++
lib/igt_core.h | 1 +
tests/gem_concurrent_blit.c | 40 ++++++++----
tests/kms_frontbuffer_tracking.c | 135 +++++++++++++++++++++++++++------------
4 files changed, 142 insertions(+), 53 deletions(-)
diff --git a/lib/igt_core.c b/lib/igt_core.c
index 59127cafe606..ba40ce0e0ead 100644
--- a/lib/igt_core.c
+++ b/lib/igt_core.c
@@ -216,6 +216,7 @@ const char *igt_interactive_debug;
/* subtests helpers */
static bool list_subtests = false;
+static bool with_slow_combinatorial = false;
static char *run_single_subtest = NULL;
static bool run_single_subtest_found = false;
static const char *in_subtest = NULL;
@@ -235,6 +236,7 @@ bool test_child;
enum {
OPT_LIST_SUBTESTS,
+ OPT_WITH_SLOW_COMBINATORIAL,
OPT_RUN_SUBTEST,
OPT_DESCRIPTION,
OPT_DEBUG,
@@ -478,6 +480,7 @@ static void print_usage(const char *help_str, bool output_on_stderr)
fprintf(f, "Usage: %s [OPTIONS]\n", command_str);
fprintf(f, " --list-subtests\n"
+ " --with-slow-combinatorial\n"
" --run-subtest <pattern>\n"
" --debug[=log-domain]\n"
" --interactive-debug[=domain]\n"
@@ -510,6 +513,7 @@ static int common_init(int *argc, char **argv,
int c, option_index = 0, i, x;
static struct option long_options[] = {
{"list-subtests", 0, 0, OPT_LIST_SUBTESTS},
+ {"with-slow-combinatorial", 0, 0, OPT_WITH_SLOW_COMBINATORIAL},
{"run-subtest", 1, 0, OPT_RUN_SUBTEST},
{"help-description", 0, 0, OPT_DESCRIPTION},
{"debug", optional_argument, 0, OPT_DEBUG},
@@ -617,6 +621,10 @@ static int common_init(int *argc, char **argv,
if (!run_single_subtest)
list_subtests = true;
break;
+ case OPT_WITH_SLOW_COMBINATORIAL:
+ if (!run_single_subtest)
+ with_slow_combinatorial = true;
+ break;
case OPT_RUN_SUBTEST:
if (!list_subtests)
run_single_subtest = strdup(optarg);
@@ -1629,6 +1637,17 @@ void igt_skip_on_simulation(void)
igt_require(!igt_run_in_simulation());
}
+/**
+ * igt_slow_combinatorial:
+ *
+ * This is used to define subtests that should only be listed/run
+ * when the "--with-slow-combinatorial" option has been specified.
+ */
+void igt_slow_combinatorial(void)
+{
+ igt_skip_on(!with_slow_combinatorial);
+}
+
/* structured logging */
/**
diff --git a/lib/igt_core.h b/lib/igt_core.h
index 5ae09653fd55..6ddf25563275 100644
--- a/lib/igt_core.h
+++ b/lib/igt_core.h
@@ -680,6 +680,7 @@ bool igt_run_in_simulation(void);
#define SLOW_QUICK(slow,quick) (igt_run_in_simulation() ? (quick) : (slow))
void igt_skip_on_simulation(void);
+void igt_slow_combinatorial(void);
extern const char *igt_interactive_debug;
diff --git a/tests/gem_concurrent_blit.c b/tests/gem_concurrent_blit.c
index 1d2d787202df..311b6829e984 100644
--- a/tests/gem_concurrent_blit.c
+++ b/tests/gem_concurrent_blit.c
@@ -931,9 +931,6 @@ run_basic_modes(const struct access_mode *mode,
struct buffers buffers;
for (h = hangs; h->suffix; h++) {
- if (!all && *h->suffix)
- continue;
-
for (p = all ? pipelines : pskip; p->prefix; p++) {
igt_fixture {
batch = buffers_init(&buffers, mode, fd);
@@ -941,6 +938,8 @@ run_basic_modes(const struct access_mode *mode,
/* try to overwrite the source values */
igt_subtest_f("%s-%s-overwrite-source-one%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ if (*h->suffix)
+ igt_slow_combinatorial();
h->require();
p->require();
buffers_create(&buffers, num_buffers);
@@ -950,6 +949,8 @@ run_basic_modes(const struct access_mode *mode,
}
igt_subtest_f("%s-%s-overwrite-source%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ if (*h->suffix)
+ igt_slow_combinatorial();
h->require();
p->require();
buffers_create(&buffers, num_buffers);
@@ -959,6 +960,8 @@ run_basic_modes(const struct access_mode *mode,
}
igt_subtest_f("%s-%s-overwrite-source-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ if (*h->suffix)
+ igt_slow_combinatorial();
h->require();
p->require();
buffers_create(&buffers, num_buffers);
@@ -968,6 +971,8 @@ run_basic_modes(const struct access_mode *mode,
}
igt_subtest_f("%s-%s-overwrite-source-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ if (*h->suffix)
+ igt_slow_combinatorial();
h->require();
p->require();
igt_require(rendercopy);
@@ -978,6 +983,8 @@ run_basic_modes(const struct access_mode *mode,
}
igt_subtest_f("%s-%s-overwrite-source-rev%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ if (*h->suffix)
+ igt_slow_combinatorial();
h->require();
p->require();
buffers_create(&buffers, num_buffers);
@@ -988,6 +995,8 @@ run_basic_modes(const struct access_mode *mode,
/* try to intermix copies with GPU copies*/
igt_subtest_f("%s-%s-intermix-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ if (*h->suffix)
+ igt_slow_combinatorial();
h->require();
p->require();
igt_require(rendercopy);
@@ -997,6 +1006,8 @@ run_basic_modes(const struct access_mode *mode,
p->copy, h->hang);
}
igt_subtest_f("%s-%s-intermix-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ if (*h->suffix)
+ igt_slow_combinatorial();
h->require();
p->require();
igt_require(rendercopy);
@@ -1006,6 +1017,8 @@ run_basic_modes(const struct access_mode *mode,
p->copy, h->hang);
}
igt_subtest_f("%s-%s-intermix-both%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ if (*h->suffix)
+ igt_slow_combinatorial();
h->require();
p->require();
igt_require(rendercopy);
@@ -1017,6 +1030,8 @@ run_basic_modes(const struct access_mode *mode,
/* try to read the results before the copy completes */
igt_subtest_f("%s-%s-early-read%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ if (*h->suffix)
+ igt_slow_combinatorial();
h->require();
p->require();
buffers_create(&buffers, num_buffers);
@@ -1027,6 +1042,8 @@ run_basic_modes(const struct access_mode *mode,
/* concurrent reads */
igt_subtest_f("%s-%s-read-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ if (*h->suffix)
+ igt_slow_combinatorial();
h->require();
p->require();
buffers_create(&buffers, num_buffers);
@@ -1035,6 +1052,8 @@ run_basic_modes(const struct access_mode *mode,
p->copy, h->hang);
}
igt_subtest_f("%s-%s-read-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ if (*h->suffix)
+ igt_slow_combinatorial();
h->require();
p->require();
igt_require(rendercopy);
@@ -1046,6 +1065,8 @@ run_basic_modes(const struct access_mode *mode,
/* and finally try to trick the kernel into losing the pending write */
igt_subtest_f("%s-%s-gpu-read-after-write%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ if (*h->suffix)
+ igt_slow_combinatorial();
h->require();
p->require();
buffers_create(&buffers, num_buffers);
@@ -1064,13 +1085,11 @@ run_basic_modes(const struct access_mode *mode,
static void
run_modes(const struct access_mode *mode)
{
- if (all) {
- run_basic_modes(mode, "", run_single);
+ run_basic_modes(mode, "", run_single);
- igt_fork_signal_helper();
- run_basic_modes(mode, "-interruptible", run_interruptible);
- igt_stop_signal_helper();
- }
+ igt_fork_signal_helper();
+ run_basic_modes(mode, "-interruptible", run_interruptible);
+ igt_stop_signal_helper();
igt_fork_signal_helper();
run_basic_modes(mode, "-forked", run_forked);
@@ -1083,9 +1102,6 @@ igt_main
igt_skip_on_simulation();
- if (strstr(igt_test_name(), "all"))
- all = true;
-
igt_fixture {
fd = drm_open_driver(DRIVER_INTEL);
devid = intel_get_drm_devid(fd);
diff --git a/tests/kms_frontbuffer_tracking.c b/tests/kms_frontbuffer_tracking.c
index d97e148c5073..6f84ef0813d9 100644
--- a/tests/kms_frontbuffer_tracking.c
+++ b/tests/kms_frontbuffer_tracking.c
@@ -47,8 +47,8 @@ IGT_TEST_DESCRIPTION("Test the Kernel's frontbuffer tracking mechanism and "
* combinations that are somewhat redundant and don't add much value to the
* test. For example, since we already do the offscreen testing with a single
* pipe enabled, there's not much value in doing it again with dual pipes. If you
- * still want to try these redundant tests, you need to use the --show-hidden
- * option.
+ * still want to try these redundant tests, you need to use the
+ * "--with-slow-combinatorial" option.
*
* The most important hidden thing is the FEATURE_NONE set of tests. Whenever
* you get a failure on any test, it is important to check whether the same test
@@ -116,6 +116,10 @@ struct test_mode {
} format;
enum igt_draw_method method;
+
+ /* The test is slow and/or combinatorial;
+ * skip unless otherwise specified */
+ bool slow;
};
enum flip_type {
@@ -237,7 +241,6 @@ struct {
bool fbc_check_last_action;
bool no_edp;
bool small_modes;
- bool show_hidden;
int step;
int only_feature;
int only_pipes;
@@ -250,7 +253,6 @@ struct {
.fbc_check_last_action = true,
.no_edp = false,
.small_modes = false,
- .show_hidden= false,
.step = 0,
.only_feature = FEATURE_COUNT,
.only_pipes = PIPE_COUNT,
@@ -2892,9 +2894,6 @@ static int opt_handler(int option, int option_index, void *data)
case 'm':
opt.small_modes = true;
break;
- case 'i':
- opt.show_hidden = true;
- break;
case 't':
opt.step++;
break;
@@ -2942,7 +2941,6 @@ const char *help_str =
" --no-fbc-action-check Don't check for the FBC last action\n"
" --no-edp Don't use eDP monitors\n"
" --use-small-modes Use smaller resolutions for the modes\n"
-" --show-hidden Show hidden subtests\n"
" --step Stop on each step so you can check the screen\n"
" --nop-only Only run the \"nop\" feature subtests\n"
" --fbc-only Only run the \"fbc\" feature subtests\n"
@@ -3036,6 +3034,7 @@ static const char *format_str(enum pixel_format format)
#define TEST_MODE_ITER_BEGIN(t) \
t.format = FORMAT_DEFAULT; \
+ t.slow = false; \
for (t.feature = 0; t.feature < FEATURE_COUNT; t.feature++) { \
for (t.pipes = 0; t.pipes < PIPE_COUNT; t.pipes++) { \
for (t.screen = 0; t.screen < SCREEN_COUNT; t.screen++) { \
@@ -3046,15 +3045,15 @@ static const char *format_str(enum pixel_format format)
continue; \
if (t.screen == SCREEN_OFFSCREEN && t.plane != PLANE_PRI) \
continue; \
- if (!opt.show_hidden && t.pipes == PIPE_DUAL && \
+ if (t.pipes == PIPE_DUAL && \
t.screen == SCREEN_OFFSCREEN) \
- continue; \
- if ((!opt.show_hidden && opt.only_feature != FEATURE_NONE) \
- && t.feature == FEATURE_NONE) \
- continue; \
- if (!opt.show_hidden && t.fbs == FBS_SHARED && \
+ t.slow = true; \
+ if (opt.only_feature != FEATURE_NONE && \
+ t.feature == FEATURE_NONE) \
+ t.slow = true; \
+ if (t.fbs == FBS_SHARED && \
(t.plane == PLANE_CUR || t.plane == PLANE_SPR)) \
- continue;
+ t.slow = true;
#define TEST_MODE_ITER_END } } } } } }
@@ -3069,7 +3068,6 @@ int main(int argc, char *argv[])
{ "no-fbc-action-check", 0, 0, 'a'},
{ "no-edp", 0, 0, 'e'},
{ "use-small-modes", 0, 0, 'm'},
- { "show-hidden", 0, 0, 'i'},
{ "step", 0, 0, 't'},
{ "nop-only", 0, 0, 'n'},
{ "fbc-only", 0, 0, 'f'},
@@ -3088,9 +3086,11 @@ int main(int argc, char *argv[])
setup_environment();
for (t.feature = 0; t.feature < FEATURE_COUNT; t.feature++) {
- if ((!opt.show_hidden && opt.only_feature != FEATURE_NONE)
- && t.feature == FEATURE_NONE)
- continue;
+ bool slow = false;
+
+ if (opt.only_feature != FEATURE_NONE &&
+ t.feature == FEATURE_NONE)
+ slow = true;
for (t.pipes = 0; t.pipes < PIPE_COUNT; t.pipes++) {
t.screen = SCREEN_PRIM;
t.plane = PLANE_PRI;
@@ -3101,8 +3101,11 @@ int main(int argc, char *argv[])
igt_subtest_f("%s-%s-rte",
feature_str(t.feature),
- pipes_str(t.pipes))
+ pipes_str(t.pipes)) {
+ if (slow)
+ igt_slow_combinatorial();
rte_subtest(&t);
+ }
}
}
@@ -3113,39 +3116,52 @@ int main(int argc, char *argv[])
screen_str(t.screen),
plane_str(t.plane),
fbs_str(t.fbs),
- igt_draw_get_method_name(t.method))
+ igt_draw_get_method_name(t.method)) {
+ if (t.slow)
+ igt_slow_combinatorial();
draw_subtest(&t);
+ }
TEST_MODE_ITER_END
TEST_MODE_ITER_BEGIN(t)
if (t.plane != PLANE_PRI ||
- t.screen == SCREEN_OFFSCREEN ||
- (!opt.show_hidden && t.method != IGT_DRAW_BLT))
+ t.screen == SCREEN_OFFSCREEN)
continue;
+ if (t.method != IGT_DRAW_BLT)
+ t.slow = true;
igt_subtest_f("%s-%s-%s-%s-flip-%s",
feature_str(t.feature),
pipes_str(t.pipes),
screen_str(t.screen),
fbs_str(t.fbs),
- igt_draw_get_method_name(t.method))
+ igt_draw_get_method_name(t.method)) {
+ if (t.slow)
+ igt_slow_combinatorial();
flip_subtest(&t, FLIP_PAGEFLIP);
+ }
igt_subtest_f("%s-%s-%s-%s-evflip-%s",
feature_str(t.feature),
pipes_str(t.pipes),
screen_str(t.screen),
fbs_str(t.fbs),
- igt_draw_get_method_name(t.method))
+ igt_draw_get_method_name(t.method)) {
+ if (t.slow)
+ igt_slow_combinatorial();
flip_subtest(&t, FLIP_PAGEFLIP_EVENT);
+ }
igt_subtest_f("%s-%s-%s-%s-msflip-%s",
feature_str(t.feature),
pipes_str(t.pipes),
screen_str(t.screen),
fbs_str(t.fbs),
- igt_draw_get_method_name(t.method))
+ igt_draw_get_method_name(t.method)) {
+ if (t.slow)
+ igt_slow_combinatorial();
flip_subtest(&t, FLIP_MODESET);
+ }
TEST_MODE_ITER_END
@@ -3159,8 +3175,11 @@ int main(int argc, char *argv[])
igt_subtest_f("%s-%s-%s-fliptrack",
feature_str(t.feature),
pipes_str(t.pipes),
- fbs_str(t.fbs))
+ fbs_str(t.fbs)) {
+ if (t.slow)
+ igt_slow_combinatorial();
fliptrack_subtest(&t, FLIP_PAGEFLIP);
+ }
TEST_MODE_ITER_END
TEST_MODE_ITER_BEGIN(t)
@@ -3174,16 +3193,22 @@ int main(int argc, char *argv[])
pipes_str(t.pipes),
screen_str(t.screen),
plane_str(t.plane),
- fbs_str(t.fbs))
+ fbs_str(t.fbs)) {
+ if (t.slow)
+ igt_slow_combinatorial();
move_subtest(&t);
+ }
igt_subtest_f("%s-%s-%s-%s-%s-onoff",
feature_str(t.feature),
pipes_str(t.pipes),
screen_str(t.screen),
plane_str(t.plane),
- fbs_str(t.fbs))
+ fbs_str(t.fbs)) {
+ if (t.slow)
+ igt_slow_combinatorial();
onoff_subtest(&t);
+ }
TEST_MODE_ITER_END
TEST_MODE_ITER_BEGIN(t)
@@ -3197,23 +3222,30 @@ int main(int argc, char *argv[])
pipes_str(t.pipes),
screen_str(t.screen),
plane_str(t.plane),
- fbs_str(t.fbs))
+ fbs_str(t.fbs)) {
+ if (t.slow)
+ igt_slow_combinatorial();
fullscreen_plane_subtest(&t);
+ }
TEST_MODE_ITER_END
TEST_MODE_ITER_BEGIN(t)
if (t.screen != SCREEN_PRIM ||
- t.method != IGT_DRAW_BLT ||
- (!opt.show_hidden && t.plane != PLANE_PRI) ||
- (!opt.show_hidden && t.fbs != FBS_INDIVIDUAL))
+ t.method != IGT_DRAW_BLT)
continue;
+ if (t.plane != PLANE_PRI ||
+ t.fbs != FBS_INDIVIDUAL)
+ t.slow = true;
igt_subtest_f("%s-%s-%s-%s-multidraw",
feature_str(t.feature),
pipes_str(t.pipes),
plane_str(t.plane),
- fbs_str(t.fbs))
+ fbs_str(t.fbs)) {
+ if (t.slow)
+ igt_slow_combinatorial();
multidraw_subtest(&t);
+ }
TEST_MODE_ITER_END
TEST_MODE_ITER_BEGIN(t)
@@ -3224,8 +3256,11 @@ int main(int argc, char *argv[])
t.method != IGT_DRAW_MMAP_GTT)
continue;
- igt_subtest_f("%s-farfromfence", feature_str(t.feature))
+ igt_subtest_f("%s-farfromfence", feature_str(t.feature)) {
+ if (t.slow)
+ igt_slow_combinatorial();
farfromfence_subtest(&t);
+ }
TEST_MODE_ITER_END
TEST_MODE_ITER_BEGIN(t)
@@ -3243,8 +3278,11 @@ int main(int argc, char *argv[])
igt_subtest_f("%s-%s-draw-%s",
feature_str(t.feature),
format_str(t.format),
- igt_draw_get_method_name(t.method))
+ igt_draw_get_method_name(t.method)) {
+ if (t.slow)
+ igt_slow_combinatorial();
format_draw_subtest(&t);
+ }
}
TEST_MODE_ITER_END
@@ -3256,8 +3294,11 @@ int main(int argc, char *argv[])
continue;
igt_subtest_f("%s-%s-scaledprimary",
feature_str(t.feature),
- fbs_str(t.fbs))
+ fbs_str(t.fbs)) {
+ if (t.slow)
+ igt_slow_combinatorial();
scaledprimary_subtest(&t);
+ }
TEST_MODE_ITER_END
TEST_MODE_ITER_BEGIN(t)
@@ -3268,19 +3309,31 @@ int main(int argc, char *argv[])
t.method != IGT_DRAW_MMAP_CPU)
continue;
- igt_subtest_f("%s-modesetfrombusy", feature_str(t.feature))
+ igt_subtest_f("%s-modesetfrombusy", feature_str(t.feature)) {
+ if (t.slow)
+ igt_slow_combinatorial();
modesetfrombusy_subtest(&t);
+ }
if (t.feature & FEATURE_FBC)
- igt_subtest_f("%s-badstride", feature_str(t.feature))
+ igt_subtest_f("%s-badstride", feature_str(t.feature)) {
+ if (t.slow)
+ igt_slow_combinatorial();
badstride_subtest(&t);
+ }
if (t.feature & FEATURE_PSR)
- igt_subtest_f("%s-slowdraw", feature_str(t.feature))
+ igt_subtest_f("%s-slowdraw", feature_str(t.feature)) {
+ if (t.slow)
+ igt_slow_combinatorial();
slow_draw_subtest(&t);
+ }
- igt_subtest_f("%s-suspend", feature_str(t.feature))
+ igt_subtest_f("%s-suspend", feature_str(t.feature)) {
+ if (t.slow)
+ igt_slow_combinatorial();
suspend_subtest(&t);
+ }
TEST_MODE_ITER_END
igt_fixture
--
2.6.1
* [PATCH i-g-t 3/3] Remove gem_concurrent_all, since it is now superfluous
2015-10-23 11:42 [PATCH i-g-t 0/3] Unify slow/combinatorial test handling David Weinehall
2015-10-23 11:42 ` [PATCH i-g-t 1/3] Rename gem_concurren_all over gem_concurrent_blit David Weinehall
2015-10-23 11:42 ` [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests David Weinehall
@ 2015-10-23 11:42 ` David Weinehall
2015-10-23 11:58 ` [PATCH i-g-t 0/3] Unify slow/combinatorial test handling Chris Wilson
` (2 subsequent siblings)
5 siblings, 0 replies; 41+ messages in thread
From: David Weinehall @ 2015-10-23 11:42 UTC (permalink / raw)
To: intel-gfx
With the addition of unified command-line handling for
slow/combinatorial tests, we no longer need the
gem_concurrent_blit/gem_concurrent_all magic. Delete the latter,
since the former has a more descriptive file name.
---
tests/Makefile.sources | 1 -
tests/gem_concurrent_all.c | 1108 --------------------------------------------
2 files changed, 1109 deletions(-)
delete mode 100644 tests/gem_concurrent_all.c
diff --git a/tests/Makefile.sources b/tests/Makefile.sources
index ac731f90dcb2..321c7f33e4d3 100644
--- a/tests/Makefile.sources
+++ b/tests/Makefile.sources
@@ -14,7 +14,6 @@ TESTS_progs_M = \
gem_caching \
gem_close_race \
gem_concurrent_blit \
- gem_concurrent_all \
gem_cs_tlb \
gem_ctx_param_basic \
gem_ctx_bad_exec \
diff --git a/tests/gem_concurrent_all.c b/tests/gem_concurrent_all.c
deleted file mode 100644
index 1d2d787202df..000000000000
--- a/tests/gem_concurrent_all.c
+++ /dev/null
@@ -1,1108 +0,0 @@
-/*
- * Copyright © 2009,2012,2013 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- * Eric Anholt <eric@anholt.net>
- * Chris Wilson <chris@chris-wilson.co.uk>
- * Daniel Vetter <daniel.vetter@ffwll.ch>
- *
- */
-
-/** @file gem_concurrent.c
- *
- * This is a test of pread/pwrite/mmap behavior when writing to active
- * buffers.
- *
- * Based on gem_gtt_concurrent_blt.
- */
-
-#include "igt.h"
-#include <stdlib.h>
-#include <stdio.h>
-#include <string.h>
-#include <fcntl.h>
-#include <inttypes.h>
-#include <errno.h>
-#include <sys/stat.h>
-#include <sys/time.h>
-#include <sys/wait.h>
-
-#include <drm.h>
-
-#include "intel_bufmgr.h"
-
-IGT_TEST_DESCRIPTION("Test of pread/pwrite/mmap behavior when writing to active"
- " buffers.");
-
-int fd, devid, gen;
-struct intel_batchbuffer *batch;
-int all;
-
-static void
-nop_release_bo(drm_intel_bo *bo)
-{
- drm_intel_bo_unreference(bo);
-}
-
-static void
-prw_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
-{
- int size = width * height, i;
- uint32_t *tmp;
-
- tmp = malloc(4*size);
- if (tmp) {
- for (i = 0; i < size; i++)
- tmp[i] = val;
- drm_intel_bo_subdata(bo, 0, 4*size, tmp);
- free(tmp);
- } else {
- for (i = 0; i < size; i++)
- drm_intel_bo_subdata(bo, 4*i, 4, &val);
- }
-}
-
-static void
-prw_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
-{
- int size = width * height, i;
- uint32_t *vaddr;
-
- do_or_die(drm_intel_bo_map(tmp, true));
- do_or_die(drm_intel_bo_get_subdata(bo, 0, 4*size, tmp->virtual));
- vaddr = tmp->virtual;
- for (i = 0; i < size; i++)
- igt_assert_eq_u32(vaddr[i], val);
- drm_intel_bo_unmap(tmp);
-}
-
-static drm_intel_bo *
-unmapped_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- drm_intel_bo *bo;
-
- bo = drm_intel_bo_alloc(bufmgr, "bo", 4*width*height, 0);
- igt_assert(bo);
-
- return bo;
-}
-
-static drm_intel_bo *
-snoop_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- drm_intel_bo *bo;
-
- igt_skip_on(gem_has_llc(fd));
-
- bo = unmapped_create_bo(bufmgr, width, height);
- gem_set_caching(fd, bo->handle, I915_CACHING_CACHED);
- drm_intel_bo_disable_reuse(bo);
-
- return bo;
-}
-
-static void
-gtt_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
-{
- uint32_t *vaddr = bo->virtual;
- int size = width * height;
-
- drm_intel_gem_bo_start_gtt_access(bo, true);
- while (size--)
- *vaddr++ = val;
-}
-
-static void
-gtt_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
-{
- uint32_t *vaddr = bo->virtual;
- int y;
-
- /* GTT access is slow. So we just compare a few points */
- drm_intel_gem_bo_start_gtt_access(bo, false);
- for (y = 0; y < height; y++)
- igt_assert_eq_u32(vaddr[y*width+y], val);
-}
-
-static drm_intel_bo *
-map_bo(drm_intel_bo *bo)
-{
- /* gtt map doesn't have a write parameter, so just keep the mapping
- * around (to avoid the set_domain with the gtt write domain set) and
- * manually tell the kernel when we start access the gtt. */
- do_or_die(drm_intel_gem_bo_map_gtt(bo));
-
- return bo;
-}
-
-static drm_intel_bo *
-tile_bo(drm_intel_bo *bo, int width)
-{
- uint32_t tiling = I915_TILING_X;
- uint32_t stride = width * 4;
-
- do_or_die(drm_intel_bo_set_tiling(bo, &tiling, stride));
-
- return bo;
-}
-
-static drm_intel_bo *
-gtt_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- return map_bo(unmapped_create_bo(bufmgr, width, height));
-}
-
-static drm_intel_bo *
-gttX_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- return tile_bo(gtt_create_bo(bufmgr, width, height), width);
-}
-
-static drm_intel_bo *
-wc_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- drm_intel_bo *bo;
-
- gem_require_mmap_wc(fd);
-
- bo = unmapped_create_bo(bufmgr, width, height);
- bo->virtual = __gem_mmap__wc(fd, bo->handle, 0, bo->size, PROT_READ | PROT_WRITE);
- return bo;
-}
-
-static void
-wc_release_bo(drm_intel_bo *bo)
-{
- munmap(bo->virtual, bo->size);
- bo->virtual = NULL;
-
- nop_release_bo(bo);
-}
-
-static drm_intel_bo *
-gpu_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- return unmapped_create_bo(bufmgr, width, height);
-}
-
-
-static drm_intel_bo *
-gpuX_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- return tile_bo(gpu_create_bo(bufmgr, width, height), width);
-}
-
-static void
-cpu_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
-{
- int size = width * height;
- uint32_t *vaddr;
-
- do_or_die(drm_intel_bo_map(bo, true));
- vaddr = bo->virtual;
- while (size--)
- *vaddr++ = val;
- drm_intel_bo_unmap(bo);
-}
-
-static void
-cpu_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
-{
- int size = width * height;
- uint32_t *vaddr;
-
- do_or_die(drm_intel_bo_map(bo, false));
- vaddr = bo->virtual;
- while (size--)
- igt_assert_eq_u32(*vaddr++, val);
- drm_intel_bo_unmap(bo);
-}
-
-static void
-gpu_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
-{
- struct drm_i915_gem_relocation_entry reloc[1];
- struct drm_i915_gem_exec_object2 gem_exec[2];
- struct drm_i915_gem_execbuffer2 execbuf;
- struct drm_i915_gem_pwrite gem_pwrite;
- struct drm_i915_gem_create create;
- uint32_t buf[10], *b;
- uint32_t tiling, swizzle;
-
- drm_intel_bo_get_tiling(bo, &tiling, &swizzle);
-
- memset(reloc, 0, sizeof(reloc));
- memset(gem_exec, 0, sizeof(gem_exec));
- memset(&execbuf, 0, sizeof(execbuf));
-
- b = buf;
- *b++ = XY_COLOR_BLT_CMD_NOLEN |
- ((gen >= 8) ? 5 : 4) |
- COLOR_BLT_WRITE_ALPHA | XY_COLOR_BLT_WRITE_RGB;
- if (gen >= 4 && tiling) {
- b[-1] |= XY_COLOR_BLT_TILED;
- *b = width;
- } else
- *b = width << 2;
- *b++ |= 0xf0 << 16 | 1 << 25 | 1 << 24;
- *b++ = 0;
- *b++ = height << 16 | width;
- reloc[0].offset = (b - buf) * sizeof(uint32_t);
- reloc[0].target_handle = bo->handle;
- reloc[0].read_domains = I915_GEM_DOMAIN_RENDER;
- reloc[0].write_domain = I915_GEM_DOMAIN_RENDER;
- *b++ = 0;
- if (gen >= 8)
- *b++ = 0;
- *b++ = val;
- *b++ = MI_BATCH_BUFFER_END;
- if ((b - buf) & 1)
- *b++ = 0;
-
- gem_exec[0].handle = bo->handle;
- gem_exec[0].flags = EXEC_OBJECT_NEEDS_FENCE;
-
- create.handle = 0;
- create.size = 4096;
- drmIoctl(fd, DRM_IOCTL_I915_GEM_CREATE, &create);
- gem_exec[1].handle = create.handle;
- gem_exec[1].relocation_count = 1;
- gem_exec[1].relocs_ptr = (uintptr_t)reloc;
-
- execbuf.buffers_ptr = (uintptr_t)gem_exec;
- execbuf.buffer_count = 2;
- execbuf.batch_len = (b - buf) * sizeof(buf[0]);
- if (gen >= 6)
- execbuf.flags = I915_EXEC_BLT;
-
- gem_pwrite.handle = gem_exec[1].handle;
- gem_pwrite.offset = 0;
- gem_pwrite.size = execbuf.batch_len;
- gem_pwrite.data_ptr = (uintptr_t)buf;
- do_or_die(drmIoctl(fd, DRM_IOCTL_I915_GEM_PWRITE, &gem_pwrite));
- do_or_die(drmIoctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, &execbuf));
-
- drmIoctl(fd, DRM_IOCTL_GEM_CLOSE, &create.handle);
-}
-
-static void
-gpu_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
-{
- intel_blt_copy(batch,
- bo, 0, 0, 4*width,
- tmp, 0, 0, 4*width,
- width, height, 32);
- cpu_cmp_bo(tmp, val, width, height, NULL);
-}
-
-const struct access_mode {
- const char *name;
- void (*set_bo)(drm_intel_bo *bo, uint32_t val, int w, int h);
- void (*cmp_bo)(drm_intel_bo *bo, uint32_t val, int w, int h, drm_intel_bo *tmp);
- drm_intel_bo *(*create_bo)(drm_intel_bufmgr *bufmgr, int width, int height);
- void (*release_bo)(drm_intel_bo *bo);
-} access_modes[] = {
- {
- .name = "prw",
- .set_bo = prw_set_bo,
- .cmp_bo = prw_cmp_bo,
- .create_bo = unmapped_create_bo,
- .release_bo = nop_release_bo,
- },
- {
- .name = "cpu",
- .set_bo = cpu_set_bo,
- .cmp_bo = cpu_cmp_bo,
- .create_bo = unmapped_create_bo,
- .release_bo = nop_release_bo,
- },
- {
- .name = "snoop",
- .set_bo = cpu_set_bo,
- .cmp_bo = cpu_cmp_bo,
- .create_bo = snoop_create_bo,
- .release_bo = nop_release_bo,
- },
- {
- .name = "gtt",
- .set_bo = gtt_set_bo,
- .cmp_bo = gtt_cmp_bo,
- .create_bo = gtt_create_bo,
- .release_bo = nop_release_bo,
- },
- {
- .name = "gttX",
- .set_bo = gtt_set_bo,
- .cmp_bo = gtt_cmp_bo,
- .create_bo = gttX_create_bo,
- .release_bo = nop_release_bo,
- },
- {
- .name = "wc",
- .set_bo = gtt_set_bo,
- .cmp_bo = gtt_cmp_bo,
- .create_bo = wc_create_bo,
- .release_bo = wc_release_bo,
- },
- {
- .name = "gpu",
- .set_bo = gpu_set_bo,
- .cmp_bo = gpu_cmp_bo,
- .create_bo = gpu_create_bo,
- .release_bo = nop_release_bo,
- },
- {
- .name = "gpuX",
- .set_bo = gpu_set_bo,
- .cmp_bo = gpu_cmp_bo,
- .create_bo = gpuX_create_bo,
- .release_bo = nop_release_bo,
- },
-};
-
-#define MAX_NUM_BUFFERS 1024
-int num_buffers = MAX_NUM_BUFFERS;
-const int width = 512, height = 512;
-igt_render_copyfunc_t rendercopy;
-
-struct buffers {
- const struct access_mode *mode;
- drm_intel_bufmgr *bufmgr;
- drm_intel_bo *src[MAX_NUM_BUFFERS], *dst[MAX_NUM_BUFFERS];
- drm_intel_bo *dummy, *spare;
- int count;
-};
-
-static void *buffers_init(struct buffers *data,
- const struct access_mode *mode,
- int _fd)
-{
- data->mode = mode;
- data->count = 0;
-
- data->bufmgr = drm_intel_bufmgr_gem_init(_fd, 4096);
- igt_assert(data->bufmgr);
-
- drm_intel_bufmgr_gem_enable_reuse(data->bufmgr);
- return intel_batchbuffer_alloc(data->bufmgr, devid);
-}
-
-static void buffers_destroy(struct buffers *data)
-{
- if (data->count == 0)
- return;
-
- for (int i = 0; i < data->count; i++) {
- data->mode->release_bo(data->src[i]);
- data->mode->release_bo(data->dst[i]);
- }
- data->mode->release_bo(data->dummy);
- data->mode->release_bo(data->spare);
- data->count = 0;
-}
-
-static void buffers_create(struct buffers *data,
- int count)
-{
- igt_assert(data->bufmgr);
-
- buffers_destroy(data);
-
- for (int i = 0; i < count; i++) {
- data->src[i] =
- data->mode->create_bo(data->bufmgr, width, height);
- data->dst[i] =
- data->mode->create_bo(data->bufmgr, width, height);
- }
- data->dummy = data->mode->create_bo(data->bufmgr, width, height);
- data->spare = data->mode->create_bo(data->bufmgr, width, height);
- data->count = count;
-}
-
-static void buffers_fini(struct buffers *data)
-{
- if (data->bufmgr == NULL)
- return;
-
- buffers_destroy(data);
-
- intel_batchbuffer_free(batch);
- drm_intel_bufmgr_destroy(data->bufmgr);
- data->bufmgr = NULL;
-}
-
-typedef void (*do_copy)(drm_intel_bo *dst, drm_intel_bo *src);
-typedef struct igt_hang_ring (*do_hang)(void);
-
-static void render_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
-{
- struct igt_buf d = {
- .bo = dst,
- .size = width * height * 4,
- .num_tiles = width * height * 4,
- .stride = width * 4,
- }, s = {
- .bo = src,
- .size = width * height * 4,
- .num_tiles = width * height * 4,
- .stride = width * 4,
- };
- uint32_t swizzle;
-
- drm_intel_bo_get_tiling(dst, &d.tiling, &swizzle);
- drm_intel_bo_get_tiling(src, &s.tiling, &swizzle);
-
- rendercopy(batch, NULL,
- &s, 0, 0,
- width, height,
- &d, 0, 0);
-}
-
-static void blt_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
-{
- intel_blt_copy(batch,
- src, 0, 0, 4*width,
- dst, 0, 0, 4*width,
- width, height, 32);
-}
-
-static void cpu_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
-{
- const int size = width * height * sizeof(uint32_t);
- void *d, *s;
-
- gem_set_domain(fd, src->handle, I915_GEM_DOMAIN_CPU, 0);
- gem_set_domain(fd, dst->handle, I915_GEM_DOMAIN_CPU, I915_GEM_DOMAIN_CPU);
- s = gem_mmap__cpu(fd, src->handle, 0, size, PROT_READ);
- d = gem_mmap__cpu(fd, dst->handle, 0, size, PROT_WRITE);
-
- memcpy(d, s, size);
-
- munmap(d, size);
- munmap(s, size);
-}
-
-static void gtt_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
-{
- const int size = width * height * sizeof(uint32_t);
- void *d, *s;
-
- gem_set_domain(fd, src->handle, I915_GEM_DOMAIN_GTT, 0);
- gem_set_domain(fd, dst->handle, I915_GEM_DOMAIN_GTT, I915_GEM_DOMAIN_GTT);
-
- s = gem_mmap__gtt(fd, src->handle, size, PROT_READ);
- d = gem_mmap__gtt(fd, dst->handle, size, PROT_WRITE);
-
- memcpy(d, s, size);
-
- munmap(d, size);
- munmap(s, size);
-}
-
-static void wc_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
-{
- const int size = width * height * sizeof(uint32_t);
- void *d, *s;
-
- gem_set_domain(fd, src->handle, I915_GEM_DOMAIN_GTT, 0);
- gem_set_domain(fd, dst->handle, I915_GEM_DOMAIN_GTT, I915_GEM_DOMAIN_GTT);
-
- s = gem_mmap__wc(fd, src->handle, 0, size, PROT_READ);
- d = gem_mmap__wc(fd, dst->handle, 0, size, PROT_WRITE);
-
- memcpy(d, s, size);
-
- munmap(d, size);
- munmap(s, size);
-}
-
-static struct igt_hang_ring no_hang(void)
-{
- return (struct igt_hang_ring){0, 0};
-}
-
-static struct igt_hang_ring bcs_hang(void)
-{
- return igt_hang_ring(fd, I915_EXEC_BLT);
-}
-
-static struct igt_hang_ring rcs_hang(void)
-{
- return igt_hang_ring(fd, I915_EXEC_RENDER);
-}
-
-static void hang_require(void)
-{
- igt_require_hang_ring(fd, -1);
-}
-
-static void do_overwrite_source(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = 0; i < buffers->count; i++) {
- buffers->mode->set_bo(buffers->src[i], i, width, height);
- buffers->mode->set_bo(buffers->dst[i], ~i, width, height);
- }
- for (i = 0; i < buffers->count; i++)
- do_copy_func(buffers->dst[i], buffers->src[i]);
- hang = do_hang_func();
- for (i = buffers->count; i--; )
- buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
- for (i = 0; i < buffers->count; i++)
- buffers->mode->cmp_bo(buffers->dst[i], i, width, height, buffers->dummy);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_overwrite_source_read(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func,
- int do_rcs)
-{
- const int half = buffers->count/2;
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = 0; i < half; i++) {
- buffers->mode->set_bo(buffers->src[i], i, width, height);
- buffers->mode->set_bo(buffers->dst[i], ~i, width, height);
- buffers->mode->set_bo(buffers->dst[i+half], ~i, width, height);
- }
- for (i = 0; i < half; i++) {
- do_copy_func(buffers->dst[i], buffers->src[i]);
- if (do_rcs)
- render_copy_bo(buffers->dst[i+half], buffers->src[i]);
- else
- blt_copy_bo(buffers->dst[i+half], buffers->src[i]);
- }
- hang = do_hang_func();
- for (i = half; i--; )
- buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
- for (i = 0; i < half; i++) {
- buffers->mode->cmp_bo(buffers->dst[i], i, width, height, buffers->dummy);
- buffers->mode->cmp_bo(buffers->dst[i+half], i, width, height, buffers->dummy);
- }
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_overwrite_source_read_bcs(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- do_overwrite_source_read(buffers, do_copy_func, do_hang_func, 0);
-}
-
-static void do_overwrite_source_read_rcs(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- do_overwrite_source_read(buffers, do_copy_func, do_hang_func, 1);
-}
-
-static void do_overwrite_source__rev(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = 0; i < buffers->count; i++) {
- buffers->mode->set_bo(buffers->src[i], i, width, height);
- buffers->mode->set_bo(buffers->dst[i], ~i, width, height);
- }
- for (i = 0; i < buffers->count; i++)
- do_copy_func(buffers->dst[i], buffers->src[i]);
- hang = do_hang_func();
- for (i = 0; i < buffers->count; i++)
- buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
- for (i = buffers->count; i--; )
- buffers->mode->cmp_bo(buffers->dst[i], i, width, height, buffers->dummy);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_overwrite_source__one(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
-
- gem_quiescent_gpu(fd);
- buffers->mode->set_bo(buffers->src[0], 0, width, height);
- buffers->mode->set_bo(buffers->dst[0], ~0, width, height);
- do_copy_func(buffers->dst[0], buffers->src[0]);
- hang = do_hang_func();
- buffers->mode->set_bo(buffers->src[0], 0xdeadbeef, width, height);
- buffers->mode->cmp_bo(buffers->dst[0], 0, width, height, buffers->dummy);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_intermix(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func,
- int do_rcs)
-{
- const int half = buffers->count/2;
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = 0; i < buffers->count; i++) {
- buffers->mode->set_bo(buffers->src[i], 0xdeadbeef^~i, width, height);
- buffers->mode->set_bo(buffers->dst[i], i, width, height);
- }
- for (i = 0; i < half; i++) {
- if (do_rcs == 1 || (do_rcs == -1 && i & 1))
- render_copy_bo(buffers->dst[i], buffers->src[i]);
- else
- blt_copy_bo(buffers->dst[i], buffers->src[i]);
-
- do_copy_func(buffers->dst[i+half], buffers->src[i]);
-
- if (do_rcs == 1 || (do_rcs == -1 && (i & 1) == 0))
- render_copy_bo(buffers->dst[i], buffers->dst[i+half]);
- else
- blt_copy_bo(buffers->dst[i], buffers->dst[i+half]);
-
- do_copy_func(buffers->dst[i+half], buffers->src[i+half]);
- }
- hang = do_hang_func();
- for (i = 0; i < 2*half; i++)
- buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef^~i, width, height, buffers->dummy);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_intermix_rcs(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- do_intermix(buffers, do_copy_func, do_hang_func, 1);
-}
-
-static void do_intermix_bcs(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- do_intermix(buffers, do_copy_func, do_hang_func, 0);
-}
-
-static void do_intermix_both(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- do_intermix(buffers, do_copy_func, do_hang_func, -1);
-}
-
-static void do_early_read(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = buffers->count; i--; )
- buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
- for (i = 0; i < buffers->count; i++)
- do_copy_func(buffers->dst[i], buffers->src[i]);
- hang = do_hang_func();
- for (i = buffers->count; i--; )
- buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef, width, height, buffers->dummy);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_read_read_bcs(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = buffers->count; i--; )
- buffers->mode->set_bo(buffers->src[i], 0xdeadbeef ^ i, width, height);
- for (i = 0; i < buffers->count; i++) {
- do_copy_func(buffers->dst[i], buffers->src[i]);
- blt_copy_bo(buffers->spare, buffers->src[i]);
- }
- cpu_cmp_bo(buffers->spare, 0xdeadbeef^(buffers->count-1), width, height, NULL);
- hang = do_hang_func();
- for (i = buffers->count; i--; )
- buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef ^ i, width, height, buffers->dummy);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_read_read_rcs(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = buffers->count; i--; )
- buffers->mode->set_bo(buffers->src[i], 0xdeadbeef ^ i, width, height);
- for (i = 0; i < buffers->count; i++) {
- do_copy_func(buffers->dst[i], buffers->src[i]);
- render_copy_bo(buffers->spare, buffers->src[i]);
- }
- cpu_cmp_bo(buffers->spare, 0xdeadbeef^(buffers->count-1), width, height, NULL);
- hang = do_hang_func();
- for (i = buffers->count; i--; )
- buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef ^ i, width, height, buffers->dummy);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_gpu_read_after_write(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = buffers->count; i--; )
- buffers->mode->set_bo(buffers->src[i], 0xabcdabcd, width, height);
- for (i = 0; i < buffers->count; i++)
- do_copy_func(buffers->dst[i], buffers->src[i]);
- for (i = buffers->count; i--; )
- do_copy_func(buffers->dummy, buffers->dst[i]);
- hang = do_hang_func();
- for (i = buffers->count; i--; )
- buffers->mode->cmp_bo(buffers->dst[i], 0xabcdabcd, width, height, buffers->dummy);
- igt_post_hang_ring(fd, hang);
-}
-
-typedef void (*do_test)(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func);
-
-typedef void (*run_wrap)(struct buffers *buffers,
- do_test do_test_func,
- do_copy do_copy_func,
- do_hang do_hang_func);
-
-static void run_single(struct buffers *buffers,
- do_test do_test_func,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- do_test_func(buffers, do_copy_func, do_hang_func);
-}
-
-static void run_interruptible(struct buffers *buffers,
- do_test do_test_func,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- int loop;
-
- for (loop = 0; loop < 10; loop++)
- do_test_func(buffers, do_copy_func, do_hang_func);
-}
-
-static void run_forked(struct buffers *buffers,
- do_test do_test_func,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- const int old_num_buffers = num_buffers;
-
- num_buffers /= 16;
- num_buffers += 2;
-
- igt_fork(child, 16) {
- /* recreate process local variables */
- buffers->count = 0;
- fd = drm_open_driver(DRIVER_INTEL);
-
- batch = buffers_init(buffers, buffers->mode, fd);
-
- buffers_create(buffers, num_buffers);
- for (int loop = 0; loop < 10; loop++)
- do_test_func(buffers, do_copy_func, do_hang_func);
-
- buffers_fini(buffers);
- }
-
- igt_waitchildren();
-
- num_buffers = old_num_buffers;
-}
-
-static void bit17_require(void)
-{
- struct drm_i915_gem_get_tiling2 {
- uint32_t handle;
- uint32_t tiling_mode;
- uint32_t swizzle_mode;
- uint32_t phys_swizzle_mode;
- } arg;
-#define DRM_IOCTL_I915_GEM_GET_TILING2 DRM_IOWR (DRM_COMMAND_BASE + DRM_I915_GEM_GET_TILING, struct drm_i915_gem_get_tiling2)
-
- memset(&arg, 0, sizeof(arg));
- arg.handle = gem_create(fd, 4096);
- gem_set_tiling(fd, arg.handle, I915_TILING_X, 512);
-
- do_or_die(drmIoctl(fd, DRM_IOCTL_I915_GEM_GET_TILING2, &arg));
- gem_close(fd, arg.handle);
- igt_require(arg.phys_swizzle_mode == arg.swizzle_mode);
-}
-
-static void cpu_require(void)
-{
- bit17_require();
-}
-
-static void gtt_require(void)
-{
-}
-
-static void wc_require(void)
-{
- bit17_require();
- gem_require_mmap_wc(fd);
-}
-
-static void bcs_require(void)
-{
-}
-
-static void rcs_require(void)
-{
- igt_require(rendercopy);
-}
-
-static void no_require(void)
-{
-}
-
-static void
-run_basic_modes(const struct access_mode *mode,
- const char *suffix,
- run_wrap run_wrap_func)
-{
- const struct {
- const char *prefix;
- do_copy copy;
- void (*require)(void);
- } pipelines[] = {
- { "cpu", cpu_copy_bo, cpu_require },
- { "gtt", gtt_copy_bo, gtt_require },
- { "wc", wc_copy_bo, wc_require },
- { "blt", blt_copy_bo, bcs_require },
- { "render", render_copy_bo, rcs_require },
- { NULL, NULL }
- }, *pskip = pipelines + 3, *p;
- const struct {
- const char *suffix;
- do_hang hang;
- void (*require)(void);
- } hangs[] = {
- { "", no_hang, no_require },
- { "-hang-blt", bcs_hang, hang_require },
- { "-hang-render", rcs_hang, hang_require },
- { NULL, NULL },
- }, *h;
- struct buffers buffers;
-
- for (h = hangs; h->suffix; h++) {
- if (!all && *h->suffix)
- continue;
-
- for (p = all ? pipelines : pskip; p->prefix; p++) {
- igt_fixture {
- batch = buffers_init(&buffers, mode, fd);
- }
-
- /* try to overwrite the source values */
- igt_subtest_f("%s-%s-overwrite-source-one%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_overwrite_source__one,
- p->copy, h->hang);
- }
-
- igt_subtest_f("%s-%s-overwrite-source%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_overwrite_source,
- p->copy, h->hang);
- }
-
- igt_subtest_f("%s-%s-overwrite-source-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_overwrite_source_read_bcs,
- p->copy, h->hang);
- }
-
- igt_subtest_f("%s-%s-overwrite-source-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- igt_require(rendercopy);
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_overwrite_source_read_rcs,
- p->copy, h->hang);
- }
-
- igt_subtest_f("%s-%s-overwrite-source-rev%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_overwrite_source__rev,
- p->copy, h->hang);
- }
-
- /* try to intermix copies with GPU copies*/
- igt_subtest_f("%s-%s-intermix-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- igt_require(rendercopy);
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_intermix_rcs,
- p->copy, h->hang);
- }
- igt_subtest_f("%s-%s-intermix-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- igt_require(rendercopy);
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_intermix_bcs,
- p->copy, h->hang);
- }
- igt_subtest_f("%s-%s-intermix-both%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- igt_require(rendercopy);
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_intermix_both,
- p->copy, h->hang);
- }
-
- /* try to read the results before the copy completes */
- igt_subtest_f("%s-%s-early-read%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_early_read,
- p->copy, h->hang);
- }
-
- /* concurrent reads */
- igt_subtest_f("%s-%s-read-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_read_read_bcs,
- p->copy, h->hang);
- }
- igt_subtest_f("%s-%s-read-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- igt_require(rendercopy);
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_read_read_rcs,
- p->copy, h->hang);
- }
-
- /* and finally try to trick the kernel into loosing the pending write */
- igt_subtest_f("%s-%s-gpu-read-after-write%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_gpu_read_after_write,
- p->copy, h->hang);
- }
-
- igt_fixture {
- buffers_fini(&buffers);
- }
- }
- }
-}
-
-static void
-run_modes(const struct access_mode *mode)
-{
- if (all) {
- run_basic_modes(mode, "", run_single);
-
- igt_fork_signal_helper();
- run_basic_modes(mode, "-interruptible", run_interruptible);
- igt_stop_signal_helper();
- }
-
- igt_fork_signal_helper();
- run_basic_modes(mode, "-forked", run_forked);
- igt_stop_signal_helper();
-}
-
-igt_main
-{
- int max, i;
-
- igt_skip_on_simulation();
-
- if (strstr(igt_test_name(), "all"))
- all = true;
-
- igt_fixture {
- fd = drm_open_driver(DRIVER_INTEL);
- devid = intel_get_drm_devid(fd);
- gen = intel_gen(devid);
- rendercopy = igt_get_render_copyfunc(devid);
-
- max = gem_aperture_size (fd) / (1024 * 1024) / 2;
- if (num_buffers > max)
- num_buffers = max;
-
- max = intel_get_total_ram_mb() * 3 / 4;
- if (num_buffers > max)
- num_buffers = max;
- num_buffers /= 2;
- igt_info("using 2x%d buffers, each 1MiB\n", num_buffers);
- }
-
- for (i = 0; i < ARRAY_SIZE(access_modes); i++)
- run_modes(&access_modes[i]);
-}
--
2.6.1
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply related [flat|nested] 41+ messages in thread
* Re: [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests
2015-10-23 11:42 ` [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests David Weinehall
@ 2015-10-23 11:56 ` Chris Wilson
2015-10-23 13:50 ` Paulo Zanoni
2015-10-23 14:55 ` Thomas Wood
2 siblings, 0 replies; 41+ messages in thread
From: Chris Wilson @ 2015-10-23 11:56 UTC (permalink / raw)
To: David Weinehall; +Cc: intel-gfx
On Fri, Oct 23, 2015 at 02:42:35PM +0300, David Weinehall wrote:
> Some tests should not be run by default, due to their slow,
> and sometimes superfluous, nature.
>
> We still want to be able to run these tests though in some cases.
> Until now there's been no unified way of handling this. Remedy
> this by introducing the --with-slow-combinatorial option to
> igt_core, and use it in gem_concurrent_blit & kms_frontbuffer_tracking.
> ---
> diff --git a/tests/gem_concurrent_blit.c b/tests/gem_concurrent_blit.c
> index 1d2d787202df..311b6829e984 100644
> --- a/tests/gem_concurrent_blit.c
> +++ b/tests/gem_concurrent_blit.c
> @@ -931,9 +931,6 @@ run_basic_modes(const struct access_mode *mode,
> struct buffers buffers;
>
> for (h = hangs; h->suffix; h++) {
> - if (!all && *h->suffix)
> - continue;
> -
> for (p = all ? pipelines : pskip; p->prefix; p++) {
> igt_fixture {
> batch = buffers_init(&buffers, mode, fd);
You didn't update this to skip the first few CPU vs CPU loops. Perhaps
it would help if you just killed the "all" variable.
-Chris
--
Chris Wilson, Intel Open Source Technology Centre
* Re: [PATCH i-g-t 0/3] Unify slow/combinatorial test handling
2015-10-23 11:42 [PATCH i-g-t 0/3] Unify slow/combinatorial test handling David Weinehall
` (2 preceding siblings ...)
2015-10-23 11:42 ` [PATCH i-g-t 3/3] Remove gem_concurrent_all, since it is now superfluous David Weinehall
@ 2015-10-23 11:58 ` Chris Wilson
2015-10-23 12:47 ` Daniel Vetter
2015-10-28 11:29 ` [PATCH i-g-t 0/3 v2] " David Weinehall
2015-10-30 13:18 ` [PATCH i-g-t 0/3 v3] Unify slow/combinatorial test handling David Weinehall
5 siblings, 1 reply; 41+ messages in thread
From: Chris Wilson @ 2015-10-23 11:58 UTC (permalink / raw)
To: David Weinehall; +Cc: intel-gfx
On Fri, Oct 23, 2015 at 02:42:33PM +0300, David Weinehall wrote:
> Until now we've had no unified way to handle slow/combinatorial tests.
> Most of the time we don't want to run slow/combinatorial tests, so this
> should remain the default, but when we do want to run such tests,
> it has been handled differently in different tests.
>
> This patch adds a --with-slow-combinatorial command line option to
> igt_core, changes gem_concurrent_blit and kms_frontbuffer_tracking
> to use this instead of their own methods, and removes gem_concurrent_all
> in the process, since it's now unnecessary.
I'm not going to remember the --with-slow-combinatorial option. How
about just --all, or --slow?
-Chris
--
Chris Wilson, Intel Open Source Technology Centre
* Re: [PATCH i-g-t 0/3] Unify slow/combinatorial test handling
2015-10-23 11:58 ` [PATCH i-g-t 0/3] Unify slow/combinatorial test handling Chris Wilson
@ 2015-10-23 12:47 ` Daniel Vetter
2015-10-26 13:55 ` David Weinehall
0 siblings, 1 reply; 41+ messages in thread
From: Daniel Vetter @ 2015-10-23 12:47 UTC (permalink / raw)
To: Chris Wilson, David Weinehall, intel-gfx
On Fri, Oct 23, 2015 at 12:58:45PM +0100, Chris Wilson wrote:
> On Fri, Oct 23, 2015 at 02:42:33PM +0300, David Weinehall wrote:
> > Until now we've had no unified way to handle slow/combinatorial tests.
> > Most of the time we don't want to run slow/combinatorial tests, so this
> > should remain the default, but when we do want to run such tests,
> > it has been handled differently in different tests.
> >
> > This patch adds a --with-slow-combinatorial command line option to
> > igt_core, changes gem_concurrent_blit and kms_frontbuffer_tracking
> > to use this instead of their own methods, and removes gem_concurrent_all
> > in the process, since it's now unnecessary.
>
> I'm not going to remember the --with-slow-combinatorial option. How
> about just --all, or --slow?
Yeah, --all as a shorthand sounds good to me.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
* Re: [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests
2015-10-23 11:42 ` [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests David Weinehall
2015-10-23 11:56 ` Chris Wilson
@ 2015-10-23 13:50 ` Paulo Zanoni
2015-10-26 14:59 ` David Weinehall
2015-10-23 14:55 ` Thomas Wood
2 siblings, 1 reply; 41+ messages in thread
From: Paulo Zanoni @ 2015-10-23 13:50 UTC (permalink / raw)
To: David Weinehall; +Cc: Intel Graphics Development
2015-10-23 9:42 GMT-02:00 David Weinehall <david.weinehall@linux.intel.com>:
> Some tests should not be run by default, due to their slow,
> and sometimes superfluous, nature.
>
> We still want to be able to run these tests though in some cases.
> Until now there's been no unified way of handling this. Remedy
> this by introducing the --with-slow-combinatorial option to
> igt_core, and use it in gem_concurrent_blit & kms_frontbuffer_tracking.
> ---
> lib/igt_core.c | 19 ++++++
> lib/igt_core.h | 1 +
> tests/gem_concurrent_blit.c | 40 ++++++++----
> tests/kms_frontbuffer_tracking.c | 135 +++++++++++++++++++++++++++------------
> 4 files changed, 142 insertions(+), 53 deletions(-)
>
> diff --git a/lib/igt_core.c b/lib/igt_core.c
> index 59127cafe606..ba40ce0e0ead 100644
> --- a/lib/igt_core.c
> +++ b/lib/igt_core.c
> @@ -216,6 +216,7 @@ const char *igt_interactive_debug;
>
> /* subtests helpers */
> static bool list_subtests = false;
> +static bool with_slow_combinatorial = false;
> static char *run_single_subtest = NULL;
> static bool run_single_subtest_found = false;
> static const char *in_subtest = NULL;
> @@ -235,6 +236,7 @@ bool test_child;
>
> enum {
> OPT_LIST_SUBTESTS,
> + OPT_WITH_SLOW_COMBINATORIAL,
> OPT_RUN_SUBTEST,
> OPT_DESCRIPTION,
> OPT_DEBUG,
> @@ -478,6 +480,7 @@ static void print_usage(const char *help_str, bool output_on_stderr)
>
> fprintf(f, "Usage: %s [OPTIONS]\n", command_str);
> fprintf(f, " --list-subtests\n"
> + " --with-slow-combinatorial\n"
> " --run-subtest <pattern>\n"
> " --debug[=log-domain]\n"
> " --interactive-debug[=domain]\n"
> @@ -510,6 +513,7 @@ static int common_init(int *argc, char **argv,
> int c, option_index = 0, i, x;
> static struct option long_options[] = {
> {"list-subtests", 0, 0, OPT_LIST_SUBTESTS},
> + {"with-slow-combinatorial", 0, 0, OPT_WITH_SLOW_COMBINATORIAL},
> {"run-subtest", 1, 0, OPT_RUN_SUBTEST},
> {"help-description", 0, 0, OPT_DESCRIPTION},
> {"debug", optional_argument, 0, OPT_DEBUG},
> @@ -617,6 +621,10 @@ static int common_init(int *argc, char **argv,
> if (!run_single_subtest)
> list_subtests = true;
> break;
> + case OPT_WITH_SLOW_COMBINATORIAL:
> + if (!run_single_subtest)
> + with_slow_combinatorial = true;
> + break;
> case OPT_RUN_SUBTEST:
> if (!list_subtests)
> run_single_subtest = strdup(optarg);
> @@ -1629,6 +1637,17 @@ void igt_skip_on_simulation(void)
> igt_require(!igt_run_in_simulation());
> }
>
> +/**
> + * igt_slow_combinatorial:
> + *
> + * This is used to define subtests that should only be listed/run
> + * when the "--with-slow-combinatorial" has been specified
> + */
> +void igt_slow_combinatorial(void)
> +{
> + igt_skip_on(!with_slow_combinatorial);
> +}
> +
> /* structured logging */
>
> /**
> diff --git a/lib/igt_core.h b/lib/igt_core.h
> index 5ae09653fd55..6ddf25563275 100644
> --- a/lib/igt_core.h
> +++ b/lib/igt_core.h
> @@ -680,6 +680,7 @@ bool igt_run_in_simulation(void);
> #define SLOW_QUICK(slow,quick) (igt_run_in_simulation() ? (quick) : (slow))
>
> void igt_skip_on_simulation(void);
> +void igt_slow_combinatorial(void);
>
> extern const char *igt_interactive_debug;
>
> diff --git a/tests/gem_concurrent_blit.c b/tests/gem_concurrent_blit.c
> index 1d2d787202df..311b6829e984 100644
> --- a/tests/gem_concurrent_blit.c
> +++ b/tests/gem_concurrent_blit.c
> @@ -931,9 +931,6 @@ run_basic_modes(const struct access_mode *mode,
> struct buffers buffers;
>
> for (h = hangs; h->suffix; h++) {
> - if (!all && *h->suffix)
> - continue;
> -
> for (p = all ? pipelines : pskip; p->prefix; p++) {
> igt_fixture {
> batch = buffers_init(&buffers, mode, fd);
> @@ -941,6 +938,8 @@ run_basic_modes(const struct access_mode *mode,
>
> /* try to overwrite the source values */
> igt_subtest_f("%s-%s-overwrite-source-one%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + if (*h->suffix)
> + igt_slow_combinatorial();
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -950,6 +949,8 @@ run_basic_modes(const struct access_mode *mode,
> }
>
> igt_subtest_f("%s-%s-overwrite-source%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + if (*h->suffix)
> + igt_slow_combinatorial();
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -959,6 +960,8 @@ run_basic_modes(const struct access_mode *mode,
> }
>
> igt_subtest_f("%s-%s-overwrite-source-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + if (*h->suffix)
> + igt_slow_combinatorial();
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -968,6 +971,8 @@ run_basic_modes(const struct access_mode *mode,
> }
>
> igt_subtest_f("%s-%s-overwrite-source-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + if (*h->suffix)
> + igt_slow_combinatorial();
> h->require();
> p->require();
> igt_require(rendercopy);
> @@ -978,6 +983,8 @@ run_basic_modes(const struct access_mode *mode,
> }
>
> igt_subtest_f("%s-%s-overwrite-source-rev%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + if (*h->suffix)
> + igt_slow_combinatorial();
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -988,6 +995,8 @@ run_basic_modes(const struct access_mode *mode,
>
> /* try to intermix copies with GPU copies*/
> igt_subtest_f("%s-%s-intermix-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + if (*h->suffix)
> + igt_slow_combinatorial();
> h->require();
> p->require();
> igt_require(rendercopy);
> @@ -997,6 +1006,8 @@ run_basic_modes(const struct access_mode *mode,
> p->copy, h->hang);
> }
> igt_subtest_f("%s-%s-intermix-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + if (*h->suffix)
> + igt_slow_combinatorial();
> h->require();
> p->require();
> igt_require(rendercopy);
> @@ -1006,6 +1017,8 @@ run_basic_modes(const struct access_mode *mode,
> p->copy, h->hang);
> }
> igt_subtest_f("%s-%s-intermix-both%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + if (*h->suffix)
> + igt_slow_combinatorial();
> h->require();
> p->require();
> igt_require(rendercopy);
> @@ -1017,6 +1030,8 @@ run_basic_modes(const struct access_mode *mode,
>
> /* try to read the results before the copy completes */
> igt_subtest_f("%s-%s-early-read%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + if (*h->suffix)
> + igt_slow_combinatorial();
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -1027,6 +1042,8 @@ run_basic_modes(const struct access_mode *mode,
>
> /* concurrent reads */
> igt_subtest_f("%s-%s-read-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + if (*h->suffix)
> + igt_slow_combinatorial();
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -1035,6 +1052,8 @@ run_basic_modes(const struct access_mode *mode,
> p->copy, h->hang);
> }
> igt_subtest_f("%s-%s-read-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + if (*h->suffix)
> + igt_slow_combinatorial();
> h->require();
> p->require();
> igt_require(rendercopy);
> @@ -1046,6 +1065,8 @@ run_basic_modes(const struct access_mode *mode,
>
> /* and finally try to trick the kernel into loosing the pending write */
> igt_subtest_f("%s-%s-gpu-read-after-write%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + if (*h->suffix)
> + igt_slow_combinatorial();
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -1064,13 +1085,11 @@ run_basic_modes(const struct access_mode *mode,
> static void
> run_modes(const struct access_mode *mode)
> {
> - if (all) {
> - run_basic_modes(mode, "", run_single);
> + run_basic_modes(mode, "", run_single);
>
> - igt_fork_signal_helper();
> - run_basic_modes(mode, "-interruptible", run_interruptible);
> - igt_stop_signal_helper();
> - }
> + igt_fork_signal_helper();
> + run_basic_modes(mode, "-interruptible", run_interruptible);
> + igt_stop_signal_helper();
>
> igt_fork_signal_helper();
> run_basic_modes(mode, "-forked", run_forked);
> @@ -1083,9 +1102,6 @@ igt_main
>
> igt_skip_on_simulation();
>
> - if (strstr(igt_test_name(), "all"))
> - all = true;
> -
> igt_fixture {
> fd = drm_open_driver(DRIVER_INTEL);
> devid = intel_get_drm_devid(fd);
> diff --git a/tests/kms_frontbuffer_tracking.c b/tests/kms_frontbuffer_tracking.c
> index d97e148c5073..6f84ef0813d9 100644
> --- a/tests/kms_frontbuffer_tracking.c
> +++ b/tests/kms_frontbuffer_tracking.c
> @@ -47,8 +47,8 @@ IGT_TEST_DESCRIPTION("Test the Kernel's frontbuffer tracking mechanism and "
> * combinations that are somewhat redundant and don't add much value to the
> * test. For example, since we already do the offscreen testing with a single
> * pipe enabled, there's no much value in doing it again with dual pipes. If you
> - * still want to try these redundant tests, you need to use the --show-hidden
> - * option.
> + * still want to try these redundant tests, you need to use the
> + * "--with-slow-combinatorial" option.
> *
> * The most important hidden thing is the FEATURE_NONE set of tests. Whenever
> * you get a failure on any test, it is important to check whether the same test
> @@ -116,6 +116,10 @@ struct test_mode {
> } format;
>
> enum igt_draw_method method;
> +
> + /* The test is slow and/or combinatorial;
> + * skip unless otherwise specified */
> + bool slow;
> };
>
> enum flip_type {
> @@ -237,7 +241,6 @@ struct {
> bool fbc_check_last_action;
> bool no_edp;
> bool small_modes;
> - bool show_hidden;
It's not clear to me, please clarify: will the tests that were
previously completely hidden now be listed in --list-subtests and
shown as skipped during normal runs?
For kms_frontbuffer_tracking, hidden tests are supposed to be just for
developers who know what they are doing. I hide them behind a special
command-line switch that's not used by QA because I don't want QA
wasting time running those tests. One third of the
kms_frontbuffer_tracking hidden tests only serve to check whether
there's a bug in kms_frontbuffer_tracking itself. Some other hidden
tests are there just to help with debugging when a non-hidden test
fails, and the rest are 100% useless and superfluous. QA should only
run the non-hidden tests; if a non-hidden test fails, the developers
can then use the hidden tests to help with debugging.
Besides, the "if (t.slow)" could have been moved to
check_test_requirements(), making the code much simpler :)
> int step;
> int only_feature;
> int only_pipes;
> @@ -250,7 +253,6 @@ struct {
> .fbc_check_last_action = true,
> .no_edp = false,
> .small_modes = false,
> - .show_hidden= false,
> .step = 0,
> .only_feature = FEATURE_COUNT,
> .only_pipes = PIPE_COUNT,
> @@ -2892,9 +2894,6 @@ static int opt_handler(int option, int option_index, void *data)
> case 'm':
> opt.small_modes = true;
> break;
> - case 'i':
> - opt.show_hidden = true;
> - break;
> case 't':
> opt.step++;
> break;
> @@ -2942,7 +2941,6 @@ const char *help_str =
> " --no-fbc-action-check Don't check for the FBC last action\n"
> " --no-edp Don't use eDP monitors\n"
> " --use-small-modes Use smaller resolutions for the modes\n"
> -" --show-hidden Show hidden subtests\n"
> " --step Stop on each step so you can check the screen\n"
> " --nop-only Only run the \"nop\" feature subtests\n"
> " --fbc-only Only run the \"fbc\" feature subtests\n"
> @@ -3036,6 +3034,7 @@ static const char *format_str(enum pixel_format format)
>
> #define TEST_MODE_ITER_BEGIN(t) \
> t.format = FORMAT_DEFAULT; \
> + t.slow = false; \
> for (t.feature = 0; t.feature < FEATURE_COUNT; t.feature++) { \
> for (t.pipes = 0; t.pipes < PIPE_COUNT; t.pipes++) { \
> for (t.screen = 0; t.screen < SCREEN_COUNT; t.screen++) { \
> @@ -3046,15 +3045,15 @@ static const char *format_str(enum pixel_format format)
> continue; \
> if (t.screen == SCREEN_OFFSCREEN && t.plane != PLANE_PRI) \
> continue; \
> - if (!opt.show_hidden && t.pipes == PIPE_DUAL && \
> + if (t.pipes == PIPE_DUAL && \
> t.screen == SCREEN_OFFSCREEN) \
> - continue; \
> - if ((!opt.show_hidden && opt.only_feature != FEATURE_NONE) \
> - && t.feature == FEATURE_NONE) \
> - continue; \
> - if (!opt.show_hidden && t.fbs == FBS_SHARED && \
> + t.slow = true; \
> + if (opt.only_feature != FEATURE_NONE && \
> + t.feature == FEATURE_NONE) \
> + t.slow = true; \
> + if (t.fbs == FBS_SHARED && \
> (t.plane == PLANE_CUR || t.plane == PLANE_SPR)) \
> - continue;
> + t.slow = true;
>
>
> #define TEST_MODE_ITER_END } } } } } }
> @@ -3069,7 +3068,6 @@ int main(int argc, char *argv[])
> { "no-fbc-action-check", 0, 0, 'a'},
> { "no-edp", 0, 0, 'e'},
> { "use-small-modes", 0, 0, 'm'},
> - { "show-hidden", 0, 0, 'i'},
> { "step", 0, 0, 't'},
> { "nop-only", 0, 0, 'n'},
> { "fbc-only", 0, 0, 'f'},
> @@ -3088,9 +3086,11 @@ int main(int argc, char *argv[])
> setup_environment();
>
> for (t.feature = 0; t.feature < FEATURE_COUNT; t.feature++) {
> - if ((!opt.show_hidden && opt.only_feature != FEATURE_NONE)
> - && t.feature == FEATURE_NONE)
> - continue;
> + bool slow = false;
> +
> + if (opt.only_feature != FEATURE_NONE &&
> + t.feature == FEATURE_NONE)
> + slow = true;
> for (t.pipes = 0; t.pipes < PIPE_COUNT; t.pipes++) {
> t.screen = SCREEN_PRIM;
> t.plane = PLANE_PRI;
> @@ -3101,8 +3101,11 @@ int main(int argc, char *argv[])
>
> igt_subtest_f("%s-%s-rte",
> feature_str(t.feature),
> - pipes_str(t.pipes))
> + pipes_str(t.pipes)) {
> + if (slow)
> + igt_slow_combinatorial();
> rte_subtest(&t);
> + }
> }
> }
>
> @@ -3113,39 +3116,52 @@ int main(int argc, char *argv[])
> screen_str(t.screen),
> plane_str(t.plane),
> fbs_str(t.fbs),
> - igt_draw_get_method_name(t.method))
> + igt_draw_get_method_name(t.method)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> draw_subtest(&t);
> + }
> TEST_MODE_ITER_END
>
> TEST_MODE_ITER_BEGIN(t)
> if (t.plane != PLANE_PRI ||
> - t.screen == SCREEN_OFFSCREEN ||
> - (!opt.show_hidden && t.method != IGT_DRAW_BLT))
> + t.screen == SCREEN_OFFSCREEN)
> continue;
> + if (t.method != IGT_DRAW_BLT)
> + t.slow = true;
>
> igt_subtest_f("%s-%s-%s-%s-flip-%s",
> feature_str(t.feature),
> pipes_str(t.pipes),
> screen_str(t.screen),
> fbs_str(t.fbs),
> - igt_draw_get_method_name(t.method))
> + igt_draw_get_method_name(t.method)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> flip_subtest(&t, FLIP_PAGEFLIP);
> + }
>
> igt_subtest_f("%s-%s-%s-%s-evflip-%s",
> feature_str(t.feature),
> pipes_str(t.pipes),
> screen_str(t.screen),
> fbs_str(t.fbs),
> - igt_draw_get_method_name(t.method))
> + igt_draw_get_method_name(t.method)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> flip_subtest(&t, FLIP_PAGEFLIP_EVENT);
> + }
>
> igt_subtest_f("%s-%s-%s-%s-msflip-%s",
> feature_str(t.feature),
> pipes_str(t.pipes),
> screen_str(t.screen),
> fbs_str(t.fbs),
> - igt_draw_get_method_name(t.method))
> + igt_draw_get_method_name(t.method)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> flip_subtest(&t, FLIP_MODESET);
> + }
>
> TEST_MODE_ITER_END
>
> @@ -3159,8 +3175,11 @@ int main(int argc, char *argv[])
> igt_subtest_f("%s-%s-%s-fliptrack",
> feature_str(t.feature),
> pipes_str(t.pipes),
> - fbs_str(t.fbs))
> + fbs_str(t.fbs)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> fliptrack_subtest(&t, FLIP_PAGEFLIP);
> + }
> TEST_MODE_ITER_END
>
> TEST_MODE_ITER_BEGIN(t)
> @@ -3174,16 +3193,22 @@ int main(int argc, char *argv[])
> pipes_str(t.pipes),
> screen_str(t.screen),
> plane_str(t.plane),
> - fbs_str(t.fbs))
> + fbs_str(t.fbs)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> move_subtest(&t);
> + }
>
> igt_subtest_f("%s-%s-%s-%s-%s-onoff",
> feature_str(t.feature),
> pipes_str(t.pipes),
> screen_str(t.screen),
> plane_str(t.plane),
> - fbs_str(t.fbs))
> + fbs_str(t.fbs)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> onoff_subtest(&t);
> + }
> TEST_MODE_ITER_END
>
> TEST_MODE_ITER_BEGIN(t)
> @@ -3197,23 +3222,30 @@ int main(int argc, char *argv[])
> pipes_str(t.pipes),
> screen_str(t.screen),
> plane_str(t.plane),
> - fbs_str(t.fbs))
> + fbs_str(t.fbs)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> fullscreen_plane_subtest(&t);
> + }
> TEST_MODE_ITER_END
>
> TEST_MODE_ITER_BEGIN(t)
> if (t.screen != SCREEN_PRIM ||
> - t.method != IGT_DRAW_BLT ||
> - (!opt.show_hidden && t.plane != PLANE_PRI) ||
> - (!opt.show_hidden && t.fbs != FBS_INDIVIDUAL))
> + t.method != IGT_DRAW_BLT)
> continue;
> + if (t.plane != PLANE_PRI ||
> + t.fbs != FBS_INDIVIDUAL)
> + t.slow = true;
>
> igt_subtest_f("%s-%s-%s-%s-multidraw",
> feature_str(t.feature),
> pipes_str(t.pipes),
> plane_str(t.plane),
> - fbs_str(t.fbs))
> + fbs_str(t.fbs)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> multidraw_subtest(&t);
> + }
> TEST_MODE_ITER_END
>
> TEST_MODE_ITER_BEGIN(t)
> @@ -3224,8 +3256,11 @@ int main(int argc, char *argv[])
> t.method != IGT_DRAW_MMAP_GTT)
> continue;
>
> - igt_subtest_f("%s-farfromfence", feature_str(t.feature))
> + igt_subtest_f("%s-farfromfence", feature_str(t.feature)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> farfromfence_subtest(&t);
> + }
> TEST_MODE_ITER_END
>
> TEST_MODE_ITER_BEGIN(t)
> @@ -3243,8 +3278,11 @@ int main(int argc, char *argv[])
> igt_subtest_f("%s-%s-draw-%s",
> feature_str(t.feature),
> format_str(t.format),
> - igt_draw_get_method_name(t.method))
> + igt_draw_get_method_name(t.method)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> format_draw_subtest(&t);
> + }
> }
> TEST_MODE_ITER_END
>
> @@ -3256,8 +3294,11 @@ int main(int argc, char *argv[])
> continue;
> igt_subtest_f("%s-%s-scaledprimary",
> feature_str(t.feature),
> - fbs_str(t.fbs))
> + fbs_str(t.fbs)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> scaledprimary_subtest(&t);
> + }
> TEST_MODE_ITER_END
>
> TEST_MODE_ITER_BEGIN(t)
> @@ -3268,19 +3309,31 @@ int main(int argc, char *argv[])
> t.method != IGT_DRAW_MMAP_CPU)
> continue;
>
> - igt_subtest_f("%s-modesetfrombusy", feature_str(t.feature))
> + igt_subtest_f("%s-modesetfrombusy", feature_str(t.feature)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> modesetfrombusy_subtest(&t);
> + }
>
> if (t.feature & FEATURE_FBC)
> - igt_subtest_f("%s-badstride", feature_str(t.feature))
> + igt_subtest_f("%s-badstride", feature_str(t.feature)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> badstride_subtest(&t);
> + }
>
> if (t.feature & FEATURE_PSR)
> - igt_subtest_f("%s-slowdraw", feature_str(t.feature))
> + igt_subtest_f("%s-slowdraw", feature_str(t.feature)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> slow_draw_subtest(&t);
> + }
>
> - igt_subtest_f("%s-suspend", feature_str(t.feature))
> + igt_subtest_f("%s-suspend", feature_str(t.feature)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> suspend_subtest(&t);
> + }
> TEST_MODE_ITER_END
>
> igt_fixture
> --
> 2.6.1
>
--
Paulo Zanoni
* Re: [PATCH i-g-t 1/3] Rename gem_concurren_all over gem_concurrent_blit
2015-10-23 11:42 ` [PATCH i-g-t 1/3] Rename gem_concurren_all over gem_concurrent_blit David Weinehall
@ 2015-10-23 14:32 ` Thomas Wood
2015-10-26 15:03 ` David Weinehall
0 siblings, 1 reply; 41+ messages in thread
From: Thomas Wood @ 2015-10-23 14:32 UTC (permalink / raw)
To: David Weinehall; +Cc: Intel Graphics Development
gem_concurrent_all is misspelled in the subject.
On 23 October 2015 at 12:42, David Weinehall
<david.weinehall@linux.intel.com> wrote:
> We'll both rename gem_concurrent_all over gem_concurrent_blit
> and change gem_concurrent_blit in this changeset. To make
> this easier to follow, we first do the rename.
Please add a Signed-off-by line to your patches, as intel-gpu-tools
requires contributions to follow the Developer's Certificate of Origin
(http://developercertificate.org/).
> ---
> tests/gem_concurrent_blit.c | 1116 ++++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 1108 insertions(+), 8 deletions(-)
This appears only to be adding gem_concurrent_blit, not renaming
gem_concurrent_all. Also, the relevant changes to .gitignore are
missing from this patch and the third patch in this series.
>
> diff --git a/tests/gem_concurrent_blit.c b/tests/gem_concurrent_blit.c
> index 513de4a1b719..1d2d787202df 100644
> --- a/tests/gem_concurrent_blit.c
> +++ b/tests/gem_concurrent_blit.c
> @@ -1,8 +1,1108 @@
> -/* This test is just a duplicate of gem_concurrent_all. */
> -/* However the executeable will be gem_concurrent_blit. */
> -/* The main function examines argv[0] and, in the case */
> -/* of gem_concurent_blit runs only a subset of the */
> -/* available subtests. This avoids the use of */
> -/* non-standard command line parameters which can cause */
> -/* problems for automated testing */
> -#include "gem_concurrent_all.c"
> +/*
> + * Copyright © 2009,2012,2013 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + * Authors:
> + * Eric Anholt <eric@anholt.net>
> + * Chris Wilson <chris@chris-wilson.co.uk>
> + * Daniel Vetter <daniel.vetter@ffwll.ch>
> + *
> + */
> +
> +/** @file gem_concurrent.c
> + *
> + * This is a test of pread/pwrite/mmap behavior when writing to active
> + * buffers.
> + *
> + * Based on gem_gtt_concurrent_blt.
> + */
> +
> +#include "igt.h"
> +#include <stdlib.h>
> +#include <stdio.h>
> +#include <string.h>
> +#include <fcntl.h>
> +#include <inttypes.h>
> +#include <errno.h>
> +#include <sys/stat.h>
> +#include <sys/time.h>
> +#include <sys/wait.h>
> +
> +#include <drm.h>
> +
> +#include "intel_bufmgr.h"
> +
> +IGT_TEST_DESCRIPTION("Test of pread/pwrite/mmap behavior when writing to active"
> + " buffers.");
> +
> +int fd, devid, gen;
> +struct intel_batchbuffer *batch;
> +int all;
> +
> +static void
> +nop_release_bo(drm_intel_bo *bo)
> +{
> + drm_intel_bo_unreference(bo);
> +}
> +
> +static void
> +prw_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
> +{
> + int size = width * height, i;
> + uint32_t *tmp;
> +
> + tmp = malloc(4*size);
> + if (tmp) {
> + for (i = 0; i < size; i++)
> + tmp[i] = val;
> + drm_intel_bo_subdata(bo, 0, 4*size, tmp);
> + free(tmp);
> + } else {
> + for (i = 0; i < size; i++)
> + drm_intel_bo_subdata(bo, 4*i, 4, &val);
> + }
> +}
> +
> +static void
> +prw_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
> +{
> + int size = width * height, i;
> + uint32_t *vaddr;
> +
> + do_or_die(drm_intel_bo_map(tmp, true));
> + do_or_die(drm_intel_bo_get_subdata(bo, 0, 4*size, tmp->virtual));
> + vaddr = tmp->virtual;
> + for (i = 0; i < size; i++)
> + igt_assert_eq_u32(vaddr[i], val);
> + drm_intel_bo_unmap(tmp);
> +}
> +
> +static drm_intel_bo *
> +unmapped_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
> +{
> + drm_intel_bo *bo;
> +
> + bo = drm_intel_bo_alloc(bufmgr, "bo", 4*width*height, 0);
> + igt_assert(bo);
> +
> + return bo;
> +}
> +
> +static drm_intel_bo *
> +snoop_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
> +{
> + drm_intel_bo *bo;
> +
> + igt_skip_on(gem_has_llc(fd));
> +
> + bo = unmapped_create_bo(bufmgr, width, height);
> + gem_set_caching(fd, bo->handle, I915_CACHING_CACHED);
> + drm_intel_bo_disable_reuse(bo);
> +
> + return bo;
> +}
> +
> +static void
> +gtt_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
> +{
> + uint32_t *vaddr = bo->virtual;
> + int size = width * height;
> +
> + drm_intel_gem_bo_start_gtt_access(bo, true);
> + while (size--)
> + *vaddr++ = val;
> +}
> +
> +static void
> +gtt_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
> +{
> + uint32_t *vaddr = bo->virtual;
> + int y;
> +
> + /* GTT access is slow. So we just compare a few points */
> + drm_intel_gem_bo_start_gtt_access(bo, false);
> + for (y = 0; y < height; y++)
> + igt_assert_eq_u32(vaddr[y*width+y], val);
> +}
> +
> +static drm_intel_bo *
> +map_bo(drm_intel_bo *bo)
> +{
> + /* gtt map doesn't have a write parameter, so just keep the mapping
> + * around (to avoid the set_domain with the gtt write domain set) and
> + * manually tell the kernel when we start access the gtt. */
> + do_or_die(drm_intel_gem_bo_map_gtt(bo));
> +
> + return bo;
> +}
> +
> +static drm_intel_bo *
> +tile_bo(drm_intel_bo *bo, int width)
> +{
> + uint32_t tiling = I915_TILING_X;
> + uint32_t stride = width * 4;
> +
> + do_or_die(drm_intel_bo_set_tiling(bo, &tiling, stride));
> +
> + return bo;
> +}
> +
> +static drm_intel_bo *
> +gtt_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
> +{
> + return map_bo(unmapped_create_bo(bufmgr, width, height));
> +}
> +
> +static drm_intel_bo *
> +gttX_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
> +{
> + return tile_bo(gtt_create_bo(bufmgr, width, height), width);
> +}
> +
> +static drm_intel_bo *
> +wc_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
> +{
> + drm_intel_bo *bo;
> +
> + gem_require_mmap_wc(fd);
> +
> + bo = unmapped_create_bo(bufmgr, width, height);
> + bo->virtual = __gem_mmap__wc(fd, bo->handle, 0, bo->size, PROT_READ | PROT_WRITE);
> + return bo;
> +}
> +
> +static void
> +wc_release_bo(drm_intel_bo *bo)
> +{
> + munmap(bo->virtual, bo->size);
> + bo->virtual = NULL;
> +
> + nop_release_bo(bo);
> +}
> +
> +static drm_intel_bo *
> +gpu_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
> +{
> + return unmapped_create_bo(bufmgr, width, height);
> +}
> +
> +
> +static drm_intel_bo *
> +gpuX_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
> +{
> + return tile_bo(gpu_create_bo(bufmgr, width, height), width);
> +}
> +
> +static void
> +cpu_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
> +{
> + int size = width * height;
> + uint32_t *vaddr;
> +
> + do_or_die(drm_intel_bo_map(bo, true));
> + vaddr = bo->virtual;
> + while (size--)
> + *vaddr++ = val;
> + drm_intel_bo_unmap(bo);
> +}
> +
> +static void
> +cpu_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
> +{
> + int size = width * height;
> + uint32_t *vaddr;
> +
> + do_or_die(drm_intel_bo_map(bo, false));
> + vaddr = bo->virtual;
> + while (size--)
> + igt_assert_eq_u32(*vaddr++, val);
> + drm_intel_bo_unmap(bo);
> +}
> +
> +static void
> +gpu_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
> +{
> + struct drm_i915_gem_relocation_entry reloc[1];
> + struct drm_i915_gem_exec_object2 gem_exec[2];
> + struct drm_i915_gem_execbuffer2 execbuf;
> + struct drm_i915_gem_pwrite gem_pwrite;
> + struct drm_i915_gem_create create;
> + uint32_t buf[10], *b;
> + uint32_t tiling, swizzle;
> +
> + drm_intel_bo_get_tiling(bo, &tiling, &swizzle);
> +
> + memset(reloc, 0, sizeof(reloc));
> + memset(gem_exec, 0, sizeof(gem_exec));
> + memset(&execbuf, 0, sizeof(execbuf));
> +
> + b = buf;
> + *b++ = XY_COLOR_BLT_CMD_NOLEN |
> + ((gen >= 8) ? 5 : 4) |
> + COLOR_BLT_WRITE_ALPHA | XY_COLOR_BLT_WRITE_RGB;
> + if (gen >= 4 && tiling) {
> + b[-1] |= XY_COLOR_BLT_TILED;
> + *b = width;
> + } else
> + *b = width << 2;
> + *b++ |= 0xf0 << 16 | 1 << 25 | 1 << 24;
> + *b++ = 0;
> + *b++ = height << 16 | width;
> + reloc[0].offset = (b - buf) * sizeof(uint32_t);
> + reloc[0].target_handle = bo->handle;
> + reloc[0].read_domains = I915_GEM_DOMAIN_RENDER;
> + reloc[0].write_domain = I915_GEM_DOMAIN_RENDER;
> + *b++ = 0;
> + if (gen >= 8)
> + *b++ = 0;
> + *b++ = val;
> + *b++ = MI_BATCH_BUFFER_END;
> + if ((b - buf) & 1)
> + *b++ = 0;
> +
> + gem_exec[0].handle = bo->handle;
> + gem_exec[0].flags = EXEC_OBJECT_NEEDS_FENCE;
> +
> + create.handle = 0;
> + create.size = 4096;
> + drmIoctl(fd, DRM_IOCTL_I915_GEM_CREATE, &create);
> + gem_exec[1].handle = create.handle;
> + gem_exec[1].relocation_count = 1;
> + gem_exec[1].relocs_ptr = (uintptr_t)reloc;
> +
> + execbuf.buffers_ptr = (uintptr_t)gem_exec;
> + execbuf.buffer_count = 2;
> + execbuf.batch_len = (b - buf) * sizeof(buf[0]);
> + if (gen >= 6)
> + execbuf.flags = I915_EXEC_BLT;
> +
> + gem_pwrite.handle = gem_exec[1].handle;
> + gem_pwrite.offset = 0;
> + gem_pwrite.size = execbuf.batch_len;
> + gem_pwrite.data_ptr = (uintptr_t)buf;
> + do_or_die(drmIoctl(fd, DRM_IOCTL_I915_GEM_PWRITE, &gem_pwrite));
> + do_or_die(drmIoctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, &execbuf));
> +
> + drmIoctl(fd, DRM_IOCTL_GEM_CLOSE, &create.handle);
> +}
> +
> +static void
> +gpu_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
> +{
> + intel_blt_copy(batch,
> + bo, 0, 0, 4*width,
> + tmp, 0, 0, 4*width,
> + width, height, 32);
> + cpu_cmp_bo(tmp, val, width, height, NULL);
> +}
> +
> +const struct access_mode {
> + const char *name;
> + void (*set_bo)(drm_intel_bo *bo, uint32_t val, int w, int h);
> + void (*cmp_bo)(drm_intel_bo *bo, uint32_t val, int w, int h, drm_intel_bo *tmp);
> + drm_intel_bo *(*create_bo)(drm_intel_bufmgr *bufmgr, int width, int height);
> + void (*release_bo)(drm_intel_bo *bo);
> +} access_modes[] = {
> + {
> + .name = "prw",
> + .set_bo = prw_set_bo,
> + .cmp_bo = prw_cmp_bo,
> + .create_bo = unmapped_create_bo,
> + .release_bo = nop_release_bo,
> + },
> + {
> + .name = "cpu",
> + .set_bo = cpu_set_bo,
> + .cmp_bo = cpu_cmp_bo,
> + .create_bo = unmapped_create_bo,
> + .release_bo = nop_release_bo,
> + },
> + {
> + .name = "snoop",
> + .set_bo = cpu_set_bo,
> + .cmp_bo = cpu_cmp_bo,
> + .create_bo = snoop_create_bo,
> + .release_bo = nop_release_bo,
> + },
> + {
> + .name = "gtt",
> + .set_bo = gtt_set_bo,
> + .cmp_bo = gtt_cmp_bo,
> + .create_bo = gtt_create_bo,
> + .release_bo = nop_release_bo,
> + },
> + {
> + .name = "gttX",
> + .set_bo = gtt_set_bo,
> + .cmp_bo = gtt_cmp_bo,
> + .create_bo = gttX_create_bo,
> + .release_bo = nop_release_bo,
> + },
> + {
> + .name = "wc",
> + .set_bo = gtt_set_bo,
> + .cmp_bo = gtt_cmp_bo,
> + .create_bo = wc_create_bo,
> + .release_bo = wc_release_bo,
> + },
> + {
> + .name = "gpu",
> + .set_bo = gpu_set_bo,
> + .cmp_bo = gpu_cmp_bo,
> + .create_bo = gpu_create_bo,
> + .release_bo = nop_release_bo,
> + },
> + {
> + .name = "gpuX",
> + .set_bo = gpu_set_bo,
> + .cmp_bo = gpu_cmp_bo,
> + .create_bo = gpuX_create_bo,
> + .release_bo = nop_release_bo,
> + },
> +};
> +
> +#define MAX_NUM_BUFFERS 1024
> +int num_buffers = MAX_NUM_BUFFERS;
> +const int width = 512, height = 512;
> +igt_render_copyfunc_t rendercopy;
> +
> +struct buffers {
> + const struct access_mode *mode;
> + drm_intel_bufmgr *bufmgr;
> + drm_intel_bo *src[MAX_NUM_BUFFERS], *dst[MAX_NUM_BUFFERS];
> + drm_intel_bo *dummy, *spare;
> + int count;
> +};
> +
> +static void *buffers_init(struct buffers *data,
> + const struct access_mode *mode,
> + int _fd)
> +{
> + data->mode = mode;
> + data->count = 0;
> +
> + data->bufmgr = drm_intel_bufmgr_gem_init(_fd, 4096);
> + igt_assert(data->bufmgr);
> +
> + drm_intel_bufmgr_gem_enable_reuse(data->bufmgr);
> + return intel_batchbuffer_alloc(data->bufmgr, devid);
> +}
> +
> +static void buffers_destroy(struct buffers *data)
> +{
> + if (data->count == 0)
> + return;
> +
> + for (int i = 0; i < data->count; i++) {
> + data->mode->release_bo(data->src[i]);
> + data->mode->release_bo(data->dst[i]);
> + }
> + data->mode->release_bo(data->dummy);
> + data->mode->release_bo(data->spare);
> + data->count = 0;
> +}
> +
> +static void buffers_create(struct buffers *data,
> + int count)
> +{
> + igt_assert(data->bufmgr);
> +
> + buffers_destroy(data);
> +
> + for (int i = 0; i < count; i++) {
> + data->src[i] =
> + data->mode->create_bo(data->bufmgr, width, height);
> + data->dst[i] =
> + data->mode->create_bo(data->bufmgr, width, height);
> + }
> + data->dummy = data->mode->create_bo(data->bufmgr, width, height);
> + data->spare = data->mode->create_bo(data->bufmgr, width, height);
> + data->count = count;
> +}
> +
> +static void buffers_fini(struct buffers *data)
> +{
> + if (data->bufmgr == NULL)
> + return;
> +
> + buffers_destroy(data);
> +
> + intel_batchbuffer_free(batch);
> + drm_intel_bufmgr_destroy(data->bufmgr);
> + data->bufmgr = NULL;
> +}
> +
> +typedef void (*do_copy)(drm_intel_bo *dst, drm_intel_bo *src);
> +typedef struct igt_hang_ring (*do_hang)(void);
> +
> +static void render_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
> +{
> + struct igt_buf d = {
> + .bo = dst,
> + .size = width * height * 4,
> + .num_tiles = width * height * 4,
> + .stride = width * 4,
> + }, s = {
> + .bo = src,
> + .size = width * height * 4,
> + .num_tiles = width * height * 4,
> + .stride = width * 4,
> + };
> + uint32_t swizzle;
> +
> + drm_intel_bo_get_tiling(dst, &d.tiling, &swizzle);
> + drm_intel_bo_get_tiling(src, &s.tiling, &swizzle);
> +
> + rendercopy(batch, NULL,
> + &s, 0, 0,
> + width, height,
> + &d, 0, 0);
> +}
> +
> +static void blt_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
> +{
> + intel_blt_copy(batch,
> + src, 0, 0, 4*width,
> + dst, 0, 0, 4*width,
> + width, height, 32);
> +}
> +
> +static void cpu_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
> +{
> + const int size = width * height * sizeof(uint32_t);
> + void *d, *s;
> +
> + gem_set_domain(fd, src->handle, I915_GEM_DOMAIN_CPU, 0);
> + gem_set_domain(fd, dst->handle, I915_GEM_DOMAIN_CPU, I915_GEM_DOMAIN_CPU);
> + s = gem_mmap__cpu(fd, src->handle, 0, size, PROT_READ);
> + d = gem_mmap__cpu(fd, dst->handle, 0, size, PROT_WRITE);
> +
> + memcpy(d, s, size);
> +
> + munmap(d, size);
> + munmap(s, size);
> +}
> +
> +static void gtt_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
> +{
> + const int size = width * height * sizeof(uint32_t);
> + void *d, *s;
> +
> + gem_set_domain(fd, src->handle, I915_GEM_DOMAIN_GTT, 0);
> + gem_set_domain(fd, dst->handle, I915_GEM_DOMAIN_GTT, I915_GEM_DOMAIN_GTT);
> +
> + s = gem_mmap__gtt(fd, src->handle, size, PROT_READ);
> + d = gem_mmap__gtt(fd, dst->handle, size, PROT_WRITE);
> +
> + memcpy(d, s, size);
> +
> + munmap(d, size);
> + munmap(s, size);
> +}
> +
> +static void wc_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
> +{
> + const int size = width * height * sizeof(uint32_t);
> + void *d, *s;
> +
> + gem_set_domain(fd, src->handle, I915_GEM_DOMAIN_GTT, 0);
> + gem_set_domain(fd, dst->handle, I915_GEM_DOMAIN_GTT, I915_GEM_DOMAIN_GTT);
> +
> + s = gem_mmap__wc(fd, src->handle, 0, size, PROT_READ);
> + d = gem_mmap__wc(fd, dst->handle, 0, size, PROT_WRITE);
> +
> + memcpy(d, s, size);
> +
> + munmap(d, size);
> + munmap(s, size);
> +}
> +
> +static struct igt_hang_ring no_hang(void)
> +{
> + return (struct igt_hang_ring){0, 0};
> +}
> +
> +static struct igt_hang_ring bcs_hang(void)
> +{
> + return igt_hang_ring(fd, I915_EXEC_BLT);
> +}
> +
> +static struct igt_hang_ring rcs_hang(void)
> +{
> + return igt_hang_ring(fd, I915_EXEC_RENDER);
> +}
> +
> +static void hang_require(void)
> +{
> + igt_require_hang_ring(fd, -1);
> +}
> +
> +static void do_overwrite_source(struct buffers *buffers,
> + do_copy do_copy_func,
> + do_hang do_hang_func)
> +{
> + struct igt_hang_ring hang;
> + int i;
> +
> + gem_quiescent_gpu(fd);
> + for (i = 0; i < buffers->count; i++) {
> + buffers->mode->set_bo(buffers->src[i], i, width, height);
> + buffers->mode->set_bo(buffers->dst[i], ~i, width, height);
> + }
> + for (i = 0; i < buffers->count; i++)
> + do_copy_func(buffers->dst[i], buffers->src[i]);
> + hang = do_hang_func();
> + for (i = buffers->count; i--; )
> + buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
> + for (i = 0; i < buffers->count; i++)
> + buffers->mode->cmp_bo(buffers->dst[i], i, width, height, buffers->dummy);
> + igt_post_hang_ring(fd, hang);
> +}
> +
> +static void do_overwrite_source_read(struct buffers *buffers,
> + do_copy do_copy_func,
> + do_hang do_hang_func,
> + int do_rcs)
> +{
> + const int half = buffers->count/2;
> + struct igt_hang_ring hang;
> + int i;
> +
> + gem_quiescent_gpu(fd);
> + for (i = 0; i < half; i++) {
> + buffers->mode->set_bo(buffers->src[i], i, width, height);
> + buffers->mode->set_bo(buffers->dst[i], ~i, width, height);
> + buffers->mode->set_bo(buffers->dst[i+half], ~i, width, height);
> + }
> + for (i = 0; i < half; i++) {
> + do_copy_func(buffers->dst[i], buffers->src[i]);
> + if (do_rcs)
> + render_copy_bo(buffers->dst[i+half], buffers->src[i]);
> + else
> + blt_copy_bo(buffers->dst[i+half], buffers->src[i]);
> + }
> + hang = do_hang_func();
> + for (i = half; i--; )
> + buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
> + for (i = 0; i < half; i++) {
> + buffers->mode->cmp_bo(buffers->dst[i], i, width, height, buffers->dummy);
> + buffers->mode->cmp_bo(buffers->dst[i+half], i, width, height, buffers->dummy);
> + }
> + igt_post_hang_ring(fd, hang);
> +}
> +
> +static void do_overwrite_source_read_bcs(struct buffers *buffers,
> + do_copy do_copy_func,
> + do_hang do_hang_func)
> +{
> + do_overwrite_source_read(buffers, do_copy_func, do_hang_func, 0);
> +}
> +
> +static void do_overwrite_source_read_rcs(struct buffers *buffers,
> + do_copy do_copy_func,
> + do_hang do_hang_func)
> +{
> + do_overwrite_source_read(buffers, do_copy_func, do_hang_func, 1);
> +}
> +
> +static void do_overwrite_source__rev(struct buffers *buffers,
> + do_copy do_copy_func,
> + do_hang do_hang_func)
> +{
> + struct igt_hang_ring hang;
> + int i;
> +
> + gem_quiescent_gpu(fd);
> + for (i = 0; i < buffers->count; i++) {
> + buffers->mode->set_bo(buffers->src[i], i, width, height);
> + buffers->mode->set_bo(buffers->dst[i], ~i, width, height);
> + }
> + for (i = 0; i < buffers->count; i++)
> + do_copy_func(buffers->dst[i], buffers->src[i]);
> + hang = do_hang_func();
> + for (i = 0; i < buffers->count; i++)
> + buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
> + for (i = buffers->count; i--; )
> + buffers->mode->cmp_bo(buffers->dst[i], i, width, height, buffers->dummy);
> + igt_post_hang_ring(fd, hang);
> +}
> +
> +static void do_overwrite_source__one(struct buffers *buffers,
> + do_copy do_copy_func,
> + do_hang do_hang_func)
> +{
> + struct igt_hang_ring hang;
> +
> + gem_quiescent_gpu(fd);
> + buffers->mode->set_bo(buffers->src[0], 0, width, height);
> + buffers->mode->set_bo(buffers->dst[0], ~0, width, height);
> + do_copy_func(buffers->dst[0], buffers->src[0]);
> + hang = do_hang_func();
> + buffers->mode->set_bo(buffers->src[0], 0xdeadbeef, width, height);
> + buffers->mode->cmp_bo(buffers->dst[0], 0, width, height, buffers->dummy);
> + igt_post_hang_ring(fd, hang);
> +}
> +
> +static void do_intermix(struct buffers *buffers,
> + do_copy do_copy_func,
> + do_hang do_hang_func,
> + int do_rcs)
> +{
> + const int half = buffers->count/2;
> + struct igt_hang_ring hang;
> + int i;
> +
> + gem_quiescent_gpu(fd);
> + for (i = 0; i < buffers->count; i++) {
> + buffers->mode->set_bo(buffers->src[i], 0xdeadbeef^~i, width, height);
> + buffers->mode->set_bo(buffers->dst[i], i, width, height);
> + }
> + for (i = 0; i < half; i++) {
> + if (do_rcs == 1 || (do_rcs == -1 && i & 1))
> + render_copy_bo(buffers->dst[i], buffers->src[i]);
> + else
> + blt_copy_bo(buffers->dst[i], buffers->src[i]);
> +
> + do_copy_func(buffers->dst[i+half], buffers->src[i]);
> +
> + if (do_rcs == 1 || (do_rcs == -1 && (i & 1) == 0))
> + render_copy_bo(buffers->dst[i], buffers->dst[i+half]);
> + else
> + blt_copy_bo(buffers->dst[i], buffers->dst[i+half]);
> +
> + do_copy_func(buffers->dst[i+half], buffers->src[i+half]);
> + }
> + hang = do_hang_func();
> + for (i = 0; i < 2*half; i++)
> + buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef^~i, width, height, buffers->dummy);
> + igt_post_hang_ring(fd, hang);
> +}
> +
> +static void do_intermix_rcs(struct buffers *buffers,
> + do_copy do_copy_func,
> + do_hang do_hang_func)
> +{
> + do_intermix(buffers, do_copy_func, do_hang_func, 1);
> +}
> +
> +static void do_intermix_bcs(struct buffers *buffers,
> + do_copy do_copy_func,
> + do_hang do_hang_func)
> +{
> + do_intermix(buffers, do_copy_func, do_hang_func, 0);
> +}
> +
> +static void do_intermix_both(struct buffers *buffers,
> + do_copy do_copy_func,
> + do_hang do_hang_func)
> +{
> + do_intermix(buffers, do_copy_func, do_hang_func, -1);
> +}
> +
> +static void do_early_read(struct buffers *buffers,
> + do_copy do_copy_func,
> + do_hang do_hang_func)
> +{
> + struct igt_hang_ring hang;
> + int i;
> +
> + gem_quiescent_gpu(fd);
> + for (i = buffers->count; i--; )
> + buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
> + for (i = 0; i < buffers->count; i++)
> + do_copy_func(buffers->dst[i], buffers->src[i]);
> + hang = do_hang_func();
> + for (i = buffers->count; i--; )
> + buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef, width, height, buffers->dummy);
> + igt_post_hang_ring(fd, hang);
> +}
> +
> +static void do_read_read_bcs(struct buffers *buffers,
> + do_copy do_copy_func,
> + do_hang do_hang_func)
> +{
> + struct igt_hang_ring hang;
> + int i;
> +
> + gem_quiescent_gpu(fd);
> + for (i = buffers->count; i--; )
> + buffers->mode->set_bo(buffers->src[i], 0xdeadbeef ^ i, width, height);
> + for (i = 0; i < buffers->count; i++) {
> + do_copy_func(buffers->dst[i], buffers->src[i]);
> + blt_copy_bo(buffers->spare, buffers->src[i]);
> + }
> + cpu_cmp_bo(buffers->spare, 0xdeadbeef^(buffers->count-1), width, height, NULL);
> + hang = do_hang_func();
> + for (i = buffers->count; i--; )
> + buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef ^ i, width, height, buffers->dummy);
> + igt_post_hang_ring(fd, hang);
> +}
> +
> +static void do_read_read_rcs(struct buffers *buffers,
> + do_copy do_copy_func,
> + do_hang do_hang_func)
> +{
> + struct igt_hang_ring hang;
> + int i;
> +
> + gem_quiescent_gpu(fd);
> + for (i = buffers->count; i--; )
> + buffers->mode->set_bo(buffers->src[i], 0xdeadbeef ^ i, width, height);
> + for (i = 0; i < buffers->count; i++) {
> + do_copy_func(buffers->dst[i], buffers->src[i]);
> + render_copy_bo(buffers->spare, buffers->src[i]);
> + }
> + cpu_cmp_bo(buffers->spare, 0xdeadbeef^(buffers->count-1), width, height, NULL);
> + hang = do_hang_func();
> + for (i = buffers->count; i--; )
> + buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef ^ i, width, height, buffers->dummy);
> + igt_post_hang_ring(fd, hang);
> +}
> +
> +static void do_gpu_read_after_write(struct buffers *buffers,
> + do_copy do_copy_func,
> + do_hang do_hang_func)
> +{
> + struct igt_hang_ring hang;
> + int i;
> +
> + gem_quiescent_gpu(fd);
> + for (i = buffers->count; i--; )
> + buffers->mode->set_bo(buffers->src[i], 0xabcdabcd, width, height);
> + for (i = 0; i < buffers->count; i++)
> + do_copy_func(buffers->dst[i], buffers->src[i]);
> + for (i = buffers->count; i--; )
> + do_copy_func(buffers->dummy, buffers->dst[i]);
> + hang = do_hang_func();
> + for (i = buffers->count; i--; )
> + buffers->mode->cmp_bo(buffers->dst[i], 0xabcdabcd, width, height, buffers->dummy);
> + igt_post_hang_ring(fd, hang);
> +}
> +
> +typedef void (*do_test)(struct buffers *buffers,
> + do_copy do_copy_func,
> + do_hang do_hang_func);
> +
> +typedef void (*run_wrap)(struct buffers *buffers,
> + do_test do_test_func,
> + do_copy do_copy_func,
> + do_hang do_hang_func);
> +
> +static void run_single(struct buffers *buffers,
> + do_test do_test_func,
> + do_copy do_copy_func,
> + do_hang do_hang_func)
> +{
> + do_test_func(buffers, do_copy_func, do_hang_func);
> +}
> +
> +static void run_interruptible(struct buffers *buffers,
> + do_test do_test_func,
> + do_copy do_copy_func,
> + do_hang do_hang_func)
> +{
> + int loop;
> +
> + for (loop = 0; loop < 10; loop++)
> + do_test_func(buffers, do_copy_func, do_hang_func);
> +}
> +
> +static void run_forked(struct buffers *buffers,
> + do_test do_test_func,
> + do_copy do_copy_func,
> + do_hang do_hang_func)
> +{
> + const int old_num_buffers = num_buffers;
> +
> + num_buffers /= 16;
> + num_buffers += 2;
> +
> + igt_fork(child, 16) {
> + /* recreate process local variables */
> + buffers->count = 0;
> + fd = drm_open_driver(DRIVER_INTEL);
> +
> + batch = buffers_init(buffers, buffers->mode, fd);
> +
> + buffers_create(buffers, num_buffers);
> + for (int loop = 0; loop < 10; loop++)
> + do_test_func(buffers, do_copy_func, do_hang_func);
> +
> + buffers_fini(buffers);
> + }
> +
> + igt_waitchildren();
> +
> + num_buffers = old_num_buffers;
> +}
> +
> +static void bit17_require(void)
> +{
> + struct drm_i915_gem_get_tiling2 {
> + uint32_t handle;
> + uint32_t tiling_mode;
> + uint32_t swizzle_mode;
> + uint32_t phys_swizzle_mode;
> + } arg;
> +#define DRM_IOCTL_I915_GEM_GET_TILING2 DRM_IOWR (DRM_COMMAND_BASE + DRM_I915_GEM_GET_TILING, struct drm_i915_gem_get_tiling2)
> +
> + memset(&arg, 0, sizeof(arg));
> + arg.handle = gem_create(fd, 4096);
> + gem_set_tiling(fd, arg.handle, I915_TILING_X, 512);
> +
> + do_or_die(drmIoctl(fd, DRM_IOCTL_I915_GEM_GET_TILING2, &arg));
> + gem_close(fd, arg.handle);
> + igt_require(arg.phys_swizzle_mode == arg.swizzle_mode);
> +}
> +
> +static void cpu_require(void)
> +{
> + bit17_require();
> +}
> +
> +static void gtt_require(void)
> +{
> +}
> +
> +static void wc_require(void)
> +{
> + bit17_require();
> + gem_require_mmap_wc(fd);
> +}
> +
> +static void bcs_require(void)
> +{
> +}
> +
> +static void rcs_require(void)
> +{
> + igt_require(rendercopy);
> +}
> +
> +static void no_require(void)
> +{
> +}
> +
> +static void
> +run_basic_modes(const struct access_mode *mode,
> + const char *suffix,
> + run_wrap run_wrap_func)
> +{
> + const struct {
> + const char *prefix;
> + do_copy copy;
> + void (*require)(void);
> + } pipelines[] = {
> + { "cpu", cpu_copy_bo, cpu_require },
> + { "gtt", gtt_copy_bo, gtt_require },
> + { "wc", wc_copy_bo, wc_require },
> + { "blt", blt_copy_bo, bcs_require },
> + { "render", render_copy_bo, rcs_require },
> + { NULL, NULL }
> + }, *pskip = pipelines + 3, *p;
> + const struct {
> + const char *suffix;
> + do_hang hang;
> + void (*require)(void);
> + } hangs[] = {
> + { "", no_hang, no_require },
> + { "-hang-blt", bcs_hang, hang_require },
> + { "-hang-render", rcs_hang, hang_require },
> + { NULL, NULL },
> + }, *h;
> + struct buffers buffers;
> +
> + for (h = hangs; h->suffix; h++) {
> + if (!all && *h->suffix)
> + continue;
> +
> + for (p = all ? pipelines : pskip; p->prefix; p++) {
> + igt_fixture {
> + batch = buffers_init(&buffers, mode, fd);
> + }
> +
> + /* try to overwrite the source values */
> + igt_subtest_f("%s-%s-overwrite-source-one%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + h->require();
> + p->require();
> + buffers_create(&buffers, num_buffers);
> + run_wrap_func(&buffers,
> + do_overwrite_source__one,
> + p->copy, h->hang);
> + }
> +
> + igt_subtest_f("%s-%s-overwrite-source%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + h->require();
> + p->require();
> + buffers_create(&buffers, num_buffers);
> + run_wrap_func(&buffers,
> + do_overwrite_source,
> + p->copy, h->hang);
> + }
> +
> + igt_subtest_f("%s-%s-overwrite-source-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + h->require();
> + p->require();
> + buffers_create(&buffers, num_buffers);
> + run_wrap_func(&buffers,
> + do_overwrite_source_read_bcs,
> + p->copy, h->hang);
> + }
> +
> + igt_subtest_f("%s-%s-overwrite-source-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + h->require();
> + p->require();
> + igt_require(rendercopy);
> + buffers_create(&buffers, num_buffers);
> + run_wrap_func(&buffers,
> + do_overwrite_source_read_rcs,
> + p->copy, h->hang);
> + }
> +
> + igt_subtest_f("%s-%s-overwrite-source-rev%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + h->require();
> + p->require();
> + buffers_create(&buffers, num_buffers);
> + run_wrap_func(&buffers,
> + do_overwrite_source__rev,
> + p->copy, h->hang);
> + }
> +
> + /* try to intermix copies with GPU copies */
> + igt_subtest_f("%s-%s-intermix-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + h->require();
> + p->require();
> + igt_require(rendercopy);
> + buffers_create(&buffers, num_buffers);
> + run_wrap_func(&buffers,
> + do_intermix_rcs,
> + p->copy, h->hang);
> + }
> + igt_subtest_f("%s-%s-intermix-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + h->require();
> + p->require();
> + igt_require(rendercopy);
> + buffers_create(&buffers, num_buffers);
> + run_wrap_func(&buffers,
> + do_intermix_bcs,
> + p->copy, h->hang);
> + }
> + igt_subtest_f("%s-%s-intermix-both%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + h->require();
> + p->require();
> + igt_require(rendercopy);
> + buffers_create(&buffers, num_buffers);
> + run_wrap_func(&buffers,
> + do_intermix_both,
> + p->copy, h->hang);
> + }
> +
> + /* try to read the results before the copy completes */
> + igt_subtest_f("%s-%s-early-read%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + h->require();
> + p->require();
> + buffers_create(&buffers, num_buffers);
> + run_wrap_func(&buffers,
> + do_early_read,
> + p->copy, h->hang);
> + }
> +
> + /* concurrent reads */
> + igt_subtest_f("%s-%s-read-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + h->require();
> + p->require();
> + buffers_create(&buffers, num_buffers);
> + run_wrap_func(&buffers,
> + do_read_read_bcs,
> + p->copy, h->hang);
> + }
> + igt_subtest_f("%s-%s-read-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + h->require();
> + p->require();
> + igt_require(rendercopy);
> + buffers_create(&buffers, num_buffers);
> + run_wrap_func(&buffers,
> + do_read_read_rcs,
> + p->copy, h->hang);
> + }
> +
> + /* and finally try to trick the kernel into losing the pending write */
> + igt_subtest_f("%s-%s-gpu-read-after-write%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + h->require();
> + p->require();
> + buffers_create(&buffers, num_buffers);
> + run_wrap_func(&buffers,
> + do_gpu_read_after_write,
> + p->copy, h->hang);
> + }
> +
> + igt_fixture {
> + buffers_fini(&buffers);
> + }
> + }
> + }
> +}
> +
> +static void
> +run_modes(const struct access_mode *mode)
> +{
> + if (all) {
> + run_basic_modes(mode, "", run_single);
> +
> + igt_fork_signal_helper();
> + run_basic_modes(mode, "-interruptible", run_interruptible);
> + igt_stop_signal_helper();
> + }
> +
> + igt_fork_signal_helper();
> + run_basic_modes(mode, "-forked", run_forked);
> + igt_stop_signal_helper();
> +}
> +
> +igt_main
> +{
> + int max, i;
> +
> + igt_skip_on_simulation();
> +
> + if (strstr(igt_test_name(), "all"))
> + all = true;
> +
> + igt_fixture {
> + fd = drm_open_driver(DRIVER_INTEL);
> + devid = intel_get_drm_devid(fd);
> + gen = intel_gen(devid);
> + rendercopy = igt_get_render_copyfunc(devid);
> +
> + max = gem_aperture_size (fd) / (1024 * 1024) / 2;
> + if (num_buffers > max)
> + num_buffers = max;
> +
> + max = intel_get_total_ram_mb() * 3 / 4;
> + if (num_buffers > max)
> + num_buffers = max;
> + num_buffers /= 2;
> + igt_info("using 2x%d buffers, each 1MiB\n", num_buffers);
> + }
> +
> + for (i = 0; i < ARRAY_SIZE(access_modes); i++)
> + run_modes(&access_modes[i]);
> +}
> --
> 2.6.1
>
* Re: [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests
2015-10-23 11:42 ` [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests David Weinehall
2015-10-23 11:56 ` Chris Wilson
2015-10-23 13:50 ` Paulo Zanoni
@ 2015-10-23 14:55 ` Thomas Wood
2015-10-26 15:28 ` David Weinehall
2015-10-26 18:15 ` Paulo Zanoni
2 siblings, 2 replies; 41+ messages in thread
From: Thomas Wood @ 2015-10-23 14:55 UTC (permalink / raw)
To: David Weinehall; +Cc: Intel Graphics Development
On 23 October 2015 at 12:42, David Weinehall
<david.weinehall@linux.intel.com> wrote:
> Some tests should not be run by default, due to their slow,
> and sometimes superfluous, nature.
>
> We still want to be able to run these tests though in some cases.
> Until now there's been no unified way of handling this. Remedy
> this by introducing the --with-slow-combinatorial option to
> igt_core, and use it in gem_concurrent_blit & kms_frontbuffer_tracking.
> ---
> lib/igt_core.c | 19 ++++++
> lib/igt_core.h | 1 +
> tests/gem_concurrent_blit.c | 40 ++++++++----
> tests/kms_frontbuffer_tracking.c | 135 +++++++++++++++++++++++++++------------
> 4 files changed, 142 insertions(+), 53 deletions(-)
>
> diff --git a/lib/igt_core.c b/lib/igt_core.c
> index 59127cafe606..ba40ce0e0ead 100644
> --- a/lib/igt_core.c
> +++ b/lib/igt_core.c
> @@ -216,6 +216,7 @@ const char *igt_interactive_debug;
>
> /* subtests helpers */
> static bool list_subtests = false;
> +static bool with_slow_combinatorial = false;
> static char *run_single_subtest = NULL;
> static bool run_single_subtest_found = false;
> static const char *in_subtest = NULL;
> @@ -235,6 +236,7 @@ bool test_child;
>
> enum {
> OPT_LIST_SUBTESTS,
> + OPT_WITH_SLOW_COMBINATORIAL,
> OPT_RUN_SUBTEST,
> OPT_DESCRIPTION,
> OPT_DEBUG,
> @@ -478,6 +480,7 @@ static void print_usage(const char *help_str, bool output_on_stderr)
>
> fprintf(f, "Usage: %s [OPTIONS]\n", command_str);
> fprintf(f, " --list-subtests\n"
> + " --with-slow-combinatorial\n"
> " --run-subtest <pattern>\n"
> " --debug[=log-domain]\n"
> " --interactive-debug[=domain]\n"
> @@ -510,6 +513,7 @@ static int common_init(int *argc, char **argv,
> int c, option_index = 0, i, x;
> static struct option long_options[] = {
> {"list-subtests", 0, 0, OPT_LIST_SUBTESTS},
> + {"with-slow-combinatorial", 0, 0, OPT_WITH_SLOW_COMBINATORIAL},
> {"run-subtest", 1, 0, OPT_RUN_SUBTEST},
> {"help-description", 0, 0, OPT_DESCRIPTION},
> {"debug", optional_argument, 0, OPT_DEBUG},
> @@ -617,6 +621,10 @@ static int common_init(int *argc, char **argv,
> if (!run_single_subtest)
> list_subtests = true;
> break;
> + case OPT_WITH_SLOW_COMBINATORIAL:
> + if (!run_single_subtest)
This will cause piglit (and therefore QA) to unconditionally run all
tests marked as slow, since it runs subtests individually.
> + with_slow_combinatorial = true;
> + break;
> case OPT_RUN_SUBTEST:
> if (!list_subtests)
> run_single_subtest = strdup(optarg);
> @@ -1629,6 +1637,17 @@ void igt_skip_on_simulation(void)
> igt_require(!igt_run_in_simulation());
> }
>
> +/**
> + * igt_slow_combinatorial:
> + *
> + * This is used to define subtests that should only be listed/run
> + * when the "--with-slow-combinatorial" option has been specified
This isn't quite correct, as the subtests that use
igt_slow_combinatorial will still always be listed.
> + */
> +void igt_slow_combinatorial(void)
> +{
> + igt_skip_on(!with_slow_combinatorial);
Although it is convenient to just skip the tests when the
--with-slow-combinatorial flag is passed, it may be useful to be able
to classify the subtests before they are run, so that they are
filtered out from the test list entirely. An approach that can do this
might also be used to mark tests as being part of the basic acceptance
tests, so that they can be marked as such without relying on the
naming convention.
> +}
> +
> /* structured logging */
>
> /**
> diff --git a/lib/igt_core.h b/lib/igt_core.h
> index 5ae09653fd55..6ddf25563275 100644
> --- a/lib/igt_core.h
> +++ b/lib/igt_core.h
> @@ -680,6 +680,7 @@ bool igt_run_in_simulation(void);
> #define SLOW_QUICK(slow,quick) (igt_run_in_simulation() ? (quick) : (slow))
>
> void igt_skip_on_simulation(void);
> +void igt_slow_combinatorial(void);
>
> extern const char *igt_interactive_debug;
>
> diff --git a/tests/gem_concurrent_blit.c b/tests/gem_concurrent_blit.c
> index 1d2d787202df..311b6829e984 100644
> --- a/tests/gem_concurrent_blit.c
> +++ b/tests/gem_concurrent_blit.c
> @@ -931,9 +931,6 @@ run_basic_modes(const struct access_mode *mode,
> struct buffers buffers;
>
> for (h = hangs; h->suffix; h++) {
> - if (!all && *h->suffix)
> - continue;
> -
> for (p = all ? pipelines : pskip; p->prefix; p++) {
> igt_fixture {
> batch = buffers_init(&buffers, mode, fd);
> @@ -941,6 +938,8 @@ run_basic_modes(const struct access_mode *mode,
>
> /* try to overwrite the source values */
> igt_subtest_f("%s-%s-overwrite-source-one%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + if (*h->suffix)
> + igt_slow_combinatorial();
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -950,6 +949,8 @@ run_basic_modes(const struct access_mode *mode,
> }
>
> igt_subtest_f("%s-%s-overwrite-source%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + if (*h->suffix)
> + igt_slow_combinatorial();
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -959,6 +960,8 @@ run_basic_modes(const struct access_mode *mode,
> }
>
> igt_subtest_f("%s-%s-overwrite-source-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + if (*h->suffix)
> + igt_slow_combinatorial();
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -968,6 +971,8 @@ run_basic_modes(const struct access_mode *mode,
> }
>
> igt_subtest_f("%s-%s-overwrite-source-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + if (*h->suffix)
> + igt_slow_combinatorial();
> h->require();
> p->require();
> igt_require(rendercopy);
> @@ -978,6 +983,8 @@ run_basic_modes(const struct access_mode *mode,
> }
>
> igt_subtest_f("%s-%s-overwrite-source-rev%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + if (*h->suffix)
> + igt_slow_combinatorial();
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -988,6 +995,8 @@ run_basic_modes(const struct access_mode *mode,
>
> /* try to intermix copies with GPU copies */
> igt_subtest_f("%s-%s-intermix-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + if (*h->suffix)
> + igt_slow_combinatorial();
> h->require();
> p->require();
> igt_require(rendercopy);
> @@ -997,6 +1006,8 @@ run_basic_modes(const struct access_mode *mode,
> p->copy, h->hang);
> }
> igt_subtest_f("%s-%s-intermix-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + if (*h->suffix)
> + igt_slow_combinatorial();
> h->require();
> p->require();
> igt_require(rendercopy);
> @@ -1006,6 +1017,8 @@ run_basic_modes(const struct access_mode *mode,
> p->copy, h->hang);
> }
> igt_subtest_f("%s-%s-intermix-both%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + if (*h->suffix)
> + igt_slow_combinatorial();
> h->require();
> p->require();
> igt_require(rendercopy);
> @@ -1017,6 +1030,8 @@ run_basic_modes(const struct access_mode *mode,
>
> /* try to read the results before the copy completes */
> igt_subtest_f("%s-%s-early-read%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + if (*h->suffix)
> + igt_slow_combinatorial();
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -1027,6 +1042,8 @@ run_basic_modes(const struct access_mode *mode,
>
> /* concurrent reads */
> igt_subtest_f("%s-%s-read-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + if (*h->suffix)
> + igt_slow_combinatorial();
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -1035,6 +1052,8 @@ run_basic_modes(const struct access_mode *mode,
> p->copy, h->hang);
> }
> igt_subtest_f("%s-%s-read-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + if (*h->suffix)
> + igt_slow_combinatorial();
> h->require();
> p->require();
> igt_require(rendercopy);
> @@ -1046,6 +1065,8 @@ run_basic_modes(const struct access_mode *mode,
>
> /* and finally try to trick the kernel into losing the pending write */
> igt_subtest_f("%s-%s-gpu-read-after-write%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + if (*h->suffix)
> + igt_slow_combinatorial();
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -1064,13 +1085,11 @@ run_basic_modes(const struct access_mode *mode,
> static void
> run_modes(const struct access_mode *mode)
> {
> - if (all) {
> - run_basic_modes(mode, "", run_single);
> + run_basic_modes(mode, "", run_single);
>
> - igt_fork_signal_helper();
> - run_basic_modes(mode, "-interruptible", run_interruptible);
> - igt_stop_signal_helper();
> - }
> + igt_fork_signal_helper();
> + run_basic_modes(mode, "-interruptible", run_interruptible);
> + igt_stop_signal_helper();
>
> igt_fork_signal_helper();
> run_basic_modes(mode, "-forked", run_forked);
> @@ -1083,9 +1102,6 @@ igt_main
>
> igt_skip_on_simulation();
>
> - if (strstr(igt_test_name(), "all"))
> - all = true;
> -
> igt_fixture {
> fd = drm_open_driver(DRIVER_INTEL);
> devid = intel_get_drm_devid(fd);
> diff --git a/tests/kms_frontbuffer_tracking.c b/tests/kms_frontbuffer_tracking.c
> index d97e148c5073..6f84ef0813d9 100644
> --- a/tests/kms_frontbuffer_tracking.c
> +++ b/tests/kms_frontbuffer_tracking.c
> @@ -47,8 +47,8 @@ IGT_TEST_DESCRIPTION("Test the Kernel's frontbuffer tracking mechanism and "
> * combinations that are somewhat redundant and don't add much value to the
> * test. For example, since we already do the offscreen testing with a single
> * pipe enabled, there's not much value in doing it again with dual pipes. If you
> - * still want to try these redundant tests, you need to use the --show-hidden
> - * option.
> + * still want to try these redundant tests, you need to use the
> + * "--with-slow-combinatorial" option.
> *
> * The most important hidden thing is the FEATURE_NONE set of tests. Whenever
> * you get a failure on any test, it is important to check whether the same test
> @@ -116,6 +116,10 @@ struct test_mode {
> } format;
>
> enum igt_draw_method method;
> +
> + /* The test is slow and/or combinatorial;
> + * skip unless otherwise specified */
> + bool slow;
> };
>
> enum flip_type {
> @@ -237,7 +241,6 @@ struct {
> bool fbc_check_last_action;
> bool no_edp;
> bool small_modes;
> - bool show_hidden;
> int step;
> int only_feature;
> int only_pipes;
> @@ -250,7 +253,6 @@ struct {
> .fbc_check_last_action = true,
> .no_edp = false,
> .small_modes = false,
> - .show_hidden= false,
> .step = 0,
> .only_feature = FEATURE_COUNT,
> .only_pipes = PIPE_COUNT,
> @@ -2892,9 +2894,6 @@ static int opt_handler(int option, int option_index, void *data)
> case 'm':
> opt.small_modes = true;
> break;
> - case 'i':
> - opt.show_hidden = true;
> - break;
> case 't':
> opt.step++;
> break;
> @@ -2942,7 +2941,6 @@ const char *help_str =
> " --no-fbc-action-check Don't check for the FBC last action\n"
> " --no-edp Don't use eDP monitors\n"
> " --use-small-modes Use smaller resolutions for the modes\n"
> -" --show-hidden Show hidden subtests\n"
> " --step Stop on each step so you can check the screen\n"
> " --nop-only Only run the \"nop\" feature subtests\n"
> " --fbc-only Only run the \"fbc\" feature subtests\n"
> @@ -3036,6 +3034,7 @@ static const char *format_str(enum pixel_format format)
>
> #define TEST_MODE_ITER_BEGIN(t) \
> t.format = FORMAT_DEFAULT; \
> + t.slow = false; \
> for (t.feature = 0; t.feature < FEATURE_COUNT; t.feature++) { \
> for (t.pipes = 0; t.pipes < PIPE_COUNT; t.pipes++) { \
> for (t.screen = 0; t.screen < SCREEN_COUNT; t.screen++) { \
> @@ -3046,15 +3045,15 @@ static const char *format_str(enum pixel_format format)
> continue; \
> if (t.screen == SCREEN_OFFSCREEN && t.plane != PLANE_PRI) \
> continue; \
> - if (!opt.show_hidden && t.pipes == PIPE_DUAL && \
> + if (t.pipes == PIPE_DUAL && \
> t.screen == SCREEN_OFFSCREEN) \
> - continue; \
> - if ((!opt.show_hidden && opt.only_feature != FEATURE_NONE) \
> - && t.feature == FEATURE_NONE) \
> - continue; \
> - if (!opt.show_hidden && t.fbs == FBS_SHARED && \
> + t.slow = true; \
> + if (opt.only_feature != FEATURE_NONE && \
> + t.feature == FEATURE_NONE) \
> + t.slow = true; \
> + if (t.fbs == FBS_SHARED && \
> (t.plane == PLANE_CUR || t.plane == PLANE_SPR)) \
> - continue;
> + t.slow = true;
>
>
> #define TEST_MODE_ITER_END } } } } } }
> @@ -3069,7 +3068,6 @@ int main(int argc, char *argv[])
> { "no-fbc-action-check", 0, 0, 'a'},
> { "no-edp", 0, 0, 'e'},
> { "use-small-modes", 0, 0, 'm'},
> - { "show-hidden", 0, 0, 'i'},
> { "step", 0, 0, 't'},
> { "nop-only", 0, 0, 'n'},
> { "fbc-only", 0, 0, 'f'},
> @@ -3088,9 +3086,11 @@ int main(int argc, char *argv[])
> setup_environment();
>
> for (t.feature = 0; t.feature < FEATURE_COUNT; t.feature++) {
> - if ((!opt.show_hidden && opt.only_feature != FEATURE_NONE)
> - && t.feature == FEATURE_NONE)
> - continue;
> + bool slow = false;
> +
> + if (opt.only_feature != FEATURE_NONE &&
> + t.feature == FEATURE_NONE)
> + slow = true;
> for (t.pipes = 0; t.pipes < PIPE_COUNT; t.pipes++) {
> t.screen = SCREEN_PRIM;
> t.plane = PLANE_PRI;
> @@ -3101,8 +3101,11 @@ int main(int argc, char *argv[])
>
> igt_subtest_f("%s-%s-rte",
> feature_str(t.feature),
> - pipes_str(t.pipes))
> + pipes_str(t.pipes)) {
> + if (slow)
> + igt_slow_combinatorial();
> rte_subtest(&t);
> + }
> }
> }
>
> @@ -3113,39 +3116,52 @@ int main(int argc, char *argv[])
> screen_str(t.screen),
> plane_str(t.plane),
> fbs_str(t.fbs),
> - igt_draw_get_method_name(t.method))
> + igt_draw_get_method_name(t.method)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> draw_subtest(&t);
> + }
> TEST_MODE_ITER_END
>
> TEST_MODE_ITER_BEGIN(t)
> if (t.plane != PLANE_PRI ||
> - t.screen == SCREEN_OFFSCREEN ||
> - (!opt.show_hidden && t.method != IGT_DRAW_BLT))
> + t.screen == SCREEN_OFFSCREEN)
> continue;
> + if (t.method != IGT_DRAW_BLT)
> + t.slow = true;
>
> igt_subtest_f("%s-%s-%s-%s-flip-%s",
> feature_str(t.feature),
> pipes_str(t.pipes),
> screen_str(t.screen),
> fbs_str(t.fbs),
> - igt_draw_get_method_name(t.method))
> + igt_draw_get_method_name(t.method)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> flip_subtest(&t, FLIP_PAGEFLIP);
> + }
>
> igt_subtest_f("%s-%s-%s-%s-evflip-%s",
> feature_str(t.feature),
> pipes_str(t.pipes),
> screen_str(t.screen),
> fbs_str(t.fbs),
> - igt_draw_get_method_name(t.method))
> + igt_draw_get_method_name(t.method)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> flip_subtest(&t, FLIP_PAGEFLIP_EVENT);
> + }
>
> igt_subtest_f("%s-%s-%s-%s-msflip-%s",
> feature_str(t.feature),
> pipes_str(t.pipes),
> screen_str(t.screen),
> fbs_str(t.fbs),
> - igt_draw_get_method_name(t.method))
> + igt_draw_get_method_name(t.method)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> flip_subtest(&t, FLIP_MODESET);
> + }
>
> TEST_MODE_ITER_END
>
> @@ -3159,8 +3175,11 @@ int main(int argc, char *argv[])
> igt_subtest_f("%s-%s-%s-fliptrack",
> feature_str(t.feature),
> pipes_str(t.pipes),
> - fbs_str(t.fbs))
> + fbs_str(t.fbs)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> fliptrack_subtest(&t, FLIP_PAGEFLIP);
> + }
> TEST_MODE_ITER_END
>
> TEST_MODE_ITER_BEGIN(t)
> @@ -3174,16 +3193,22 @@ int main(int argc, char *argv[])
> pipes_str(t.pipes),
> screen_str(t.screen),
> plane_str(t.plane),
> - fbs_str(t.fbs))
> + fbs_str(t.fbs)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> move_subtest(&t);
> + }
>
> igt_subtest_f("%s-%s-%s-%s-%s-onoff",
> feature_str(t.feature),
> pipes_str(t.pipes),
> screen_str(t.screen),
> plane_str(t.plane),
> - fbs_str(t.fbs))
> + fbs_str(t.fbs)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> onoff_subtest(&t);
> + }
> TEST_MODE_ITER_END
>
> TEST_MODE_ITER_BEGIN(t)
> @@ -3197,23 +3222,30 @@ int main(int argc, char *argv[])
> pipes_str(t.pipes),
> screen_str(t.screen),
> plane_str(t.plane),
> - fbs_str(t.fbs))
> + fbs_str(t.fbs)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> fullscreen_plane_subtest(&t);
> + }
> TEST_MODE_ITER_END
>
> TEST_MODE_ITER_BEGIN(t)
> if (t.screen != SCREEN_PRIM ||
> - t.method != IGT_DRAW_BLT ||
> - (!opt.show_hidden && t.plane != PLANE_PRI) ||
> - (!opt.show_hidden && t.fbs != FBS_INDIVIDUAL))
> + t.method != IGT_DRAW_BLT)
> continue;
> + if (t.plane != PLANE_PRI ||
> + t.fbs != FBS_INDIVIDUAL)
> + t.slow = true;
>
> igt_subtest_f("%s-%s-%s-%s-multidraw",
> feature_str(t.feature),
> pipes_str(t.pipes),
> plane_str(t.plane),
> - fbs_str(t.fbs))
> + fbs_str(t.fbs)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> multidraw_subtest(&t);
> + }
> TEST_MODE_ITER_END
>
> TEST_MODE_ITER_BEGIN(t)
> @@ -3224,8 +3256,11 @@ int main(int argc, char *argv[])
> t.method != IGT_DRAW_MMAP_GTT)
> continue;
>
> - igt_subtest_f("%s-farfromfence", feature_str(t.feature))
> + igt_subtest_f("%s-farfromfence", feature_str(t.feature)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> farfromfence_subtest(&t);
> + }
> TEST_MODE_ITER_END
>
> TEST_MODE_ITER_BEGIN(t)
> @@ -3243,8 +3278,11 @@ int main(int argc, char *argv[])
> igt_subtest_f("%s-%s-draw-%s",
> feature_str(t.feature),
> format_str(t.format),
> - igt_draw_get_method_name(t.method))
> + igt_draw_get_method_name(t.method)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> format_draw_subtest(&t);
> + }
> }
> TEST_MODE_ITER_END
>
> @@ -3256,8 +3294,11 @@ int main(int argc, char *argv[])
> continue;
> igt_subtest_f("%s-%s-scaledprimary",
> feature_str(t.feature),
> - fbs_str(t.fbs))
> + fbs_str(t.fbs)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> scaledprimary_subtest(&t);
> + }
> TEST_MODE_ITER_END
>
> TEST_MODE_ITER_BEGIN(t)
> @@ -3268,19 +3309,31 @@ int main(int argc, char *argv[])
> t.method != IGT_DRAW_MMAP_CPU)
> continue;
>
> - igt_subtest_f("%s-modesetfrombusy", feature_str(t.feature))
> + igt_subtest_f("%s-modesetfrombusy", feature_str(t.feature)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> modesetfrombusy_subtest(&t);
> + }
>
> if (t.feature & FEATURE_FBC)
> - igt_subtest_f("%s-badstride", feature_str(t.feature))
> + igt_subtest_f("%s-badstride", feature_str(t.feature)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> badstride_subtest(&t);
> + }
>
> if (t.feature & FEATURE_PSR)
> - igt_subtest_f("%s-slowdraw", feature_str(t.feature))
> + igt_subtest_f("%s-slowdraw", feature_str(t.feature)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> slow_draw_subtest(&t);
> + }
>
> - igt_subtest_f("%s-suspend", feature_str(t.feature))
> + igt_subtest_f("%s-suspend", feature_str(t.feature)) {
> + if (t.slow)
> + igt_slow_combinatorial();
> suspend_subtest(&t);
> + }
> TEST_MODE_ITER_END
>
> igt_fixture
> --
> 2.6.1
>
* Re: [PATCH i-g-t 0/3] Unify slow/combinatorial test handling
2015-10-23 12:47 ` Daniel Vetter
@ 2015-10-26 13:55 ` David Weinehall
0 siblings, 0 replies; 41+ messages in thread
From: David Weinehall @ 2015-10-26 13:55 UTC (permalink / raw)
To: Daniel Vetter; +Cc: intel-gfx
On Fri, Oct 23, 2015 at 02:47:57PM +0200, Daniel Vetter wrote:
> On Fri, Oct 23, 2015 at 12:58:45PM +0100, Chris Wilson wrote:
> > On Fri, Oct 23, 2015 at 02:42:33PM +0300, David Weinehall wrote:
> > > Until now we've had no unified way to handle slow/combinatorial tests.
> > > Most of the time we don't want to run slow/combinatorial tests, so this
> > > should remain the default, but when we do want to run such tests,
> > > it has been handled differently in different tests.
> > >
> > > This patch adds a --with-slow-combinatorial command line option to
> > > igt_core, changes gem_concurrent_blit and kms_frontbuffer_tracking
> > > to use this instead of their own methods, and removes gem_concurrent_all
> > > in the process, since it's now unnecessary.
> >
> > I'm not going to remember the --with-slow-combinatorial option. How
> > about just --all, or --slow?
>
> Yeah, --all as a shorthand sounds good to me.
OK, will rename it to '--all'.
Kind regards, David
* Re: [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests
2015-10-23 13:50 ` Paulo Zanoni
@ 2015-10-26 14:59 ` David Weinehall
2015-10-26 16:44 ` Paulo Zanoni
0 siblings, 1 reply; 41+ messages in thread
From: David Weinehall @ 2015-10-26 14:59 UTC (permalink / raw)
To: Paulo Zanoni; +Cc: Intel Graphics Development
On Fri, Oct 23, 2015 at 11:50:46AM -0200, Paulo Zanoni wrote:
[snip]
> It's not clear to me, please clarify: now the tests that were
> previously completely hidden will be listed in --list-subtests and
> will be shown as skipped during normal runs?
Yes. Daniel and I discussed this and he thought listing all test
cases, even the slow ones, would not be an issue, since QA should
be running the default set, not the full list
(and for that matter, shouldn't QA know what they are doing too? :P).
> For kms_frontbuffer_tracking, hidden tests are supposed to be just for
> developers who know what they are doing. I hide them behind a special
> command-line switch that's not used by QA because I don't want QA
> wasting time running those tests. One third of the
> kms_frontbuffer_tracking hidden tests only serve the purpose of
> checking whether there's a bug in kms_frontbuffer_track itself or not.
> For some other hidden tests, they are there just to help better debug
> in case some other non-hidden tests fail. Some other hidden tests are
> 100% useless and superfluous.
Shouldn't 100% useless and superfluous tests be excised completely?
> QA should only run the non-hidden tests.
Which is the default behaviour, AFAICT.
> So if some non-hidden test fails, the developers can use the hidden
> tests to help debugging.
>
> Besides, the "if (t.slow)" could have been moved to
> check_test_requirements(), making the code much simpler :)
Thanks for the suggestion. Will modify the code accordingly.
That change does indeed simplify things quite a bit!
Kind regards, David
* Re: [PATCH i-g-t 1/3] Rename gem_concurren_all over gem_concurrent_blit
2015-10-23 14:32 ` Thomas Wood
@ 2015-10-26 15:03 ` David Weinehall
0 siblings, 0 replies; 41+ messages in thread
From: David Weinehall @ 2015-10-26 15:03 UTC (permalink / raw)
To: Thomas Wood; +Cc: Intel Graphics Development
On Fri, Oct 23, 2015 at 03:32:08PM +0100, Thomas Wood wrote:
> gem_concurrent_all is misspelled in the subject.
>
> On 23 October 2015 at 12:42, David Weinehall
> <david.weinehall@linux.intel.com> wrote:
> > We'll both rename gem_concurrent_all over gem_concurrent_blit
> > and change gem_concurrent_blit in this changeset. To make
> > this easier to follow we first do the rename.
>
> Please add a Signed-off-by line to your patches as intel-gpu-tools
> requires contributions to follow the developer's certificate of origin
> (http://developercertificate.org/).
Oh, of course.
> > ---
> > tests/gem_concurrent_blit.c | 1116 ++++++++++++++++++++++++++++++++++++++++++-
> > 1 file changed, 1108 insertions(+), 8 deletions(-)
>
> This appears only to be adding gem_concurrent_blit, not renaming
> gem_concurrent_all. Also, the relevant changes to .gitignore are
> missing from this patch and the third patch in this series.
Only copying it over gem_concurrent_blit without removing
gem_concurrent_all simultaneously is intentional;
that way the patches can be bisected without things missing.
At least that's the theory.
I'll amend the commit message a bit to make that clearer.
Regards, David
* Re: [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests
2015-10-23 14:55 ` Thomas Wood
@ 2015-10-26 15:28 ` David Weinehall
2015-10-26 16:28 ` Thomas Wood
2015-10-26 18:15 ` Paulo Zanoni
1 sibling, 1 reply; 41+ messages in thread
From: David Weinehall @ 2015-10-26 15:28 UTC (permalink / raw)
To: Thomas Wood; +Cc: Intel Graphics Development
On Fri, Oct 23, 2015 at 03:55:23PM +0100, Thomas Wood wrote:
> On 23 October 2015 at 12:42, David Weinehall
> <david.weinehall@linux.intel.com> wrote:
> > Some tests should not be run by default, due to their slow,
> > and sometimes superfluous, nature.
> >
> > We still want to be able to run these tests though in some cases.
> > Until now there's been no unified way of handling this. Remedy
> > this by introducing the --with-slow-combinatorial option to
> > igt_core, and use it in gem_concurrent_blit & kms_frontbuffer_tracking.
> > ---
> > lib/igt_core.c | 19 ++++++
> > lib/igt_core.h | 1 +
> > tests/gem_concurrent_blit.c | 40 ++++++++----
> > tests/kms_frontbuffer_tracking.c | 135 +++++++++++++++++++++++++++------------
> > 4 files changed, 142 insertions(+), 53 deletions(-)
> >
> > diff --git a/lib/igt_core.c b/lib/igt_core.c
> > index 59127cafe606..ba40ce0e0ead 100644
> > --- a/lib/igt_core.c
> > +++ b/lib/igt_core.c
> > @@ -216,6 +216,7 @@ const char *igt_interactive_debug;
> >
> > /* subtests helpers */
> > static bool list_subtests = false;
> > +static bool with_slow_combinatorial = false;
> > static char *run_single_subtest = NULL;
> > static bool run_single_subtest_found = false;
> > static const char *in_subtest = NULL;
> > @@ -235,6 +236,7 @@ bool test_child;
> >
> > enum {
> > OPT_LIST_SUBTESTS,
> > + OPT_WITH_SLOW_COMBINATORIAL,
> > OPT_RUN_SUBTEST,
> > OPT_DESCRIPTION,
> > OPT_DEBUG,
> > @@ -478,6 +480,7 @@ static void print_usage(const char *help_str, bool output_on_stderr)
> >
> > fprintf(f, "Usage: %s [OPTIONS]\n", command_str);
> > fprintf(f, " --list-subtests\n"
> > + " --with-slow-combinatorial\n"
> > " --run-subtest <pattern>\n"
> > " --debug[=log-domain]\n"
> > " --interactive-debug[=domain]\n"
> > @@ -510,6 +513,7 @@ static int common_init(int *argc, char **argv,
> > int c, option_index = 0, i, x;
> > static struct option long_options[] = {
> > {"list-subtests", 0, 0, OPT_LIST_SUBTESTS},
> > + {"with-slow-combinatorial", 0, 0, OPT_WITH_SLOW_COMBINATORIAL},
> > {"run-subtest", 1, 0, OPT_RUN_SUBTEST},
> > {"help-description", 0, 0, OPT_DESCRIPTION},
> > {"debug", optional_argument, 0, OPT_DEBUG},
> > @@ -617,6 +621,10 @@ static int common_init(int *argc, char **argv,
> > if (!run_single_subtest)
> > list_subtests = true;
> > break;
> > + case OPT_WITH_SLOW_COMBINATORIAL:
> > + if (!run_single_subtest)
>
> This will cause piglit (and therefore QA) to unconditionally run all
> tests marked as slow, since it runs subtests individually.
Why doesn't piglit run the default set of tests instead?
>
> > + with_slow_combinatorial = true;
> > + break;
> > case OPT_RUN_SUBTEST:
> > if (!list_subtests)
> > run_single_subtest = strdup(optarg);
> > @@ -1629,6 +1637,17 @@ void igt_skip_on_simulation(void)
> > igt_require(!igt_run_in_simulation());
> > }
> >
> > +/**
> > + * igt_slow_combinatorial:
> > + *
> > + * This is used to define subtests that should only be listed/run
> > + * when the "--with-slow-combinatorial" has been specified
>
> This isn't quite correct, as the subtests that use
> igt_slow_combinatorial will still always be listed.
Yeah, I agree that the comment is incorrect; it should say "be run",
or alternatively the code altered to not list them unless "--all"
is passed.
> > + */
> > +void igt_slow_combinatorial(void)
> > +{
> > + igt_skip_on(!with_slow_combinatorial);
>
> Although it is convenient to just skip the tests when the
> --with-slow-combinatorial flag is passed, it may be useful to be able
> to classify the subtests before they are run, so that they are
> filtered out from the test list entirely. An approach that can do this
> might also be used to mark tests as being part of the basic acceptance
> tests, so that they can be marked as such without relying on the
> naming convention.
If the list is how piglit gets its list of tests, doing classification
won't be feasible, since only "testname" or "testname (SKIP)" are
valid, TTBOMK.
* Re: [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests
2015-10-26 15:28 ` David Weinehall
@ 2015-10-26 16:28 ` Thomas Wood
2015-10-26 17:34 ` David Weinehall
0 siblings, 1 reply; 41+ messages in thread
From: Thomas Wood @ 2015-10-26 16:28 UTC (permalink / raw)
To: Thomas Wood, Intel Graphics Development
On 26 October 2015 at 15:28, David Weinehall
<david.weinehall@linux.intel.com> wrote:
> On Fri, Oct 23, 2015 at 03:55:23PM +0100, Thomas Wood wrote:
>> On 23 October 2015 at 12:42, David Weinehall
>> <david.weinehall@linux.intel.com> wrote:
>> > Some tests should not be run by default, due to their slow,
>> > and sometimes superfluous, nature.
>> >
>> > We still want to be able to run these tests though in some cases.
>> > Until now there's been no unified way of handling this. Remedy
>> > this by introducing the --with-slow-combinatorial option to
>> > igt_core, and use it in gem_concurrent_blit & kms_frontbuffer_tracking.
>> > ---
>> > lib/igt_core.c | 19 ++++++
>> > lib/igt_core.h | 1 +
>> > tests/gem_concurrent_blit.c | 40 ++++++++----
>> > tests/kms_frontbuffer_tracking.c | 135 +++++++++++++++++++++++++++------------
>> > 4 files changed, 142 insertions(+), 53 deletions(-)
>> >
>> > diff --git a/lib/igt_core.c b/lib/igt_core.c
>> > index 59127cafe606..ba40ce0e0ead 100644
>> > --- a/lib/igt_core.c
>> > +++ b/lib/igt_core.c
>> > @@ -216,6 +216,7 @@ const char *igt_interactive_debug;
>> >
>> > /* subtests helpers */
>> > static bool list_subtests = false;
>> > +static bool with_slow_combinatorial = false;
>> > static char *run_single_subtest = NULL;
>> > static bool run_single_subtest_found = false;
>> > static const char *in_subtest = NULL;
>> > @@ -235,6 +236,7 @@ bool test_child;
>> >
>> > enum {
>> > OPT_LIST_SUBTESTS,
>> > + OPT_WITH_SLOW_COMBINATORIAL,
>> > OPT_RUN_SUBTEST,
>> > OPT_DESCRIPTION,
>> > OPT_DEBUG,
>> > @@ -478,6 +480,7 @@ static void print_usage(const char *help_str, bool output_on_stderr)
>> >
>> > fprintf(f, "Usage: %s [OPTIONS]\n", command_str);
>> > fprintf(f, " --list-subtests\n"
>> > + " --with-slow-combinatorial\n"
>> > " --run-subtest <pattern>\n"
>> > " --debug[=log-domain]\n"
>> > " --interactive-debug[=domain]\n"
>> > @@ -510,6 +513,7 @@ static int common_init(int *argc, char **argv,
>> > int c, option_index = 0, i, x;
>> > static struct option long_options[] = {
>> > {"list-subtests", 0, 0, OPT_LIST_SUBTESTS},
>> > + {"with-slow-combinatorial", 0, 0, OPT_WITH_SLOW_COMBINATORIAL},
>> > {"run-subtest", 1, 0, OPT_RUN_SUBTEST},
>> > {"help-description", 0, 0, OPT_DESCRIPTION},
>> > {"debug", optional_argument, 0, OPT_DEBUG},
>> > @@ -617,6 +621,10 @@ static int common_init(int *argc, char **argv,
>> > if (!run_single_subtest)
>> > list_subtests = true;
>> > break;
>> > + case OPT_WITH_SLOW_COMBINATORIAL:
>> > + if (!run_single_subtest)
>>
>> This will cause piglit (and therefore QA) to unconditionally run all
>> tests marked as slow, since it runs subtests individually.
>
> Why doesn't piglit run the default set of tests instead?
What is the default set of tests? Each subtest is executed by piglit
using --run-subtest to ensure information can be collected per-subtest
(return code, error messages, dmesg logs, timings, etc.).
>
>>
>> > + with_slow_combinatorial = true;
>> > + break;
>> > case OPT_RUN_SUBTEST:
>> > if (!list_subtests)
>> > run_single_subtest = strdup(optarg);
>> > @@ -1629,6 +1637,17 @@ void igt_skip_on_simulation(void)
>> > igt_require(!igt_run_in_simulation());
>> > }
>> >
>> > +/**
>> > + * igt_slow_combinatorial:
>> > + *
>> > + * This is used to define subtests that should only be listed/run
>> > + * when the "--with-slow-combinatorial" has been specified
>>
>> This isn't quite correct, as the subtests that use
>> igt_slow_combinatorial will still always be listed.
>
> Yeah, I agree that the comment is incorrect; it should say "be run",
> or alternatively the code altered to not list them unless "--all"
> is passed.
>
>> > + */
>> > +void igt_slow_combinatorial(void)
>> > +{
>> > + igt_skip_on(!with_slow_combinatorial);
>>
>> Although it is convenient to just skip the tests when the
>> --with-slow-combinatorial flag is passed, it may be useful to be able
>> to classify the subtests before they are run, so that they are
>> filtered out from the test list entirely. An approach that can do this
>> might also be used to mark tests as being part of the basic acceptance
>> tests, so that they can be marked as such without relying on the
>> naming convention.
>
> If the list is how piglit gets its list of tests, doing classification
> won't be feasible, since only "testname" or "testname (SKIP)" are
> valid, TTBOMK.
Test and subtest names for i-g-t are collected and parsed in
piglit/tests/igt.py, which could always be updated include
classification parsing.
* Re: [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests
2015-10-26 14:59 ` David Weinehall
@ 2015-10-26 16:44 ` Paulo Zanoni
2015-10-26 17:30 ` David Weinehall
0 siblings, 1 reply; 41+ messages in thread
From: Paulo Zanoni @ 2015-10-26 16:44 UTC (permalink / raw)
To: Paulo Zanoni, Intel Graphics Development
2015-10-26 12:59 GMT-02:00 David Weinehall <david.weinehall@linux.intel.com>:
> On Fri, Oct 23, 2015 at 11:50:46AM -0200, Paulo Zanoni wrote:
>
> [snip]
>
>> It's not clear to me, please clarify: now the tests that were
>> previously completely hidden will be listed in --list-subtests and
>> will be shown as skipped during normal runs?
>
> Yes. Daniel and I discussed this and he thought listing all test
> cases, even the slow ones, would not be an issue, since QA should
> be running the default set not the full list
> (and for that matter, shouldn't QA know what they are doing too? :P).
If that's the case, I really think your patch should not touch
kms_frontbuffer_tracking.c. The hidden subtests should not appear on
the list. People shouldn't even have to ask themselves why they are
getting 800 skips from a single testcase. Those are only for debugging
purposes.
>
>> For kms_frontbuffer_tracking, hidden tests are supposed to be just for
>> developers who know what they are doing. I hide them behind a special
>> command-line switch that's not used by QA because I don't want QA
>> wasting time running those tests. One third of the
>> kms_frontbuffer_tracking hidden tests only serve the purpose of
>> checking whether there's a bug in kms_frontbuffer_tracking itself or not.
>> For some other hidden tests, they are there just to help better debug
>> in case some other non-hidden tests fail. Some other hidden tests are
>> 100% useless and superfluous.
>
> Shouldn't 100% useless and superfluous tests be excised completely?
The change would be from "if (case && hidden) continue;" to "if (case)
continue;". But that's not the focus. There are still tests that are
useful for debugging but useless for QA.
>
>> QA should only run the non-hidden tests.
>
> Which is the default behaviour, AFAICT.
Then why do you want to expose those tests that you're not even
planning to run?? You're kinda implying that QA - or someone else -
will run those tests at some point, and I say that, for
kms_frontbuffer_tracking, that's a waste of time. Maybe this is the
case for the other tests you're touching, but not here.
>
>> So if some non-hidden test fails, the developers can use the hidden
>> tests to help debugging.
>>
>> Besides, the "if (t.slow)" could have been moved to
>> check_test_requirements(), making the code much simpler :)
>
> Thanks for the suggestion. Will modify the code accordingly.
> That change does indeed simplify things quite a bit!
>
>
> Kind regards, David
--
Paulo Zanoni
* Re: [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests
2015-10-26 16:44 ` Paulo Zanoni
@ 2015-10-26 17:30 ` David Weinehall
2015-10-26 17:59 ` Paulo Zanoni
0 siblings, 1 reply; 41+ messages in thread
From: David Weinehall @ 2015-10-26 17:30 UTC (permalink / raw)
To: Paulo Zanoni; +Cc: Intel Graphics Development
On Mon, Oct 26, 2015 at 02:44:18PM -0200, Paulo Zanoni wrote:
> 2015-10-26 12:59 GMT-02:00 David Weinehall <david.weinehall@linux.intel.com>:
> > On Fri, Oct 23, 2015 at 11:50:46AM -0200, Paulo Zanoni wrote:
> >
> > [snip]
> >
> >> It's not clear to me, please clarify: now the tests that were
> >> previously completely hidden will be listed in --list-subtests and
> >> will be shown as skipped during normal runs?
> >
> > Yes. Daniel and I discussed this and he thought listing all test
> > cases, even the slow ones, would not be an issue, since QA should
> > be running the default set not the full list
> > (and for that matter, shouldn't QA know what they are doing too? :P).
>
> If that's the case, I really think your patch should not touch
> kms_frontbuffer_tracking.c. The hidden subtests should not appear on
> the list. People shouldn't even have to ask themselves why they are
> getting 800 skips from a single testcase. Those are only for debugging
> purposes.
Fair enough. I'll try to come up with a reasonable way to exclude them
from the list in a generic manner. Because that's the whole point of
this exercise -- to standardise this rather than have every test case
implement its own method of choosing whether or not to run all tests.
> >
> >> For kms_frontbuffer_tracking, hidden tests are supposed to be just for
> >> developers who know what they are doing. I hide them behind a special
> >> command-line switch that's not used by QA because I don't want QA
> >> wasting time running those tests. One third of the
> >> kms_frontbuffer_tracking hidden tests only serve the purpose of
> >> checking whether there's a bug in kms_frontbuffer_tracking itself or not.
> >> For some other hidden tests, they are there just to help better debug
> >> in case some other non-hidden tests fail. Some other hidden tests are
> >> 100% useless and superfluous.
> >
> > Shouldn't 100% useless and superfluous tests be excised completely?
>
> The change would be from "if (case && hidden) continue;" to "if (case)
> continue;". But that's not the focus. There are still tests that are
> useful for debugging but useless for QA.
It's not the focus of my change, no. But if there are tests that are
useless and/or superfluous, then they should be dropped. Note that
I'm not suggesting that all non-default tests be dropped, just that
if there indeed are tests that don't make sense, they shouldn't be
in the test case in the first place.
> >
> >> QA should only run the non-hidden tests.
> >
> > Which is the default behaviour, AFAICT.
>
> Then why do you want to expose those tests that you're not even
> planning to run??
To allow developers to see the options they have?
> You're kinda implying that QA - or someone else -
> will run those tests at some point, and I say that, for
> kms_frontbuffer_tracking, that's a waste of time. Maybe this is the
> case for the other tests you're touching, but not here.
No, I'm not implying that -- you're putting those words in my mouth.
Anyway, the choice to expose all cases, not just those run without
specifying --all, was a suggestion by Daniel -- you'll have to prod him
to hear what his reasoning was.
Regards, David
* Re: [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests
2015-10-26 16:28 ` Thomas Wood
@ 2015-10-26 17:34 ` David Weinehall
0 siblings, 0 replies; 41+ messages in thread
From: David Weinehall @ 2015-10-26 17:34 UTC (permalink / raw)
To: Thomas Wood; +Cc: Intel Graphics Development
On Mon, Oct 26, 2015 at 04:28:15PM +0000, Thomas Wood wrote:
> On 26 October 2015 at 15:28, David Weinehall
> <david.weinehall@linux.intel.com> wrote:
> > On Fri, Oct 23, 2015 at 03:55:23PM +0100, Thomas Wood wrote:
> >> On 23 October 2015 at 12:42, David Weinehall
> >> <david.weinehall@linux.intel.com> wrote:
> >> > Some tests should not be run by default, due to their slow,
> >> > and sometimes superfluous, nature.
> >> >
> >> > We still want to be able to run these tests though in some cases.
> >> > Until now there's been no unified way of handling this. Remedy
> >> > this by introducing the --with-slow-combinatorial option to
> >> > igt_core, and use it in gem_concurrent_blit & kms_frontbuffer_tracking.
> >> > ---
> >> > lib/igt_core.c | 19 ++++++
> >> > lib/igt_core.h | 1 +
> >> > tests/gem_concurrent_blit.c | 40 ++++++++----
> >> > tests/kms_frontbuffer_tracking.c | 135 +++++++++++++++++++++++++++------------
> >> > 4 files changed, 142 insertions(+), 53 deletions(-)
> >> >
> >> > diff --git a/lib/igt_core.c b/lib/igt_core.c
> >> > index 59127cafe606..ba40ce0e0ead 100644
> >> > --- a/lib/igt_core.c
> >> > +++ b/lib/igt_core.c
> >> > @@ -216,6 +216,7 @@ const char *igt_interactive_debug;
> >> >
> >> > /* subtests helpers */
> >> > static bool list_subtests = false;
> >> > +static bool with_slow_combinatorial = false;
> >> > static char *run_single_subtest = NULL;
> >> > static bool run_single_subtest_found = false;
> >> > static const char *in_subtest = NULL;
> >> > @@ -235,6 +236,7 @@ bool test_child;
> >> >
> >> > enum {
> >> > OPT_LIST_SUBTESTS,
> >> > + OPT_WITH_SLOW_COMBINATORIAL,
> >> > OPT_RUN_SUBTEST,
> >> > OPT_DESCRIPTION,
> >> > OPT_DEBUG,
> >> > @@ -478,6 +480,7 @@ static void print_usage(const char *help_str, bool output_on_stderr)
> >> >
> >> > fprintf(f, "Usage: %s [OPTIONS]\n", command_str);
> >> > fprintf(f, " --list-subtests\n"
> >> > + " --with-slow-combinatorial\n"
> >> > " --run-subtest <pattern>\n"
> >> > " --debug[=log-domain]\n"
> >> > " --interactive-debug[=domain]\n"
> >> > @@ -510,6 +513,7 @@ static int common_init(int *argc, char **argv,
> >> > int c, option_index = 0, i, x;
> >> > static struct option long_options[] = {
> >> > {"list-subtests", 0, 0, OPT_LIST_SUBTESTS},
> >> > + {"with-slow-combinatorial", 0, 0, OPT_WITH_SLOW_COMBINATORIAL},
> >> > {"run-subtest", 1, 0, OPT_RUN_SUBTEST},
> >> > {"help-description", 0, 0, OPT_DESCRIPTION},
> >> > {"debug", optional_argument, 0, OPT_DEBUG},
> >> > @@ -617,6 +621,10 @@ static int common_init(int *argc, char **argv,
> >> > if (!run_single_subtest)
> >> > list_subtests = true;
> >> > break;
> >> > + case OPT_WITH_SLOW_COMBINATORIAL:
> >> > + if (!run_single_subtest)
> >>
> >> This will cause piglit (and therefore QA) to unconditionally run all
> >> tests marked as slow, since it runs subtests individually.
> >
> > Why doesn't piglit run the default set of tests instead?
>
> What is the default set of tests? Each subtest is executed by piglit
> using --run-subtest to ensure information can be collected per-subtest
> (return code, error messages, dmesg logs, timings, etc.).
The default set would be the tests that are run when running
gem_concurrent_blit or kms_frontbuffer_tracking without specifying
--all.
> >
> >>
> >> > + with_slow_combinatorial = true;
> >> > + break;
> >> > case OPT_RUN_SUBTEST:
> >> > if (!list_subtests)
> >> > run_single_subtest = strdup(optarg);
> >> > @@ -1629,6 +1637,17 @@ void igt_skip_on_simulation(void)
> >> > igt_require(!igt_run_in_simulation());
> >> > }
> >> >
> >> > +/**
> >> > + * igt_slow_combinatorial:
> >> > + *
> >> > + * This is used to define subtests that should only be listed/run
> >> > + * when the "--with-slow-combinatorial" has been specified
> >>
> >> This isn't quite correct, as the subtests that use
> >> igt_slow_combinatorial will still always be listed.
> >
> > Yeah, I agree that the comment is incorrect; it should say "be run",
> > or alternatively the code altered to not list them unless "--all"
> > is passed.
> >
> >> > + */
> >> > +void igt_slow_combinatorial(void)
> >> > +{
> >> > + igt_skip_on(!with_slow_combinatorial);
> >>
> >> Although it is convenient to just skip the tests when the
> >> --with-slow-combinatorial flag is passed, it may be useful to be able
> >> to classify the subtests before they are run, so that they are
> >> filtered out from the test list entirely. An approach that can do this
> >> might also be used to mark tests as being part of the basic acceptance
> >> tests, so that they can be marked as such without relying on the
> >> naming convention.
> >
> > If the list is how piglit gets its list of tests, doing classification
> > won't be feasible, since only "testname" or "testname (SKIP)" are
> > valid, TTBOMK.
>
> Test and subtest names for i-g-t are collected and parsed in
> piglit/tests/igt.py, which could always be updated to include
> classification parsing.
It would probably make sense to do this, but considering that I neither
have proper python-fu nor know enough about the various test cases to
classify them properly, I think that's orthogonal to this changeset.
Kind regards, David
* Re: [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests
2015-10-26 17:30 ` David Weinehall
@ 2015-10-26 17:59 ` Paulo Zanoni
2015-10-27 6:47 ` David Weinehall
2015-11-17 15:34 ` Daniel Vetter
0 siblings, 2 replies; 41+ messages in thread
From: Paulo Zanoni @ 2015-10-26 17:59 UTC (permalink / raw)
To: Paulo Zanoni, Intel Graphics Development, Daniel Vetter
2015-10-26 15:30 GMT-02:00 David Weinehall <david.weinehall@linux.intel.com>:
> On Mon, Oct 26, 2015 at 02:44:18PM -0200, Paulo Zanoni wrote:
>> 2015-10-26 12:59 GMT-02:00 David Weinehall <david.weinehall@linux.intel.com>:
>> > On Fri, Oct 23, 2015 at 11:50:46AM -0200, Paulo Zanoni wrote:
>> >
>> > [snip]
>> >
>> >> It's not clear to me, please clarify: now the tests that were
>> >> previously completely hidden will be listed in --list-subtests and
>> >> will be shown as skipped during normal runs?
>> >
>> > Yes. Daniel and I discussed this and he thought listing all test
>> > cases, even the slow ones, would not be an issue, since QA should
>> > be running the default set not the full list
>> > (and for that matter, shouldn't QA know what they are doing too? :P).
>>
>> If that's the case, I really think your patch should not touch
>> kms_frontbuffer_tracking.c. The hidden subtests should not appear on
>> the list. People shouldn't even have to ask themselves why they are
>> getting 800 skips from a single testcase. Those are only for debugging
>> purposes.
>
> Fair enough. I'll try to come up with a reasonable way to exclude them
> from the list in a generic manner. Because that's the whole point of
> this exercise -- to standardise this rather than have every test case
> implement its own method of choosing whether or not to run all tests.
Maybe instead of marking these tests as SKIP we could use some other
flag. That would avoid the confusion between "skipped because some
condition was not met but the test is useful" vs "skipped because
the test is unnecessary".
>
>> >
>> >> For kms_frontbuffer_tracking, hidden tests are supposed to be just for
>> >> developers who know what they are doing. I hide them behind a special
>> >> command-line switch that's not used by QA because I don't want QA
>> >> wasting time running those tests. One third of the
>> >> kms_frontbuffer_tracking hidden tests only serve the purpose of
> >> checking whether there's a bug in kms_frontbuffer_tracking itself or not.
>> >> For some other hidden tests, they are there just to help better debug
>> >> in case some other non-hidden tests fail. Some other hidden tests are
>> >> 100% useless and superfluous.
>> >
>> > Shouldn't 100% useless and superfluous tests be excised completely?
>>
>> The change would be from "if (case && hidden) continue;" to "if (case)
>> continue;". But that's not the focus. There are still tests that are
>> useful for debugging but useless for QA.
>
> It's not the focus of my change, no. But if there are tests that are
> useless and/or superfluous, then they should be dropped.
> Note that
> I'm not suggesting that all non-default tests be dropped, just that
> if there indeed are tests that don't make sense, they shouldn't be
> in the test case in the first place.
>
>> >
>> >> QA should only run the non-hidden tests.
>> >
>> > Which is the default behaviour, AFAICT.
>>
>> Then why do you want to expose those tests that you're not even
>> planning to run??
>
> To allow developers to see the options they have?
>
>> You're kinda implying that QA - or someone else -
>> will run those tests at some point, and I say that, for
>> kms_frontbuffer_tracking, that's a waste of time. Maybe this is the
>> case for the other tests you're touching, but not here.
>
> No, I'm not implying that -- you're putting those words in my mouth.
>
> Anyway, the choice to expose all cases, not just those run without
> specifying --all, was a suggestion by Daniel -- you'll have to prod him
> to hear what his reasoning was.
CC'ing Daniel.
>
>
> Regards, David
--
Paulo Zanoni
* Re: [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests
2015-10-23 14:55 ` Thomas Wood
2015-10-26 15:28 ` David Weinehall
@ 2015-10-26 18:15 ` Paulo Zanoni
1 sibling, 0 replies; 41+ messages in thread
From: Paulo Zanoni @ 2015-10-26 18:15 UTC (permalink / raw)
To: Thomas Wood; +Cc: Intel Graphics Development
2015-10-23 12:55 GMT-02:00 Thomas Wood <thomas.wood@intel.com>:
> On 23 October 2015 at 12:42, David Weinehall
> <david.weinehall@linux.intel.com> wrote:
>> Some tests should not be run by default, due to their slow,
>> and sometimes superfluous, nature.
>>
>> We still want to be able to run these tests though in some cases.
>> Until now there's been no unified way of handling this. Remedy
>> this by introducing the --with-slow-combinatorial option to
>> igt_core, and use it in gem_concurrent_blit & kms_frontbuffer_tracking.
>> ---
>> lib/igt_core.c | 19 ++++++
>> lib/igt_core.h | 1 +
>> tests/gem_concurrent_blit.c | 40 ++++++++----
>> tests/kms_frontbuffer_tracking.c | 135 +++++++++++++++++++++++++++------------
>> 4 files changed, 142 insertions(+), 53 deletions(-)
>>
>> diff --git a/lib/igt_core.c b/lib/igt_core.c
>> index 59127cafe606..ba40ce0e0ead 100644
>> --- a/lib/igt_core.c
>> +++ b/lib/igt_core.c
>> @@ -216,6 +216,7 @@ const char *igt_interactive_debug;
>>
>> /* subtests helpers */
>> static bool list_subtests = false;
>> +static bool with_slow_combinatorial = false;
>> static char *run_single_subtest = NULL;
>> static bool run_single_subtest_found = false;
>> static const char *in_subtest = NULL;
>> @@ -235,6 +236,7 @@ bool test_child;
>>
>> enum {
>> OPT_LIST_SUBTESTS,
>> + OPT_WITH_SLOW_COMBINATORIAL,
>> OPT_RUN_SUBTEST,
>> OPT_DESCRIPTION,
>> OPT_DEBUG,
>> @@ -478,6 +480,7 @@ static void print_usage(const char *help_str, bool output_on_stderr)
>>
>> fprintf(f, "Usage: %s [OPTIONS]\n", command_str);
>> fprintf(f, " --list-subtests\n"
>> + " --with-slow-combinatorial\n"
>> " --run-subtest <pattern>\n"
>> " --debug[=log-domain]\n"
>> " --interactive-debug[=domain]\n"
>> @@ -510,6 +513,7 @@ static int common_init(int *argc, char **argv,
>> int c, option_index = 0, i, x;
>> static struct option long_options[] = {
>> {"list-subtests", 0, 0, OPT_LIST_SUBTESTS},
>> + {"with-slow-combinatorial", 0, 0, OPT_WITH_SLOW_COMBINATORIAL},
>> {"run-subtest", 1, 0, OPT_RUN_SUBTEST},
>> {"help-description", 0, 0, OPT_DESCRIPTION},
>> {"debug", optional_argument, 0, OPT_DEBUG},
>> @@ -617,6 +621,10 @@ static int common_init(int *argc, char **argv,
>> if (!run_single_subtest)
>> list_subtests = true;
>> break;
>> + case OPT_WITH_SLOW_COMBINATORIAL:
>> + if (!run_single_subtest)
>
> This will cause piglit (and therefore QA) to unconditionally run all
> tests marked as slow, since it runs subtests individually.
>
>
>> + with_slow_combinatorial = true;
>> + break;
>> case OPT_RUN_SUBTEST:
>> if (!list_subtests)
>> run_single_subtest = strdup(optarg);
>> @@ -1629,6 +1637,17 @@ void igt_skip_on_simulation(void)
>> igt_require(!igt_run_in_simulation());
>> }
>>
>> +/**
>> + * igt_slow_combinatorial:
>> + *
>> + * This is used to define subtests that should only be listed/run
>> + * when the "--with-slow-combinatorial" has been specified
>
> This isn't quite correct, as the subtests that use
> igt_slow_combinatorial will still always be listed.
>
>> + */
>> +void igt_slow_combinatorial(void)
>> +{
>> + igt_skip_on(!with_slow_combinatorial);
>
> Although it is convenient to just skip the tests when the
> --with-slow-combinatorial flag is passed, it may be useful to be able
> to classify the subtests before they are run, so that they are
> filtered out from the test list entirely.
Maybe we could make --list-subtests not list these subtests unless you
also pass --with-slow-combinatorial? That should help solve the
biggest problem for my scripts.
So we'd have "./test --list-subtests" for normal usage, and "./test
--list-subtests --with-slow-combinatorial" for the full set.
This should make these tests remain 100% transparent to QA (given they
don't use --with-slow-combinatorial). They won't start showing up as
SKIPs on our dashboards.
> An approach that can do this
> might also be used to mark tests as being part of the basic acceptance
> tests, so that they can be marked as such without relying on the
> naming convention.
>
>
>> +}
>> +
>> /* structured logging */
>>
>> /**
>> diff --git a/lib/igt_core.h b/lib/igt_core.h
>> index 5ae09653fd55..6ddf25563275 100644
>> --- a/lib/igt_core.h
>> +++ b/lib/igt_core.h
>> @@ -680,6 +680,7 @@ bool igt_run_in_simulation(void);
>> #define SLOW_QUICK(slow,quick) (igt_run_in_simulation() ? (quick) : (slow))
>>
>> void igt_skip_on_simulation(void);
>> +void igt_slow_combinatorial(void);
>>
>> extern const char *igt_interactive_debug;
>>
>> diff --git a/tests/gem_concurrent_blit.c b/tests/gem_concurrent_blit.c
>> index 1d2d787202df..311b6829e984 100644
>> --- a/tests/gem_concurrent_blit.c
>> +++ b/tests/gem_concurrent_blit.c
>> @@ -931,9 +931,6 @@ run_basic_modes(const struct access_mode *mode,
>> struct buffers buffers;
>>
>> for (h = hangs; h->suffix; h++) {
>> - if (!all && *h->suffix)
>> - continue;
>> -
>> for (p = all ? pipelines : pskip; p->prefix; p++) {
>> igt_fixture {
>> batch = buffers_init(&buffers, mode, fd);
>> @@ -941,6 +938,8 @@ run_basic_modes(const struct access_mode *mode,
>>
>> /* try to overwrite the source values */
>> igt_subtest_f("%s-%s-overwrite-source-one%s%s", mode->name, p->prefix, suffix, h->suffix) {
>> + if (*h->suffix)
>> + igt_slow_combinatorial();
>> h->require();
>> p->require();
>> buffers_create(&buffers, num_buffers);
>> @@ -950,6 +949,8 @@ run_basic_modes(const struct access_mode *mode,
>> }
>>
>> igt_subtest_f("%s-%s-overwrite-source%s%s", mode->name, p->prefix, suffix, h->suffix) {
>> + if (*h->suffix)
>> + igt_slow_combinatorial();
>> h->require();
>> p->require();
>> buffers_create(&buffers, num_buffers);
>> @@ -959,6 +960,8 @@ run_basic_modes(const struct access_mode *mode,
>> }
>>
>> igt_subtest_f("%s-%s-overwrite-source-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
>> + if (*h->suffix)
>> + igt_slow_combinatorial();
>> h->require();
>> p->require();
>> buffers_create(&buffers, num_buffers);
>> @@ -968,6 +971,8 @@ run_basic_modes(const struct access_mode *mode,
>> }
>>
>> igt_subtest_f("%s-%s-overwrite-source-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
>> + if (*h->suffix)
>> + igt_slow_combinatorial();
>> h->require();
>> p->require();
>> igt_require(rendercopy);
>> @@ -978,6 +983,8 @@ run_basic_modes(const struct access_mode *mode,
>> }
>>
>> igt_subtest_f("%s-%s-overwrite-source-rev%s%s", mode->name, p->prefix, suffix, h->suffix) {
>> + if (*h->suffix)
>> + igt_slow_combinatorial();
>> h->require();
>> p->require();
>> buffers_create(&buffers, num_buffers);
>> @@ -988,6 +995,8 @@ run_basic_modes(const struct access_mode *mode,
>>
>> /* try to intermix copies with GPU copies*/
>> igt_subtest_f("%s-%s-intermix-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
>> + if (*h->suffix)
>> + igt_slow_combinatorial();
>> h->require();
>> p->require();
>> igt_require(rendercopy);
>> @@ -997,6 +1006,8 @@ run_basic_modes(const struct access_mode *mode,
>> p->copy, h->hang);
>> }
>> igt_subtest_f("%s-%s-intermix-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
>> + if (*h->suffix)
>> + igt_slow_combinatorial();
>> h->require();
>> p->require();
>> igt_require(rendercopy);
>> @@ -1006,6 +1017,8 @@ run_basic_modes(const struct access_mode *mode,
>> p->copy, h->hang);
>> }
>> igt_subtest_f("%s-%s-intermix-both%s%s", mode->name, p->prefix, suffix, h->suffix) {
>> + if (*h->suffix)
>> + igt_slow_combinatorial();
>> h->require();
>> p->require();
>> igt_require(rendercopy);
>> @@ -1017,6 +1030,8 @@ run_basic_modes(const struct access_mode *mode,
>>
>> /* try to read the results before the copy completes */
>> igt_subtest_f("%s-%s-early-read%s%s", mode->name, p->prefix, suffix, h->suffix) {
>> + if (*h->suffix)
>> + igt_slow_combinatorial();
>> h->require();
>> p->require();
>> buffers_create(&buffers, num_buffers);
>> @@ -1027,6 +1042,8 @@ run_basic_modes(const struct access_mode *mode,
>>
>> /* concurrent reads */
>> igt_subtest_f("%s-%s-read-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
>> + if (*h->suffix)
>> + igt_slow_combinatorial();
>> h->require();
>> p->require();
>> buffers_create(&buffers, num_buffers);
>> @@ -1035,6 +1052,8 @@ run_basic_modes(const struct access_mode *mode,
>> p->copy, h->hang);
>> }
>> igt_subtest_f("%s-%s-read-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
>> + if (*h->suffix)
>> + igt_slow_combinatorial();
>> h->require();
>> p->require();
>> igt_require(rendercopy);
>> @@ -1046,6 +1065,8 @@ run_basic_modes(const struct access_mode *mode,
>>
>> /* and finally try to trick the kernel into loosing the pending write */
>> igt_subtest_f("%s-%s-gpu-read-after-write%s%s", mode->name, p->prefix, suffix, h->suffix) {
>> + if (*h->suffix)
>> + igt_slow_combinatorial();
>> h->require();
>> p->require();
>> buffers_create(&buffers, num_buffers);
>> @@ -1064,13 +1085,11 @@ run_basic_modes(const struct access_mode *mode,
>> static void
>> run_modes(const struct access_mode *mode)
>> {
>> - if (all) {
>> - run_basic_modes(mode, "", run_single);
>> + run_basic_modes(mode, "", run_single);
>>
>> - igt_fork_signal_helper();
>> - run_basic_modes(mode, "-interruptible", run_interruptible);
>> - igt_stop_signal_helper();
>> - }
>> + igt_fork_signal_helper();
>> + run_basic_modes(mode, "-interruptible", run_interruptible);
>> + igt_stop_signal_helper();
>>
>> igt_fork_signal_helper();
>> run_basic_modes(mode, "-forked", run_forked);
>> @@ -1083,9 +1102,6 @@ igt_main
>>
>> igt_skip_on_simulation();
>>
>> - if (strstr(igt_test_name(), "all"))
>> - all = true;
>> -
>> igt_fixture {
>> fd = drm_open_driver(DRIVER_INTEL);
>> devid = intel_get_drm_devid(fd);
>> diff --git a/tests/kms_frontbuffer_tracking.c b/tests/kms_frontbuffer_tracking.c
>> index d97e148c5073..6f84ef0813d9 100644
>> --- a/tests/kms_frontbuffer_tracking.c
>> +++ b/tests/kms_frontbuffer_tracking.c
>> @@ -47,8 +47,8 @@ IGT_TEST_DESCRIPTION("Test the Kernel's frontbuffer tracking mechanism and "
>> * combinations that are somewhat redundant and don't add much value to the
>> * test. For example, since we already do the offscreen testing with a single
>> * pipe enabled, there's no much value in doing it again with dual pipes. If you
>> - * still want to try these redundant tests, you need to use the --show-hidden
>> - * option.
>> + * still want to try these redundant tests, you need to use the
>> + * "--with-slow-combinatorial" option.
>> *
>> * The most important hidden thing is the FEATURE_NONE set of tests. Whenever
>> * you get a failure on any test, it is important to check whether the same test
>> @@ -116,6 +116,10 @@ struct test_mode {
>> } format;
>>
>> enum igt_draw_method method;
>> +
>> + /* The test is slow and/or combinatorial;
>> + * skip unless otherwise specified */
>> + bool slow;
>> };
>>
>> enum flip_type {
>> @@ -237,7 +241,6 @@ struct {
>> bool fbc_check_last_action;
>> bool no_edp;
>> bool small_modes;
>> - bool show_hidden;
>> int step;
>> int only_feature;
>> int only_pipes;
>> @@ -250,7 +253,6 @@ struct {
>> .fbc_check_last_action = true,
>> .no_edp = false,
>> .small_modes = false,
>> - .show_hidden= false,
>> .step = 0,
>> .only_feature = FEATURE_COUNT,
>> .only_pipes = PIPE_COUNT,
>> @@ -2892,9 +2894,6 @@ static int opt_handler(int option, int option_index, void *data)
>> case 'm':
>> opt.small_modes = true;
>> break;
>> - case 'i':
>> - opt.show_hidden = true;
>> - break;
>> case 't':
>> opt.step++;
>> break;
>> @@ -2942,7 +2941,6 @@ const char *help_str =
>> " --no-fbc-action-check Don't check for the FBC last action\n"
>> " --no-edp Don't use eDP monitors\n"
>> " --use-small-modes Use smaller resolutions for the modes\n"
>> -" --show-hidden Show hidden subtests\n"
>> " --step Stop on each step so you can check the screen\n"
>> " --nop-only Only run the \"nop\" feature subtests\n"
>> " --fbc-only Only run the \"fbc\" feature subtests\n"
>> @@ -3036,6 +3034,7 @@ static const char *format_str(enum pixel_format format)
>>
>> #define TEST_MODE_ITER_BEGIN(t) \
>> t.format = FORMAT_DEFAULT; \
>> + t.slow = false; \
>> for (t.feature = 0; t.feature < FEATURE_COUNT; t.feature++) { \
>> for (t.pipes = 0; t.pipes < PIPE_COUNT; t.pipes++) { \
>> for (t.screen = 0; t.screen < SCREEN_COUNT; t.screen++) { \
>> @@ -3046,15 +3045,15 @@ static const char *format_str(enum pixel_format format)
>> continue; \
>> if (t.screen == SCREEN_OFFSCREEN && t.plane != PLANE_PRI) \
>> continue; \
>> - if (!opt.show_hidden && t.pipes == PIPE_DUAL && \
>> + if (t.pipes == PIPE_DUAL && \
>> t.screen == SCREEN_OFFSCREEN) \
>> - continue; \
>> - if ((!opt.show_hidden && opt.only_feature != FEATURE_NONE) \
>> - && t.feature == FEATURE_NONE) \
>> - continue; \
>> - if (!opt.show_hidden && t.fbs == FBS_SHARED && \
>> + t.slow = true; \
>> + if (opt.only_feature != FEATURE_NONE && \
>> + t.feature == FEATURE_NONE) \
>> + t.slow = true; \
>> + if (t.fbs == FBS_SHARED && \
>> (t.plane == PLANE_CUR || t.plane == PLANE_SPR)) \
>> - continue;
>> + t.slow = true;
>>
>>
>> #define TEST_MODE_ITER_END } } } } } }
>> @@ -3069,7 +3068,6 @@ int main(int argc, char *argv[])
>> { "no-fbc-action-check", 0, 0, 'a'},
>> { "no-edp", 0, 0, 'e'},
>> { "use-small-modes", 0, 0, 'm'},
>> - { "show-hidden", 0, 0, 'i'},
>> { "step", 0, 0, 't'},
>> { "nop-only", 0, 0, 'n'},
>> { "fbc-only", 0, 0, 'f'},
>> @@ -3088,9 +3086,11 @@ int main(int argc, char *argv[])
>> setup_environment();
>>
>> for (t.feature = 0; t.feature < FEATURE_COUNT; t.feature++) {
>> - if ((!opt.show_hidden && opt.only_feature != FEATURE_NONE)
>> - && t.feature == FEATURE_NONE)
>> - continue;
>> + bool slow = false;
>> +
>> + if (opt.only_feature != FEATURE_NONE &&
>> + t.feature == FEATURE_NONE)
>> + slow = true;
>> for (t.pipes = 0; t.pipes < PIPE_COUNT; t.pipes++) {
>> t.screen = SCREEN_PRIM;
>> t.plane = PLANE_PRI;
>> @@ -3101,8 +3101,11 @@ int main(int argc, char *argv[])
>>
>> igt_subtest_f("%s-%s-rte",
>> feature_str(t.feature),
>> - pipes_str(t.pipes))
>> + pipes_str(t.pipes)) {
>> + if (slow)
>> + igt_slow_combinatorial();
>> rte_subtest(&t);
>> + }
>> }
>> }
>>
>> @@ -3113,39 +3116,52 @@ int main(int argc, char *argv[])
>> screen_str(t.screen),
>> plane_str(t.plane),
>> fbs_str(t.fbs),
>> - igt_draw_get_method_name(t.method))
>> + igt_draw_get_method_name(t.method)) {
>> + if (t.slow)
>> + igt_slow_combinatorial();
>> draw_subtest(&t);
>> + }
>> TEST_MODE_ITER_END
>>
>> TEST_MODE_ITER_BEGIN(t)
>> if (t.plane != PLANE_PRI ||
>> - t.screen == SCREEN_OFFSCREEN ||
>> - (!opt.show_hidden && t.method != IGT_DRAW_BLT))
>> + t.screen == SCREEN_OFFSCREEN)
>> continue;
>> + if (t.method != IGT_DRAW_BLT)
>> + t.slow = true;
>>
>> igt_subtest_f("%s-%s-%s-%s-flip-%s",
>> feature_str(t.feature),
>> pipes_str(t.pipes),
>> screen_str(t.screen),
>> fbs_str(t.fbs),
>> - igt_draw_get_method_name(t.method))
>> + igt_draw_get_method_name(t.method)) {
>> + if (t.slow)
>> + igt_slow_combinatorial();
>> flip_subtest(&t, FLIP_PAGEFLIP);
>> + }
>>
>> igt_subtest_f("%s-%s-%s-%s-evflip-%s",
>> feature_str(t.feature),
>> pipes_str(t.pipes),
>> screen_str(t.screen),
>> fbs_str(t.fbs),
>> - igt_draw_get_method_name(t.method))
>> + igt_draw_get_method_name(t.method)) {
>> + if (t.slow)
>> + igt_slow_combinatorial();
>> flip_subtest(&t, FLIP_PAGEFLIP_EVENT);
>> + }
>>
>> igt_subtest_f("%s-%s-%s-%s-msflip-%s",
>> feature_str(t.feature),
>> pipes_str(t.pipes),
>> screen_str(t.screen),
>> fbs_str(t.fbs),
>> - igt_draw_get_method_name(t.method))
>> + igt_draw_get_method_name(t.method)) {
>> + if (t.slow)
>> + igt_slow_combinatorial();
>> flip_subtest(&t, FLIP_MODESET);
>> + }
>>
>> TEST_MODE_ITER_END
>>
>> @@ -3159,8 +3175,11 @@ int main(int argc, char *argv[])
>> igt_subtest_f("%s-%s-%s-fliptrack",
>> feature_str(t.feature),
>> pipes_str(t.pipes),
>> - fbs_str(t.fbs))
>> + fbs_str(t.fbs)) {
>> + if (t.slow)
>> + igt_slow_combinatorial();
>> fliptrack_subtest(&t, FLIP_PAGEFLIP);
>> + }
>> TEST_MODE_ITER_END
>>
>> TEST_MODE_ITER_BEGIN(t)
>> @@ -3174,16 +3193,22 @@ int main(int argc, char *argv[])
>> pipes_str(t.pipes),
>> screen_str(t.screen),
>> plane_str(t.plane),
>> - fbs_str(t.fbs))
>> + fbs_str(t.fbs)) {
>> + if (t.slow)
>> + igt_slow_combinatorial();
>> move_subtest(&t);
>> + }
>>
>> igt_subtest_f("%s-%s-%s-%s-%s-onoff",
>> feature_str(t.feature),
>> pipes_str(t.pipes),
>> screen_str(t.screen),
>> plane_str(t.plane),
>> - fbs_str(t.fbs))
>> + fbs_str(t.fbs)) {
>> + if (t.slow)
>> + igt_slow_combinatorial();
>> onoff_subtest(&t);
>> + }
>> TEST_MODE_ITER_END
>>
>> TEST_MODE_ITER_BEGIN(t)
>> @@ -3197,23 +3222,30 @@ int main(int argc, char *argv[])
>> pipes_str(t.pipes),
>> screen_str(t.screen),
>> plane_str(t.plane),
>> - fbs_str(t.fbs))
>> + fbs_str(t.fbs)) {
>> + if (t.slow)
>> + igt_slow_combinatorial();
>> fullscreen_plane_subtest(&t);
>> + }
>> TEST_MODE_ITER_END
>>
>> TEST_MODE_ITER_BEGIN(t)
>> if (t.screen != SCREEN_PRIM ||
>> - t.method != IGT_DRAW_BLT ||
>> - (!opt.show_hidden && t.plane != PLANE_PRI) ||
>> - (!opt.show_hidden && t.fbs != FBS_INDIVIDUAL))
>> + t.method != IGT_DRAW_BLT)
>> continue;
>> + if (t.plane != PLANE_PRI ||
>> + t.fbs != FBS_INDIVIDUAL)
>> + t.slow = true;
>>
>> igt_subtest_f("%s-%s-%s-%s-multidraw",
>> feature_str(t.feature),
>> pipes_str(t.pipes),
>> plane_str(t.plane),
>> - fbs_str(t.fbs))
>> + fbs_str(t.fbs)) {
>> + if (t.slow)
>> + igt_slow_combinatorial();
>> multidraw_subtest(&t);
>> + }
>> TEST_MODE_ITER_END
>>
>> TEST_MODE_ITER_BEGIN(t)
>> @@ -3224,8 +3256,11 @@ int main(int argc, char *argv[])
>> t.method != IGT_DRAW_MMAP_GTT)
>> continue;
>>
>> - igt_subtest_f("%s-farfromfence", feature_str(t.feature))
>> + igt_subtest_f("%s-farfromfence", feature_str(t.feature)) {
>> + if (t.slow)
>> + igt_slow_combinatorial();
>> farfromfence_subtest(&t);
>> + }
>> TEST_MODE_ITER_END
>>
>> TEST_MODE_ITER_BEGIN(t)
>> @@ -3243,8 +3278,11 @@ int main(int argc, char *argv[])
>> igt_subtest_f("%s-%s-draw-%s",
>> feature_str(t.feature),
>> format_str(t.format),
>> - igt_draw_get_method_name(t.method))
>> + igt_draw_get_method_name(t.method)) {
>> + if (t.slow)
>> + igt_slow_combinatorial();
>> format_draw_subtest(&t);
>> + }
>> }
>> TEST_MODE_ITER_END
>>
>> @@ -3256,8 +3294,11 @@ int main(int argc, char *argv[])
>> continue;
>> igt_subtest_f("%s-%s-scaledprimary",
>> feature_str(t.feature),
>> - fbs_str(t.fbs))
>> + fbs_str(t.fbs)) {
>> + if (t.slow)
>> + igt_slow_combinatorial();
>> scaledprimary_subtest(&t);
>> + }
>> TEST_MODE_ITER_END
>>
>> TEST_MODE_ITER_BEGIN(t)
>> @@ -3268,19 +3309,31 @@ int main(int argc, char *argv[])
>> t.method != IGT_DRAW_MMAP_CPU)
>> continue;
>>
>> - igt_subtest_f("%s-modesetfrombusy", feature_str(t.feature))
>> + igt_subtest_f("%s-modesetfrombusy", feature_str(t.feature)) {
>> + if (t.slow)
>> + igt_slow_combinatorial();
>> modesetfrombusy_subtest(&t);
>> + }
>>
>> if (t.feature & FEATURE_FBC)
>> - igt_subtest_f("%s-badstride", feature_str(t.feature))
>> + igt_subtest_f("%s-badstride", feature_str(t.feature)) {
>> + if (t.slow)
>> + igt_slow_combinatorial();
>> badstride_subtest(&t);
>> + }
>>
>> if (t.feature & FEATURE_PSR)
>> - igt_subtest_f("%s-slowdraw", feature_str(t.feature))
>> + igt_subtest_f("%s-slowdraw", feature_str(t.feature)) {
>> + if (t.slow)
>> + igt_slow_combinatorial();
>> slow_draw_subtest(&t);
>> + }
>>
>> - igt_subtest_f("%s-suspend", feature_str(t.feature))
>> + igt_subtest_f("%s-suspend", feature_str(t.feature)) {
>> + if (t.slow)
>> + igt_slow_combinatorial();
>> suspend_subtest(&t);
>> + }
>> TEST_MODE_ITER_END
>>
>> igt_fixture
>> --
>> 2.6.1
>>
>> _______________________________________________
>> Intel-gfx mailing list
>> Intel-gfx@lists.freedesktop.org
>> http://lists.freedesktop.org/mailman/listinfo/intel-gfx
--
Paulo Zanoni
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests
2015-10-26 17:59 ` Paulo Zanoni
@ 2015-10-27 6:47 ` David Weinehall
2015-11-17 15:33 ` Daniel Vetter
2015-11-17 15:34 ` Daniel Vetter
1 sibling, 1 reply; 41+ messages in thread
From: David Weinehall @ 2015-10-27 6:47 UTC (permalink / raw)
To: Paulo Zanoni; +Cc: Daniel Vetter, Intel Graphics Development
On Mon, Oct 26, 2015 at 03:59:24PM -0200, Paulo Zanoni wrote:
> 2015-10-26 15:30 GMT-02:00 David Weinehall <david.weinehall@linux.intel.com>:
> > On Mon, Oct 26, 2015 at 02:44:18PM -0200, Paulo Zanoni wrote:
> >> 2015-10-26 12:59 GMT-02:00 David Weinehall <david.weinehall@linux.intel.com>:
> >> > On Fri, Oct 23, 2015 at 11:50:46AM -0200, Paulo Zanoni wrote:
> >> >
> >> > [snip]
> >> >
> >> >> It's not clear to me, please clarify: now the tests that were
> >> >> previously completely hidden will be listed in --list-subtests and
> >> >> will be shown as skipped during normal runs?
> >> >
> >> > Yes. Daniel and I discussed this and he thought listing all test
> >> > cases, even the slow ones, would not be an issue, since QA should
> >> > be running the default set, not the full list
> >> > (and for that matter, shouldn't QA know what they are doing too? :P).
> >>
> >> If that's the case, I really think your patch should not touch
> >> kms_frontbuffer_tracking.c. The hidden subtests should not appear on
> >> the list. People shouldn't even have to ask themselves why they are
> >> getting 800 skips from a single testcase. Those are only for debugging
> >> purposes.
> >
> > Fair enough. I'll try to come up with a reasonable way to exclude them
> > from the list in a generic manner. Because that's the whole point of
> > this exercise -- to standardise this rather than have every test case
> > implement its own method of choosing whether or not to run all tests.
>
> Maybe instead of marking these tests as SKIP we could use some other
> flag. That would avoid the confusion between "skipped because some
> condition was not met but the test is useful" vs "skipped because
> the test is unnecessary".
I'd prefer a method that wouldn't require patching piglit.
Regards, David
* [PATCH i-g-t 0/3 v2] Unify slow/combinatorial test handling
2015-10-23 11:42 [PATCH i-g-t 0/3] Unify slow/combinatorial test handling David Weinehall
` (3 preceding siblings ...)
2015-10-23 11:58 ` [PATCH i-g-t 0/3] Unify slow/combinatorial test handling Chris Wilson
@ 2015-10-28 11:29 ` David Weinehall
2015-10-28 11:29 ` [PATCH i-g-t 1/3] Copy gem_concurrent_all to gem_concurrent_blit David Weinehall
` (2 more replies)
2015-10-30 13:18 ` [PATCH i-g-t 0/3 v3] Unify slow/combinatorial test handling David Weinehall
5 siblings, 3 replies; 41+ messages in thread
From: David Weinehall @ 2015-10-28 11:29 UTC (permalink / raw)
To: intel-gfx
Until now we've had no unified way to handle slow/combinatorial tests.
Most of the time we don't want to run slow/combinatorial tests, so this
should remain the default, but when we do want to run such tests,
it has been handled differently in different tests.
This patch adds an --all command line option to igt_core, changes
gem_concurrent_blit and kms_frontbuffer_tracking to use this instead of
their own methods, and removes gem_concurrent_all in the process, since
it's now unnecessary.
v2: Incorporate various suggestions from reviewers.
David Weinehall (3):
Copy gem_concurrent_all to gem_concurrent_blit
Unify handling of slow/combinatorial tests
Remove superfluous gem_concurrent_all.c
lib/igt_core.c | 24 +
lib/igt_core.h | 7 +
tests/.gitignore | 1 -
tests/Makefile.sources | 1 -
tests/gem_concurrent_all.c | 1108 --------------------------------------
tests/gem_concurrent_blit.c | 1108 +++++++++++++++++++++++++++++++++++++-
tests/kms_frontbuffer_tracking.c | 208 +++----
7 files changed, 1247 insertions(+), 1210 deletions(-)
delete mode 100644 tests/gem_concurrent_all.c
--
2.6.2
* [PATCH i-g-t 1/3] Copy gem_concurrent_all to gem_concurrent_blit
2015-10-28 11:29 ` [PATCH i-g-t 0/3 v2] " David Weinehall
@ 2015-10-28 11:29 ` David Weinehall
2015-10-28 11:29 ` [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests David Weinehall
2015-10-28 11:29 ` [PATCH i-g-t 3/3] Remove superfluous gem_concurrent_all.c David Weinehall
2 siblings, 0 replies; 41+ messages in thread
From: David Weinehall @ 2015-10-28 11:29 UTC (permalink / raw)
To: intel-gfx
We'll both copy gem_concurrent_all to gem_concurrent_blit
and change gem_concurrent_blit in this changeset. To make
this easier to follow, we first do the copy.
Signed-off-by: David Weinehall <david.weinehall@linux.intel.com>
---
tests/gem_concurrent_blit.c | 1116 ++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 1108 insertions(+), 8 deletions(-)
diff --git a/tests/gem_concurrent_blit.c b/tests/gem_concurrent_blit.c
index 513de4a1b719..1d2d787202df 100644
--- a/tests/gem_concurrent_blit.c
+++ b/tests/gem_concurrent_blit.c
@@ -1,8 +1,1108 @@
-/* This test is just a duplicate of gem_concurrent_all. */
-/* However the executeable will be gem_concurrent_blit. */
-/* The main function examines argv[0] and, in the case */
-/* of gem_concurent_blit runs only a subset of the */
-/* available subtests. This avoids the use of */
-/* non-standard command line parameters which can cause */
-/* problems for automated testing */
-#include "gem_concurrent_all.c"
+/*
+ * Copyright © 2009,2012,2013 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ * Eric Anholt <eric@anholt.net>
+ * Chris Wilson <chris@chris-wilson.co.uk>
+ * Daniel Vetter <daniel.vetter@ffwll.ch>
+ *
+ */
+
+/** @file gem_concurrent.c
+ *
+ * This is a test of pread/pwrite/mmap behavior when writing to active
+ * buffers.
+ *
+ * Based on gem_gtt_concurrent_blt.
+ */
+
+#include "igt.h"
+#include <stdlib.h>
+#include <stdio.h>
+#include <string.h>
+#include <fcntl.h>
+#include <inttypes.h>
+#include <errno.h>
+#include <sys/stat.h>
+#include <sys/time.h>
+#include <sys/wait.h>
+
+#include <drm.h>
+
+#include "intel_bufmgr.h"
+
+IGT_TEST_DESCRIPTION("Test of pread/pwrite/mmap behavior when writing to active"
+ " buffers.");
+
+int fd, devid, gen;
+struct intel_batchbuffer *batch;
+int all;
+
+static void
+nop_release_bo(drm_intel_bo *bo)
+{
+ drm_intel_bo_unreference(bo);
+}
+
+static void
+prw_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
+{
+ int size = width * height, i;
+ uint32_t *tmp;
+
+ tmp = malloc(4*size);
+ if (tmp) {
+ for (i = 0; i < size; i++)
+ tmp[i] = val;
+ drm_intel_bo_subdata(bo, 0, 4*size, tmp);
+ free(tmp);
+ } else {
+ for (i = 0; i < size; i++)
+ drm_intel_bo_subdata(bo, 4*i, 4, &val);
+ }
+}
+
+static void
+prw_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
+{
+ int size = width * height, i;
+ uint32_t *vaddr;
+
+ do_or_die(drm_intel_bo_map(tmp, true));
+ do_or_die(drm_intel_bo_get_subdata(bo, 0, 4*size, tmp->virtual));
+ vaddr = tmp->virtual;
+ for (i = 0; i < size; i++)
+ igt_assert_eq_u32(vaddr[i], val);
+ drm_intel_bo_unmap(tmp);
+}
+
+static drm_intel_bo *
+unmapped_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ drm_intel_bo *bo;
+
+ bo = drm_intel_bo_alloc(bufmgr, "bo", 4*width*height, 0);
+ igt_assert(bo);
+
+ return bo;
+}
+
+static drm_intel_bo *
+snoop_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ drm_intel_bo *bo;
+
+ igt_skip_on(gem_has_llc(fd));
+
+ bo = unmapped_create_bo(bufmgr, width, height);
+ gem_set_caching(fd, bo->handle, I915_CACHING_CACHED);
+ drm_intel_bo_disable_reuse(bo);
+
+ return bo;
+}
+
+static void
+gtt_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
+{
+ uint32_t *vaddr = bo->virtual;
+ int size = width * height;
+
+ drm_intel_gem_bo_start_gtt_access(bo, true);
+ while (size--)
+ *vaddr++ = val;
+}
+
+static void
+gtt_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
+{
+ uint32_t *vaddr = bo->virtual;
+ int y;
+
+ /* GTT access is slow. So we just compare a few points */
+ drm_intel_gem_bo_start_gtt_access(bo, false);
+ for (y = 0; y < height; y++)
+ igt_assert_eq_u32(vaddr[y*width+y], val);
+}
+
+static drm_intel_bo *
+map_bo(drm_intel_bo *bo)
+{
+ /* gtt map doesn't have a write parameter, so just keep the mapping
+ * around (to avoid the set_domain with the gtt write domain set) and
+ * manually tell the kernel when we start access the gtt. */
+ do_or_die(drm_intel_gem_bo_map_gtt(bo));
+
+ return bo;
+}
+
+static drm_intel_bo *
+tile_bo(drm_intel_bo *bo, int width)
+{
+ uint32_t tiling = I915_TILING_X;
+ uint32_t stride = width * 4;
+
+ do_or_die(drm_intel_bo_set_tiling(bo, &tiling, stride));
+
+ return bo;
+}
+
+static drm_intel_bo *
+gtt_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ return map_bo(unmapped_create_bo(bufmgr, width, height));
+}
+
+static drm_intel_bo *
+gttX_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ return tile_bo(gtt_create_bo(bufmgr, width, height), width);
+}
+
+static drm_intel_bo *
+wc_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ drm_intel_bo *bo;
+
+ gem_require_mmap_wc(fd);
+
+ bo = unmapped_create_bo(bufmgr, width, height);
+ bo->virtual = __gem_mmap__wc(fd, bo->handle, 0, bo->size, PROT_READ | PROT_WRITE);
+ return bo;
+}
+
+static void
+wc_release_bo(drm_intel_bo *bo)
+{
+ munmap(bo->virtual, bo->size);
+ bo->virtual = NULL;
+
+ nop_release_bo(bo);
+}
+
+static drm_intel_bo *
+gpu_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ return unmapped_create_bo(bufmgr, width, height);
+}
+
+
+static drm_intel_bo *
+gpuX_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ return tile_bo(gpu_create_bo(bufmgr, width, height), width);
+}
+
+static void
+cpu_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
+{
+ int size = width * height;
+ uint32_t *vaddr;
+
+ do_or_die(drm_intel_bo_map(bo, true));
+ vaddr = bo->virtual;
+ while (size--)
+ *vaddr++ = val;
+ drm_intel_bo_unmap(bo);
+}
+
+static void
+cpu_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
+{
+ int size = width * height;
+ uint32_t *vaddr;
+
+ do_or_die(drm_intel_bo_map(bo, false));
+ vaddr = bo->virtual;
+ while (size--)
+ igt_assert_eq_u32(*vaddr++, val);
+ drm_intel_bo_unmap(bo);
+}
+
+static void
+gpu_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
+{
+ struct drm_i915_gem_relocation_entry reloc[1];
+ struct drm_i915_gem_exec_object2 gem_exec[2];
+ struct drm_i915_gem_execbuffer2 execbuf;
+ struct drm_i915_gem_pwrite gem_pwrite;
+ struct drm_i915_gem_create create;
+ uint32_t buf[10], *b;
+ uint32_t tiling, swizzle;
+
+ drm_intel_bo_get_tiling(bo, &tiling, &swizzle);
+
+ memset(reloc, 0, sizeof(reloc));
+ memset(gem_exec, 0, sizeof(gem_exec));
+ memset(&execbuf, 0, sizeof(execbuf));
+
+ b = buf;
+ *b++ = XY_COLOR_BLT_CMD_NOLEN |
+ ((gen >= 8) ? 5 : 4) |
+ COLOR_BLT_WRITE_ALPHA | XY_COLOR_BLT_WRITE_RGB;
+ if (gen >= 4 && tiling) {
+ b[-1] |= XY_COLOR_BLT_TILED;
+ *b = width;
+ } else
+ *b = width << 2;
+ *b++ |= 0xf0 << 16 | 1 << 25 | 1 << 24;
+ *b++ = 0;
+ *b++ = height << 16 | width;
+ reloc[0].offset = (b - buf) * sizeof(uint32_t);
+ reloc[0].target_handle = bo->handle;
+ reloc[0].read_domains = I915_GEM_DOMAIN_RENDER;
+ reloc[0].write_domain = I915_GEM_DOMAIN_RENDER;
+ *b++ = 0;
+ if (gen >= 8)
+ *b++ = 0;
+ *b++ = val;
+ *b++ = MI_BATCH_BUFFER_END;
+ if ((b - buf) & 1)
+ *b++ = 0;
+
+ gem_exec[0].handle = bo->handle;
+ gem_exec[0].flags = EXEC_OBJECT_NEEDS_FENCE;
+
+ create.handle = 0;
+ create.size = 4096;
+ drmIoctl(fd, DRM_IOCTL_I915_GEM_CREATE, &create);
+ gem_exec[1].handle = create.handle;
+ gem_exec[1].relocation_count = 1;
+ gem_exec[1].relocs_ptr = (uintptr_t)reloc;
+
+ execbuf.buffers_ptr = (uintptr_t)gem_exec;
+ execbuf.buffer_count = 2;
+ execbuf.batch_len = (b - buf) * sizeof(buf[0]);
+ if (gen >= 6)
+ execbuf.flags = I915_EXEC_BLT;
+
+ gem_pwrite.handle = gem_exec[1].handle;
+ gem_pwrite.offset = 0;
+ gem_pwrite.size = execbuf.batch_len;
+ gem_pwrite.data_ptr = (uintptr_t)buf;
+ do_or_die(drmIoctl(fd, DRM_IOCTL_I915_GEM_PWRITE, &gem_pwrite));
+ do_or_die(drmIoctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, &execbuf));
+
+ drmIoctl(fd, DRM_IOCTL_GEM_CLOSE, &create.handle);
+}
+
+static void
+gpu_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
+{
+ intel_blt_copy(batch,
+ bo, 0, 0, 4*width,
+ tmp, 0, 0, 4*width,
+ width, height, 32);
+ cpu_cmp_bo(tmp, val, width, height, NULL);
+}
+
+const struct access_mode {
+ const char *name;
+ void (*set_bo)(drm_intel_bo *bo, uint32_t val, int w, int h);
+ void (*cmp_bo)(drm_intel_bo *bo, uint32_t val, int w, int h, drm_intel_bo *tmp);
+ drm_intel_bo *(*create_bo)(drm_intel_bufmgr *bufmgr, int width, int height);
+ void (*release_bo)(drm_intel_bo *bo);
+} access_modes[] = {
+ {
+ .name = "prw",
+ .set_bo = prw_set_bo,
+ .cmp_bo = prw_cmp_bo,
+ .create_bo = unmapped_create_bo,
+ .release_bo = nop_release_bo,
+ },
+ {
+ .name = "cpu",
+ .set_bo = cpu_set_bo,
+ .cmp_bo = cpu_cmp_bo,
+ .create_bo = unmapped_create_bo,
+ .release_bo = nop_release_bo,
+ },
+ {
+ .name = "snoop",
+ .set_bo = cpu_set_bo,
+ .cmp_bo = cpu_cmp_bo,
+ .create_bo = snoop_create_bo,
+ .release_bo = nop_release_bo,
+ },
+ {
+ .name = "gtt",
+ .set_bo = gtt_set_bo,
+ .cmp_bo = gtt_cmp_bo,
+ .create_bo = gtt_create_bo,
+ .release_bo = nop_release_bo,
+ },
+ {
+ .name = "gttX",
+ .set_bo = gtt_set_bo,
+ .cmp_bo = gtt_cmp_bo,
+ .create_bo = gttX_create_bo,
+ .release_bo = nop_release_bo,
+ },
+ {
+ .name = "wc",
+ .set_bo = gtt_set_bo,
+ .cmp_bo = gtt_cmp_bo,
+ .create_bo = wc_create_bo,
+ .release_bo = wc_release_bo,
+ },
+ {
+ .name = "gpu",
+ .set_bo = gpu_set_bo,
+ .cmp_bo = gpu_cmp_bo,
+ .create_bo = gpu_create_bo,
+ .release_bo = nop_release_bo,
+ },
+ {
+ .name = "gpuX",
+ .set_bo = gpu_set_bo,
+ .cmp_bo = gpu_cmp_bo,
+ .create_bo = gpuX_create_bo,
+ .release_bo = nop_release_bo,
+ },
+};
+
+#define MAX_NUM_BUFFERS 1024
+int num_buffers = MAX_NUM_BUFFERS;
+const int width = 512, height = 512;
+igt_render_copyfunc_t rendercopy;
+
+struct buffers {
+ const struct access_mode *mode;
+ drm_intel_bufmgr *bufmgr;
+ drm_intel_bo *src[MAX_NUM_BUFFERS], *dst[MAX_NUM_BUFFERS];
+ drm_intel_bo *dummy, *spare;
+ int count;
+};
+
+static void *buffers_init(struct buffers *data,
+ const struct access_mode *mode,
+ int _fd)
+{
+ data->mode = mode;
+ data->count = 0;
+
+ data->bufmgr = drm_intel_bufmgr_gem_init(_fd, 4096);
+ igt_assert(data->bufmgr);
+
+ drm_intel_bufmgr_gem_enable_reuse(data->bufmgr);
+ return intel_batchbuffer_alloc(data->bufmgr, devid);
+}
+
+static void buffers_destroy(struct buffers *data)
+{
+ if (data->count == 0)
+ return;
+
+ for (int i = 0; i < data->count; i++) {
+ data->mode->release_bo(data->src[i]);
+ data->mode->release_bo(data->dst[i]);
+ }
+ data->mode->release_bo(data->dummy);
+ data->mode->release_bo(data->spare);
+ data->count = 0;
+}
+
+static void buffers_create(struct buffers *data,
+ int count)
+{
+ igt_assert(data->bufmgr);
+
+ buffers_destroy(data);
+
+ for (int i = 0; i < count; i++) {
+ data->src[i] =
+ data->mode->create_bo(data->bufmgr, width, height);
+ data->dst[i] =
+ data->mode->create_bo(data->bufmgr, width, height);
+ }
+ data->dummy = data->mode->create_bo(data->bufmgr, width, height);
+ data->spare = data->mode->create_bo(data->bufmgr, width, height);
+ data->count = count;
+}
+
+static void buffers_fini(struct buffers *data)
+{
+ if (data->bufmgr == NULL)
+ return;
+
+ buffers_destroy(data);
+
+ intel_batchbuffer_free(batch);
+ drm_intel_bufmgr_destroy(data->bufmgr);
+ data->bufmgr = NULL;
+}
+
+typedef void (*do_copy)(drm_intel_bo *dst, drm_intel_bo *src);
+typedef struct igt_hang_ring (*do_hang)(void);
+
+static void render_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
+{
+ struct igt_buf d = {
+ .bo = dst,
+ .size = width * height * 4,
+ .num_tiles = width * height * 4,
+ .stride = width * 4,
+ }, s = {
+ .bo = src,
+ .size = width * height * 4,
+ .num_tiles = width * height * 4,
+ .stride = width * 4,
+ };
+ uint32_t swizzle;
+
+ drm_intel_bo_get_tiling(dst, &d.tiling, &swizzle);
+ drm_intel_bo_get_tiling(src, &s.tiling, &swizzle);
+
+ rendercopy(batch, NULL,
+ &s, 0, 0,
+ width, height,
+ &d, 0, 0);
+}
+
+static void blt_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
+{
+ intel_blt_copy(batch,
+ src, 0, 0, 4*width,
+ dst, 0, 0, 4*width,
+ width, height, 32);
+}
+
+static void cpu_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
+{
+ const int size = width * height * sizeof(uint32_t);
+ void *d, *s;
+
+ gem_set_domain(fd, src->handle, I915_GEM_DOMAIN_CPU, 0);
+ gem_set_domain(fd, dst->handle, I915_GEM_DOMAIN_CPU, I915_GEM_DOMAIN_CPU);
+ s = gem_mmap__cpu(fd, src->handle, 0, size, PROT_READ);
+ d = gem_mmap__cpu(fd, dst->handle, 0, size, PROT_WRITE);
+
+ memcpy(d, s, size);
+
+ munmap(d, size);
+ munmap(s, size);
+}
+
+static void gtt_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
+{
+ const int size = width * height * sizeof(uint32_t);
+ void *d, *s;
+
+ gem_set_domain(fd, src->handle, I915_GEM_DOMAIN_GTT, 0);
+ gem_set_domain(fd, dst->handle, I915_GEM_DOMAIN_GTT, I915_GEM_DOMAIN_GTT);
+
+ s = gem_mmap__gtt(fd, src->handle, size, PROT_READ);
+ d = gem_mmap__gtt(fd, dst->handle, size, PROT_WRITE);
+
+ memcpy(d, s, size);
+
+ munmap(d, size);
+ munmap(s, size);
+}
+
+static void wc_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
+{
+ const int size = width * height * sizeof(uint32_t);
+ void *d, *s;
+
+ gem_set_domain(fd, src->handle, I915_GEM_DOMAIN_GTT, 0);
+ gem_set_domain(fd, dst->handle, I915_GEM_DOMAIN_GTT, I915_GEM_DOMAIN_GTT);
+
+ s = gem_mmap__wc(fd, src->handle, 0, size, PROT_READ);
+ d = gem_mmap__wc(fd, dst->handle, 0, size, PROT_WRITE);
+
+ memcpy(d, s, size);
+
+ munmap(d, size);
+ munmap(s, size);
+}
+
+static struct igt_hang_ring no_hang(void)
+{
+ return (struct igt_hang_ring){0, 0};
+}
+
+static struct igt_hang_ring bcs_hang(void)
+{
+ return igt_hang_ring(fd, I915_EXEC_BLT);
+}
+
+static struct igt_hang_ring rcs_hang(void)
+{
+ return igt_hang_ring(fd, I915_EXEC_RENDER);
+}
+
+static void hang_require(void)
+{
+ igt_require_hang_ring(fd, -1);
+}
+
+static void do_overwrite_source(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = 0; i < buffers->count; i++) {
+ buffers->mode->set_bo(buffers->src[i], i, width, height);
+ buffers->mode->set_bo(buffers->dst[i], ~i, width, height);
+ }
+ for (i = 0; i < buffers->count; i++)
+ do_copy_func(buffers->dst[i], buffers->src[i]);
+ hang = do_hang_func();
+ for (i = buffers->count; i--; )
+ buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
+ for (i = 0; i < buffers->count; i++)
+ buffers->mode->cmp_bo(buffers->dst[i], i, width, height, buffers->dummy);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_overwrite_source_read(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func,
+ int do_rcs)
+{
+ const int half = buffers->count/2;
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = 0; i < half; i++) {
+ buffers->mode->set_bo(buffers->src[i], i, width, height);
+ buffers->mode->set_bo(buffers->dst[i], ~i, width, height);
+ buffers->mode->set_bo(buffers->dst[i+half], ~i, width, height);
+ }
+ for (i = 0; i < half; i++) {
+ do_copy_func(buffers->dst[i], buffers->src[i]);
+ if (do_rcs)
+ render_copy_bo(buffers->dst[i+half], buffers->src[i]);
+ else
+ blt_copy_bo(buffers->dst[i+half], buffers->src[i]);
+ }
+ hang = do_hang_func();
+ for (i = half; i--; )
+ buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
+ for (i = 0; i < half; i++) {
+ buffers->mode->cmp_bo(buffers->dst[i], i, width, height, buffers->dummy);
+ buffers->mode->cmp_bo(buffers->dst[i+half], i, width, height, buffers->dummy);
+ }
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_overwrite_source_read_bcs(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ do_overwrite_source_read(buffers, do_copy_func, do_hang_func, 0);
+}
+
+static void do_overwrite_source_read_rcs(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ do_overwrite_source_read(buffers, do_copy_func, do_hang_func, 1);
+}
+
+static void do_overwrite_source__rev(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = 0; i < buffers->count; i++) {
+ buffers->mode->set_bo(buffers->src[i], i, width, height);
+ buffers->mode->set_bo(buffers->dst[i], ~i, width, height);
+ }
+ for (i = 0; i < buffers->count; i++)
+ do_copy_func(buffers->dst[i], buffers->src[i]);
+ hang = do_hang_func();
+ for (i = 0; i < buffers->count; i++)
+ buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
+ for (i = buffers->count; i--; )
+ buffers->mode->cmp_bo(buffers->dst[i], i, width, height, buffers->dummy);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_overwrite_source__one(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+
+ gem_quiescent_gpu(fd);
+ buffers->mode->set_bo(buffers->src[0], 0, width, height);
+ buffers->mode->set_bo(buffers->dst[0], ~0, width, height);
+ do_copy_func(buffers->dst[0], buffers->src[0]);
+ hang = do_hang_func();
+ buffers->mode->set_bo(buffers->src[0], 0xdeadbeef, width, height);
+ buffers->mode->cmp_bo(buffers->dst[0], 0, width, height, buffers->dummy);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_intermix(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func,
+ int do_rcs)
+{
+ const int half = buffers->count/2;
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = 0; i < buffers->count; i++) {
+ buffers->mode->set_bo(buffers->src[i], 0xdeadbeef^~i, width, height);
+ buffers->mode->set_bo(buffers->dst[i], i, width, height);
+ }
+ for (i = 0; i < half; i++) {
+ if (do_rcs == 1 || (do_rcs == -1 && i & 1))
+ render_copy_bo(buffers->dst[i], buffers->src[i]);
+ else
+ blt_copy_bo(buffers->dst[i], buffers->src[i]);
+
+ do_copy_func(buffers->dst[i+half], buffers->src[i]);
+
+ if (do_rcs == 1 || (do_rcs == -1 && (i & 1) == 0))
+ render_copy_bo(buffers->dst[i], buffers->dst[i+half]);
+ else
+ blt_copy_bo(buffers->dst[i], buffers->dst[i+half]);
+
+ do_copy_func(buffers->dst[i+half], buffers->src[i+half]);
+ }
+ hang = do_hang_func();
+ for (i = 0; i < 2*half; i++)
+ buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef^~i, width, height, buffers->dummy);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_intermix_rcs(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ do_intermix(buffers, do_copy_func, do_hang_func, 1);
+}
+
+static void do_intermix_bcs(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ do_intermix(buffers, do_copy_func, do_hang_func, 0);
+}
+
+static void do_intermix_both(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ do_intermix(buffers, do_copy_func, do_hang_func, -1);
+}
+
+static void do_early_read(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = buffers->count; i--; )
+ buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
+ for (i = 0; i < buffers->count; i++)
+ do_copy_func(buffers->dst[i], buffers->src[i]);
+ hang = do_hang_func();
+ for (i = buffers->count; i--; )
+ buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef, width, height, buffers->dummy);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_read_read_bcs(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = buffers->count; i--; )
+ buffers->mode->set_bo(buffers->src[i], 0xdeadbeef ^ i, width, height);
+ for (i = 0; i < buffers->count; i++) {
+ do_copy_func(buffers->dst[i], buffers->src[i]);
+ blt_copy_bo(buffers->spare, buffers->src[i]);
+ }
+ cpu_cmp_bo(buffers->spare, 0xdeadbeef^(buffers->count-1), width, height, NULL);
+ hang = do_hang_func();
+ for (i = buffers->count; i--; )
+ buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef ^ i, width, height, buffers->dummy);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_read_read_rcs(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = buffers->count; i--; )
+ buffers->mode->set_bo(buffers->src[i], 0xdeadbeef ^ i, width, height);
+ for (i = 0; i < buffers->count; i++) {
+ do_copy_func(buffers->dst[i], buffers->src[i]);
+ render_copy_bo(buffers->spare, buffers->src[i]);
+ }
+ cpu_cmp_bo(buffers->spare, 0xdeadbeef^(buffers->count-1), width, height, NULL);
+ hang = do_hang_func();
+ for (i = buffers->count; i--; )
+ buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef ^ i, width, height, buffers->dummy);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_gpu_read_after_write(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = buffers->count; i--; )
+ buffers->mode->set_bo(buffers->src[i], 0xabcdabcd, width, height);
+ for (i = 0; i < buffers->count; i++)
+ do_copy_func(buffers->dst[i], buffers->src[i]);
+ for (i = buffers->count; i--; )
+ do_copy_func(buffers->dummy, buffers->dst[i]);
+ hang = do_hang_func();
+ for (i = buffers->count; i--; )
+ buffers->mode->cmp_bo(buffers->dst[i], 0xabcdabcd, width, height, buffers->dummy);
+ igt_post_hang_ring(fd, hang);
+}
+
+typedef void (*do_test)(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func);
+
+typedef void (*run_wrap)(struct buffers *buffers,
+ do_test do_test_func,
+ do_copy do_copy_func,
+ do_hang do_hang_func);
+
+static void run_single(struct buffers *buffers,
+ do_test do_test_func,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ do_test_func(buffers, do_copy_func, do_hang_func);
+}
+
+static void run_interruptible(struct buffers *buffers,
+ do_test do_test_func,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ int loop;
+
+ for (loop = 0; loop < 10; loop++)
+ do_test_func(buffers, do_copy_func, do_hang_func);
+}
+
+static void run_forked(struct buffers *buffers,
+ do_test do_test_func,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ const int old_num_buffers = num_buffers;
+
+ num_buffers /= 16;
+ num_buffers += 2;
+
+ igt_fork(child, 16) {
+ /* recreate process local variables */
+ buffers->count = 0;
+ fd = drm_open_driver(DRIVER_INTEL);
+
+ batch = buffers_init(buffers, buffers->mode, fd);
+
+ buffers_create(buffers, num_buffers);
+ for (int loop = 0; loop < 10; loop++)
+ do_test_func(buffers, do_copy_func, do_hang_func);
+
+ buffers_fini(buffers);
+ }
+
+ igt_waitchildren();
+
+ num_buffers = old_num_buffers;
+}
+
+static void bit17_require(void)
+{
+ struct drm_i915_gem_get_tiling2 {
+ uint32_t handle;
+ uint32_t tiling_mode;
+ uint32_t swizzle_mode;
+ uint32_t phys_swizzle_mode;
+ } arg;
+#define DRM_IOCTL_I915_GEM_GET_TILING2 DRM_IOWR (DRM_COMMAND_BASE + DRM_I915_GEM_GET_TILING, struct drm_i915_gem_get_tiling2)
+
+ memset(&arg, 0, sizeof(arg));
+ arg.handle = gem_create(fd, 4096);
+ gem_set_tiling(fd, arg.handle, I915_TILING_X, 512);
+
+ do_or_die(drmIoctl(fd, DRM_IOCTL_I915_GEM_GET_TILING2, &arg));
+ gem_close(fd, arg.handle);
+ igt_require(arg.phys_swizzle_mode == arg.swizzle_mode);
+}
+
+static void cpu_require(void)
+{
+ bit17_require();
+}
+
+static void gtt_require(void)
+{
+}
+
+static void wc_require(void)
+{
+ bit17_require();
+ gem_require_mmap_wc(fd);
+}
+
+static void bcs_require(void)
+{
+}
+
+static void rcs_require(void)
+{
+ igt_require(rendercopy);
+}
+
+static void no_require(void)
+{
+}
+
+static void
+run_basic_modes(const struct access_mode *mode,
+ const char *suffix,
+ run_wrap run_wrap_func)
+{
+ const struct {
+ const char *prefix;
+ do_copy copy;
+ void (*require)(void);
+ } pipelines[] = {
+ { "cpu", cpu_copy_bo, cpu_require },
+ { "gtt", gtt_copy_bo, gtt_require },
+ { "wc", wc_copy_bo, wc_require },
+ { "blt", blt_copy_bo, bcs_require },
+ { "render", render_copy_bo, rcs_require },
+ { NULL, NULL }
+ }, *pskip = pipelines + 3, *p;
+ const struct {
+ const char *suffix;
+ do_hang hang;
+ void (*require)(void);
+ } hangs[] = {
+ { "", no_hang, no_require },
+ { "-hang-blt", bcs_hang, hang_require },
+ { "-hang-render", rcs_hang, hang_require },
+ { NULL, NULL },
+ }, *h;
+ struct buffers buffers;
+
+ for (h = hangs; h->suffix; h++) {
+ if (!all && *h->suffix)
+ continue;
+
+ for (p = all ? pipelines : pskip; p->prefix; p++) {
+ igt_fixture {
+ batch = buffers_init(&buffers, mode, fd);
+ }
+
+ /* try to overwrite the source values */
+ igt_subtest_f("%s-%s-overwrite-source-one%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_overwrite_source__one,
+ p->copy, h->hang);
+ }
+
+ igt_subtest_f("%s-%s-overwrite-source%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_overwrite_source,
+ p->copy, h->hang);
+ }
+
+ igt_subtest_f("%s-%s-overwrite-source-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_overwrite_source_read_bcs,
+ p->copy, h->hang);
+ }
+
+ igt_subtest_f("%s-%s-overwrite-source-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ igt_require(rendercopy);
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_overwrite_source_read_rcs,
+ p->copy, h->hang);
+ }
+
+ igt_subtest_f("%s-%s-overwrite-source-rev%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_overwrite_source__rev,
+ p->copy, h->hang);
+ }
+
+ /* try to intermix copies with GPU copies */
+ igt_subtest_f("%s-%s-intermix-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ igt_require(rendercopy);
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_intermix_rcs,
+ p->copy, h->hang);
+ }
+ igt_subtest_f("%s-%s-intermix-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ igt_require(rendercopy);
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_intermix_bcs,
+ p->copy, h->hang);
+ }
+ igt_subtest_f("%s-%s-intermix-both%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ igt_require(rendercopy);
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_intermix_both,
+ p->copy, h->hang);
+ }
+
+ /* try to read the results before the copy completes */
+ igt_subtest_f("%s-%s-early-read%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_early_read,
+ p->copy, h->hang);
+ }
+
+ /* concurrent reads */
+ igt_subtest_f("%s-%s-read-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_read_read_bcs,
+ p->copy, h->hang);
+ }
+ igt_subtest_f("%s-%s-read-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ igt_require(rendercopy);
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_read_read_rcs,
+ p->copy, h->hang);
+ }
+
+ /* and finally try to trick the kernel into losing the pending write */
+ igt_subtest_f("%s-%s-gpu-read-after-write%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_gpu_read_after_write,
+ p->copy, h->hang);
+ }
+
+ igt_fixture {
+ buffers_fini(&buffers);
+ }
+ }
+ }
+}
+
+static void
+run_modes(const struct access_mode *mode)
+{
+ if (all) {
+ run_basic_modes(mode, "", run_single);
+
+ igt_fork_signal_helper();
+ run_basic_modes(mode, "-interruptible", run_interruptible);
+ igt_stop_signal_helper();
+ }
+
+ igt_fork_signal_helper();
+ run_basic_modes(mode, "-forked", run_forked);
+ igt_stop_signal_helper();
+}
+
+igt_main
+{
+ int max, i;
+
+ igt_skip_on_simulation();
+
+ if (strstr(igt_test_name(), "all"))
+ all = true;
+
+ igt_fixture {
+ fd = drm_open_driver(DRIVER_INTEL);
+ devid = intel_get_drm_devid(fd);
+ gen = intel_gen(devid);
+ rendercopy = igt_get_render_copyfunc(devid);
+
+ max = gem_aperture_size(fd) / (1024 * 1024) / 2;
+ if (num_buffers > max)
+ num_buffers = max;
+
+ max = intel_get_total_ram_mb() * 3 / 4;
+ if (num_buffers > max)
+ num_buffers = max;
+ num_buffers /= 2;
+ igt_info("using 2x%d buffers, each 1MiB\n", num_buffers);
+ }
+
+ for (i = 0; i < ARRAY_SIZE(access_modes); i++)
+ run_modes(&access_modes[i]);
+}
--
2.6.2
* [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests
2015-10-28 11:29 ` [PATCH i-g-t 0/3 v2] " David Weinehall
2015-10-28 11:29 ` [PATCH i-g-t 1/3] Copy gem_concurrent_all to gem_concurrent_blit David Weinehall
@ 2015-10-28 11:29 ` David Weinehall
2015-10-28 16:12 ` Paulo Zanoni
2015-10-28 17:14 ` Thomas Wood
2015-10-28 11:29 ` [PATCH i-g-t 3/3] Remove superfluous gem_concurrent_all.c David Weinehall
2 siblings, 2 replies; 41+ messages in thread
From: David Weinehall @ 2015-10-28 11:29 UTC (permalink / raw)
To: intel-gfx
Some tests should not be run by default due to their slow, and
sometimes superfluous, nature. We still want to be able to run
these tests in some cases, but until now there has been no unified
way of handling this. Remedy this by introducing the --all option
in igt_core, and use it in gem_concurrent_blit and
kms_frontbuffer_tracking.
Signed-off-by: David Weinehall <david.weinehall@linux.intel.com>
---
lib/igt_core.c | 24 +++++
lib/igt_core.h | 7 ++
tests/gem_concurrent_blit.c | 44 ++++-----
tests/kms_frontbuffer_tracking.c | 208 ++++++++++++++++++++++-----------------
4 files changed, 165 insertions(+), 118 deletions(-)
diff --git a/lib/igt_core.c b/lib/igt_core.c
index 59127cafe606..6575b9d6bf0d 100644
--- a/lib/igt_core.c
+++ b/lib/igt_core.c
@@ -216,6 +216,7 @@ const char *igt_interactive_debug;
/* subtests helpers */
static bool list_subtests = false;
+static bool with_slow_combinatorial = false;
static char *run_single_subtest = NULL;
static bool run_single_subtest_found = false;
static const char *in_subtest = NULL;
@@ -235,6 +236,7 @@ bool test_child;
enum {
OPT_LIST_SUBTESTS,
+ OPT_WITH_SLOW_COMBINATORIAL,
OPT_RUN_SUBTEST,
OPT_DESCRIPTION,
OPT_DEBUG,
@@ -478,6 +480,7 @@ static void print_usage(const char *help_str, bool output_on_stderr)
fprintf(f, "Usage: %s [OPTIONS]\n", command_str);
fprintf(f, " --list-subtests\n"
+ " --all\n"
" --run-subtest <pattern>\n"
" --debug[=log-domain]\n"
" --interactive-debug[=domain]\n"
@@ -510,6 +513,7 @@ static int common_init(int *argc, char **argv,
int c, option_index = 0, i, x;
static struct option long_options[] = {
{"list-subtests", 0, 0, OPT_LIST_SUBTESTS},
+ {"all", 0, 0, OPT_WITH_SLOW_COMBINATORIAL},
{"run-subtest", 1, 0, OPT_RUN_SUBTEST},
{"help-description", 0, 0, OPT_DESCRIPTION},
{"debug", optional_argument, 0, OPT_DEBUG},
@@ -617,6 +621,10 @@ static int common_init(int *argc, char **argv,
if (!run_single_subtest)
list_subtests = true;
break;
+ case OPT_WITH_SLOW_COMBINATORIAL:
+ if (!run_single_subtest)
+ with_slow_combinatorial = true;
+ break;
case OPT_RUN_SUBTEST:
if (!list_subtests)
run_single_subtest = strdup(optarg);
@@ -1629,6 +1637,22 @@ void igt_skip_on_simulation(void)
igt_require(!igt_run_in_simulation());
}
+/**
+ * __igt_slow_combinatorial:
+ * @slow_test: true if the subtest is part of the slow/combinatorial
+ *             set
+ *
+ * Used to decide whether to skip subtests that should only be
+ * included when the "--all" command line option has been specified.
+ *
+ * Returns: true if the test should be run, false if the test should
+ * be skipped
+ */
+bool __igt_slow_combinatorial(bool slow_test)
+{
+ return !slow_test || with_slow_combinatorial;
+}
+
/* structured logging */
/**
diff --git a/lib/igt_core.h b/lib/igt_core.h
index 5ae09653fd55..7b592278bf6c 100644
--- a/lib/igt_core.h
+++ b/lib/igt_core.h
@@ -191,6 +191,12 @@ bool __igt_run_subtest(const char *subtest_name);
#define igt_subtest_f(f...) \
__igt_subtest_f(igt_tokencat(__tmpchar, __LINE__), f)
+bool __igt_slow_combinatorial(bool slow_test);
+
+#define igt_subtest_slow_f(__slow, f...) \
+ if (__igt_slow_combinatorial(__slow)) \
+ __igt_subtest_f(igt_tokencat(__tmpchar, __LINE__), f)
+
const char *igt_subtest_name(void);
bool igt_only_list_subtests(void);
@@ -669,6 +675,7 @@ void igt_disable_exit_handler(void);
/* helpers to automatically reduce test runtime in simulation */
bool igt_run_in_simulation(void);
+
/**
* SLOW_QUICK:
* @slow: value in simulation mode
diff --git a/tests/gem_concurrent_blit.c b/tests/gem_concurrent_blit.c
index 1d2d787202df..fe37cc707583 100644
--- a/tests/gem_concurrent_blit.c
+++ b/tests/gem_concurrent_blit.c
@@ -55,7 +55,6 @@ IGT_TEST_DESCRIPTION("Test of pread/pwrite/mmap behavior when writing to active"
int fd, devid, gen;
struct intel_batchbuffer *batch;
-int all;
static void
nop_release_bo(drm_intel_bo *bo)
@@ -931,16 +930,14 @@ run_basic_modes(const struct access_mode *mode,
struct buffers buffers;
for (h = hangs; h->suffix; h++) {
- if (!all && *h->suffix)
- continue;
-
- for (p = all ? pipelines : pskip; p->prefix; p++) {
+ for (p = __igt_slow_combinatorial(true) ? pipelines : pskip;
+ p->prefix; p++) {
igt_fixture {
batch = buffers_init(&buffers, mode, fd);
}
/* try to overwrite the source values */
- igt_subtest_f("%s-%s-overwrite-source-one%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_slow_f(*h->suffix, "%s-%s-overwrite-source-one%s%s", mode->name, p->prefix, suffix, h->suffix) {
h->require();
p->require();
buffers_create(&buffers, num_buffers);
@@ -949,7 +946,7 @@ run_basic_modes(const struct access_mode *mode,
p->copy, h->hang);
}
- igt_subtest_f("%s-%s-overwrite-source%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_slow_f(*h->suffix, "%s-%s-overwrite-source%s%s", mode->name, p->prefix, suffix, h->suffix) {
h->require();
p->require();
buffers_create(&buffers, num_buffers);
@@ -958,7 +955,7 @@ run_basic_modes(const struct access_mode *mode,
p->copy, h->hang);
}
- igt_subtest_f("%s-%s-overwrite-source-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_slow_f(*h->suffix, "%s-%s-overwrite-source-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
h->require();
p->require();
buffers_create(&buffers, num_buffers);
@@ -967,7 +964,7 @@ run_basic_modes(const struct access_mode *mode,
p->copy, h->hang);
}
- igt_subtest_f("%s-%s-overwrite-source-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_slow_f(*h->suffix, "%s-%s-overwrite-source-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
h->require();
p->require();
igt_require(rendercopy);
@@ -977,7 +974,7 @@ run_basic_modes(const struct access_mode *mode,
p->copy, h->hang);
}
- igt_subtest_f("%s-%s-overwrite-source-rev%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_slow_f(*h->suffix, "%s-%s-overwrite-source-rev%s%s", mode->name, p->prefix, suffix, h->suffix) {
h->require();
p->require();
buffers_create(&buffers, num_buffers);
@@ -987,7 +984,7 @@ run_basic_modes(const struct access_mode *mode,
}
/* try to intermix copies with GPU copies */
- igt_subtest_f("%s-%s-intermix-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_slow_f(*h->suffix, "%s-%s-intermix-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
h->require();
p->require();
igt_require(rendercopy);
@@ -996,7 +993,7 @@ run_basic_modes(const struct access_mode *mode,
do_intermix_rcs,
p->copy, h->hang);
}
- igt_subtest_f("%s-%s-intermix-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_slow_f(*h->suffix, "%s-%s-intermix-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
h->require();
p->require();
igt_require(rendercopy);
@@ -1005,7 +1002,7 @@ run_basic_modes(const struct access_mode *mode,
do_intermix_bcs,
p->copy, h->hang);
}
- igt_subtest_f("%s-%s-intermix-both%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_slow_f(*h->suffix, "%s-%s-intermix-both%s%s", mode->name, p->prefix, suffix, h->suffix) {
h->require();
p->require();
igt_require(rendercopy);
@@ -1016,7 +1013,7 @@ run_basic_modes(const struct access_mode *mode,
}
/* try to read the results before the copy completes */
- igt_subtest_f("%s-%s-early-read%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_slow_f(*h->suffix, "%s-%s-early-read%s%s", mode->name, p->prefix, suffix, h->suffix) {
h->require();
p->require();
buffers_create(&buffers, num_buffers);
@@ -1026,7 +1023,7 @@ run_basic_modes(const struct access_mode *mode,
}
/* concurrent reads */
- igt_subtest_f("%s-%s-read-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_slow_f(*h->suffix, "%s-%s-read-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
h->require();
p->require();
buffers_create(&buffers, num_buffers);
@@ -1034,7 +1031,7 @@ run_basic_modes(const struct access_mode *mode,
do_read_read_bcs,
p->copy, h->hang);
}
- igt_subtest_f("%s-%s-read-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_slow_f(*h->suffix, "%s-%s-read-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
h->require();
p->require();
igt_require(rendercopy);
@@ -1045,7 +1042,7 @@ run_basic_modes(const struct access_mode *mode,
}
/* and finally try to trick the kernel into losing the pending write */
- igt_subtest_f("%s-%s-gpu-read-after-write%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_slow_f(*h->suffix, "%s-%s-gpu-read-after-write%s%s", mode->name, p->prefix, suffix, h->suffix) {
h->require();
p->require();
buffers_create(&buffers, num_buffers);
@@ -1064,13 +1061,11 @@ run_basic_modes(const struct access_mode *mode,
static void
run_modes(const struct access_mode *mode)
{
- if (all) {
- run_basic_modes(mode, "", run_single);
+ run_basic_modes(mode, "", run_single);
- igt_fork_signal_helper();
- run_basic_modes(mode, "-interruptible", run_interruptible);
- igt_stop_signal_helper();
- }
+ igt_fork_signal_helper();
+ run_basic_modes(mode, "-interruptible", run_interruptible);
+ igt_stop_signal_helper();
igt_fork_signal_helper();
run_basic_modes(mode, "-forked", run_forked);
@@ -1083,9 +1078,6 @@ igt_main
igt_skip_on_simulation();
- if (strstr(igt_test_name(), "all"))
- all = true;
-
igt_fixture {
fd = drm_open_driver(DRIVER_INTEL);
devid = intel_get_drm_devid(fd);
diff --git a/tests/kms_frontbuffer_tracking.c b/tests/kms_frontbuffer_tracking.c
index 15707b9b9040..86fd7ca08692 100644
--- a/tests/kms_frontbuffer_tracking.c
+++ b/tests/kms_frontbuffer_tracking.c
@@ -47,8 +47,7 @@ IGT_TEST_DESCRIPTION("Test the Kernel's frontbuffer tracking mechanism and "
* combinations that are somewhat redundant and don't add much value to the
* test. For example, since we already do the offscreen testing with a single
* pipe enabled, there's no much value in doing it again with dual pipes. If you
- * still want to try these redundant tests, you need to use the --show-hidden
- * option.
+ * still want to try these redundant tests, you need to use the --all option.
*
* The most important hidden thing is the FEATURE_NONE set of tests. Whenever
* you get a failure on any test, it is important to check whether the same test
@@ -116,6 +115,10 @@ struct test_mode {
} format;
enum igt_draw_method method;
+
+ /* The test is slow and/or combinatorial;
+ * skip unless otherwise specified */
+ bool slow;
};
enum flip_type {
@@ -237,7 +240,6 @@ struct {
bool fbc_check_last_action;
bool no_edp;
bool small_modes;
- bool show_hidden;
int step;
int only_pipes;
int shared_fb_x_offset;
@@ -249,7 +251,6 @@ struct {
.fbc_check_last_action = true,
.no_edp = false,
.small_modes = false,
- .show_hidden= false,
.step = 0,
.only_pipes = PIPE_COUNT,
.shared_fb_x_offset = 500,
@@ -2933,9 +2934,6 @@ static int opt_handler(int option, int option_index, void *data)
case 'm':
opt.small_modes = true;
break;
- case 'i':
- opt.show_hidden = true;
- break;
case 't':
opt.step++;
break;
@@ -2971,7 +2969,6 @@ const char *help_str =
" --no-fbc-action-check Don't check for the FBC last action\n"
" --no-edp Don't use eDP monitors\n"
" --use-small-modes Use smaller resolutions for the modes\n"
-" --show-hidden Show hidden subtests\n"
" --step Stop on each step so you can check the screen\n"
" --shared-fb-x offset Use 'offset' as the X offset for the shared FB\n"
" --shared-fb-y offset Use 'offset' as the Y offset for the shared FB\n"
@@ -3068,18 +3065,19 @@ static const char *format_str(enum pixel_format format)
for (t.plane = 0; t.plane < PLANE_COUNT; t.plane++) { \
for (t.fbs = 0; t.fbs < FBS_COUNT; t.fbs++) { \
for (t.method = 0; t.method < IGT_DRAW_METHOD_COUNT; t.method++) { \
+ t.slow = false; \
if (t.pipes == PIPE_SINGLE && t.screen == SCREEN_SCND) \
continue; \
if (t.screen == SCREEN_OFFSCREEN && t.plane != PLANE_PRI) \
continue; \
- if (!opt.show_hidden && t.pipes == PIPE_DUAL && \
+ if (t.pipes == PIPE_DUAL && \
t.screen == SCREEN_OFFSCREEN) \
- continue; \
- if (!opt.show_hidden && t.feature == FEATURE_NONE) \
- continue; \
- if (!opt.show_hidden && t.fbs == FBS_SHARED && \
+ t.slow = true; \
+ if (t.feature == FEATURE_NONE) \
+ t.slow = true; \
+ if (t.fbs == FBS_SHARED && \
(t.plane == PLANE_CUR || t.plane == PLANE_SPR)) \
- continue;
+ t.slow = true;
#define TEST_MODE_ITER_END } } } } } }
@@ -3094,7 +3092,6 @@ int main(int argc, char *argv[])
{ "no-fbc-action-check", 0, 0, 'a'},
{ "no-edp", 0, 0, 'e'},
{ "use-small-modes", 0, 0, 'm'},
- { "show-hidden", 0, 0, 'i'},
{ "step", 0, 0, 't'},
{ "shared-fb-x", 1, 0, 'x'},
{ "shared-fb-y", 1, 0, 'y'},
@@ -3110,8 +3107,9 @@ int main(int argc, char *argv[])
setup_environment();
for (t.feature = 0; t.feature < FEATURE_COUNT; t.feature++) {
- if (!opt.show_hidden && t.feature == FEATURE_NONE)
- continue;
+ t.slow = false;
+ if (t.feature == FEATURE_NONE)
+ t.slow = true;
for (t.pipes = 0; t.pipes < PIPE_COUNT; t.pipes++) {
t.screen = SCREEN_PRIM;
t.plane = PLANE_PRI;
@@ -3120,52 +3118,58 @@ int main(int argc, char *argv[])
/* Make sure nothing is using this value. */
t.method = -1;
- igt_subtest_f("%s-%s-rte",
- feature_str(t.feature),
- pipes_str(t.pipes))
+ igt_subtest_slow_f(t.slow,
+ "%s-%s-rte",
+ feature_str(t.feature),
+ pipes_str(t.pipes))
rte_subtest(&t);
}
}
TEST_MODE_ITER_BEGIN(t)
- igt_subtest_f("%s-%s-%s-%s-%s-draw-%s",
- feature_str(t.feature),
- pipes_str(t.pipes),
- screen_str(t.screen),
- plane_str(t.plane),
- fbs_str(t.fbs),
- igt_draw_get_method_name(t.method))
+ igt_subtest_slow_f(t.slow,
+ "%s-%s-%s-%s-%s-draw-%s",
+ feature_str(t.feature),
+ pipes_str(t.pipes),
+ screen_str(t.screen),
+ plane_str(t.plane),
+ fbs_str(t.fbs),
+ igt_draw_get_method_name(t.method))
draw_subtest(&t);
TEST_MODE_ITER_END
TEST_MODE_ITER_BEGIN(t)
if (t.plane != PLANE_PRI ||
- t.screen == SCREEN_OFFSCREEN ||
- (!opt.show_hidden && t.method != IGT_DRAW_BLT))
+ t.screen == SCREEN_OFFSCREEN)
continue;
-
- igt_subtest_f("%s-%s-%s-%s-flip-%s",
- feature_str(t.feature),
- pipes_str(t.pipes),
- screen_str(t.screen),
- fbs_str(t.fbs),
- igt_draw_get_method_name(t.method))
+ if (t.method != IGT_DRAW_BLT)
+ t.slow = true;
+
+ igt_subtest_slow_f(t.slow,
+ "%s-%s-%s-%s-flip-%s",
+ feature_str(t.feature),
+ pipes_str(t.pipes),
+ screen_str(t.screen),
+ fbs_str(t.fbs),
+ igt_draw_get_method_name(t.method))
flip_subtest(&t, FLIP_PAGEFLIP);
- igt_subtest_f("%s-%s-%s-%s-evflip-%s",
- feature_str(t.feature),
- pipes_str(t.pipes),
- screen_str(t.screen),
- fbs_str(t.fbs),
- igt_draw_get_method_name(t.method))
+ igt_subtest_slow_f(t.slow,
+ "%s-%s-%s-%s-evflip-%s",
+ feature_str(t.feature),
+ pipes_str(t.pipes),
+ screen_str(t.screen),
+ fbs_str(t.fbs),
+ igt_draw_get_method_name(t.method))
flip_subtest(&t, FLIP_PAGEFLIP_EVENT);
- igt_subtest_f("%s-%s-%s-%s-msflip-%s",
- feature_str(t.feature),
- pipes_str(t.pipes),
- screen_str(t.screen),
- fbs_str(t.fbs),
- igt_draw_get_method_name(t.method))
+ igt_subtest_slow_f(t.slow,
+ "%s-%s-%s-%s-msflip-%s",
+ feature_str(t.feature),
+ pipes_str(t.pipes),
+ screen_str(t.screen),
+ fbs_str(t.fbs),
+ igt_draw_get_method_name(t.method))
flip_subtest(&t, FLIP_MODESET);
TEST_MODE_ITER_END
@@ -3177,10 +3181,11 @@ int main(int argc, char *argv[])
(t.feature & FEATURE_FBC) == 0)
continue;
- igt_subtest_f("%s-%s-%s-fliptrack",
- feature_str(t.feature),
- pipes_str(t.pipes),
- fbs_str(t.fbs))
+ igt_subtest_slow_f(t.slow,
+ "%s-%s-%s-fliptrack",
+ feature_str(t.feature),
+ pipes_str(t.pipes),
+ fbs_str(t.fbs))
fliptrack_subtest(&t, FLIP_PAGEFLIP);
TEST_MODE_ITER_END
@@ -3190,20 +3195,22 @@ int main(int argc, char *argv[])
t.plane == PLANE_PRI)
continue;
- igt_subtest_f("%s-%s-%s-%s-%s-move",
- feature_str(t.feature),
- pipes_str(t.pipes),
- screen_str(t.screen),
- plane_str(t.plane),
- fbs_str(t.fbs))
+ igt_subtest_slow_f(t.slow,
+ "%s-%s-%s-%s-%s-move",
+ feature_str(t.feature),
+ pipes_str(t.pipes),
+ screen_str(t.screen),
+ plane_str(t.plane),
+ fbs_str(t.fbs))
move_subtest(&t);
- igt_subtest_f("%s-%s-%s-%s-%s-onoff",
- feature_str(t.feature),
- pipes_str(t.pipes),
- screen_str(t.screen),
- plane_str(t.plane),
- fbs_str(t.fbs))
+ igt_subtest_slow_f(t.slow,
+ "%s-%s-%s-%s-%s-onoff",
+ feature_str(t.feature),
+ pipes_str(t.pipes),
+ screen_str(t.screen),
+ plane_str(t.plane),
+ fbs_str(t.fbs))
onoff_subtest(&t);
TEST_MODE_ITER_END
@@ -3213,27 +3220,30 @@ int main(int argc, char *argv[])
t.plane != PLANE_SPR)
continue;
- igt_subtest_f("%s-%s-%s-%s-%s-fullscreen",
- feature_str(t.feature),
- pipes_str(t.pipes),
- screen_str(t.screen),
- plane_str(t.plane),
- fbs_str(t.fbs))
+ igt_subtest_slow_f(t.slow,
+ "%s-%s-%s-%s-%s-fullscreen",
+ feature_str(t.feature),
+ pipes_str(t.pipes),
+ screen_str(t.screen),
+ plane_str(t.plane),
+ fbs_str(t.fbs))
fullscreen_plane_subtest(&t);
TEST_MODE_ITER_END
TEST_MODE_ITER_BEGIN(t)
if (t.screen != SCREEN_PRIM ||
- t.method != IGT_DRAW_BLT ||
- (!opt.show_hidden && t.plane != PLANE_PRI) ||
- (!opt.show_hidden && t.fbs != FBS_INDIVIDUAL))
+ t.method != IGT_DRAW_BLT)
continue;
-
- igt_subtest_f("%s-%s-%s-%s-multidraw",
- feature_str(t.feature),
- pipes_str(t.pipes),
- plane_str(t.plane),
- fbs_str(t.fbs))
+ if (t.plane != PLANE_PRI ||
+ t.fbs != FBS_INDIVIDUAL)
+ t.slow = true;
+
+ igt_subtest_slow_f(t.slow,
+ "%s-%s-%s-%s-multidraw",
+ feature_str(t.feature),
+ pipes_str(t.pipes),
+ plane_str(t.plane),
+ fbs_str(t.fbs))
multidraw_subtest(&t);
TEST_MODE_ITER_END
@@ -3245,7 +3255,9 @@ int main(int argc, char *argv[])
t.method != IGT_DRAW_MMAP_GTT)
continue;
- igt_subtest_f("%s-farfromfence", feature_str(t.feature))
+ igt_subtest_slow_f(t.slow,
+ "%s-farfromfence",
+ feature_str(t.feature))
farfromfence_subtest(&t);
TEST_MODE_ITER_END
@@ -3261,10 +3273,11 @@ int main(int argc, char *argv[])
if (t.format == FORMAT_DEFAULT)
continue;
- igt_subtest_f("%s-%s-draw-%s",
- feature_str(t.feature),
- format_str(t.format),
- igt_draw_get_method_name(t.method))
+ igt_subtest_slow_f(t.slow,
+ "%s-%s-draw-%s",
+ feature_str(t.feature),
+ format_str(t.format),
+ igt_draw_get_method_name(t.method))
format_draw_subtest(&t);
}
TEST_MODE_ITER_END
@@ -3275,9 +3288,10 @@ int main(int argc, char *argv[])
t.plane != PLANE_PRI ||
t.method != IGT_DRAW_MMAP_CPU)
continue;
- igt_subtest_f("%s-%s-scaledprimary",
- feature_str(t.feature),
- fbs_str(t.fbs))
+ igt_subtest_slow_f(t.slow,
+ "%s-%s-scaledprimary",
+ feature_str(t.feature),
+ fbs_str(t.fbs))
scaledprimary_subtest(&t);
TEST_MODE_ITER_END
@@ -3289,22 +3303,32 @@ int main(int argc, char *argv[])
t.method != IGT_DRAW_MMAP_CPU)
continue;
- igt_subtest_f("%s-modesetfrombusy", feature_str(t.feature))
+ igt_subtest_slow_f(t.slow,
+ "%s-modesetfrombusy",
+ feature_str(t.feature))
modesetfrombusy_subtest(&t);
if (t.feature & FEATURE_FBC) {
- igt_subtest_f("%s-badstride", feature_str(t.feature))
+ igt_subtest_slow_f(t.slow,
+ "%s-badstride",
+ feature_str(t.feature))
badstride_subtest(&t);
- igt_subtest_f("%s-stridechange", feature_str(t.feature))
+ igt_subtest_slow_f(t.slow,
+ "%s-stridechange",
+ feature_str(t.feature))
stridechange_subtest(&t);
}
if (t.feature & FEATURE_PSR)
- igt_subtest_f("%s-slowdraw", feature_str(t.feature))
+ igt_subtest_slow_f(t.slow,
+ "%s-slowdraw",
+ feature_str(t.feature))
slow_draw_subtest(&t);
- igt_subtest_f("%s-suspend", feature_str(t.feature))
+ igt_subtest_slow_f(t.slow,
+ "%s-suspend",
+ feature_str(t.feature))
suspend_subtest(&t);
TEST_MODE_ITER_END
--
2.6.2
* [PATCH i-g-t 3/3] Remove superfluous gem_concurrent_all.c
2015-10-28 11:29 ` [PATCH i-g-t 0/3 v2] " David Weinehall
2015-10-28 11:29 ` [PATCH i-g-t 1/3] Copy gem_concurrent_all to gem_concurrent_blit David Weinehall
2015-10-28 11:29 ` [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests David Weinehall
@ 2015-10-28 11:29 ` David Weinehall
2 siblings, 0 replies; 41+ messages in thread
From: David Weinehall @ 2015-10-28 11:29 UTC (permalink / raw)
To: intel-gfx
When gem_concurrent_blit was converted to use the new common framework
for choosing whether or not to include slow/combinatorial tests,
gem_concurrent_all became superfluous. This patch removes it.
Signed-off-by: David Weinehall <david.weinehall@linux.intel.com>
---
tests/.gitignore | 1 -
tests/Makefile.sources | 1 -
tests/gem_concurrent_all.c | 1108 --------------------------------------------
3 files changed, 1110 deletions(-)
delete mode 100644 tests/gem_concurrent_all.c
diff --git a/tests/.gitignore b/tests/.gitignore
index beda5117da5c..da4f9961fc60 100644
--- a/tests/.gitignore
+++ b/tests/.gitignore
@@ -23,7 +23,6 @@ gem_bad_reloc
gem_basic
gem_caching
gem_close_race
-gem_concurrent_all
gem_concurrent_blit
gem_cpu_reloc
gem_cs_prefetch
diff --git a/tests/Makefile.sources b/tests/Makefile.sources
index ac731f90dcb2..321c7f33e4d3 100644
--- a/tests/Makefile.sources
+++ b/tests/Makefile.sources
@@ -14,7 +14,6 @@ TESTS_progs_M = \
gem_caching \
gem_close_race \
gem_concurrent_blit \
- gem_concurrent_all \
gem_cs_tlb \
gem_ctx_param_basic \
gem_ctx_bad_exec \
diff --git a/tests/gem_concurrent_all.c b/tests/gem_concurrent_all.c
deleted file mode 100644
index 1d2d787202df..000000000000
--- a/tests/gem_concurrent_all.c
+++ /dev/null
@@ -1,1108 +0,0 @@
-/*
- * Copyright © 2009,2012,2013 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- * Eric Anholt <eric@anholt.net>
- * Chris Wilson <chris@chris-wilson.co.uk>
- * Daniel Vetter <daniel.vetter@ffwll.ch>
- *
- */
-
-/** @file gem_concurrent.c
- *
- * This is a test of pread/pwrite/mmap behavior when writing to active
- * buffers.
- *
- * Based on gem_gtt_concurrent_blt.
- */
-
-#include "igt.h"
-#include <stdlib.h>
-#include <stdio.h>
-#include <string.h>
-#include <fcntl.h>
-#include <inttypes.h>
-#include <errno.h>
-#include <sys/stat.h>
-#include <sys/time.h>
-#include <sys/wait.h>
-
-#include <drm.h>
-
-#include "intel_bufmgr.h"
-
-IGT_TEST_DESCRIPTION("Test of pread/pwrite/mmap behavior when writing to active"
- " buffers.");
-
-int fd, devid, gen;
-struct intel_batchbuffer *batch;
-int all;
-
-static void
-nop_release_bo(drm_intel_bo *bo)
-{
- drm_intel_bo_unreference(bo);
-}
-
-static void
-prw_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
-{
- int size = width * height, i;
- uint32_t *tmp;
-
- tmp = malloc(4*size);
- if (tmp) {
- for (i = 0; i < size; i++)
- tmp[i] = val;
- drm_intel_bo_subdata(bo, 0, 4*size, tmp);
- free(tmp);
- } else {
- for (i = 0; i < size; i++)
- drm_intel_bo_subdata(bo, 4*i, 4, &val);
- }
-}
-
-static void
-prw_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
-{
- int size = width * height, i;
- uint32_t *vaddr;
-
- do_or_die(drm_intel_bo_map(tmp, true));
- do_or_die(drm_intel_bo_get_subdata(bo, 0, 4*size, tmp->virtual));
- vaddr = tmp->virtual;
- for (i = 0; i < size; i++)
- igt_assert_eq_u32(vaddr[i], val);
- drm_intel_bo_unmap(tmp);
-}
-
-static drm_intel_bo *
-unmapped_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- drm_intel_bo *bo;
-
- bo = drm_intel_bo_alloc(bufmgr, "bo", 4*width*height, 0);
- igt_assert(bo);
-
- return bo;
-}
-
-static drm_intel_bo *
-snoop_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- drm_intel_bo *bo;
-
- igt_skip_on(gem_has_llc(fd));
-
- bo = unmapped_create_bo(bufmgr, width, height);
- gem_set_caching(fd, bo->handle, I915_CACHING_CACHED);
- drm_intel_bo_disable_reuse(bo);
-
- return bo;
-}
-
-static void
-gtt_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
-{
- uint32_t *vaddr = bo->virtual;
- int size = width * height;
-
- drm_intel_gem_bo_start_gtt_access(bo, true);
- while (size--)
- *vaddr++ = val;
-}
-
-static void
-gtt_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
-{
- uint32_t *vaddr = bo->virtual;
- int y;
-
- /* GTT access is slow. So we just compare a few points */
- drm_intel_gem_bo_start_gtt_access(bo, false);
- for (y = 0; y < height; y++)
- igt_assert_eq_u32(vaddr[y*width+y], val);
-}
-
-static drm_intel_bo *
-map_bo(drm_intel_bo *bo)
-{
- /* gtt map doesn't have a write parameter, so just keep the mapping
- * around (to avoid the set_domain with the gtt write domain set) and
- * manually tell the kernel when we start access the gtt. */
- do_or_die(drm_intel_gem_bo_map_gtt(bo));
-
- return bo;
-}
-
-static drm_intel_bo *
-tile_bo(drm_intel_bo *bo, int width)
-{
- uint32_t tiling = I915_TILING_X;
- uint32_t stride = width * 4;
-
- do_or_die(drm_intel_bo_set_tiling(bo, &tiling, stride));
-
- return bo;
-}
-
-static drm_intel_bo *
-gtt_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- return map_bo(unmapped_create_bo(bufmgr, width, height));
-}
-
-static drm_intel_bo *
-gttX_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- return tile_bo(gtt_create_bo(bufmgr, width, height), width);
-}
-
-static drm_intel_bo *
-wc_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- drm_intel_bo *bo;
-
- gem_require_mmap_wc(fd);
-
- bo = unmapped_create_bo(bufmgr, width, height);
- bo->virtual = __gem_mmap__wc(fd, bo->handle, 0, bo->size, PROT_READ | PROT_WRITE);
- return bo;
-}
-
-static void
-wc_release_bo(drm_intel_bo *bo)
-{
- munmap(bo->virtual, bo->size);
- bo->virtual = NULL;
-
- nop_release_bo(bo);
-}
-
-static drm_intel_bo *
-gpu_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- return unmapped_create_bo(bufmgr, width, height);
-}
-
-
-static drm_intel_bo *
-gpuX_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- return tile_bo(gpu_create_bo(bufmgr, width, height), width);
-}
-
-static void
-cpu_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
-{
- int size = width * height;
- uint32_t *vaddr;
-
- do_or_die(drm_intel_bo_map(bo, true));
- vaddr = bo->virtual;
- while (size--)
- *vaddr++ = val;
- drm_intel_bo_unmap(bo);
-}
-
-static void
-cpu_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
-{
- int size = width * height;
- uint32_t *vaddr;
-
- do_or_die(drm_intel_bo_map(bo, false));
- vaddr = bo->virtual;
- while (size--)
- igt_assert_eq_u32(*vaddr++, val);
- drm_intel_bo_unmap(bo);
-}
-
-static void
-gpu_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
-{
- struct drm_i915_gem_relocation_entry reloc[1];
- struct drm_i915_gem_exec_object2 gem_exec[2];
- struct drm_i915_gem_execbuffer2 execbuf;
- struct drm_i915_gem_pwrite gem_pwrite;
- struct drm_i915_gem_create create;
- uint32_t buf[10], *b;
- uint32_t tiling, swizzle;
-
- drm_intel_bo_get_tiling(bo, &tiling, &swizzle);
-
- memset(reloc, 0, sizeof(reloc));
- memset(gem_exec, 0, sizeof(gem_exec));
- memset(&execbuf, 0, sizeof(execbuf));
-
- b = buf;
- *b++ = XY_COLOR_BLT_CMD_NOLEN |
- ((gen >= 8) ? 5 : 4) |
- COLOR_BLT_WRITE_ALPHA | XY_COLOR_BLT_WRITE_RGB;
- if (gen >= 4 && tiling) {
- b[-1] |= XY_COLOR_BLT_TILED;
- *b = width;
- } else
- *b = width << 2;
- *b++ |= 0xf0 << 16 | 1 << 25 | 1 << 24;
- *b++ = 0;
- *b++ = height << 16 | width;
- reloc[0].offset = (b - buf) * sizeof(uint32_t);
- reloc[0].target_handle = bo->handle;
- reloc[0].read_domains = I915_GEM_DOMAIN_RENDER;
- reloc[0].write_domain = I915_GEM_DOMAIN_RENDER;
- *b++ = 0;
- if (gen >= 8)
- *b++ = 0;
- *b++ = val;
- *b++ = MI_BATCH_BUFFER_END;
- if ((b - buf) & 1)
- *b++ = 0;
-
- gem_exec[0].handle = bo->handle;
- gem_exec[0].flags = EXEC_OBJECT_NEEDS_FENCE;
-
- create.handle = 0;
- create.size = 4096;
- drmIoctl(fd, DRM_IOCTL_I915_GEM_CREATE, &create);
- gem_exec[1].handle = create.handle;
- gem_exec[1].relocation_count = 1;
- gem_exec[1].relocs_ptr = (uintptr_t)reloc;
-
- execbuf.buffers_ptr = (uintptr_t)gem_exec;
- execbuf.buffer_count = 2;
- execbuf.batch_len = (b - buf) * sizeof(buf[0]);
- if (gen >= 6)
- execbuf.flags = I915_EXEC_BLT;
-
- gem_pwrite.handle = gem_exec[1].handle;
- gem_pwrite.offset = 0;
- gem_pwrite.size = execbuf.batch_len;
- gem_pwrite.data_ptr = (uintptr_t)buf;
- do_or_die(drmIoctl(fd, DRM_IOCTL_I915_GEM_PWRITE, &gem_pwrite));
- do_or_die(drmIoctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, &execbuf));
-
- drmIoctl(fd, DRM_IOCTL_GEM_CLOSE, &create.handle);
-}
-
-static void
-gpu_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
-{
- intel_blt_copy(batch,
- bo, 0, 0, 4*width,
- tmp, 0, 0, 4*width,
- width, height, 32);
- cpu_cmp_bo(tmp, val, width, height, NULL);
-}
-
-const struct access_mode {
- const char *name;
- void (*set_bo)(drm_intel_bo *bo, uint32_t val, int w, int h);
- void (*cmp_bo)(drm_intel_bo *bo, uint32_t val, int w, int h, drm_intel_bo *tmp);
- drm_intel_bo *(*create_bo)(drm_intel_bufmgr *bufmgr, int width, int height);
- void (*release_bo)(drm_intel_bo *bo);
-} access_modes[] = {
- {
- .name = "prw",
- .set_bo = prw_set_bo,
- .cmp_bo = prw_cmp_bo,
- .create_bo = unmapped_create_bo,
- .release_bo = nop_release_bo,
- },
- {
- .name = "cpu",
- .set_bo = cpu_set_bo,
- .cmp_bo = cpu_cmp_bo,
- .create_bo = unmapped_create_bo,
- .release_bo = nop_release_bo,
- },
- {
- .name = "snoop",
- .set_bo = cpu_set_bo,
- .cmp_bo = cpu_cmp_bo,
- .create_bo = snoop_create_bo,
- .release_bo = nop_release_bo,
- },
- {
- .name = "gtt",
- .set_bo = gtt_set_bo,
- .cmp_bo = gtt_cmp_bo,
- .create_bo = gtt_create_bo,
- .release_bo = nop_release_bo,
- },
- {
- .name = "gttX",
- .set_bo = gtt_set_bo,
- .cmp_bo = gtt_cmp_bo,
- .create_bo = gttX_create_bo,
- .release_bo = nop_release_bo,
- },
- {
- .name = "wc",
- .set_bo = gtt_set_bo,
- .cmp_bo = gtt_cmp_bo,
- .create_bo = wc_create_bo,
- .release_bo = wc_release_bo,
- },
- {
- .name = "gpu",
- .set_bo = gpu_set_bo,
- .cmp_bo = gpu_cmp_bo,
- .create_bo = gpu_create_bo,
- .release_bo = nop_release_bo,
- },
- {
- .name = "gpuX",
- .set_bo = gpu_set_bo,
- .cmp_bo = gpu_cmp_bo,
- .create_bo = gpuX_create_bo,
- .release_bo = nop_release_bo,
- },
-};
-
-#define MAX_NUM_BUFFERS 1024
-int num_buffers = MAX_NUM_BUFFERS;
-const int width = 512, height = 512;
-igt_render_copyfunc_t rendercopy;
-
-struct buffers {
- const struct access_mode *mode;
- drm_intel_bufmgr *bufmgr;
- drm_intel_bo *src[MAX_NUM_BUFFERS], *dst[MAX_NUM_BUFFERS];
- drm_intel_bo *dummy, *spare;
- int count;
-};
-
-static void *buffers_init(struct buffers *data,
- const struct access_mode *mode,
- int _fd)
-{
- data->mode = mode;
- data->count = 0;
-
- data->bufmgr = drm_intel_bufmgr_gem_init(_fd, 4096);
- igt_assert(data->bufmgr);
-
- drm_intel_bufmgr_gem_enable_reuse(data->bufmgr);
- return intel_batchbuffer_alloc(data->bufmgr, devid);
-}
-
-static void buffers_destroy(struct buffers *data)
-{
- if (data->count == 0)
- return;
-
- for (int i = 0; i < data->count; i++) {
- data->mode->release_bo(data->src[i]);
- data->mode->release_bo(data->dst[i]);
- }
- data->mode->release_bo(data->dummy);
- data->mode->release_bo(data->spare);
- data->count = 0;
-}
-
-static void buffers_create(struct buffers *data,
- int count)
-{
- igt_assert(data->bufmgr);
-
- buffers_destroy(data);
-
- for (int i = 0; i < count; i++) {
- data->src[i] =
- data->mode->create_bo(data->bufmgr, width, height);
- data->dst[i] =
- data->mode->create_bo(data->bufmgr, width, height);
- }
- data->dummy = data->mode->create_bo(data->bufmgr, width, height);
- data->spare = data->mode->create_bo(data->bufmgr, width, height);
- data->count = count;
-}
-
-static void buffers_fini(struct buffers *data)
-{
- if (data->bufmgr == NULL)
- return;
-
- buffers_destroy(data);
-
- intel_batchbuffer_free(batch);
- drm_intel_bufmgr_destroy(data->bufmgr);
- data->bufmgr = NULL;
-}
-
-typedef void (*do_copy)(drm_intel_bo *dst, drm_intel_bo *src);
-typedef struct igt_hang_ring (*do_hang)(void);
-
-static void render_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
-{
- struct igt_buf d = {
- .bo = dst,
- .size = width * height * 4,
- .num_tiles = width * height * 4,
- .stride = width * 4,
- }, s = {
- .bo = src,
- .size = width * height * 4,
- .num_tiles = width * height * 4,
- .stride = width * 4,
- };
- uint32_t swizzle;
-
- drm_intel_bo_get_tiling(dst, &d.tiling, &swizzle);
- drm_intel_bo_get_tiling(src, &s.tiling, &swizzle);
-
- rendercopy(batch, NULL,
- &s, 0, 0,
- width, height,
- &d, 0, 0);
-}
-
-static void blt_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
-{
- intel_blt_copy(batch,
- src, 0, 0, 4*width,
- dst, 0, 0, 4*width,
- width, height, 32);
-}
-
-static void cpu_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
-{
- const int size = width * height * sizeof(uint32_t);
- void *d, *s;
-
- gem_set_domain(fd, src->handle, I915_GEM_DOMAIN_CPU, 0);
- gem_set_domain(fd, dst->handle, I915_GEM_DOMAIN_CPU, I915_GEM_DOMAIN_CPU);
- s = gem_mmap__cpu(fd, src->handle, 0, size, PROT_READ);
- d = gem_mmap__cpu(fd, dst->handle, 0, size, PROT_WRITE);
-
- memcpy(d, s, size);
-
- munmap(d, size);
- munmap(s, size);
-}
-
-static void gtt_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
-{
- const int size = width * height * sizeof(uint32_t);
- void *d, *s;
-
- gem_set_domain(fd, src->handle, I915_GEM_DOMAIN_GTT, 0);
- gem_set_domain(fd, dst->handle, I915_GEM_DOMAIN_GTT, I915_GEM_DOMAIN_GTT);
-
- s = gem_mmap__gtt(fd, src->handle, size, PROT_READ);
- d = gem_mmap__gtt(fd, dst->handle, size, PROT_WRITE);
-
- memcpy(d, s, size);
-
- munmap(d, size);
- munmap(s, size);
-}
-
-static void wc_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
-{
- const int size = width * height * sizeof(uint32_t);
- void *d, *s;
-
- gem_set_domain(fd, src->handle, I915_GEM_DOMAIN_GTT, 0);
- gem_set_domain(fd, dst->handle, I915_GEM_DOMAIN_GTT, I915_GEM_DOMAIN_GTT);
-
- s = gem_mmap__wc(fd, src->handle, 0, size, PROT_READ);
- d = gem_mmap__wc(fd, dst->handle, 0, size, PROT_WRITE);
-
- memcpy(d, s, size);
-
- munmap(d, size);
- munmap(s, size);
-}
-
-static struct igt_hang_ring no_hang(void)
-{
- return (struct igt_hang_ring){0, 0};
-}
-
-static struct igt_hang_ring bcs_hang(void)
-{
- return igt_hang_ring(fd, I915_EXEC_BLT);
-}
-
-static struct igt_hang_ring rcs_hang(void)
-{
- return igt_hang_ring(fd, I915_EXEC_RENDER);
-}
-
-static void hang_require(void)
-{
- igt_require_hang_ring(fd, -1);
-}
-
-static void do_overwrite_source(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = 0; i < buffers->count; i++) {
- buffers->mode->set_bo(buffers->src[i], i, width, height);
- buffers->mode->set_bo(buffers->dst[i], ~i, width, height);
- }
- for (i = 0; i < buffers->count; i++)
- do_copy_func(buffers->dst[i], buffers->src[i]);
- hang = do_hang_func();
- for (i = buffers->count; i--; )
- buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
- for (i = 0; i < buffers->count; i++)
- buffers->mode->cmp_bo(buffers->dst[i], i, width, height, buffers->dummy);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_overwrite_source_read(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func,
- int do_rcs)
-{
- const int half = buffers->count/2;
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = 0; i < half; i++) {
- buffers->mode->set_bo(buffers->src[i], i, width, height);
- buffers->mode->set_bo(buffers->dst[i], ~i, width, height);
- buffers->mode->set_bo(buffers->dst[i+half], ~i, width, height);
- }
- for (i = 0; i < half; i++) {
- do_copy_func(buffers->dst[i], buffers->src[i]);
- if (do_rcs)
- render_copy_bo(buffers->dst[i+half], buffers->src[i]);
- else
- blt_copy_bo(buffers->dst[i+half], buffers->src[i]);
- }
- hang = do_hang_func();
- for (i = half; i--; )
- buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
- for (i = 0; i < half; i++) {
- buffers->mode->cmp_bo(buffers->dst[i], i, width, height, buffers->dummy);
- buffers->mode->cmp_bo(buffers->dst[i+half], i, width, height, buffers->dummy);
- }
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_overwrite_source_read_bcs(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- do_overwrite_source_read(buffers, do_copy_func, do_hang_func, 0);
-}
-
-static void do_overwrite_source_read_rcs(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- do_overwrite_source_read(buffers, do_copy_func, do_hang_func, 1);
-}
-
-static void do_overwrite_source__rev(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = 0; i < buffers->count; i++) {
- buffers->mode->set_bo(buffers->src[i], i, width, height);
- buffers->mode->set_bo(buffers->dst[i], ~i, width, height);
- }
- for (i = 0; i < buffers->count; i++)
- do_copy_func(buffers->dst[i], buffers->src[i]);
- hang = do_hang_func();
- for (i = 0; i < buffers->count; i++)
- buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
- for (i = buffers->count; i--; )
- buffers->mode->cmp_bo(buffers->dst[i], i, width, height, buffers->dummy);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_overwrite_source__one(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
-
- gem_quiescent_gpu(fd);
- buffers->mode->set_bo(buffers->src[0], 0, width, height);
- buffers->mode->set_bo(buffers->dst[0], ~0, width, height);
- do_copy_func(buffers->dst[0], buffers->src[0]);
- hang = do_hang_func();
- buffers->mode->set_bo(buffers->src[0], 0xdeadbeef, width, height);
- buffers->mode->cmp_bo(buffers->dst[0], 0, width, height, buffers->dummy);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_intermix(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func,
- int do_rcs)
-{
- const int half = buffers->count/2;
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = 0; i < buffers->count; i++) {
- buffers->mode->set_bo(buffers->src[i], 0xdeadbeef^~i, width, height);
- buffers->mode->set_bo(buffers->dst[i], i, width, height);
- }
- for (i = 0; i < half; i++) {
- if (do_rcs == 1 || (do_rcs == -1 && i & 1))
- render_copy_bo(buffers->dst[i], buffers->src[i]);
- else
- blt_copy_bo(buffers->dst[i], buffers->src[i]);
-
- do_copy_func(buffers->dst[i+half], buffers->src[i]);
-
- if (do_rcs == 1 || (do_rcs == -1 && (i & 1) == 0))
- render_copy_bo(buffers->dst[i], buffers->dst[i+half]);
- else
- blt_copy_bo(buffers->dst[i], buffers->dst[i+half]);
-
- do_copy_func(buffers->dst[i+half], buffers->src[i+half]);
- }
- hang = do_hang_func();
- for (i = 0; i < 2*half; i++)
- buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef^~i, width, height, buffers->dummy);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_intermix_rcs(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- do_intermix(buffers, do_copy_func, do_hang_func, 1);
-}
-
-static void do_intermix_bcs(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- do_intermix(buffers, do_copy_func, do_hang_func, 0);
-}
-
-static void do_intermix_both(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- do_intermix(buffers, do_copy_func, do_hang_func, -1);
-}
-
-static void do_early_read(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = buffers->count; i--; )
- buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
- for (i = 0; i < buffers->count; i++)
- do_copy_func(buffers->dst[i], buffers->src[i]);
- hang = do_hang_func();
- for (i = buffers->count; i--; )
- buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef, width, height, buffers->dummy);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_read_read_bcs(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = buffers->count; i--; )
- buffers->mode->set_bo(buffers->src[i], 0xdeadbeef ^ i, width, height);
- for (i = 0; i < buffers->count; i++) {
- do_copy_func(buffers->dst[i], buffers->src[i]);
- blt_copy_bo(buffers->spare, buffers->src[i]);
- }
- cpu_cmp_bo(buffers->spare, 0xdeadbeef^(buffers->count-1), width, height, NULL);
- hang = do_hang_func();
- for (i = buffers->count; i--; )
- buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef ^ i, width, height, buffers->dummy);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_read_read_rcs(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = buffers->count; i--; )
- buffers->mode->set_bo(buffers->src[i], 0xdeadbeef ^ i, width, height);
- for (i = 0; i < buffers->count; i++) {
- do_copy_func(buffers->dst[i], buffers->src[i]);
- render_copy_bo(buffers->spare, buffers->src[i]);
- }
- cpu_cmp_bo(buffers->spare, 0xdeadbeef^(buffers->count-1), width, height, NULL);
- hang = do_hang_func();
- for (i = buffers->count; i--; )
- buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef ^ i, width, height, buffers->dummy);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_gpu_read_after_write(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = buffers->count; i--; )
- buffers->mode->set_bo(buffers->src[i], 0xabcdabcd, width, height);
- for (i = 0; i < buffers->count; i++)
- do_copy_func(buffers->dst[i], buffers->src[i]);
- for (i = buffers->count; i--; )
- do_copy_func(buffers->dummy, buffers->dst[i]);
- hang = do_hang_func();
- for (i = buffers->count; i--; )
- buffers->mode->cmp_bo(buffers->dst[i], 0xabcdabcd, width, height, buffers->dummy);
- igt_post_hang_ring(fd, hang);
-}
-
-typedef void (*do_test)(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func);
-
-typedef void (*run_wrap)(struct buffers *buffers,
- do_test do_test_func,
- do_copy do_copy_func,
- do_hang do_hang_func);
-
-static void run_single(struct buffers *buffers,
- do_test do_test_func,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- do_test_func(buffers, do_copy_func, do_hang_func);
-}
-
-static void run_interruptible(struct buffers *buffers,
- do_test do_test_func,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- int loop;
-
- for (loop = 0; loop < 10; loop++)
- do_test_func(buffers, do_copy_func, do_hang_func);
-}
-
-static void run_forked(struct buffers *buffers,
- do_test do_test_func,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- const int old_num_buffers = num_buffers;
-
- num_buffers /= 16;
- num_buffers += 2;
-
- igt_fork(child, 16) {
- /* recreate process local variables */
- buffers->count = 0;
- fd = drm_open_driver(DRIVER_INTEL);
-
- batch = buffers_init(buffers, buffers->mode, fd);
-
- buffers_create(buffers, num_buffers);
- for (int loop = 0; loop < 10; loop++)
- do_test_func(buffers, do_copy_func, do_hang_func);
-
- buffers_fini(buffers);
- }
-
- igt_waitchildren();
-
- num_buffers = old_num_buffers;
-}
-
-static void bit17_require(void)
-{
- struct drm_i915_gem_get_tiling2 {
- uint32_t handle;
- uint32_t tiling_mode;
- uint32_t swizzle_mode;
- uint32_t phys_swizzle_mode;
- } arg;
-#define DRM_IOCTL_I915_GEM_GET_TILING2 DRM_IOWR (DRM_COMMAND_BASE + DRM_I915_GEM_GET_TILING, struct drm_i915_gem_get_tiling2)
-
- memset(&arg, 0, sizeof(arg));
- arg.handle = gem_create(fd, 4096);
- gem_set_tiling(fd, arg.handle, I915_TILING_X, 512);
-
- do_or_die(drmIoctl(fd, DRM_IOCTL_I915_GEM_GET_TILING2, &arg));
- gem_close(fd, arg.handle);
- igt_require(arg.phys_swizzle_mode == arg.swizzle_mode);
-}
-
-static void cpu_require(void)
-{
- bit17_require();
-}
-
-static void gtt_require(void)
-{
-}
-
-static void wc_require(void)
-{
- bit17_require();
- gem_require_mmap_wc(fd);
-}
-
-static void bcs_require(void)
-{
-}
-
-static void rcs_require(void)
-{
- igt_require(rendercopy);
-}
-
-static void no_require(void)
-{
-}
-
-static void
-run_basic_modes(const struct access_mode *mode,
- const char *suffix,
- run_wrap run_wrap_func)
-{
- const struct {
- const char *prefix;
- do_copy copy;
- void (*require)(void);
- } pipelines[] = {
- { "cpu", cpu_copy_bo, cpu_require },
- { "gtt", gtt_copy_bo, gtt_require },
- { "wc", wc_copy_bo, wc_require },
- { "blt", blt_copy_bo, bcs_require },
- { "render", render_copy_bo, rcs_require },
- { NULL, NULL }
- }, *pskip = pipelines + 3, *p;
- const struct {
- const char *suffix;
- do_hang hang;
- void (*require)(void);
- } hangs[] = {
- { "", no_hang, no_require },
- { "-hang-blt", bcs_hang, hang_require },
- { "-hang-render", rcs_hang, hang_require },
- { NULL, NULL },
- }, *h;
- struct buffers buffers;
-
- for (h = hangs; h->suffix; h++) {
- if (!all && *h->suffix)
- continue;
-
- for (p = all ? pipelines : pskip; p->prefix; p++) {
- igt_fixture {
- batch = buffers_init(&buffers, mode, fd);
- }
-
- /* try to overwrite the source values */
- igt_subtest_f("%s-%s-overwrite-source-one%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_overwrite_source__one,
- p->copy, h->hang);
- }
-
- igt_subtest_f("%s-%s-overwrite-source%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_overwrite_source,
- p->copy, h->hang);
- }
-
- igt_subtest_f("%s-%s-overwrite-source-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_overwrite_source_read_bcs,
- p->copy, h->hang);
- }
-
- igt_subtest_f("%s-%s-overwrite-source-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- igt_require(rendercopy);
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_overwrite_source_read_rcs,
- p->copy, h->hang);
- }
-
- igt_subtest_f("%s-%s-overwrite-source-rev%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_overwrite_source__rev,
- p->copy, h->hang);
- }
-
- /* try to intermix copies with GPU copies*/
- igt_subtest_f("%s-%s-intermix-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- igt_require(rendercopy);
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_intermix_rcs,
- p->copy, h->hang);
- }
- igt_subtest_f("%s-%s-intermix-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- igt_require(rendercopy);
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_intermix_bcs,
- p->copy, h->hang);
- }
- igt_subtest_f("%s-%s-intermix-both%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- igt_require(rendercopy);
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_intermix_both,
- p->copy, h->hang);
- }
-
- /* try to read the results before the copy completes */
- igt_subtest_f("%s-%s-early-read%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_early_read,
- p->copy, h->hang);
- }
-
- /* concurrent reads */
- igt_subtest_f("%s-%s-read-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_read_read_bcs,
- p->copy, h->hang);
- }
- igt_subtest_f("%s-%s-read-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- igt_require(rendercopy);
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_read_read_rcs,
- p->copy, h->hang);
- }
-
- /* and finally try to trick the kernel into loosing the pending write */
- igt_subtest_f("%s-%s-gpu-read-after-write%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_gpu_read_after_write,
- p->copy, h->hang);
- }
-
- igt_fixture {
- buffers_fini(&buffers);
- }
- }
- }
-}
-
-static void
-run_modes(const struct access_mode *mode)
-{
- if (all) {
- run_basic_modes(mode, "", run_single);
-
- igt_fork_signal_helper();
- run_basic_modes(mode, "-interruptible", run_interruptible);
- igt_stop_signal_helper();
- }
-
- igt_fork_signal_helper();
- run_basic_modes(mode, "-forked", run_forked);
- igt_stop_signal_helper();
-}
-
-igt_main
-{
- int max, i;
-
- igt_skip_on_simulation();
-
- if (strstr(igt_test_name(), "all"))
- all = true;
-
- igt_fixture {
- fd = drm_open_driver(DRIVER_INTEL);
- devid = intel_get_drm_devid(fd);
- gen = intel_gen(devid);
- rendercopy = igt_get_render_copyfunc(devid);
-
- max = gem_aperture_size (fd) / (1024 * 1024) / 2;
- if (num_buffers > max)
- num_buffers = max;
-
- max = intel_get_total_ram_mb() * 3 / 4;
- if (num_buffers > max)
- num_buffers = max;
- num_buffers /= 2;
- igt_info("using 2x%d buffers, each 1MiB\n", num_buffers);
- }
-
- for (i = 0; i < ARRAY_SIZE(access_modes); i++)
- run_modes(&access_modes[i]);
-}
--
2.6.2
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply related [flat|nested] 41+ messages in thread
* Re: [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests
2015-10-28 11:29 ` [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests David Weinehall
@ 2015-10-28 16:12 ` Paulo Zanoni
2015-10-30 7:56 ` David Weinehall
2015-10-28 17:14 ` Thomas Wood
1 sibling, 1 reply; 41+ messages in thread
From: Paulo Zanoni @ 2015-10-28 16:12 UTC (permalink / raw)
To: David Weinehall; +Cc: Intel Graphics Development
2015-10-28 9:29 GMT-02:00 David Weinehall <david.weinehall@linux.intel.com>:
> Some tests should not be run by default, due to their slow,
> and sometimes superfluous, nature.
>
> We still want to be able to run these tests in some cases.
> Until now there's been no unified way of handling this. Remedy
> this by introducing the --all option to igt_core,
> and use it in gem_concurrent_blit & kms_frontbuffer_tracking.
I really think you should explain both your plan and its
implementation in more detail here.
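For readers following the thread, the gating the patch introduces boils down to a single predicate. A standalone sketch (the flag and helper mirror the patch; the function name here is shortened and is not the igt_core API):

```c
#include <stdbool.h>

/* Mirrors the patch: set when --all is passed on the command line. */
static bool with_slow_combinatorial = false;

/* Predicate from the patch: a subtest runs unless it is marked slow
 * and --all was not given. */
static bool slow_combinatorial_allowed(bool slow_test)
{
	return !slow_test || with_slow_combinatorial;
}
```

So by default only the fast set runs, and `--all` widens the run to include the slow/combinatorial subtests.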
>
> Signed-off-by: David Weinehall <david.weinehall@linux.intel.com>
> ---
> lib/igt_core.c | 24 +++++
> lib/igt_core.h | 7 ++
> tests/gem_concurrent_blit.c | 44 ++++-----
> tests/kms_frontbuffer_tracking.c | 208 ++++++++++++++++++++++-----------------
> 4 files changed, 165 insertions(+), 118 deletions(-)
>
> diff --git a/lib/igt_core.c b/lib/igt_core.c
> index 59127cafe606..6575b9d6bf0d 100644
> --- a/lib/igt_core.c
> +++ b/lib/igt_core.c
> @@ -216,6 +216,7 @@ const char *igt_interactive_debug;
>
> /* subtests helpers */
> static bool list_subtests = false;
> +static bool with_slow_combinatorial = false;
The option is called --all, the new subtest macro is _slow and the
variables and enums are called with_slow_combinatorial. Is this
intentional?
> static char *run_single_subtest = NULL;
> static bool run_single_subtest_found = false;
> static const char *in_subtest = NULL;
> @@ -235,6 +236,7 @@ bool test_child;
>
> enum {
> OPT_LIST_SUBTESTS,
> + OPT_WITH_SLOW_COMBINATORIAL,
> OPT_RUN_SUBTEST,
> OPT_DESCRIPTION,
> OPT_DEBUG,
> @@ -478,6 +480,7 @@ static void print_usage(const char *help_str, bool output_on_stderr)
>
> fprintf(f, "Usage: %s [OPTIONS]\n", command_str);
> fprintf(f, " --list-subtests\n"
> + " --all\n"
> " --run-subtest <pattern>\n"
> " --debug[=log-domain]\n"
> " --interactive-debug[=domain]\n"
> @@ -510,6 +513,7 @@ static int common_init(int *argc, char **argv,
> int c, option_index = 0, i, x;
> static struct option long_options[] = {
> {"list-subtests", 0, 0, OPT_LIST_SUBTESTS},
> + {"all", 0, 0, OPT_WITH_SLOW_COMBINATORIAL},
> {"run-subtest", 1, 0, OPT_RUN_SUBTEST},
> {"help-description", 0, 0, OPT_DESCRIPTION},
> {"debug", optional_argument, 0, OPT_DEBUG},
> @@ -617,6 +621,10 @@ static int common_init(int *argc, char **argv,
> if (!run_single_subtest)
> list_subtests = true;
> break;
> + case OPT_WITH_SLOW_COMBINATORIAL:
> + if (!run_single_subtest)
> + with_slow_combinatorial = true;
> + break;
> case OPT_RUN_SUBTEST:
> if (!list_subtests)
> run_single_subtest = strdup(optarg);
> @@ -1629,6 +1637,22 @@ void igt_skip_on_simulation(void)
> igt_require(!igt_run_in_simulation());
> }
>
> +/**
> + * __igt_slow_combinatorial:
> + *
> + * This is used to skip subtests that should only be included
> + * when the "--all" command line option has been specified. This version
> + * is intended as a test.
> + *
> + * @slow_test: true if the subtest is part of the slow/combinatorial set
> + *
> + * Returns: true if the test should be run, false if the test should be skipped
> + */
> +bool __igt_slow_combinatorial(bool slow_test)
> +{
> + return !slow_test || with_slow_combinatorial;
> +}
> +
> /* structured logging */
>
> /**
> diff --git a/lib/igt_core.h b/lib/igt_core.h
> index 5ae09653fd55..7b592278bf6c 100644
> --- a/lib/igt_core.h
> +++ b/lib/igt_core.h
> @@ -191,6 +191,12 @@ bool __igt_run_subtest(const char *subtest_name);
> #define igt_subtest_f(f...) \
> __igt_subtest_f(igt_tokencat(__tmpchar, __LINE__), f)
>
> +bool __igt_slow_combinatorial(bool slow_test);
> +
We also need a igt_subtest_slow() version (without "_f") and some
comments explaining what's the real difference between them and the
other macros, like the other igt_subtest_* macros.
> +#define igt_subtest_slow_f(__slow, f...) \
> + if (__igt_slow_combinatorial(__slow)) \
> + __igt_subtest_f(igt_tokencat(__tmpchar, __LINE__), f)
Missing tab in the line above.
> +
> const char *igt_subtest_name(void);
> bool igt_only_list_subtests(void);
>
> @@ -669,6 +675,7 @@ void igt_disable_exit_handler(void);
>
> /* helpers to automatically reduce test runtime in simulation */
> bool igt_run_in_simulation(void);
> +
Bad chunk.
> /**
> * SLOW_QUICK:
> * @slow: value in simulation mode
> diff --git a/tests/gem_concurrent_blit.c b/tests/gem_concurrent_blit.c
> index 1d2d787202df..fe37cc707583 100644
> --- a/tests/gem_concurrent_blit.c
> +++ b/tests/gem_concurrent_blit.c
> @@ -55,7 +55,6 @@ IGT_TEST_DESCRIPTION("Test of pread/pwrite/mmap behavior when writing to active"
>
> int fd, devid, gen;
> struct intel_batchbuffer *batch;
> -int all;
>
> static void
> nop_release_bo(drm_intel_bo *bo)
> @@ -931,16 +930,14 @@ run_basic_modes(const struct access_mode *mode,
> struct buffers buffers;
>
> for (h = hangs; h->suffix; h++) {
> - if (!all && *h->suffix)
> - continue;
> -
> - for (p = all ? pipelines : pskip; p->prefix; p++) {
> + for (p = __igt_slow_combinatorial(true) ? pipelines : pskip;
> + p->prefix; p++) {
> igt_fixture {
> batch = buffers_init(&buffers, mode, fd);
> }
>
> /* try to overwrite the source values */
> - igt_subtest_f("%s-%s-overwrite-source-one%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_slow_f(*h->suffix, "%s-%s-overwrite-source-one%s%s", mode->name, p->prefix, suffix, h->suffix) {
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -949,7 +946,7 @@ run_basic_modes(const struct access_mode *mode,
> p->copy, h->hang);
> }
>
> - igt_subtest_f("%s-%s-overwrite-source%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_slow_f(*h->suffix, "%s-%s-overwrite-source%s%s", mode->name, p->prefix, suffix, h->suffix) {
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -958,7 +955,7 @@ run_basic_modes(const struct access_mode *mode,
> p->copy, h->hang);
> }
>
> - igt_subtest_f("%s-%s-overwrite-source-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_slow_f(*h->suffix, "%s-%s-overwrite-source-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -967,7 +964,7 @@ run_basic_modes(const struct access_mode *mode,
> p->copy, h->hang);
> }
>
> - igt_subtest_f("%s-%s-overwrite-source-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_slow_f(*h->suffix, "%s-%s-overwrite-source-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> h->require();
> p->require();
> igt_require(rendercopy);
> @@ -977,7 +974,7 @@ run_basic_modes(const struct access_mode *mode,
> p->copy, h->hang);
> }
>
> - igt_subtest_f("%s-%s-overwrite-source-rev%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_slow_f(*h->suffix, "%s-%s-overwrite-source-rev%s%s", mode->name, p->prefix, suffix, h->suffix) {
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -987,7 +984,7 @@ run_basic_modes(const struct access_mode *mode,
> }
>
> /* try to intermix copies with GPU copies*/
> - igt_subtest_f("%s-%s-intermix-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_slow_f(*h->suffix, "%s-%s-intermix-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> h->require();
> p->require();
> igt_require(rendercopy);
> @@ -996,7 +993,7 @@ run_basic_modes(const struct access_mode *mode,
> do_intermix_rcs,
> p->copy, h->hang);
> }
> - igt_subtest_f("%s-%s-intermix-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_slow_f(*h->suffix, "%s-%s-intermix-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> h->require();
> p->require();
> igt_require(rendercopy);
> @@ -1005,7 +1002,7 @@ run_basic_modes(const struct access_mode *mode,
> do_intermix_bcs,
> p->copy, h->hang);
> }
> - igt_subtest_f("%s-%s-intermix-both%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_slow_f(*h->suffix, "%s-%s-intermix-both%s%s", mode->name, p->prefix, suffix, h->suffix) {
> h->require();
> p->require();
> igt_require(rendercopy);
> @@ -1016,7 +1013,7 @@ run_basic_modes(const struct access_mode *mode,
> }
>
> /* try to read the results before the copy completes */
> - igt_subtest_f("%s-%s-early-read%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_slow_f(*h->suffix, "%s-%s-early-read%s%s", mode->name, p->prefix, suffix, h->suffix) {
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -1026,7 +1023,7 @@ run_basic_modes(const struct access_mode *mode,
> }
>
> /* concurrent reads */
> - igt_subtest_f("%s-%s-read-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_slow_f(*h->suffix, "%s-%s-read-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -1034,7 +1031,7 @@ run_basic_modes(const struct access_mode *mode,
> do_read_read_bcs,
> p->copy, h->hang);
> }
> - igt_subtest_f("%s-%s-read-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_slow_f(*h->suffix, "%s-%s-read-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> h->require();
> p->require();
> igt_require(rendercopy);
> @@ -1045,7 +1042,7 @@ run_basic_modes(const struct access_mode *mode,
> }
>
> /* and finally try to trick the kernel into loosing the pending write */
> - igt_subtest_f("%s-%s-gpu-read-after-write%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_slow_f(*h->suffix, "%s-%s-gpu-read-after-write%s%s", mode->name, p->prefix, suffix, h->suffix) {
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -1064,13 +1061,11 @@ run_basic_modes(const struct access_mode *mode,
> static void
> run_modes(const struct access_mode *mode)
> {
> - if (all) {
> - run_basic_modes(mode, "", run_single);
> + run_basic_modes(mode, "", run_single);
>
> - igt_fork_signal_helper();
> - run_basic_modes(mode, "-interruptible", run_interruptible);
> - igt_stop_signal_helper();
> - }
> + igt_fork_signal_helper();
> + run_basic_modes(mode, "-interruptible", run_interruptible);
> + igt_stop_signal_helper();
>
> igt_fork_signal_helper();
> run_basic_modes(mode, "-forked", run_forked);
> @@ -1083,9 +1078,6 @@ igt_main
>
> igt_skip_on_simulation();
>
> - if (strstr(igt_test_name(), "all"))
> - all = true;
> -
> igt_fixture {
> fd = drm_open_driver(DRIVER_INTEL);
> devid = intel_get_drm_devid(fd);
> diff --git a/tests/kms_frontbuffer_tracking.c b/tests/kms_frontbuffer_tracking.c
> index 15707b9b9040..86fd7ca08692 100644
> --- a/tests/kms_frontbuffer_tracking.c
> +++ b/tests/kms_frontbuffer_tracking.c
> @@ -47,8 +47,7 @@ IGT_TEST_DESCRIPTION("Test the Kernel's frontbuffer tracking mechanism and "
> * combinations that are somewhat redundant and don't add much value to the
> * test. For example, since we already do the offscreen testing with a single
> * pipe enabled, there's no much value in doing it again with dual pipes. If you
> - * still want to try these redundant tests, you need to use the --show-hidden
> - * option.
> + * still want to try these redundant tests, you need to use the --all option.
> *
> * The most important hidden thing is the FEATURE_NONE set of tests. Whenever
> * you get a failure on any test, it is important to check whether the same test
> @@ -116,6 +115,10 @@ struct test_mode {
> } format;
>
> enum igt_draw_method method;
> +
> + /* The test is slow and/or combinatorial;
> + * skip unless otherwise specified */
> + bool slow;
My problem with this is that exactly none of the tests marked as
"slow" are actually slow here... They're either redundant or for debug
purposes, not slow.
> };
>
> enum flip_type {
> @@ -237,7 +240,6 @@ struct {
> bool fbc_check_last_action;
> bool no_edp;
> bool small_modes;
> - bool show_hidden;
> int step;
> int only_pipes;
> int shared_fb_x_offset;
> @@ -249,7 +251,6 @@ struct {
> .fbc_check_last_action = true,
> .no_edp = false,
> .small_modes = false,
> - .show_hidden= false,
> .step = 0,
> .only_pipes = PIPE_COUNT,
> .shared_fb_x_offset = 500,
> @@ -2933,9 +2934,6 @@ static int opt_handler(int option, int option_index, void *data)
> case 'm':
> opt.small_modes = true;
> break;
> - case 'i':
> - opt.show_hidden = true;
> - break;
> case 't':
> opt.step++;
> break;
> @@ -2971,7 +2969,6 @@ const char *help_str =
> " --no-fbc-action-check Don't check for the FBC last action\n"
> " --no-edp Don't use eDP monitors\n"
> " --use-small-modes Use smaller resolutions for the modes\n"
> -" --show-hidden Show hidden subtests\n"
> " --step Stop on each step so you can check the screen\n"
> " --shared-fb-x offset Use 'offset' as the X offset for the shared FB\n"
> " --shared-fb-y offset Use 'offset' as the Y offset for the shared FB\n"
> @@ -3068,18 +3065,19 @@ static const char *format_str(enum pixel_format format)
> for (t.plane = 0; t.plane < PLANE_COUNT; t.plane++) { \
> for (t.fbs = 0; t.fbs < FBS_COUNT; t.fbs++) { \
> for (t.method = 0; t.method < IGT_DRAW_METHOD_COUNT; t.method++) { \
> + t.slow = false; \
> if (t.pipes == PIPE_SINGLE && t.screen == SCREEN_SCND) \
> continue; \
> if (t.screen == SCREEN_OFFSCREEN && t.plane != PLANE_PRI) \
> continue; \
> - if (!opt.show_hidden && t.pipes == PIPE_DUAL && \
> + if (t.pipes == PIPE_DUAL && \
> t.screen == SCREEN_OFFSCREEN) \
> - continue; \
> - if (!opt.show_hidden && t.feature == FEATURE_NONE) \
> - continue; \
> - if (!opt.show_hidden && t.fbs == FBS_SHARED && \
> + t.slow = true; \
> + if (t.feature == FEATURE_NONE) \
> + t.slow = true; \
> + if (t.fbs == FBS_SHARED && \
> (t.plane == PLANE_CUR || t.plane == PLANE_SPR)) \
> - continue;
> + t.slow = true;
>
>
> #define TEST_MODE_ITER_END } } } } } }
> @@ -3094,7 +3092,6 @@ int main(int argc, char *argv[])
> { "no-fbc-action-check", 0, 0, 'a'},
> { "no-edp", 0, 0, 'e'},
> { "use-small-modes", 0, 0, 'm'},
> - { "show-hidden", 0, 0, 'i'},
> { "step", 0, 0, 't'},
> { "shared-fb-x", 1, 0, 'x'},
> { "shared-fb-y", 1, 0, 'y'},
> @@ -3110,8 +3107,9 @@ int main(int argc, char *argv[])
> setup_environment();
>
> for (t.feature = 0; t.feature < FEATURE_COUNT; t.feature++) {
> - if (!opt.show_hidden && t.feature == FEATURE_NONE)
> - continue;
> + t.slow = false;
> + if (t.feature == FEATURE_NONE)
> + t.slow = true;
> for (t.pipes = 0; t.pipes < PIPE_COUNT; t.pipes++) {
> t.screen = SCREEN_PRIM;
> t.plane = PLANE_PRI;
> @@ -3120,52 +3118,58 @@ int main(int argc, char *argv[])
> /* Make sure nothing is using this value. */
> t.method = -1;
>
> - igt_subtest_f("%s-%s-rte",
> - feature_str(t.feature),
> - pipes_str(t.pipes))
> + igt_subtest_slow_f(t.slow,
> + "%s-%s-rte",
> + feature_str(t.feature),
> + pipes_str(t.pipes))
> rte_subtest(&t);
> }
> }
>
> TEST_MODE_ITER_BEGIN(t)
> - igt_subtest_f("%s-%s-%s-%s-%s-draw-%s",
> - feature_str(t.feature),
> - pipes_str(t.pipes),
> - screen_str(t.screen),
> - plane_str(t.plane),
> - fbs_str(t.fbs),
> - igt_draw_get_method_name(t.method))
> + igt_subtest_slow_f(t.slow,
> + "%s-%s-%s-%s-%s-draw-%s",
> + feature_str(t.feature),
> + pipes_str(t.pipes),
> + screen_str(t.screen),
> + plane_str(t.plane),
> + fbs_str(t.fbs),
> + igt_draw_get_method_name(t.method))
> draw_subtest(&t);
> TEST_MODE_ITER_END
>
> TEST_MODE_ITER_BEGIN(t)
> if (t.plane != PLANE_PRI ||
> - t.screen == SCREEN_OFFSCREEN ||
> - (!opt.show_hidden && t.method != IGT_DRAW_BLT))
> + t.screen == SCREEN_OFFSCREEN)
> continue;
> -
> - igt_subtest_f("%s-%s-%s-%s-flip-%s",
> - feature_str(t.feature),
> - pipes_str(t.pipes),
> - screen_str(t.screen),
> - fbs_str(t.fbs),
> - igt_draw_get_method_name(t.method))
> + if (t.method != IGT_DRAW_BLT)
> + t.slow = true;
> +
> + igt_subtest_slow_f(t.slow,
> + "%s-%s-%s-%s-flip-%s",
> + feature_str(t.feature),
> + pipes_str(t.pipes),
> + screen_str(t.screen),
> + fbs_str(t.fbs),
> + igt_draw_get_method_name(t.method))
> flip_subtest(&t, FLIP_PAGEFLIP);
>
> - igt_subtest_f("%s-%s-%s-%s-evflip-%s",
> - feature_str(t.feature),
> - pipes_str(t.pipes),
> - screen_str(t.screen),
> - fbs_str(t.fbs),
> - igt_draw_get_method_name(t.method))
> + igt_subtest_slow_f(t.slow,
> + "%s-%s-%s-%s-evflip-%s",
> + feature_str(t.feature),
> + pipes_str(t.pipes),
> + screen_str(t.screen),
> + fbs_str(t.fbs),
> + igt_draw_get_method_name(t.method))
> flip_subtest(&t, FLIP_PAGEFLIP_EVENT);
>
> - igt_subtest_f("%s-%s-%s-%s-msflip-%s",
> - feature_str(t.feature),
> - pipes_str(t.pipes),
> - screen_str(t.screen),
> - fbs_str(t.fbs),
> - igt_draw_get_method_name(t.method))
> + igt_subtest_slow_f(t.slow,
> + "%s-%s-%s-%s-msflip-%s",
> + feature_str(t.feature),
> + pipes_str(t.pipes),
> + screen_str(t.screen),
> + fbs_str(t.fbs),
> + igt_draw_get_method_name(t.method))
> flip_subtest(&t, FLIP_MODESET);
>
> TEST_MODE_ITER_END
> @@ -3177,10 +3181,11 @@ int main(int argc, char *argv[])
> (t.feature & FEATURE_FBC) == 0)
> continue;
>
> - igt_subtest_f("%s-%s-%s-fliptrack",
> - feature_str(t.feature),
> - pipes_str(t.pipes),
> - fbs_str(t.fbs))
> + igt_subtest_slow_f(t.slow,
> + "%s-%s-%s-fliptrack",
> + feature_str(t.feature),
> + pipes_str(t.pipes),
> + fbs_str(t.fbs))
> fliptrack_subtest(&t, FLIP_PAGEFLIP);
> TEST_MODE_ITER_END
>
> @@ -3190,20 +3195,22 @@ int main(int argc, char *argv[])
> t.plane == PLANE_PRI)
> continue;
>
> - igt_subtest_f("%s-%s-%s-%s-%s-move",
> - feature_str(t.feature),
> - pipes_str(t.pipes),
> - screen_str(t.screen),
> - plane_str(t.plane),
> - fbs_str(t.fbs))
> + igt_subtest_slow_f(t.slow,
> + "%s-%s-%s-%s-%s-move",
> + feature_str(t.feature),
> + pipes_str(t.pipes),
> + screen_str(t.screen),
> + plane_str(t.plane),
> + fbs_str(t.fbs))
> move_subtest(&t);
>
> - igt_subtest_f("%s-%s-%s-%s-%s-onoff",
> - feature_str(t.feature),
> - pipes_str(t.pipes),
> - screen_str(t.screen),
> - plane_str(t.plane),
> - fbs_str(t.fbs))
> + igt_subtest_slow_f(t.slow,
> + "%s-%s-%s-%s-%s-onoff",
> + feature_str(t.feature),
> + pipes_str(t.pipes),
> + screen_str(t.screen),
> + plane_str(t.plane),
> + fbs_str(t.fbs))
> onoff_subtest(&t);
> TEST_MODE_ITER_END
>
> @@ -3213,27 +3220,30 @@ int main(int argc, char *argv[])
> t.plane != PLANE_SPR)
> continue;
>
> - igt_subtest_f("%s-%s-%s-%s-%s-fullscreen",
> - feature_str(t.feature),
> - pipes_str(t.pipes),
> - screen_str(t.screen),
> - plane_str(t.plane),
> - fbs_str(t.fbs))
> + igt_subtest_slow_f(t.slow,
> + "%s-%s-%s-%s-%s-fullscreen",
> + feature_str(t.feature),
> + pipes_str(t.pipes),
> + screen_str(t.screen),
> + plane_str(t.plane),
> + fbs_str(t.fbs))
> fullscreen_plane_subtest(&t);
> TEST_MODE_ITER_END
>
> TEST_MODE_ITER_BEGIN(t)
> if (t.screen != SCREEN_PRIM ||
> - t.method != IGT_DRAW_BLT ||
> - (!opt.show_hidden && t.plane != PLANE_PRI) ||
> - (!opt.show_hidden && t.fbs != FBS_INDIVIDUAL))
> + t.method != IGT_DRAW_BLT)
> continue;
> -
> - igt_subtest_f("%s-%s-%s-%s-multidraw",
> - feature_str(t.feature),
> - pipes_str(t.pipes),
> - plane_str(t.plane),
> - fbs_str(t.fbs))
> + if (t.plane != PLANE_PRI ||
> + t.fbs != FBS_INDIVIDUAL)
> + t.slow = true;
> +
> + igt_subtest_slow_f(t.slow,
> + "%s-%s-%s-%s-multidraw",
> + feature_str(t.feature),
> + pipes_str(t.pipes),
> + plane_str(t.plane),
> + fbs_str(t.fbs))
> multidraw_subtest(&t);
> TEST_MODE_ITER_END
>
> @@ -3245,7 +3255,9 @@ int main(int argc, char *argv[])
> t.method != IGT_DRAW_MMAP_GTT)
> continue;
>
> - igt_subtest_f("%s-farfromfence", feature_str(t.feature))
> + igt_subtest_slow_f(t.slow,
> + "%s-farfromfence",
> + feature_str(t.feature))
> farfromfence_subtest(&t);
> TEST_MODE_ITER_END
>
> @@ -3261,10 +3273,11 @@ int main(int argc, char *argv[])
> if (t.format == FORMAT_DEFAULT)
> continue;
>
> - igt_subtest_f("%s-%s-draw-%s",
> - feature_str(t.feature),
> - format_str(t.format),
> - igt_draw_get_method_name(t.method))
> + igt_subtest_slow_f(t.slow,
> + "%s-%s-draw-%s",
> + feature_str(t.feature),
> + format_str(t.format),
> + igt_draw_get_method_name(t.method))
> format_draw_subtest(&t);
> }
> TEST_MODE_ITER_END
> @@ -3275,9 +3288,10 @@ int main(int argc, char *argv[])
> t.plane != PLANE_PRI ||
> t.method != IGT_DRAW_MMAP_CPU)
> continue;
> - igt_subtest_f("%s-%s-scaledprimary",
> - feature_str(t.feature),
> - fbs_str(t.fbs))
> + igt_subtest_slow_f(t.slow,
> + "%s-%s-scaledprimary",
> + feature_str(t.feature),
> + fbs_str(t.fbs))
> scaledprimary_subtest(&t);
> TEST_MODE_ITER_END
>
> @@ -3289,22 +3303,32 @@ int main(int argc, char *argv[])
> t.method != IGT_DRAW_MMAP_CPU)
> continue;
>
> - igt_subtest_f("%s-modesetfrombusy", feature_str(t.feature))
> + igt_subtest_slow_f(t.slow,
> + "%s-modesetfrombusy",
> + feature_str(t.feature))
> modesetfrombusy_subtest(&t);
>
> if (t.feature & FEATURE_FBC) {
> - igt_subtest_f("%s-badstride", feature_str(t.feature))
> + igt_subtest_slow_f(t.slow,
> + "%s-badstride",
> + feature_str(t.feature))
> badstride_subtest(&t);
>
> - igt_subtest_f("%s-stridechange", feature_str(t.feature))
> + igt_subtest_slow_f(t.slow,
> + "%s-stridechange",
> + feature_str(t.feature))
> stridechange_subtest(&t);
> }
>
> if (t.feature & FEATURE_PSR)
> - igt_subtest_f("%s-slowdraw", feature_str(t.feature))
> + igt_subtest_slow_f(t.slow,
> + "%s-slowdraw",
> + feature_str(t.feature))
> slow_draw_subtest(&t);
>
> - igt_subtest_f("%s-suspend", feature_str(t.feature))
> + igt_subtest_slow_f(t.slow,
> + "%s-suspend",
> + feature_str(t.feature))
> suspend_subtest(&t);
> TEST_MODE_ITER_END
>
> --
> 2.6.2
>
--
Paulo Zanoni
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests
2015-10-28 11:29 ` [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests David Weinehall
2015-10-28 16:12 ` Paulo Zanoni
@ 2015-10-28 17:14 ` Thomas Wood
2015-10-30 7:44 ` David Weinehall
1 sibling, 1 reply; 41+ messages in thread
From: Thomas Wood @ 2015-10-28 17:14 UTC (permalink / raw)
To: David Weinehall; +Cc: Intel Graphics Development
On 28 October 2015 at 11:29, David Weinehall
<david.weinehall@linux.intel.com> wrote:
> Some tests should not be run by default, due to their slow,
> and sometimes superfluous, nature.
>
> We still want to be able to run these tests in some cases.
> Until now there's been no unified way of handling this. Remedy
> this by introducing the --all option to igt_core,
> and use it in gem_concurrent_blit & kms_frontbuffer_tracking.
>
> Signed-off-by: David Weinehall <david.weinehall@linux.intel.com>
> ---
> lib/igt_core.c | 24 +++++
> lib/igt_core.h | 7 ++
> tests/gem_concurrent_blit.c | 44 ++++-----
> tests/kms_frontbuffer_tracking.c | 208 ++++++++++++++++++++++-----------------
> 4 files changed, 165 insertions(+), 118 deletions(-)
>
> diff --git a/lib/igt_core.c b/lib/igt_core.c
> index 59127cafe606..6575b9d6bf0d 100644
> --- a/lib/igt_core.c
> +++ b/lib/igt_core.c
> @@ -216,6 +216,7 @@ const char *igt_interactive_debug;
>
> /* subtests helpers */
> static bool list_subtests = false;
> +static bool with_slow_combinatorial = false;
> static char *run_single_subtest = NULL;
> static bool run_single_subtest_found = false;
> static const char *in_subtest = NULL;
> @@ -235,6 +236,7 @@ bool test_child;
>
> enum {
> OPT_LIST_SUBTESTS,
> + OPT_WITH_SLOW_COMBINATORIAL,
> OPT_RUN_SUBTEST,
> OPT_DESCRIPTION,
> OPT_DEBUG,
> @@ -478,6 +480,7 @@ static void print_usage(const char *help_str, bool output_on_stderr)
>
> fprintf(f, "Usage: %s [OPTIONS]\n", command_str);
> fprintf(f, " --list-subtests\n"
> + " --all\n"
> " --run-subtest <pattern>\n"
> " --debug[=log-domain]\n"
> " --interactive-debug[=domain]\n"
> @@ -510,6 +513,7 @@ static int common_init(int *argc, char **argv,
> int c, option_index = 0, i, x;
> static struct option long_options[] = {
> {"list-subtests", 0, 0, OPT_LIST_SUBTESTS},
> + {"all", 0, 0, OPT_WITH_SLOW_COMBINATORIAL},
> {"run-subtest", 1, 0, OPT_RUN_SUBTEST},
> {"help-description", 0, 0, OPT_DESCRIPTION},
> {"debug", optional_argument, 0, OPT_DEBUG},
> @@ -617,6 +621,10 @@ static int common_init(int *argc, char **argv,
> if (!run_single_subtest)
> list_subtests = true;
> break;
> + case OPT_WITH_SLOW_COMBINATORIAL:
> + if (!run_single_subtest)
> + with_slow_combinatorial = true;
> + break;
> case OPT_RUN_SUBTEST:
> if (!list_subtests)
> run_single_subtest = strdup(optarg);
> @@ -1629,6 +1637,22 @@ void igt_skip_on_simulation(void)
> igt_require(!igt_run_in_simulation());
> }
>
> +/**
> + * __igt_slow_combinatorial:
If this is intended to be documented and used in tests, then it should
be included in the public API (i.e. without the underscore prefix).
> + *
> + * This is used to skip subtests that should only be included
> + * when the "--all" command line option has been specified. This version
> + * is intended as a test.
> + *
> + * @slow_test: true if the subtest is part of the slow/combinatorial set
If this is used to test if a slow subtest should be run, shouldn't
slow_test always be true?
> + *
> + * Returns: true if the test should be run, false if the test should be skipped
> + */
> +bool __igt_slow_combinatorial(bool slow_test)
> +{
> + return !slow_test || with_slow_combinatorial;
> +}
> +
> /* structured logging */
>
> /**
> diff --git a/lib/igt_core.h b/lib/igt_core.h
> index 5ae09653fd55..7b592278bf6c 100644
> --- a/lib/igt_core.h
> +++ b/lib/igt_core.h
> @@ -191,6 +191,12 @@ bool __igt_run_subtest(const char *subtest_name);
> #define igt_subtest_f(f...) \
> __igt_subtest_f(igt_tokencat(__tmpchar, __LINE__), f)
>
> +bool __igt_slow_combinatorial(bool slow_test);
> +
Documentation for igt_subtest_slow_f is needed here. If __slow is
false, this macro just defines a normal subtest, which is
contradictory to its name. Perhaps igt_subtest_with_flags_f (or
similar) would be better and would also allow for future expansion
with other categories.
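A flags-based variant along those lines could look like this. All names below are hypothetical sketches of the suggestion, not existing igt_core API:

```c
#include <stdbool.h>

/* A flags argument instead of a bare bool leaves room for future
 * subtest categories beyond "slow". */
enum igt_subtest_flags {
	IGT_SUBTEST_FLAG_NONE = 0,
	IGT_SUBTEST_FLAG_SLOW = 1 << 0,
	/* e.g. IGT_SUBTEST_FLAG_COMBINATORIAL = 1 << 1 later on */
};

static bool with_slow_combinatorial = false; /* set by --all in the patch */

static bool igt_subtest_flags_allowed(unsigned int flags)
{
	if ((flags & IGT_SUBTEST_FLAG_SLOW) && !with_slow_combinatorial)
		return false;
	return true;
}
```

An `igt_subtest_with_flags_f()` macro would then test `igt_subtest_flags_allowed(__flags)` instead of the bool predicate, and new categories only need a new enum bit plus a matching command line option.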
> +#define igt_subtest_slow_f(__slow, f...) \
> + if (__igt_slow_combinatorial(__slow)) \
> + __igt_subtest_f(igt_tokencat(__tmpchar, __LINE__), f)
> +
> const char *igt_subtest_name(void);
> bool igt_only_list_subtests(void);
>
> @@ -669,6 +675,7 @@ void igt_disable_exit_handler(void);
>
> /* helpers to automatically reduce test runtime in simulation */
> bool igt_run_in_simulation(void);
> +
> /**
> * SLOW_QUICK:
> * @slow: value in simulation mode
> diff --git a/tests/gem_concurrent_blit.c b/tests/gem_concurrent_blit.c
> index 1d2d787202df..fe37cc707583 100644
> --- a/tests/gem_concurrent_blit.c
> +++ b/tests/gem_concurrent_blit.c
> @@ -55,7 +55,6 @@ IGT_TEST_DESCRIPTION("Test of pread/pwrite/mmap behavior when writing to active"
>
> int fd, devid, gen;
> struct intel_batchbuffer *batch;
> -int all;
>
> static void
> nop_release_bo(drm_intel_bo *bo)
> @@ -931,16 +930,14 @@ run_basic_modes(const struct access_mode *mode,
> struct buffers buffers;
>
> for (h = hangs; h->suffix; h++) {
> - if (!all && *h->suffix)
> - continue;
> -
> - for (p = all ? pipelines : pskip; p->prefix; p++) {
> + for (p = __igt_slow_combinatorial(true) ? pipelines : pskip;
> + p->prefix; p++) {
> igt_fixture {
> batch = buffers_init(&buffers, mode, fd);
> }
>
> /* try to overwrite the source values */
> - igt_subtest_f("%s-%s-overwrite-source-one%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_slow_f(*h->suffix, "%s-%s-overwrite-source-one%s%s", mode->name, p->prefix, suffix, h->suffix) {
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -949,7 +946,7 @@ run_basic_modes(const struct access_mode *mode,
> p->copy, h->hang);
> }
>
> - igt_subtest_f("%s-%s-overwrite-source%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_slow_f(*h->suffix, "%s-%s-overwrite-source%s%s", mode->name, p->prefix, suffix, h->suffix) {
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -958,7 +955,7 @@ run_basic_modes(const struct access_mode *mode,
> p->copy, h->hang);
> }
>
> - igt_subtest_f("%s-%s-overwrite-source-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_slow_f(*h->suffix, "%s-%s-overwrite-source-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -967,7 +964,7 @@ run_basic_modes(const struct access_mode *mode,
> p->copy, h->hang);
> }
>
> - igt_subtest_f("%s-%s-overwrite-source-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_slow_f(*h->suffix, "%s-%s-overwrite-source-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> h->require();
> p->require();
> igt_require(rendercopy);
> @@ -977,7 +974,7 @@ run_basic_modes(const struct access_mode *mode,
> p->copy, h->hang);
> }
>
> - igt_subtest_f("%s-%s-overwrite-source-rev%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_slow_f(*h->suffix, "%s-%s-overwrite-source-rev%s%s", mode->name, p->prefix, suffix, h->suffix) {
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -987,7 +984,7 @@ run_basic_modes(const struct access_mode *mode,
> }
>
> /* try to intermix copies with GPU copies*/
> - igt_subtest_f("%s-%s-intermix-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_slow_f(*h->suffix, "%s-%s-intermix-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> h->require();
> p->require();
> igt_require(rendercopy);
> @@ -996,7 +993,7 @@ run_basic_modes(const struct access_mode *mode,
> do_intermix_rcs,
> p->copy, h->hang);
> }
> - igt_subtest_f("%s-%s-intermix-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_slow_f(*h->suffix, "%s-%s-intermix-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> h->require();
> p->require();
> igt_require(rendercopy);
> @@ -1005,7 +1002,7 @@ run_basic_modes(const struct access_mode *mode,
> do_intermix_bcs,
> p->copy, h->hang);
> }
> - igt_subtest_f("%s-%s-intermix-both%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_slow_f(*h->suffix, "%s-%s-intermix-both%s%s", mode->name, p->prefix, suffix, h->suffix) {
> h->require();
> p->require();
> igt_require(rendercopy);
> @@ -1016,7 +1013,7 @@ run_basic_modes(const struct access_mode *mode,
> }
>
> /* try to read the results before the copy completes */
> - igt_subtest_f("%s-%s-early-read%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_slow_f(*h->suffix, "%s-%s-early-read%s%s", mode->name, p->prefix, suffix, h->suffix) {
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -1026,7 +1023,7 @@ run_basic_modes(const struct access_mode *mode,
> }
>
> /* concurrent reads */
> - igt_subtest_f("%s-%s-read-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_slow_f(*h->suffix, "%s-%s-read-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -1034,7 +1031,7 @@ run_basic_modes(const struct access_mode *mode,
> do_read_read_bcs,
> p->copy, h->hang);
> }
> - igt_subtest_f("%s-%s-read-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_slow_f(*h->suffix, "%s-%s-read-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
> h->require();
> p->require();
> igt_require(rendercopy);
> @@ -1045,7 +1042,7 @@ run_basic_modes(const struct access_mode *mode,
> }
>
> /* and finally try to trick the kernel into loosing the pending write */
> - igt_subtest_f("%s-%s-gpu-read-after-write%s%s", mode->name, p->prefix, suffix, h->suffix) {
> + igt_subtest_slow_f(*h->suffix, "%s-%s-gpu-read-after-write%s%s", mode->name, p->prefix, suffix, h->suffix) {
> h->require();
> p->require();
> buffers_create(&buffers, num_buffers);
> @@ -1064,13 +1061,11 @@ run_basic_modes(const struct access_mode *mode,
> static void
> run_modes(const struct access_mode *mode)
> {
> - if (all) {
> - run_basic_modes(mode, "", run_single);
> + run_basic_modes(mode, "", run_single);
>
> - igt_fork_signal_helper();
> - run_basic_modes(mode, "-interruptible", run_interruptible);
> - igt_stop_signal_helper();
> - }
> + igt_fork_signal_helper();
> + run_basic_modes(mode, "-interruptible", run_interruptible);
> + igt_stop_signal_helper();
>
> igt_fork_signal_helper();
> run_basic_modes(mode, "-forked", run_forked);
> @@ -1083,9 +1078,6 @@ igt_main
>
> igt_skip_on_simulation();
>
> - if (strstr(igt_test_name(), "all"))
> - all = true;
> -
> igt_fixture {
> fd = drm_open_driver(DRIVER_INTEL);
> devid = intel_get_drm_devid(fd);
> diff --git a/tests/kms_frontbuffer_tracking.c b/tests/kms_frontbuffer_tracking.c
> index 15707b9b9040..86fd7ca08692 100644
> --- a/tests/kms_frontbuffer_tracking.c
> +++ b/tests/kms_frontbuffer_tracking.c
> @@ -47,8 +47,7 @@ IGT_TEST_DESCRIPTION("Test the Kernel's frontbuffer tracking mechanism and "
> * combinations that are somewhat redundant and don't add much value to the
> * test. For example, since we already do the offscreen testing with a single
> * pipe enabled, there's no much value in doing it again with dual pipes. If you
> - * still want to try these redundant tests, you need to use the --show-hidden
> - * option.
> + * still want to try these redundant tests, you need to use the --all option.
> *
> * The most important hidden thing is the FEATURE_NONE set of tests. Whenever
> * you get a failure on any test, it is important to check whether the same test
> @@ -116,6 +115,10 @@ struct test_mode {
> } format;
>
> enum igt_draw_method method;
> +
> + /* The test is slow and/or combinatorial;
> + * skip unless otherwise specified */
> + bool slow;
> };
>
> enum flip_type {
> @@ -237,7 +240,6 @@ struct {
> bool fbc_check_last_action;
> bool no_edp;
> bool small_modes;
> - bool show_hidden;
> int step;
> int only_pipes;
> int shared_fb_x_offset;
> @@ -249,7 +251,6 @@ struct {
> .fbc_check_last_action = true,
> .no_edp = false,
> .small_modes = false,
> - .show_hidden= false,
> .step = 0,
> .only_pipes = PIPE_COUNT,
> .shared_fb_x_offset = 500,
> @@ -2933,9 +2934,6 @@ static int opt_handler(int option, int option_index, void *data)
> case 'm':
> opt.small_modes = true;
> break;
> - case 'i':
> - opt.show_hidden = true;
> - break;
> case 't':
> opt.step++;
> break;
> @@ -2971,7 +2969,6 @@ const char *help_str =
> " --no-fbc-action-check Don't check for the FBC last action\n"
> " --no-edp Don't use eDP monitors\n"
> " --use-small-modes Use smaller resolutions for the modes\n"
> -" --show-hidden Show hidden subtests\n"
> " --step Stop on each step so you can check the screen\n"
> " --shared-fb-x offset Use 'offset' as the X offset for the shared FB\n"
> " --shared-fb-y offset Use 'offset' as the Y offset for the shared FB\n"
> @@ -3068,18 +3065,19 @@ static const char *format_str(enum pixel_format format)
> for (t.plane = 0; t.plane < PLANE_COUNT; t.plane++) { \
> for (t.fbs = 0; t.fbs < FBS_COUNT; t.fbs++) { \
> for (t.method = 0; t.method < IGT_DRAW_METHOD_COUNT; t.method++) { \
> + t.slow = false; \
> if (t.pipes == PIPE_SINGLE && t.screen == SCREEN_SCND) \
> continue; \
> if (t.screen == SCREEN_OFFSCREEN && t.plane != PLANE_PRI) \
> continue; \
> - if (!opt.show_hidden && t.pipes == PIPE_DUAL && \
> + if (t.pipes == PIPE_DUAL && \
> t.screen == SCREEN_OFFSCREEN) \
> - continue; \
> - if (!opt.show_hidden && t.feature == FEATURE_NONE) \
> - continue; \
> - if (!opt.show_hidden && t.fbs == FBS_SHARED && \
> + t.slow = true; \
> + if (t.feature == FEATURE_NONE) \
> + t.slow = true; \
> + if (t.fbs == FBS_SHARED && \
> (t.plane == PLANE_CUR || t.plane == PLANE_SPR)) \
> - continue;
> + t.slow = true;
>
>
> #define TEST_MODE_ITER_END } } } } } }
> @@ -3094,7 +3092,6 @@ int main(int argc, char *argv[])
> { "no-fbc-action-check", 0, 0, 'a'},
> { "no-edp", 0, 0, 'e'},
> { "use-small-modes", 0, 0, 'm'},
> - { "show-hidden", 0, 0, 'i'},
> { "step", 0, 0, 't'},
> { "shared-fb-x", 1, 0, 'x'},
> { "shared-fb-y", 1, 0, 'y'},
> @@ -3110,8 +3107,9 @@ int main(int argc, char *argv[])
> setup_environment();
>
> for (t.feature = 0; t.feature < FEATURE_COUNT; t.feature++) {
> - if (!opt.show_hidden && t.feature == FEATURE_NONE)
> - continue;
> + t.slow = false;
> + if (t.feature == FEATURE_NONE)
> + t.slow = true;
> for (t.pipes = 0; t.pipes < PIPE_COUNT; t.pipes++) {
> t.screen = SCREEN_PRIM;
> t.plane = PLANE_PRI;
> @@ -3120,52 +3118,58 @@ int main(int argc, char *argv[])
> /* Make sure nothing is using this value. */
> t.method = -1;
>
> - igt_subtest_f("%s-%s-rte",
> - feature_str(t.feature),
> - pipes_str(t.pipes))
> + igt_subtest_slow_f(t.slow,
> + "%s-%s-rte",
> + feature_str(t.feature),
> + pipes_str(t.pipes))
> rte_subtest(&t);
> }
> }
>
> TEST_MODE_ITER_BEGIN(t)
> - igt_subtest_f("%s-%s-%s-%s-%s-draw-%s",
> - feature_str(t.feature),
> - pipes_str(t.pipes),
> - screen_str(t.screen),
> - plane_str(t.plane),
> - fbs_str(t.fbs),
> - igt_draw_get_method_name(t.method))
> + igt_subtest_slow_f(t.slow,
> + "%s-%s-%s-%s-%s-draw-%s",
> + feature_str(t.feature),
> + pipes_str(t.pipes),
> + screen_str(t.screen),
> + plane_str(t.plane),
> + fbs_str(t.fbs),
> + igt_draw_get_method_name(t.method))
> draw_subtest(&t);
> TEST_MODE_ITER_END
>
> TEST_MODE_ITER_BEGIN(t)
> if (t.plane != PLANE_PRI ||
> - t.screen == SCREEN_OFFSCREEN ||
> - (!opt.show_hidden && t.method != IGT_DRAW_BLT))
> + t.screen == SCREEN_OFFSCREEN)
> continue;
> -
> - igt_subtest_f("%s-%s-%s-%s-flip-%s",
> - feature_str(t.feature),
> - pipes_str(t.pipes),
> - screen_str(t.screen),
> - fbs_str(t.fbs),
> - igt_draw_get_method_name(t.method))
> + if (t.method != IGT_DRAW_BLT)
> + t.slow = true;
> +
> + igt_subtest_slow_f(t.slow,
> + "%s-%s-%s-%s-flip-%s",
> + feature_str(t.feature),
> + pipes_str(t.pipes),
> + screen_str(t.screen),
> + fbs_str(t.fbs),
> + igt_draw_get_method_name(t.method))
> flip_subtest(&t, FLIP_PAGEFLIP);
>
> - igt_subtest_f("%s-%s-%s-%s-evflip-%s",
> - feature_str(t.feature),
> - pipes_str(t.pipes),
> - screen_str(t.screen),
> - fbs_str(t.fbs),
> - igt_draw_get_method_name(t.method))
> + igt_subtest_slow_f(t.slow,
> + "%s-%s-%s-%s-evflip-%s",
> + feature_str(t.feature),
> + pipes_str(t.pipes),
> + screen_str(t.screen),
> + fbs_str(t.fbs),
> + igt_draw_get_method_name(t.method))
> flip_subtest(&t, FLIP_PAGEFLIP_EVENT);
>
> - igt_subtest_f("%s-%s-%s-%s-msflip-%s",
> - feature_str(t.feature),
> - pipes_str(t.pipes),
> - screen_str(t.screen),
> - fbs_str(t.fbs),
> - igt_draw_get_method_name(t.method))
> + igt_subtest_slow_f(t.slow,
> + "%s-%s-%s-%s-msflip-%s",
> + feature_str(t.feature),
> + pipes_str(t.pipes),
> + screen_str(t.screen),
> + fbs_str(t.fbs),
> + igt_draw_get_method_name(t.method))
> flip_subtest(&t, FLIP_MODESET);
>
> TEST_MODE_ITER_END
> @@ -3177,10 +3181,11 @@ int main(int argc, char *argv[])
> (t.feature & FEATURE_FBC) == 0)
> continue;
>
> - igt_subtest_f("%s-%s-%s-fliptrack",
> - feature_str(t.feature),
> - pipes_str(t.pipes),
> - fbs_str(t.fbs))
> + igt_subtest_slow_f(t.slow,
> + "%s-%s-%s-fliptrack",
> + feature_str(t.feature),
> + pipes_str(t.pipes),
> + fbs_str(t.fbs))
> fliptrack_subtest(&t, FLIP_PAGEFLIP);
> TEST_MODE_ITER_END
>
> @@ -3190,20 +3195,22 @@ int main(int argc, char *argv[])
> t.plane == PLANE_PRI)
> continue;
>
> - igt_subtest_f("%s-%s-%s-%s-%s-move",
> - feature_str(t.feature),
> - pipes_str(t.pipes),
> - screen_str(t.screen),
> - plane_str(t.plane),
> - fbs_str(t.fbs))
> + igt_subtest_slow_f(t.slow,
> + "%s-%s-%s-%s-%s-move",
> + feature_str(t.feature),
> + pipes_str(t.pipes),
> + screen_str(t.screen),
> + plane_str(t.plane),
> + fbs_str(t.fbs))
> move_subtest(&t);
>
> - igt_subtest_f("%s-%s-%s-%s-%s-onoff",
> - feature_str(t.feature),
> - pipes_str(t.pipes),
> - screen_str(t.screen),
> - plane_str(t.plane),
> - fbs_str(t.fbs))
> + igt_subtest_slow_f(t.slow,
> + "%s-%s-%s-%s-%s-onoff",
> + feature_str(t.feature),
> + pipes_str(t.pipes),
> + screen_str(t.screen),
> + plane_str(t.plane),
> + fbs_str(t.fbs))
> onoff_subtest(&t);
> TEST_MODE_ITER_END
>
> @@ -3213,27 +3220,30 @@ int main(int argc, char *argv[])
> t.plane != PLANE_SPR)
> continue;
>
> - igt_subtest_f("%s-%s-%s-%s-%s-fullscreen",
> - feature_str(t.feature),
> - pipes_str(t.pipes),
> - screen_str(t.screen),
> - plane_str(t.plane),
> - fbs_str(t.fbs))
> + igt_subtest_slow_f(t.slow,
> + "%s-%s-%s-%s-%s-fullscreen",
> + feature_str(t.feature),
> + pipes_str(t.pipes),
> + screen_str(t.screen),
> + plane_str(t.plane),
> + fbs_str(t.fbs))
> fullscreen_plane_subtest(&t);
> TEST_MODE_ITER_END
>
> TEST_MODE_ITER_BEGIN(t)
> if (t.screen != SCREEN_PRIM ||
> - t.method != IGT_DRAW_BLT ||
> - (!opt.show_hidden && t.plane != PLANE_PRI) ||
> - (!opt.show_hidden && t.fbs != FBS_INDIVIDUAL))
> + t.method != IGT_DRAW_BLT)
> continue;
> -
> - igt_subtest_f("%s-%s-%s-%s-multidraw",
> - feature_str(t.feature),
> - pipes_str(t.pipes),
> - plane_str(t.plane),
> - fbs_str(t.fbs))
> + if (t.plane != PLANE_PRI ||
> + t.fbs != FBS_INDIVIDUAL)
> + t.slow = true;
> +
> + igt_subtest_slow_f(t.slow,
> + "%s-%s-%s-%s-multidraw",
> + feature_str(t.feature),
> + pipes_str(t.pipes),
> + plane_str(t.plane),
> + fbs_str(t.fbs))
> multidraw_subtest(&t);
> TEST_MODE_ITER_END
>
> @@ -3245,7 +3255,9 @@ int main(int argc, char *argv[])
> t.method != IGT_DRAW_MMAP_GTT)
> continue;
>
> - igt_subtest_f("%s-farfromfence", feature_str(t.feature))
> + igt_subtest_slow_f(t.slow,
> + "%s-farfromfence",
> + feature_str(t.feature))
> farfromfence_subtest(&t);
> TEST_MODE_ITER_END
>
> @@ -3261,10 +3273,11 @@ int main(int argc, char *argv[])
> if (t.format == FORMAT_DEFAULT)
> continue;
>
> - igt_subtest_f("%s-%s-draw-%s",
> - feature_str(t.feature),
> - format_str(t.format),
> - igt_draw_get_method_name(t.method))
> + igt_subtest_slow_f(t.slow,
> + "%s-%s-draw-%s",
> + feature_str(t.feature),
> + format_str(t.format),
> + igt_draw_get_method_name(t.method))
> format_draw_subtest(&t);
> }
> TEST_MODE_ITER_END
> @@ -3275,9 +3288,10 @@ int main(int argc, char *argv[])
> t.plane != PLANE_PRI ||
> t.method != IGT_DRAW_MMAP_CPU)
> continue;
> - igt_subtest_f("%s-%s-scaledprimary",
> - feature_str(t.feature),
> - fbs_str(t.fbs))
> + igt_subtest_slow_f(t.slow,
> + "%s-%s-scaledprimary",
> + feature_str(t.feature),
> + fbs_str(t.fbs))
> scaledprimary_subtest(&t);
> TEST_MODE_ITER_END
>
> @@ -3289,22 +3303,32 @@ int main(int argc, char *argv[])
> t.method != IGT_DRAW_MMAP_CPU)
> continue;
>
> - igt_subtest_f("%s-modesetfrombusy", feature_str(t.feature))
> + igt_subtest_slow_f(t.slow,
> + "%s-modesetfrombusy",
> + feature_str(t.feature))
> modesetfrombusy_subtest(&t);
>
> if (t.feature & FEATURE_FBC) {
> - igt_subtest_f("%s-badstride", feature_str(t.feature))
> + igt_subtest_slow_f(t.slow,
> + "%s-badstride",
> + feature_str(t.feature))
> badstride_subtest(&t);
>
> - igt_subtest_f("%s-stridechange", feature_str(t.feature))
> + igt_subtest_slow_f(t.slow,
> + "%s-stridechange",
> + feature_str(t.feature))
> stridechange_subtest(&t);
> }
>
> if (t.feature & FEATURE_PSR)
> - igt_subtest_f("%s-slowdraw", feature_str(t.feature))
> + igt_subtest_slow_f(t.slow,
> + "%s-slowdraw",
> + feature_str(t.feature))
> slow_draw_subtest(&t);
>
> - igt_subtest_f("%s-suspend", feature_str(t.feature))
> + igt_subtest_slow_f(t.slow,
> + "%s-suspend",
> + feature_str(t.feature))
> suspend_subtest(&t);
> TEST_MODE_ITER_END
>
> --
> 2.6.2
>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests
2015-10-28 17:14 ` Thomas Wood
@ 2015-10-30 7:44 ` David Weinehall
0 siblings, 0 replies; 41+ messages in thread
From: David Weinehall @ 2015-10-30 7:44 UTC (permalink / raw)
To: Thomas Wood; +Cc: Intel Graphics Development
On Wed, Oct 28, 2015 at 05:14:28PM +0000, Thomas Wood wrote:
> If this is intended to be documented and used in tests, then it should
> be included in the public API (i.e. without the underscore prefix).
True. Will fix.
> > + *
> > + * This is used to skip subtests that should only be included
> > + * when the "--all" command line option has been specified. This version
> > + * is intended as a test.
> > + *
> > + * @slow_test: true if the subtest is part of the slow/combinatorial set
>
> If this is used to test if a slow subtest should be run, shouldn't
> slow_test always be true?
The test is written such that igt_subtest_slow_f() can always be used
both for fast and slow cases -- the slow flag will decide whether or not
it should bail out (combined with the --all flag, obviously).
So slow_test isn't always true.
> Documentation for igt_subtest_slow_f is needed here. If __slow is
> false, this macro just defines a normal subtest, which is
> contradictory to its name. Perhaps igt_subtest_with_flags_f (or
> similar) would be better and would also allow for future expansion
> with other categories.
Yeah, that could be a workable solution; in discussion with Daniel
earlier we agreed not to do a "flags" implementation, to keep things
simple for now, but it might indeed be better to do things right
from the start.
As we get to the point where we call igt_subtest with several different
flags things will probably get quite complex though; at that point
I suspect that the macro might start looking really hairy...
Kind regards, David
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests
2015-10-28 16:12 ` Paulo Zanoni
@ 2015-10-30 7:56 ` David Weinehall
2015-10-30 11:55 ` Paulo Zanoni
0 siblings, 1 reply; 41+ messages in thread
From: David Weinehall @ 2015-10-30 7:56 UTC (permalink / raw)
To: Paulo Zanoni; +Cc: Intel Graphics Development
On Wed, Oct 28, 2015 at 02:12:15PM -0200, Paulo Zanoni wrote:
> 2015-10-28 9:29 GMT-02:00 David Weinehall <david.weinehall@linux.intel.com>:
> > Some tests should not be run by default, due to their slow,
> > and sometimes superfluous, nature.
> >
> > We still want to be able to run these tests in some cases.
> > Until now there's been no unified way of handling this. Remedy
> > this by introducing the --all option to igt_core,
> > and use it in gem_concurrent_blit & kms_frontbuffer_tracking.
>
> I really think you should explain both your plan and its
> implementation in more details here.
Well, I don't see how much more there is to explain; the idea is simply
that different tests shouldn't implement similar behaviour in different
manners (currently kms_frontbuffer_tracking uses a command line option,
gem_concurrent_blit changes behaviour depending on the file name it's
called with).
> >
> > Signed-off-by: David Weinehall <david.weinehall@linux.intel.com>
> > ---
> > lib/igt_core.c | 24 +++++
> > lib/igt_core.h | 7 ++
> > tests/gem_concurrent_blit.c | 44 ++++-----
> > tests/kms_frontbuffer_tracking.c | 208 ++++++++++++++++++++++-----------------
> > 4 files changed, 165 insertions(+), 118 deletions(-)
> >
> > diff --git a/lib/igt_core.c b/lib/igt_core.c
> > index 59127cafe606..6575b9d6bf0d 100644
> > --- a/lib/igt_core.c
> > +++ b/lib/igt_core.c
> > @@ -216,6 +216,7 @@ const char *igt_interactive_debug;
> >
> > /* subtests helpers */
> > static bool list_subtests = false;
> > +static bool with_slow_combinatorial = false;
>
> The option is called --all, the new subtest macro is _slow and the
> variables and enums are called with_slow_combinatorial. Is this
> intentional?
The option is called --all because "--with-slow-combinatorial" was
considered to be too much of a mouthful. The variables & enums are
still retaining these names because they're much more descriptive.
The macro is called _slow because I wanted to keep it a bit shorter,
but I can rename it to _slow_combinatorial if that's preferred.
>
> > static char *run_single_subtest = NULL;
> > static bool run_single_subtest_found = false;
> > static const char *in_subtest = NULL;
> > @@ -235,6 +236,7 @@ bool test_child;
> >
> > enum {
> > OPT_LIST_SUBTESTS,
> > + OPT_WITH_SLOW_COMBINATORIAL,
> > OPT_RUN_SUBTEST,
> > OPT_DESCRIPTION,
> > OPT_DEBUG,
> > @@ -478,6 +480,7 @@ static void print_usage(const char *help_str, bool output_on_stderr)
> >
> > fprintf(f, "Usage: %s [OPTIONS]\n", command_str);
> > fprintf(f, " --list-subtests\n"
> > + " --all\n"
> > " --run-subtest <pattern>\n"
> > " --debug[=log-domain]\n"
> > " --interactive-debug[=domain]\n"
> > @@ -510,6 +513,7 @@ static int common_init(int *argc, char **argv,
> > int c, option_index = 0, i, x;
> > static struct option long_options[] = {
> > {"list-subtests", 0, 0, OPT_LIST_SUBTESTS},
> > + {"all", 0, 0, OPT_WITH_SLOW_COMBINATORIAL},
> > {"run-subtest", 1, 0, OPT_RUN_SUBTEST},
> > {"help-description", 0, 0, OPT_DESCRIPTION},
> > {"debug", optional_argument, 0, OPT_DEBUG},
> > @@ -617,6 +621,10 @@ static int common_init(int *argc, char **argv,
> > if (!run_single_subtest)
> > list_subtests = true;
> > break;
> > + case OPT_WITH_SLOW_COMBINATORIAL:
> > + if (!run_single_subtest)
> > + with_slow_combinatorial = true;
> > + break;
> > case OPT_RUN_SUBTEST:
> > if (!list_subtests)
> > run_single_subtest = strdup(optarg);
> > @@ -1629,6 +1637,22 @@ void igt_skip_on_simulation(void)
> > igt_require(!igt_run_in_simulation());
> > }
> >
> > +/**
> > + * __igt_slow_combinatorial:
> > + *
> > + * This is used to skip subtests that should only be included
> > + * when the "--all" command line option has been specified. This version
> > + * is intended as a test.
> > + *
> > + * @slow_test: true if the subtest is part of the slow/combinatorial set
> > + *
> > + * Returns: true if the test should be run, false if the test should be skipped
> > + */
> > +bool __igt_slow_combinatorial(bool slow_test)
> > +{
> > + return !slow_test || with_slow_combinatorial;
> > +}
> > +
> > /* structured logging */
> >
> > /**
> > diff --git a/lib/igt_core.h b/lib/igt_core.h
> > index 5ae09653fd55..7b592278bf6c 100644
> > --- a/lib/igt_core.h
> > +++ b/lib/igt_core.h
> > @@ -191,6 +191,12 @@ bool __igt_run_subtest(const char *subtest_name);
> > #define igt_subtest_f(f...) \
> > __igt_subtest_f(igt_tokencat(__tmpchar, __LINE__), f)
> >
> > +bool __igt_slow_combinatorial(bool slow_test);
> > +
>
> We also need a igt_subtest_slow() version (without "_f") and some
> comments explaining what's the real difference between them and the
> other macros, like the other igt_subtest_* macros.
OK, fair enough.
> > +#define igt_subtest_slow_f(__slow, f...) \
> > + if (__igt_slow_combinatorial(__slow)) \
> > + __igt_subtest_f(igt_tokencat(__tmpchar, __LINE__), f)
>
> Missing tab in the line above.
Indeed, thanks for spotting. Will fix.
> > +
> > const char *igt_subtest_name(void);
> > bool igt_only_list_subtests(void);
> >
> > @@ -669,6 +675,7 @@ void igt_disable_exit_handler(void);
> >
> > /* helpers to automatically reduce test runtime in simulation */
> > bool igt_run_in_simulation(void);
> > +
>
> Bad chunk.
Doh, that's a remnant from moving things around. Will fix.
[snip]
> > diff --git a/tests/kms_frontbuffer_tracking.c b/tests/kms_frontbuffer_tracking.c
> > index 15707b9b9040..86fd7ca08692 100644
> > --- a/tests/kms_frontbuffer_tracking.c
> > +++ b/tests/kms_frontbuffer_tracking.c
> > @@ -47,8 +47,7 @@ IGT_TEST_DESCRIPTION("Test the Kernel's frontbuffer tracking mechanism and "
> > * combinations that are somewhat redundant and don't add much value to the
> > * test. For example, since we already do the offscreen testing with a single
> > * pipe enabled, there's no much value in doing it again with dual pipes. If you
> > - * still want to try these redundant tests, you need to use the --show-hidden
> > - * option.
> > + * still want to try these redundant tests, you need to use the --all option.
> > *
> > * The most important hidden thing is the FEATURE_NONE set of tests. Whenever
> > * you get a failure on any test, it is important to check whether the same test
> > @@ -116,6 +115,10 @@ struct test_mode {
> > } format;
> >
> > enum igt_draw_method method;
> > +
> > + /* The test is slow and/or combinatorial;
> > + * skip unless otherwise specified */
> > + bool slow;
>
> My problem with this is that exactly none of the tests marked as
> "slow" are actually slow here... They're either redudant or for debug
> purposes, not slow.
If they're redundant they should be removed (but that should be done by
you or someone else who knows that they are indeed redundant), as I
already mentioned. They definitely are "slow" though, in the sense that
running with them is slower than not running with them (admittedly the
difference isn't comparable to that of gem_concurrent_blit, where a full
run on my test machine took 30x as long...).
If you'd like to categorise them into more categories than just
slow/non-slow (or slow_combinatorial/non-slow_combinatorial), then by
all means, I'll go for Thomas Wood's proposal to use the _flags
approach instead, but for that you need to provide a patch that actually
categorises them.
Regards, David
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests
2015-10-30 7:56 ` David Weinehall
@ 2015-10-30 11:55 ` Paulo Zanoni
2015-10-30 11:59 ` Chris Wilson
0 siblings, 1 reply; 41+ messages in thread
From: Paulo Zanoni @ 2015-10-30 11:55 UTC (permalink / raw)
To: Paulo Zanoni, Intel Graphics Development
2015-10-30 5:56 GMT-02:00 David Weinehall <david.weinehall@linux.intel.com>:
> On Wed, Oct 28, 2015 at 02:12:15PM -0200, Paulo Zanoni wrote:
>> 2015-10-28 9:29 GMT-02:00 David Weinehall <david.weinehall@linux.intel.com>:
>> > Some tests should not be run by default, due to their slow,
>> > and sometimes superfluous, nature.
>> >
>> > We still want to be able to run these tests in some cases.
>> > Until now there's been no unified way of handling this. Remedy
>> > this by introducing the --all option to igt_core,
>> > and use it in gem_concurrent_blit & kms_frontbuffer_tracking.
>>
>> I really think you should explain both your plan and its
>> implementation in more details here.
>
> Well, I don't see how much more there is to explain; the idea is simply
> that different tests shouldn't implement similar behaviour in different
> manners (current kms_frontbuffer_tracking uses a command line option,
> gem_concurrent_blit changes behaviour depending on the file name it's
> called with).
What made me write that was noticing that --all is now required even
during --list-subtests (thanks for doing this!), which was not easy to
tell from the commit message or the code. So I was thinking of
something simple, such as a description of how to use the new option
both when running IGT and when writing subtests:
"These tests will only appear in --list-subtests if you also specify
--all. Same for --run-subtest calls. There's this new macro
igt_subtest_slow_f (maybe igt_subtest_flags_f now?) which should be
used in case the subtest can be slow/combinatorial."
>
>> >
>> > Signed-off-by: David Weinehall <david.weinehall@linux.intel.com>
>> > ---
>> > lib/igt_core.c | 24 +++++
>> > lib/igt_core.h | 7 ++
>> > tests/gem_concurrent_blit.c | 44 ++++-----
>> > tests/kms_frontbuffer_tracking.c | 208 ++++++++++++++++++++++-----------------
>> > 4 files changed, 165 insertions(+), 118 deletions(-)
>> >
>> > diff --git a/lib/igt_core.c b/lib/igt_core.c
>> > index 59127cafe606..6575b9d6bf0d 100644
>> > --- a/lib/igt_core.c
>> > +++ b/lib/igt_core.c
>> > @@ -216,6 +216,7 @@ const char *igt_interactive_debug;
>> >
>> > /* subtests helpers */
>> > static bool list_subtests = false;
>> > +static bool with_slow_combinatorial = false;
>>
>> The option is called --all, the new subtest macro is _slow and the
>> variables and enums are called with_slow_combinatorial. Is this
>> intentional?
>
> The option is called --all because "--with-slow-combinatorial" was
> considered to be too much of a mouthful. The variables & enums are
> still retaining these names because they're much more descriptive.
Ok, let's keep it like this then (in case they don't change with the
_flags suggestion).
>
> The macro is called _slow because I wanted to keep it a bit shorter,
> but I can rename it to _slow_combinatorial if that's preferred.
I agree _slow_combinatorial is too big, so let's keep it like this.
Maybe the new version is going to be _flags or something?
>
>>
>> > static char *run_single_subtest = NULL;
>> > static bool run_single_subtest_found = false;
>> > static const char *in_subtest = NULL;
>> > @@ -235,6 +236,7 @@ bool test_child;
>> >
>> > enum {
>> > OPT_LIST_SUBTESTS,
>> > + OPT_WITH_SLOW_COMBINATORIAL,
>> > OPT_RUN_SUBTEST,
>> > OPT_DESCRIPTION,
>> > OPT_DEBUG,
>> > @@ -478,6 +480,7 @@ static void print_usage(const char *help_str, bool output_on_stderr)
>> >
>> > fprintf(f, "Usage: %s [OPTIONS]\n", command_str);
>> > fprintf(f, " --list-subtests\n"
>> > + " --all\n"
>> > " --run-subtest <pattern>\n"
>> > " --debug[=log-domain]\n"
>> > " --interactive-debug[=domain]\n"
>> > @@ -510,6 +513,7 @@ static int common_init(int *argc, char **argv,
>> > int c, option_index = 0, i, x;
>> > static struct option long_options[] = {
>> > {"list-subtests", 0, 0, OPT_LIST_SUBTESTS},
>> > + {"all", 0, 0, OPT_WITH_SLOW_COMBINATORIAL},
>> > {"run-subtest", 1, 0, OPT_RUN_SUBTEST},
>> > {"help-description", 0, 0, OPT_DESCRIPTION},
>> > {"debug", optional_argument, 0, OPT_DEBUG},
>> > @@ -617,6 +621,10 @@ static int common_init(int *argc, char **argv,
>> > if (!run_single_subtest)
>> > list_subtests = true;
>> > break;
>> > + case OPT_WITH_SLOW_COMBINATORIAL:
>> > + if (!run_single_subtest)
>> > + with_slow_combinatorial = true;
>> > + break;
>> > case OPT_RUN_SUBTEST:
>> > if (!list_subtests)
>> > run_single_subtest = strdup(optarg);
>> > @@ -1629,6 +1637,22 @@ void igt_skip_on_simulation(void)
>> > igt_require(!igt_run_in_simulation());
>> > }
>> >
>> > +/**
>> > + * __igt_slow_combinatorial:
>> > + *
>> > + * This is used to skip subtests that should only be included
>> > + * when the "--all" command line option has been specified. This version
>> > + * is intended as a test.
>> > + *
>> > + * @slow_test: true if the subtest is part of the slow/combinatorial set
>> > + *
>> > + * Returns: true if the test should be run, false if the test should be skipped
>> > + */
>> > +bool __igt_slow_combinatorial(bool slow_test)
>> > +{
>> > + return !slow_test || with_slow_combinatorial;
>> > +}
>> > +
>> > /* structured logging */
>> >
>> > /**
>> > diff --git a/lib/igt_core.h b/lib/igt_core.h
>> > index 5ae09653fd55..7b592278bf6c 100644
>> > --- a/lib/igt_core.h
>> > +++ b/lib/igt_core.h
>> > @@ -191,6 +191,12 @@ bool __igt_run_subtest(const char *subtest_name);
>> > #define igt_subtest_f(f...) \
>> > __igt_subtest_f(igt_tokencat(__tmpchar, __LINE__), f)
>> >
>> > +bool __igt_slow_combinatorial(bool slow_test);
>> > +
>>
>> We also need a igt_subtest_slow() version (without "_f") and some
>> comments explaining what's the real difference between them and the
>> other macros, like the other igt_subtest_* macros.
>
> OK, fair enough.
>
>> > +#define igt_subtest_slow_f(__slow, f...) \
>> > + if (__igt_slow_combinatorial(__slow)) \
>> > + __igt_subtest_f(igt_tokencat(__tmpchar, __LINE__), f)
>>
>> Missing tab in the line above.
>
> Indeed, thanks for spotting. Will fix.
>
>> > +
>> > const char *igt_subtest_name(void);
>> > bool igt_only_list_subtests(void);
>> >
>> > @@ -669,6 +675,7 @@ void igt_disable_exit_handler(void);
>> >
>> > /* helpers to automatically reduce test runtime in simulation */
>> > bool igt_run_in_simulation(void);
>> > +
>>
>> Bad chunk.
>
> Doh, that's a remnant from moving things around. Will fix.
>
> [snip]
>
>> > diff --git a/tests/kms_frontbuffer_tracking.c b/tests/kms_frontbuffer_tracking.c
>> > index 15707b9b9040..86fd7ca08692 100644
>> > --- a/tests/kms_frontbuffer_tracking.c
>> > +++ b/tests/kms_frontbuffer_tracking.c
>> > @@ -47,8 +47,7 @@ IGT_TEST_DESCRIPTION("Test the Kernel's frontbuffer tracking mechanism and "
>> > * combinations that are somewhat redundant and don't add much value to the
>> > * test. For example, since we already do the offscreen testing with a single
>> > * pipe enabled, there's no much value in doing it again with dual pipes. If you
>> > - * still want to try these redundant tests, you need to use the --show-hidden
>> > - * option.
>> > + * still want to try these redundant tests, you need to use the --all option.
>> > *
>> > * The most important hidden thing is the FEATURE_NONE set of tests. Whenever
>> > * you get a failure on any test, it is important to check whether the same test
>> > @@ -116,6 +115,10 @@ struct test_mode {
>> > } format;
>> >
>> > enum igt_draw_method method;
>> > +
>> > + /* The test is slow and/or combinatorial;
>> > + * skip unless otherwise specified */
>> > + bool slow;
>>
>> My problem with this is that exactly none of the tests marked as
>> "slow" are actually slow here... They're either redundant or for debug
>> purposes, not slow.
>
> If they're redundant they should be removed (but that should be done by
> you or someone else who knows that they are indeed redundant), as I
> already mentioned. They definitely are "slow" though, in the sense that
> running with them is slower than not running with them (admittedly the
> difference isn't comparable to that of gem_concurrent_blit, where a full
> run on my test machine took 30x as long...).
>
> If you'd like to categorise them into more categories than just
> slow/non-slow (or slow_combinatorial/non-slow_combinatorial), then by
> all means, I'll go for Thomas Wood's proposal to use the _flags
> approach instead, but for that you need to provide a patch that actually
> categorises them.
I actually like the _flags idea. It's easy to expand in case we want
to create more categories. I'm not sure if this is actually going to
be done, but I'd support it.
Bonus points if --all could become
--flags=slow,combinatorial,useless,stress,blacklisted,debug,todo,etc.
But let's not block your current patches on these things, we can leave
this for later if you want.
>
>
> Regards, David
--
Paulo Zanoni
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests
2015-10-30 11:55 ` Paulo Zanoni
@ 2015-10-30 11:59 ` Chris Wilson
0 siblings, 0 replies; 41+ messages in thread
From: Chris Wilson @ 2015-10-30 11:59 UTC (permalink / raw)
To: Paulo Zanoni; +Cc: Intel Graphics Development
On Fri, Oct 30, 2015 at 09:55:03AM -0200, Paulo Zanoni wrote:
> 2015-10-30 5:56 GMT-02:00 David Weinehall <david.weinehall@linux.intel.com>:
> > On Wed, Oct 28, 2015 at 02:12:15PM -0200, Paulo Zanoni wrote:
> >> 2015-10-28 9:29 GMT-02:00 David Weinehall <david.weinehall@linux.intel.com>:
> >> > Some tests should not be run by default, due to their slow,
> >> > and sometimes superfluous, nature.
> >> >
> >> > We still want to be able to run these tests in some cases.
> >> > Until now there's been no unified way of handling this. Remedy
> >> > this by introducing the --all option to igt_core,
> >> > and use it in gem_concurrent_blit & kms_frontbuffer_tracking.
> >>
> >> I really think you should explain both your plan and its
> >> implementation in more details here.
> >
> > Well, I don't see how much more there is to explain; the idea is simply
> > that different tests shouldn't implement similar behaviour in different
> > manners (current kms_frontbuffer_tracking uses a command line option,
> > gem_concurrent_blit changes behaviour depending on the file name it's
> > called with).
>
> What made me write that is that I noticed that now --all is required
> during --list-subtests (thanks for doing this!) but it was not easy to
> notice this in the commit message or in the code. So I was thinking
> something simple, such as a description of how to use the new option
> both when running IGT and when writing subtests:
>
> "These tests will only appear in --list-subtests if you also specify
> --all. Same for --run-subtest calls. There's this new macro
> igt_subtest_slow_f (maybe igt_subtest_flags_f now?) which should be
> used in case the subtest can be slow/combinatorial."
>
> >
> >> >
> >> > Signed-off-by: David Weinehall <david.weinehall@linux.intel.com>
> >> > ---
> >> > lib/igt_core.c | 24 +++++
> >> > lib/igt_core.h | 7 ++
> >> > tests/gem_concurrent_blit.c | 44 ++++-----
> >> > tests/kms_frontbuffer_tracking.c | 208 ++++++++++++++++++++++-----------------
> >> > 4 files changed, 165 insertions(+), 118 deletions(-)
> >> >
> >> > diff --git a/lib/igt_core.c b/lib/igt_core.c
> >> > index 59127cafe606..6575b9d6bf0d 100644
> >> > --- a/lib/igt_core.c
> >> > +++ b/lib/igt_core.c
> >> > @@ -216,6 +216,7 @@ const char *igt_interactive_debug;
> >> >
> >> > /* subtests helpers */
> >> > static bool list_subtests = false;
> >> > +static bool with_slow_combinatorial = false;
> >>
> >> The option is called --all, the new subtest macro is _slow and the
> >> variables and enums are called with_slow_combinatorial. Is this
> >> intentional?
> >
> > The option is called --all because "--with-slow-combinatorial" was
> > considered to be too much of a mouthful. The variables & enums are
> > still retaining these names because they're much more descriptive.
>
> Ok, let's keep it like this then (in case they don't change with the
> _flags suggestion).
>
> >
> > The macro is called _slow because I wanted to keep it a bit shorter,
> > but I can rename it to _slow_combinatorial if that's preferred.
>
> I agree _slow_combinatorial is too big, so let's keep it like this.
> Maybe the new version is going to be _flags or something?
igt_subtest_cond_f().
The subtest is included in the test lists if and only if the conditional
expression is true.
And run with igt_subtest_flags.
-Chris
--
Chris Wilson, Intel Open Source Technology Centre
* [PATCH i-g-t 0/3 v3] Unify slow/combinatorial test handling
2015-10-23 11:42 [PATCH i-g-t 0/3] Unify slow/combinatorial test handling David Weinehall
` (4 preceding siblings ...)
2015-10-28 11:29 ` [PATCH i-g-t 0/3 v2] " David Weinehall
@ 2015-10-30 13:18 ` David Weinehall
2015-10-30 13:18 ` [PATCH i-g-t 1/3 v3] Copy gem_concurrent_all to gem_concurrent_blit David Weinehall
` (2 more replies)
5 siblings, 3 replies; 41+ messages in thread
From: David Weinehall @ 2015-10-30 13:18 UTC (permalink / raw)
To: intel-gfx
Until now we've had no unified way to handle slow/combinatorial tests.
Most of the time we don't want to run slow/combinatorial tests, so this
should remain the default, but when we do want to run such tests,
it has been handled differently in different tests.
This patch adds an --all command line option to igt_core, changes
gem_concurrent_blit and kms_frontbuffer_tracking to use this instead of
their own methods, and removes gem_concurrent_all in the process, since
it's now unnecessary.
Test cases with subtests that should not be run by default should
use the igt_subtest_flags() / igt_subtest_flags_f() macros and
pass the subtest types as part of the flags parameter.
v2: Incorporate various suggestions from reviewers.
v3: Rewrite to provide a generic mechanism for categorising
the subtests
David Weinehall (3):
Copy gem_concurrent_all to gem_concurrent_blit
Unify handling of slow/combinatorial tests
Remove superfluous gem_concurrent_all.c
lib/igt_core.c | 43 +-
lib/igt_core.h | 42 ++
tests/.gitignore | 1 -
tests/Makefile.sources | 1 -
tests/gem_concurrent_all.c | 1108 -------------------------------------
tests/gem_concurrent_blit.c | 1114 +++++++++++++++++++++++++++++++++++++-
tests/kms_frontbuffer_tracking.c | 207 +++----
7 files changed, 1300 insertions(+), 1216 deletions(-)
delete mode 100644 tests/gem_concurrent_all.c
--
2.6.2
* [PATCH i-g-t 1/3 v3] Copy gem_concurrent_all to gem_concurrent_blit
2015-10-30 13:18 ` [PATCH i-g-t 0/3 v3] Unify slow/combinatorial test handling David Weinehall
@ 2015-10-30 13:18 ` David Weinehall
2015-10-30 13:18 ` [PATCH i-g-t 2/3 v3] Unify handling of slow/combinatorial tests David Weinehall
2015-10-30 13:18 ` [PATCH i-g-t 3/3 v3] Remove superfluous gem_concurrent_all.c David Weinehall
2 siblings, 0 replies; 41+ messages in thread
From: David Weinehall @ 2015-10-30 13:18 UTC (permalink / raw)
To: intel-gfx
We'll both rename gem_concurrent_all over gem_concurrent_blit
and change gem_concurrent_blit in this changeset. To make
this easier to follow we first do the rename.
Signed-off-by: David Weinehall <david.weinehall@linux.intel.com>
---
tests/gem_concurrent_blit.c | 1116 ++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 1108 insertions(+), 8 deletions(-)
diff --git a/tests/gem_concurrent_blit.c b/tests/gem_concurrent_blit.c
index 513de4a1b719..1d2d787202df 100644
--- a/tests/gem_concurrent_blit.c
+++ b/tests/gem_concurrent_blit.c
@@ -1,8 +1,1108 @@
-/* This test is just a duplicate of gem_concurrent_all. */
-/* However the executeable will be gem_concurrent_blit. */
-/* The main function examines argv[0] and, in the case */
-/* of gem_concurent_blit runs only a subset of the */
-/* available subtests. This avoids the use of */
-/* non-standard command line parameters which can cause */
-/* problems for automated testing */
-#include "gem_concurrent_all.c"
+/*
+ * Copyright © 2009,2012,2013 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ * Eric Anholt <eric@anholt.net>
+ * Chris Wilson <chris@chris-wilson.co.uk>
+ * Daniel Vetter <daniel.vetter@ffwll.ch>
+ *
+ */
+
+/** @file gem_concurrent.c
+ *
+ * This is a test of pread/pwrite/mmap behavior when writing to active
+ * buffers.
+ *
+ * Based on gem_gtt_concurrent_blt.
+ */
+
+#include "igt.h"
+#include <stdlib.h>
+#include <stdio.h>
+#include <string.h>
+#include <fcntl.h>
+#include <inttypes.h>
+#include <errno.h>
+#include <sys/stat.h>
+#include <sys/time.h>
+#include <sys/wait.h>
+
+#include <drm.h>
+
+#include "intel_bufmgr.h"
+
+IGT_TEST_DESCRIPTION("Test of pread/pwrite/mmap behavior when writing to active"
+ " buffers.");
+
+int fd, devid, gen;
+struct intel_batchbuffer *batch;
+int all;
+
+static void
+nop_release_bo(drm_intel_bo *bo)
+{
+ drm_intel_bo_unreference(bo);
+}
+
+static void
+prw_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
+{
+ int size = width * height, i;
+ uint32_t *tmp;
+
+ tmp = malloc(4*size);
+ if (tmp) {
+ for (i = 0; i < size; i++)
+ tmp[i] = val;
+ drm_intel_bo_subdata(bo, 0, 4*size, tmp);
+ free(tmp);
+ } else {
+ for (i = 0; i < size; i++)
+ drm_intel_bo_subdata(bo, 4*i, 4, &val);
+ }
+}
+
+static void
+prw_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
+{
+ int size = width * height, i;
+ uint32_t *vaddr;
+
+ do_or_die(drm_intel_bo_map(tmp, true));
+ do_or_die(drm_intel_bo_get_subdata(bo, 0, 4*size, tmp->virtual));
+ vaddr = tmp->virtual;
+ for (i = 0; i < size; i++)
+ igt_assert_eq_u32(vaddr[i], val);
+ drm_intel_bo_unmap(tmp);
+}
+
+static drm_intel_bo *
+unmapped_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ drm_intel_bo *bo;
+
+ bo = drm_intel_bo_alloc(bufmgr, "bo", 4*width*height, 0);
+ igt_assert(bo);
+
+ return bo;
+}
+
+static drm_intel_bo *
+snoop_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ drm_intel_bo *bo;
+
+ igt_skip_on(gem_has_llc(fd));
+
+ bo = unmapped_create_bo(bufmgr, width, height);
+ gem_set_caching(fd, bo->handle, I915_CACHING_CACHED);
+ drm_intel_bo_disable_reuse(bo);
+
+ return bo;
+}
+
+static void
+gtt_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
+{
+ uint32_t *vaddr = bo->virtual;
+ int size = width * height;
+
+ drm_intel_gem_bo_start_gtt_access(bo, true);
+ while (size--)
+ *vaddr++ = val;
+}
+
+static void
+gtt_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
+{
+ uint32_t *vaddr = bo->virtual;
+ int y;
+
+ /* GTT access is slow. So we just compare a few points */
+ drm_intel_gem_bo_start_gtt_access(bo, false);
+ for (y = 0; y < height; y++)
+ igt_assert_eq_u32(vaddr[y*width+y], val);
+}
+
+static drm_intel_bo *
+map_bo(drm_intel_bo *bo)
+{
+ /* gtt map doesn't have a write parameter, so just keep the mapping
+ * around (to avoid the set_domain with the gtt write domain set) and
+ * manually tell the kernel when we start access the gtt. */
+ do_or_die(drm_intel_gem_bo_map_gtt(bo));
+
+ return bo;
+}
+
+static drm_intel_bo *
+tile_bo(drm_intel_bo *bo, int width)
+{
+ uint32_t tiling = I915_TILING_X;
+ uint32_t stride = width * 4;
+
+ do_or_die(drm_intel_bo_set_tiling(bo, &tiling, stride));
+
+ return bo;
+}
+
+static drm_intel_bo *
+gtt_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ return map_bo(unmapped_create_bo(bufmgr, width, height));
+}
+
+static drm_intel_bo *
+gttX_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ return tile_bo(gtt_create_bo(bufmgr, width, height), width);
+}
+
+static drm_intel_bo *
+wc_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ drm_intel_bo *bo;
+
+ gem_require_mmap_wc(fd);
+
+ bo = unmapped_create_bo(bufmgr, width, height);
+ bo->virtual = __gem_mmap__wc(fd, bo->handle, 0, bo->size, PROT_READ | PROT_WRITE);
+ return bo;
+}
+
+static void
+wc_release_bo(drm_intel_bo *bo)
+{
+ munmap(bo->virtual, bo->size);
+ bo->virtual = NULL;
+
+ nop_release_bo(bo);
+}
+
+static drm_intel_bo *
+gpu_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ return unmapped_create_bo(bufmgr, width, height);
+}
+
+
+static drm_intel_bo *
+gpuX_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
+{
+ return tile_bo(gpu_create_bo(bufmgr, width, height), width);
+}
+
+static void
+cpu_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
+{
+ int size = width * height;
+ uint32_t *vaddr;
+
+ do_or_die(drm_intel_bo_map(bo, true));
+ vaddr = bo->virtual;
+ while (size--)
+ *vaddr++ = val;
+ drm_intel_bo_unmap(bo);
+}
+
+static void
+cpu_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
+{
+ int size = width * height;
+ uint32_t *vaddr;
+
+ do_or_die(drm_intel_bo_map(bo, false));
+ vaddr = bo->virtual;
+ while (size--)
+ igt_assert_eq_u32(*vaddr++, val);
+ drm_intel_bo_unmap(bo);
+}
+
+static void
+gpu_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
+{
+ struct drm_i915_gem_relocation_entry reloc[1];
+ struct drm_i915_gem_exec_object2 gem_exec[2];
+ struct drm_i915_gem_execbuffer2 execbuf;
+ struct drm_i915_gem_pwrite gem_pwrite;
+ struct drm_i915_gem_create create;
+ uint32_t buf[10], *b;
+ uint32_t tiling, swizzle;
+
+ drm_intel_bo_get_tiling(bo, &tiling, &swizzle);
+
+ memset(reloc, 0, sizeof(reloc));
+ memset(gem_exec, 0, sizeof(gem_exec));
+ memset(&execbuf, 0, sizeof(execbuf));
+
+ b = buf;
+ *b++ = XY_COLOR_BLT_CMD_NOLEN |
+ ((gen >= 8) ? 5 : 4) |
+ COLOR_BLT_WRITE_ALPHA | XY_COLOR_BLT_WRITE_RGB;
+ if (gen >= 4 && tiling) {
+ b[-1] |= XY_COLOR_BLT_TILED;
+ *b = width;
+ } else
+ *b = width << 2;
+ *b++ |= 0xf0 << 16 | 1 << 25 | 1 << 24;
+ *b++ = 0;
+ *b++ = height << 16 | width;
+ reloc[0].offset = (b - buf) * sizeof(uint32_t);
+ reloc[0].target_handle = bo->handle;
+ reloc[0].read_domains = I915_GEM_DOMAIN_RENDER;
+ reloc[0].write_domain = I915_GEM_DOMAIN_RENDER;
+ *b++ = 0;
+ if (gen >= 8)
+ *b++ = 0;
+ *b++ = val;
+ *b++ = MI_BATCH_BUFFER_END;
+ if ((b - buf) & 1)
+ *b++ = 0;
+
+ gem_exec[0].handle = bo->handle;
+ gem_exec[0].flags = EXEC_OBJECT_NEEDS_FENCE;
+
+ create.handle = 0;
+ create.size = 4096;
+ drmIoctl(fd, DRM_IOCTL_I915_GEM_CREATE, &create);
+ gem_exec[1].handle = create.handle;
+ gem_exec[1].relocation_count = 1;
+ gem_exec[1].relocs_ptr = (uintptr_t)reloc;
+
+ execbuf.buffers_ptr = (uintptr_t)gem_exec;
+ execbuf.buffer_count = 2;
+ execbuf.batch_len = (b - buf) * sizeof(buf[0]);
+ if (gen >= 6)
+ execbuf.flags = I915_EXEC_BLT;
+
+ gem_pwrite.handle = gem_exec[1].handle;
+ gem_pwrite.offset = 0;
+ gem_pwrite.size = execbuf.batch_len;
+ gem_pwrite.data_ptr = (uintptr_t)buf;
+ do_or_die(drmIoctl(fd, DRM_IOCTL_I915_GEM_PWRITE, &gem_pwrite));
+ do_or_die(drmIoctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, &execbuf));
+
+ drmIoctl(fd, DRM_IOCTL_GEM_CLOSE, &create.handle);
+}
+
+static void
+gpu_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
+{
+ intel_blt_copy(batch,
+ bo, 0, 0, 4*width,
+ tmp, 0, 0, 4*width,
+ width, height, 32);
+ cpu_cmp_bo(tmp, val, width, height, NULL);
+}
+
+const struct access_mode {
+ const char *name;
+ void (*set_bo)(drm_intel_bo *bo, uint32_t val, int w, int h);
+ void (*cmp_bo)(drm_intel_bo *bo, uint32_t val, int w, int h, drm_intel_bo *tmp);
+ drm_intel_bo *(*create_bo)(drm_intel_bufmgr *bufmgr, int width, int height);
+ void (*release_bo)(drm_intel_bo *bo);
+} access_modes[] = {
+ {
+ .name = "prw",
+ .set_bo = prw_set_bo,
+ .cmp_bo = prw_cmp_bo,
+ .create_bo = unmapped_create_bo,
+ .release_bo = nop_release_bo,
+ },
+ {
+ .name = "cpu",
+ .set_bo = cpu_set_bo,
+ .cmp_bo = cpu_cmp_bo,
+ .create_bo = unmapped_create_bo,
+ .release_bo = nop_release_bo,
+ },
+ {
+ .name = "snoop",
+ .set_bo = cpu_set_bo,
+ .cmp_bo = cpu_cmp_bo,
+ .create_bo = snoop_create_bo,
+ .release_bo = nop_release_bo,
+ },
+ {
+ .name = "gtt",
+ .set_bo = gtt_set_bo,
+ .cmp_bo = gtt_cmp_bo,
+ .create_bo = gtt_create_bo,
+ .release_bo = nop_release_bo,
+ },
+ {
+ .name = "gttX",
+ .set_bo = gtt_set_bo,
+ .cmp_bo = gtt_cmp_bo,
+ .create_bo = gttX_create_bo,
+ .release_bo = nop_release_bo,
+ },
+ {
+ .name = "wc",
+ .set_bo = gtt_set_bo,
+ .cmp_bo = gtt_cmp_bo,
+ .create_bo = wc_create_bo,
+ .release_bo = wc_release_bo,
+ },
+ {
+ .name = "gpu",
+ .set_bo = gpu_set_bo,
+ .cmp_bo = gpu_cmp_bo,
+ .create_bo = gpu_create_bo,
+ .release_bo = nop_release_bo,
+ },
+ {
+ .name = "gpuX",
+ .set_bo = gpu_set_bo,
+ .cmp_bo = gpu_cmp_bo,
+ .create_bo = gpuX_create_bo,
+ .release_bo = nop_release_bo,
+ },
+};
+
+#define MAX_NUM_BUFFERS 1024
+int num_buffers = MAX_NUM_BUFFERS;
+const int width = 512, height = 512;
+igt_render_copyfunc_t rendercopy;
+
+struct buffers {
+ const struct access_mode *mode;
+ drm_intel_bufmgr *bufmgr;
+ drm_intel_bo *src[MAX_NUM_BUFFERS], *dst[MAX_NUM_BUFFERS];
+ drm_intel_bo *dummy, *spare;
+ int count;
+};
+
+static void *buffers_init(struct buffers *data,
+ const struct access_mode *mode,
+ int _fd)
+{
+ data->mode = mode;
+ data->count = 0;
+
+ data->bufmgr = drm_intel_bufmgr_gem_init(_fd, 4096);
+ igt_assert(data->bufmgr);
+
+ drm_intel_bufmgr_gem_enable_reuse(data->bufmgr);
+ return intel_batchbuffer_alloc(data->bufmgr, devid);
+}
+
+static void buffers_destroy(struct buffers *data)
+{
+ if (data->count == 0)
+ return;
+
+ for (int i = 0; i < data->count; i++) {
+ data->mode->release_bo(data->src[i]);
+ data->mode->release_bo(data->dst[i]);
+ }
+ data->mode->release_bo(data->dummy);
+ data->mode->release_bo(data->spare);
+ data->count = 0;
+}
+
+static void buffers_create(struct buffers *data,
+ int count)
+{
+ igt_assert(data->bufmgr);
+
+ buffers_destroy(data);
+
+ for (int i = 0; i < count; i++) {
+ data->src[i] =
+ data->mode->create_bo(data->bufmgr, width, height);
+ data->dst[i] =
+ data->mode->create_bo(data->bufmgr, width, height);
+ }
+ data->dummy = data->mode->create_bo(data->bufmgr, width, height);
+ data->spare = data->mode->create_bo(data->bufmgr, width, height);
+ data->count = count;
+}
+
+static void buffers_fini(struct buffers *data)
+{
+ if (data->bufmgr == NULL)
+ return;
+
+ buffers_destroy(data);
+
+ intel_batchbuffer_free(batch);
+ drm_intel_bufmgr_destroy(data->bufmgr);
+ data->bufmgr = NULL;
+}
+
+typedef void (*do_copy)(drm_intel_bo *dst, drm_intel_bo *src);
+typedef struct igt_hang_ring (*do_hang)(void);
+
+static void render_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
+{
+ struct igt_buf d = {
+ .bo = dst,
+ .size = width * height * 4,
+ .num_tiles = width * height * 4,
+ .stride = width * 4,
+ }, s = {
+ .bo = src,
+ .size = width * height * 4,
+ .num_tiles = width * height * 4,
+ .stride = width * 4,
+ };
+ uint32_t swizzle;
+
+ drm_intel_bo_get_tiling(dst, &d.tiling, &swizzle);
+ drm_intel_bo_get_tiling(src, &s.tiling, &swizzle);
+
+ rendercopy(batch, NULL,
+ &s, 0, 0,
+ width, height,
+ &d, 0, 0);
+}
+
+static void blt_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
+{
+ intel_blt_copy(batch,
+ src, 0, 0, 4*width,
+ dst, 0, 0, 4*width,
+ width, height, 32);
+}
+
+static void cpu_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
+{
+ const int size = width * height * sizeof(uint32_t);
+ void *d, *s;
+
+ gem_set_domain(fd, src->handle, I915_GEM_DOMAIN_CPU, 0);
+ gem_set_domain(fd, dst->handle, I915_GEM_DOMAIN_CPU, I915_GEM_DOMAIN_CPU);
+ s = gem_mmap__cpu(fd, src->handle, 0, size, PROT_READ);
+ d = gem_mmap__cpu(fd, dst->handle, 0, size, PROT_WRITE);
+
+ memcpy(d, s, size);
+
+ munmap(d, size);
+ munmap(s, size);
+}
+
+static void gtt_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
+{
+ const int size = width * height * sizeof(uint32_t);
+ void *d, *s;
+
+ gem_set_domain(fd, src->handle, I915_GEM_DOMAIN_GTT, 0);
+ gem_set_domain(fd, dst->handle, I915_GEM_DOMAIN_GTT, I915_GEM_DOMAIN_GTT);
+
+ s = gem_mmap__gtt(fd, src->handle, size, PROT_READ);
+ d = gem_mmap__gtt(fd, dst->handle, size, PROT_WRITE);
+
+ memcpy(d, s, size);
+
+ munmap(d, size);
+ munmap(s, size);
+}
+
+static void wc_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
+{
+ const int size = width * height * sizeof(uint32_t);
+ void *d, *s;
+
+ gem_set_domain(fd, src->handle, I915_GEM_DOMAIN_GTT, 0);
+ gem_set_domain(fd, dst->handle, I915_GEM_DOMAIN_GTT, I915_GEM_DOMAIN_GTT);
+
+ s = gem_mmap__wc(fd, src->handle, 0, size, PROT_READ);
+ d = gem_mmap__wc(fd, dst->handle, 0, size, PROT_WRITE);
+
+ memcpy(d, s, size);
+
+ munmap(d, size);
+ munmap(s, size);
+}
+
+static struct igt_hang_ring no_hang(void)
+{
+ return (struct igt_hang_ring){0, 0};
+}
+
+static struct igt_hang_ring bcs_hang(void)
+{
+ return igt_hang_ring(fd, I915_EXEC_BLT);
+}
+
+static struct igt_hang_ring rcs_hang(void)
+{
+ return igt_hang_ring(fd, I915_EXEC_RENDER);
+}
+
+static void hang_require(void)
+{
+ igt_require_hang_ring(fd, -1);
+}
+
+static void do_overwrite_source(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = 0; i < buffers->count; i++) {
+ buffers->mode->set_bo(buffers->src[i], i, width, height);
+ buffers->mode->set_bo(buffers->dst[i], ~i, width, height);
+ }
+ for (i = 0; i < buffers->count; i++)
+ do_copy_func(buffers->dst[i], buffers->src[i]);
+ hang = do_hang_func();
+ for (i = buffers->count; i--; )
+ buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
+ for (i = 0; i < buffers->count; i++)
+ buffers->mode->cmp_bo(buffers->dst[i], i, width, height, buffers->dummy);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_overwrite_source_read(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func,
+ int do_rcs)
+{
+ const int half = buffers->count/2;
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = 0; i < half; i++) {
+ buffers->mode->set_bo(buffers->src[i], i, width, height);
+ buffers->mode->set_bo(buffers->dst[i], ~i, width, height);
+ buffers->mode->set_bo(buffers->dst[i+half], ~i, width, height);
+ }
+ for (i = 0; i < half; i++) {
+ do_copy_func(buffers->dst[i], buffers->src[i]);
+ if (do_rcs)
+ render_copy_bo(buffers->dst[i+half], buffers->src[i]);
+ else
+ blt_copy_bo(buffers->dst[i+half], buffers->src[i]);
+ }
+ hang = do_hang_func();
+ for (i = half; i--; )
+ buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
+ for (i = 0; i < half; i++) {
+ buffers->mode->cmp_bo(buffers->dst[i], i, width, height, buffers->dummy);
+ buffers->mode->cmp_bo(buffers->dst[i+half], i, width, height, buffers->dummy);
+ }
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_overwrite_source_read_bcs(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ do_overwrite_source_read(buffers, do_copy_func, do_hang_func, 0);
+}
+
+static void do_overwrite_source_read_rcs(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ do_overwrite_source_read(buffers, do_copy_func, do_hang_func, 1);
+}
+
+static void do_overwrite_source__rev(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = 0; i < buffers->count; i++) {
+ buffers->mode->set_bo(buffers->src[i], i, width, height);
+ buffers->mode->set_bo(buffers->dst[i], ~i, width, height);
+ }
+ for (i = 0; i < buffers->count; i++)
+ do_copy_func(buffers->dst[i], buffers->src[i]);
+ hang = do_hang_func();
+ for (i = 0; i < buffers->count; i++)
+ buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
+ for (i = buffers->count; i--; )
+ buffers->mode->cmp_bo(buffers->dst[i], i, width, height, buffers->dummy);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_overwrite_source__one(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+
+ gem_quiescent_gpu(fd);
+ buffers->mode->set_bo(buffers->src[0], 0, width, height);
+ buffers->mode->set_bo(buffers->dst[0], ~0, width, height);
+ do_copy_func(buffers->dst[0], buffers->src[0]);
+ hang = do_hang_func();
+ buffers->mode->set_bo(buffers->src[0], 0xdeadbeef, width, height);
+ buffers->mode->cmp_bo(buffers->dst[0], 0, width, height, buffers->dummy);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_intermix(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func,
+ int do_rcs)
+{
+ const int half = buffers->count/2;
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = 0; i < buffers->count; i++) {
+ buffers->mode->set_bo(buffers->src[i], 0xdeadbeef^~i, width, height);
+ buffers->mode->set_bo(buffers->dst[i], i, width, height);
+ }
+ for (i = 0; i < half; i++) {
+ if (do_rcs == 1 || (do_rcs == -1 && i & 1))
+ render_copy_bo(buffers->dst[i], buffers->src[i]);
+ else
+ blt_copy_bo(buffers->dst[i], buffers->src[i]);
+
+ do_copy_func(buffers->dst[i+half], buffers->src[i]);
+
+ if (do_rcs == 1 || (do_rcs == -1 && (i & 1) == 0))
+ render_copy_bo(buffers->dst[i], buffers->dst[i+half]);
+ else
+ blt_copy_bo(buffers->dst[i], buffers->dst[i+half]);
+
+ do_copy_func(buffers->dst[i+half], buffers->src[i+half]);
+ }
+ hang = do_hang_func();
+ for (i = 0; i < 2*half; i++)
+ buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef^~i, width, height, buffers->dummy);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_intermix_rcs(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ do_intermix(buffers, do_copy_func, do_hang_func, 1);
+}
+
+static void do_intermix_bcs(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ do_intermix(buffers, do_copy_func, do_hang_func, 0);
+}
+
+static void do_intermix_both(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ do_intermix(buffers, do_copy_func, do_hang_func, -1);
+}
+
+static void do_early_read(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = buffers->count; i--; )
+ buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
+ for (i = 0; i < buffers->count; i++)
+ do_copy_func(buffers->dst[i], buffers->src[i]);
+ hang = do_hang_func();
+ for (i = buffers->count; i--; )
+ buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef, width, height, buffers->dummy);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_read_read_bcs(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = buffers->count; i--; )
+ buffers->mode->set_bo(buffers->src[i], 0xdeadbeef ^ i, width, height);
+ for (i = 0; i < buffers->count; i++) {
+ do_copy_func(buffers->dst[i], buffers->src[i]);
+ blt_copy_bo(buffers->spare, buffers->src[i]);
+ }
+ cpu_cmp_bo(buffers->spare, 0xdeadbeef^(buffers->count-1), width, height, NULL);
+ hang = do_hang_func();
+ for (i = buffers->count; i--; )
+ buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef ^ i, width, height, buffers->dummy);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_read_read_rcs(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = buffers->count; i--; )
+ buffers->mode->set_bo(buffers->src[i], 0xdeadbeef ^ i, width, height);
+ for (i = 0; i < buffers->count; i++) {
+ do_copy_func(buffers->dst[i], buffers->src[i]);
+ render_copy_bo(buffers->spare, buffers->src[i]);
+ }
+ cpu_cmp_bo(buffers->spare, 0xdeadbeef^(buffers->count-1), width, height, NULL);
+ hang = do_hang_func();
+ for (i = buffers->count; i--; )
+ buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef ^ i, width, height, buffers->dummy);
+ igt_post_hang_ring(fd, hang);
+}
+
+static void do_gpu_read_after_write(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ struct igt_hang_ring hang;
+ int i;
+
+ gem_quiescent_gpu(fd);
+ for (i = buffers->count; i--; )
+ buffers->mode->set_bo(buffers->src[i], 0xabcdabcd, width, height);
+ for (i = 0; i < buffers->count; i++)
+ do_copy_func(buffers->dst[i], buffers->src[i]);
+ for (i = buffers->count; i--; )
+ do_copy_func(buffers->dummy, buffers->dst[i]);
+ hang = do_hang_func();
+ for (i = buffers->count; i--; )
+ buffers->mode->cmp_bo(buffers->dst[i], 0xabcdabcd, width, height, buffers->dummy);
+ igt_post_hang_ring(fd, hang);
+}
+
+typedef void (*do_test)(struct buffers *buffers,
+ do_copy do_copy_func,
+ do_hang do_hang_func);
+
+typedef void (*run_wrap)(struct buffers *buffers,
+ do_test do_test_func,
+ do_copy do_copy_func,
+ do_hang do_hang_func);
+
+static void run_single(struct buffers *buffers,
+ do_test do_test_func,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ do_test_func(buffers, do_copy_func, do_hang_func);
+}
+
+static void run_interruptible(struct buffers *buffers,
+ do_test do_test_func,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ int loop;
+
+ for (loop = 0; loop < 10; loop++)
+ do_test_func(buffers, do_copy_func, do_hang_func);
+}
+
+static void run_forked(struct buffers *buffers,
+ do_test do_test_func,
+ do_copy do_copy_func,
+ do_hang do_hang_func)
+{
+ const int old_num_buffers = num_buffers;
+
+ num_buffers /= 16;
+ num_buffers += 2;
+
+ igt_fork(child, 16) {
+ /* recreate process local variables */
+ buffers->count = 0;
+ fd = drm_open_driver(DRIVER_INTEL);
+
+ batch = buffers_init(buffers, buffers->mode, fd);
+
+ buffers_create(buffers, num_buffers);
+ for (int loop = 0; loop < 10; loop++)
+ do_test_func(buffers, do_copy_func, do_hang_func);
+
+ buffers_fini(buffers);
+ }
+
+ igt_waitchildren();
+
+ num_buffers = old_num_buffers;
+}
+
+static void bit17_require(void)
+{
+ struct drm_i915_gem_get_tiling2 {
+ uint32_t handle;
+ uint32_t tiling_mode;
+ uint32_t swizzle_mode;
+ uint32_t phys_swizzle_mode;
+ } arg;
+#define DRM_IOCTL_I915_GEM_GET_TILING2 DRM_IOWR (DRM_COMMAND_BASE + DRM_I915_GEM_GET_TILING, struct drm_i915_gem_get_tiling2)
+
+ memset(&arg, 0, sizeof(arg));
+ arg.handle = gem_create(fd, 4096);
+ gem_set_tiling(fd, arg.handle, I915_TILING_X, 512);
+
+ do_or_die(drmIoctl(fd, DRM_IOCTL_I915_GEM_GET_TILING2, &arg));
+ gem_close(fd, arg.handle);
+ igt_require(arg.phys_swizzle_mode == arg.swizzle_mode);
+}
+
+static void cpu_require(void)
+{
+ bit17_require();
+}
+
+static void gtt_require(void)
+{
+}
+
+static void wc_require(void)
+{
+ bit17_require();
+ gem_require_mmap_wc(fd);
+}
+
+static void bcs_require(void)
+{
+}
+
+static void rcs_require(void)
+{
+ igt_require(rendercopy);
+}
+
+static void no_require(void)
+{
+}
+
+static void
+run_basic_modes(const struct access_mode *mode,
+ const char *suffix,
+ run_wrap run_wrap_func)
+{
+ const struct {
+ const char *prefix;
+ do_copy copy;
+ void (*require)(void);
+ } pipelines[] = {
+ { "cpu", cpu_copy_bo, cpu_require },
+ { "gtt", gtt_copy_bo, gtt_require },
+ { "wc", wc_copy_bo, wc_require },
+ { "blt", blt_copy_bo, bcs_require },
+ { "render", render_copy_bo, rcs_require },
+ { NULL, NULL }
+ }, *pskip = pipelines + 3, *p;
+ const struct {
+ const char *suffix;
+ do_hang hang;
+ void (*require)(void);
+ } hangs[] = {
+ { "", no_hang, no_require },
+ { "-hang-blt", bcs_hang, hang_require },
+ { "-hang-render", rcs_hang, hang_require },
+ { NULL, NULL },
+ }, *h;
+ struct buffers buffers;
+
+ for (h = hangs; h->suffix; h++) {
+ if (!all && *h->suffix)
+ continue;
+
+ for (p = all ? pipelines : pskip; p->prefix; p++) {
+ igt_fixture {
+ batch = buffers_init(&buffers, mode, fd);
+ }
+
+ /* try to overwrite the source values */
+ igt_subtest_f("%s-%s-overwrite-source-one%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_overwrite_source__one,
+ p->copy, h->hang);
+ }
+
+ igt_subtest_f("%s-%s-overwrite-source%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_overwrite_source,
+ p->copy, h->hang);
+ }
+
+ igt_subtest_f("%s-%s-overwrite-source-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_overwrite_source_read_bcs,
+ p->copy, h->hang);
+ }
+
+ igt_subtest_f("%s-%s-overwrite-source-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ igt_require(rendercopy);
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_overwrite_source_read_rcs,
+ p->copy, h->hang);
+ }
+
+ igt_subtest_f("%s-%s-overwrite-source-rev%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_overwrite_source__rev,
+ p->copy, h->hang);
+ }
+
+ /* try to intermix copies with GPU copies*/
+ igt_subtest_f("%s-%s-intermix-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ igt_require(rendercopy);
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_intermix_rcs,
+ p->copy, h->hang);
+ }
+ igt_subtest_f("%s-%s-intermix-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ igt_require(rendercopy);
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_intermix_bcs,
+ p->copy, h->hang);
+ }
+ igt_subtest_f("%s-%s-intermix-both%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ igt_require(rendercopy);
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_intermix_both,
+ p->copy, h->hang);
+ }
+
+ /* try to read the results before the copy completes */
+ igt_subtest_f("%s-%s-early-read%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_early_read,
+ p->copy, h->hang);
+ }
+
+ /* concurrent reads */
+ igt_subtest_f("%s-%s-read-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_read_read_bcs,
+ p->copy, h->hang);
+ }
+ igt_subtest_f("%s-%s-read-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ igt_require(rendercopy);
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_read_read_rcs,
+ p->copy, h->hang);
+ }
+
+ /* and finally try to trick the kernel into loosing the pending write */
+ igt_subtest_f("%s-%s-gpu-read-after-write%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ h->require();
+ p->require();
+ buffers_create(&buffers, num_buffers);
+ run_wrap_func(&buffers,
+ do_gpu_read_after_write,
+ p->copy, h->hang);
+ }
+
+ igt_fixture {
+ buffers_fini(&buffers);
+ }
+ }
+ }
+}
+
+static void
+run_modes(const struct access_mode *mode)
+{
+ if (all) {
+ run_basic_modes(mode, "", run_single);
+
+ igt_fork_signal_helper();
+ run_basic_modes(mode, "-interruptible", run_interruptible);
+ igt_stop_signal_helper();
+ }
+
+ igt_fork_signal_helper();
+ run_basic_modes(mode, "-forked", run_forked);
+ igt_stop_signal_helper();
+}
+
+igt_main
+{
+ int max, i;
+
+ igt_skip_on_simulation();
+
+ if (strstr(igt_test_name(), "all"))
+ all = true;
+
+ igt_fixture {
+ fd = drm_open_driver(DRIVER_INTEL);
+ devid = intel_get_drm_devid(fd);
+ gen = intel_gen(devid);
+ rendercopy = igt_get_render_copyfunc(devid);
+
+ max = gem_aperture_size (fd) / (1024 * 1024) / 2;
+ if (num_buffers > max)
+ num_buffers = max;
+
+ max = intel_get_total_ram_mb() * 3 / 4;
+ if (num_buffers > max)
+ num_buffers = max;
+ num_buffers /= 2;
+ igt_info("using 2x%d buffers, each 1MiB\n", num_buffers);
+ }
+
+ for (i = 0; i < ARRAY_SIZE(access_modes); i++)
+ run_modes(&access_modes[i]);
+}
--
2.6.2
* [PATCH i-g-t 2/3 v3] Unify handling of slow/combinatorial tests
2015-10-30 13:18 ` [PATCH i-g-t 0/3 v3] Unify slow/combinatorial test handling David Weinehall
2015-10-30 13:18 ` [PATCH i-g-t 1/3 v3] Copy gem_concurrent_all to gem_concurrent_blit David Weinehall
@ 2015-10-30 13:18 ` David Weinehall
2015-10-30 13:52 ` Chris Wilson
2015-10-30 13:18 ` [PATCH i-g-t 3/3 v3] Remove superfluous gem_concurrent_all.c David Weinehall
2 siblings, 1 reply; 41+ messages in thread
From: David Weinehall @ 2015-10-30 13:18 UTC (permalink / raw)
To: intel-gfx
Some subtests are not run by default, for various reasons: because
they're only meant for debugging, because they're slow, or because
they're not of high enough quality.
This patch aims to introduce a common mechanism for categorising
the subtests and introduces a flag (--all) that runs/lists all
subtests instead of just the default set.
Signed-off-by: David Weinehall <david.weinehall@linux.intel.com>
---
lib/igt_core.c | 43 ++++++--
lib/igt_core.h | 42 ++++++++
tests/gem_concurrent_blit.c | 50 +++++-----
tests/kms_frontbuffer_tracking.c | 207 ++++++++++++++++++++++-----------------
4 files changed, 218 insertions(+), 124 deletions(-)
diff --git a/lib/igt_core.c b/lib/igt_core.c
index 59127cafe606..2034bc33ad78 100644
--- a/lib/igt_core.c
+++ b/lib/igt_core.c
@@ -216,6 +216,7 @@ const char *igt_interactive_debug;
/* subtests helpers */
static bool list_subtests = false;
+static unsigned int subtest_types_mask = SUBTEST_TYPE_NORMAL;
static char *run_single_subtest = NULL;
static bool run_single_subtest_found = false;
static const char *in_subtest = NULL;
@@ -234,12 +235,13 @@ int test_children_sz;
bool test_child;
enum {
- OPT_LIST_SUBTESTS,
- OPT_RUN_SUBTEST,
- OPT_DESCRIPTION,
- OPT_DEBUG,
- OPT_INTERACTIVE_DEBUG,
- OPT_HELP = 'h'
+ OPT_LIST_SUBTESTS,
+ OPT_WITH_ALL_SUBTESTS,
+ OPT_RUN_SUBTEST,
+ OPT_DESCRIPTION,
+ OPT_DEBUG,
+ OPT_INTERACTIVE_DEBUG,
+ OPT_HELP = 'h'
};
static int igt_exitcode = IGT_EXIT_SUCCESS;
@@ -478,6 +480,7 @@ static void print_usage(const char *help_str, bool output_on_stderr)
fprintf(f, "Usage: %s [OPTIONS]\n", command_str);
fprintf(f, " --list-subtests\n"
+ " --all\n"
" --run-subtest <pattern>\n"
" --debug[=log-domain]\n"
" --interactive-debug[=domain]\n"
@@ -510,6 +513,7 @@ static int common_init(int *argc, char **argv,
int c, option_index = 0, i, x;
static struct option long_options[] = {
{"list-subtests", 0, 0, OPT_LIST_SUBTESTS},
+ {"all", 0, 0, OPT_WITH_ALL_SUBTESTS},
{"run-subtest", 1, 0, OPT_RUN_SUBTEST},
{"help-description", 0, 0, OPT_DESCRIPTION},
{"debug", optional_argument, 0, OPT_DEBUG},
@@ -617,6 +621,10 @@ static int common_init(int *argc, char **argv,
if (!run_single_subtest)
list_subtests = true;
break;
+ case OPT_WITH_ALL_SUBTESTS:
+ if (!run_single_subtest)
+ subtest_types_mask = SUBTEST_TYPE_ALL;
+ break;
case OPT_RUN_SUBTEST:
if (!list_subtests)
run_single_subtest = strdup(optarg);
@@ -1629,6 +1637,29 @@ void igt_skip_on_simulation(void)
igt_require(!igt_run_in_simulation());
}
+/**
+ * igt_match_subtest_flags:
+ *
+ * This function checks whether the attributes of a subtest make
+ * it a candidate for inclusion in the test run; this is used to
+ * categorise tests, for instance to exclude tests that are purely for
+ * debug purposes, tests that are specific to certain environments,
+ * or tests that are very slow.
+ *
+ * Note that a test has to have all its flags met to be run; for instance
+ * a subtest with the flags SUBTEST_TYPE_SLOW | SUBTEST_TYPE_DEBUG requires
+ * "--subtest-types=slow,debug" or "--all" to be executed.
+ *
+ * @subtest_flags: The subtests to check for
+ *
+ * Returns: true if the subtest should be run,
+ * false if the subtest should be skipped
+ */
+bool igt_match_subtest_flags(unsigned long subtest_flags)
+{
+ return ((subtest_flags & subtest_types_mask) == subtest_flags);
+}
+
/* structured logging */
/**
diff --git a/lib/igt_core.h b/lib/igt_core.h
index 5ae09653fd55..495cb77a8aea 100644
--- a/lib/igt_core.h
+++ b/lib/igt_core.h
@@ -191,6 +191,48 @@ bool __igt_run_subtest(const char *subtest_name);
#define igt_subtest_f(f...) \
__igt_subtest_f(igt_tokencat(__tmpchar, __LINE__), f)
+enum {
+ /* The set of tests run if nothing else is specified */
+ SUBTEST_TYPE_NORMAL = 1 << 0,
+ /* Basic Acceptance Testing set */
+ SUBTEST_TYPE_BASIC = 1 << 1,
+ /* Tests that are very slow */
+ SUBTEST_TYPE_SLOW = 1 << 2,
+ /* Tests that are mainly intended for debugging */
+ SUBTEST_TYPE_DEBUG = 1 << 3,
+ SUBTEST_TYPE_ALL = ~0
+} subtest_types;
+
+bool igt_match_subtest_flags(unsigned long subtest_flags);
+
+/**
+ * igt_subtest_flags:
+ * @name: name of the subtest
+ * @__subtest_flags: the categories the subtest belongs to
+ *
+ * This is a wrapper around igt_subtest that will only execute the
+ * testcase if all of the flags passed to this function match those
+ * specified by the list of subtest categories passed from the
+ * command line; the default category is SUBTEST_TYPE_NORMAL.
+ */
+#define igt_subtest_flags(name, __subtest_flags) \
+ if (igt_match_subtest_flags(__subtest_flags)) \
+ igt_subtest(name)
+
+/**
+ * igt_subtest_flags_f:
+ * @__subtest_flags: the categories the subtest belongs to
+ * @...: format string and optional arguments
+ *
+ * This is a wrapper around igt_subtest_f that will only execute the
+ * testcase if all of the flags passed to this function match those
+ * specified by the list of subtest categories passed from the
+ * command line; the default category is SUBTEST_TYPE_NORMAL.
+ */
+#define igt_subtest_flags_f(__subtest_flags, f...) \
+ if (igt_match_subtest_flags(__subtest_flags)) \
+ __igt_subtest_f(igt_tokencat(__tmpchar, __LINE__), f)
+
const char *igt_subtest_name(void);
bool igt_only_list_subtests(void);
diff --git a/tests/gem_concurrent_blit.c b/tests/gem_concurrent_blit.c
index 1d2d787202df..a4be0e79fbd1 100644
--- a/tests/gem_concurrent_blit.c
+++ b/tests/gem_concurrent_blit.c
@@ -55,7 +55,6 @@ IGT_TEST_DESCRIPTION("Test of pread/pwrite/mmap behavior when writing to active"
int fd, devid, gen;
struct intel_batchbuffer *batch;
-int all;
static void
nop_release_bo(drm_intel_bo *bo)
@@ -931,16 +930,20 @@ run_basic_modes(const struct access_mode *mode,
struct buffers buffers;
for (h = hangs; h->suffix; h++) {
- if (!all && *h->suffix)
- continue;
+ unsigned int subtest_flags;
- for (p = all ? pipelines : pskip; p->prefix; p++) {
+ if (*h->suffix)
+ subtest_flags = SUBTEST_TYPE_SLOW;
+ else
+ subtest_flags = SUBTEST_TYPE_NORMAL;
+
+ for (p = igt_match_subtest_flags(SUBTEST_TYPE_SLOW) ? pipelines : pskip; p->prefix; p++) {
igt_fixture {
batch = buffers_init(&buffers, mode, fd);
}
/* try to overwrite the source values */
- igt_subtest_f("%s-%s-overwrite-source-one%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_flags_f(subtest_flags, "%s-%s-overwrite-source-one%s%s", mode->name, p->prefix, suffix, h->suffix) {
h->require();
p->require();
buffers_create(&buffers, num_buffers);
@@ -949,7 +952,7 @@ run_basic_modes(const struct access_mode *mode,
p->copy, h->hang);
}
- igt_subtest_f("%s-%s-overwrite-source%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_flags_f(subtest_flags, "%s-%s-overwrite-source%s%s", mode->name, p->prefix, suffix, h->suffix) {
h->require();
p->require();
buffers_create(&buffers, num_buffers);
@@ -958,7 +961,7 @@ run_basic_modes(const struct access_mode *mode,
p->copy, h->hang);
}
- igt_subtest_f("%s-%s-overwrite-source-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_flags_f(subtest_flags, "%s-%s-overwrite-source-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
h->require();
p->require();
buffers_create(&buffers, num_buffers);
@@ -967,7 +970,7 @@ run_basic_modes(const struct access_mode *mode,
p->copy, h->hang);
}
- igt_subtest_f("%s-%s-overwrite-source-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_flags_f(subtest_flags, "%s-%s-overwrite-source-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
h->require();
p->require();
igt_require(rendercopy);
@@ -977,7 +980,7 @@ run_basic_modes(const struct access_mode *mode,
p->copy, h->hang);
}
- igt_subtest_f("%s-%s-overwrite-source-rev%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_flags_f(subtest_flags, "%s-%s-overwrite-source-rev%s%s", mode->name, p->prefix, suffix, h->suffix) {
h->require();
p->require();
buffers_create(&buffers, num_buffers);
@@ -987,7 +990,7 @@ run_basic_modes(const struct access_mode *mode,
}
/* try to intermix copies with GPU copies*/
- igt_subtest_f("%s-%s-intermix-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_flags_f(subtest_flags, "%s-%s-intermix-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
h->require();
p->require();
igt_require(rendercopy);
@@ -996,7 +999,7 @@ run_basic_modes(const struct access_mode *mode,
do_intermix_rcs,
p->copy, h->hang);
}
- igt_subtest_f("%s-%s-intermix-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_flags_f(subtest_flags, "%s-%s-intermix-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
h->require();
p->require();
igt_require(rendercopy);
@@ -1005,7 +1008,7 @@ run_basic_modes(const struct access_mode *mode,
do_intermix_bcs,
p->copy, h->hang);
}
- igt_subtest_f("%s-%s-intermix-both%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_flags_f(subtest_flags, "%s-%s-intermix-both%s%s", mode->name, p->prefix, suffix, h->suffix) {
h->require();
p->require();
igt_require(rendercopy);
@@ -1016,7 +1019,7 @@ run_basic_modes(const struct access_mode *mode,
}
/* try to read the results before the copy completes */
- igt_subtest_f("%s-%s-early-read%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_flags_f(subtest_flags, "%s-%s-early-read%s%s", mode->name, p->prefix, suffix, h->suffix) {
h->require();
p->require();
buffers_create(&buffers, num_buffers);
@@ -1026,7 +1029,7 @@ run_basic_modes(const struct access_mode *mode,
}
/* concurrent reads */
- igt_subtest_f("%s-%s-read-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_flags_f(subtest_flags, "%s-%s-read-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
h->require();
p->require();
buffers_create(&buffers, num_buffers);
@@ -1034,7 +1037,7 @@ run_basic_modes(const struct access_mode *mode,
do_read_read_bcs,
p->copy, h->hang);
}
- igt_subtest_f("%s-%s-read-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ igt_subtest_flags_f(subtest_flags, "%s-%s-read-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
h->require();
p->require();
igt_require(rendercopy);
@@ -1044,8 +1047,8 @@ run_basic_modes(const struct access_mode *mode,
p->copy, h->hang);
}
- /* and finally try to trick the kernel into loosing the pending write */
- igt_subtest_f("%s-%s-gpu-read-after-write%s%s", mode->name, p->prefix, suffix, h->suffix) {
+ /* and finally try to trick the kernel into losing the pending write */
+ igt_subtest_flags_f(subtest_flags, "%s-%s-gpu-read-after-write%s%s", mode->name, p->prefix, suffix, h->suffix) {
h->require();
p->require();
buffers_create(&buffers, num_buffers);
@@ -1064,13 +1067,11 @@ run_basic_modes(const struct access_mode *mode,
static void
run_modes(const struct access_mode *mode)
{
- if (all) {
- run_basic_modes(mode, "", run_single);
+ run_basic_modes(mode, "", run_single);
- igt_fork_signal_helper();
- run_basic_modes(mode, "-interruptible", run_interruptible);
- igt_stop_signal_helper();
- }
+ igt_fork_signal_helper();
+ run_basic_modes(mode, "-interruptible", run_interruptible);
+ igt_stop_signal_helper();
igt_fork_signal_helper();
run_basic_modes(mode, "-forked", run_forked);
@@ -1083,9 +1084,6 @@ igt_main
igt_skip_on_simulation();
- if (strstr(igt_test_name(), "all"))
- all = true;
-
igt_fixture {
fd = drm_open_driver(DRIVER_INTEL);
devid = intel_get_drm_devid(fd);
diff --git a/tests/kms_frontbuffer_tracking.c b/tests/kms_frontbuffer_tracking.c
index 15707b9b9040..39f3c37a8f00 100644
--- a/tests/kms_frontbuffer_tracking.c
+++ b/tests/kms_frontbuffer_tracking.c
@@ -47,8 +47,7 @@ IGT_TEST_DESCRIPTION("Test the Kernel's frontbuffer tracking mechanism and "
* combinations that are somewhat redundant and don't add much value to the
* test. For example, since we already do the offscreen testing with a single
* pipe enabled, there's not much value in doing it again with dual pipes. If you
- * still want to try these redundant tests, you need to use the --show-hidden
- * option.
+ * still want to try these redundant tests, you need to use the --all option.
*
* The most important hidden thing is the FEATURE_NONE set of tests. Whenever
* you get a failure on any test, it is important to check whether the same test
@@ -116,6 +115,9 @@ struct test_mode {
} format;
enum igt_draw_method method;
+
+ /* Specifies the subtest categories this subtest belongs to */
+ unsigned long subtest_flags;
};
enum flip_type {
@@ -237,7 +239,6 @@ struct {
bool fbc_check_last_action;
bool no_edp;
bool small_modes;
- bool show_hidden;
int step;
int only_pipes;
int shared_fb_x_offset;
@@ -249,7 +250,6 @@ struct {
.fbc_check_last_action = true,
.no_edp = false,
.small_modes = false,
- .show_hidden= false,
.step = 0,
.only_pipes = PIPE_COUNT,
.shared_fb_x_offset = 500,
@@ -2933,9 +2933,6 @@ static int opt_handler(int option, int option_index, void *data)
case 'm':
opt.small_modes = true;
break;
- case 'i':
- opt.show_hidden = true;
- break;
case 't':
opt.step++;
break;
@@ -2971,7 +2968,6 @@ const char *help_str =
" --no-fbc-action-check Don't check for the FBC last action\n"
" --no-edp Don't use eDP monitors\n"
" --use-small-modes Use smaller resolutions for the modes\n"
-" --show-hidden Show hidden subtests\n"
" --step Stop on each step so you can check the screen\n"
" --shared-fb-x offset Use 'offset' as the X offset for the shared FB\n"
" --shared-fb-y offset Use 'offset' as the Y offset for the shared FB\n"
@@ -3068,18 +3064,19 @@ static const char *format_str(enum pixel_format format)
for (t.plane = 0; t.plane < PLANE_COUNT; t.plane++) { \
for (t.fbs = 0; t.fbs < FBS_COUNT; t.fbs++) { \
for (t.method = 0; t.method < IGT_DRAW_METHOD_COUNT; t.method++) { \
+ t.subtest_flags = SUBTEST_TYPE_NORMAL; \
if (t.pipes == PIPE_SINGLE && t.screen == SCREEN_SCND) \
continue; \
if (t.screen == SCREEN_OFFSCREEN && t.plane != PLANE_PRI) \
continue; \
- if (!opt.show_hidden && t.pipes == PIPE_DUAL && \
+ if (t.pipes == PIPE_DUAL && \
t.screen == SCREEN_OFFSCREEN) \
- continue; \
- if (!opt.show_hidden && t.feature == FEATURE_NONE) \
- continue; \
- if (!opt.show_hidden && t.fbs == FBS_SHARED && \
+ t.subtest_flags = SUBTEST_TYPE_SLOW; \
+ if (t.feature == FEATURE_NONE) \
+ t.subtest_flags = SUBTEST_TYPE_SLOW; \
+ if (t.fbs == FBS_SHARED && \
(t.plane == PLANE_CUR || t.plane == PLANE_SPR)) \
- continue;
+ t.subtest_flags = SUBTEST_TYPE_SLOW;
#define TEST_MODE_ITER_END } } } } } }
@@ -3094,7 +3091,6 @@ int main(int argc, char *argv[])
{ "no-fbc-action-check", 0, 0, 'a'},
{ "no-edp", 0, 0, 'e'},
{ "use-small-modes", 0, 0, 'm'},
- { "show-hidden", 0, 0, 'i'},
{ "step", 0, 0, 't'},
{ "shared-fb-x", 1, 0, 'x'},
{ "shared-fb-y", 1, 0, 'y'},
@@ -3110,8 +3106,9 @@ int main(int argc, char *argv[])
setup_environment();
for (t.feature = 0; t.feature < FEATURE_COUNT; t.feature++) {
- if (!opt.show_hidden && t.feature == FEATURE_NONE)
- continue;
+ t.subtest_flags = SUBTEST_TYPE_NORMAL;
+ if (t.feature == FEATURE_NONE)
+ t.subtest_flags = SUBTEST_TYPE_SLOW;
for (t.pipes = 0; t.pipes < PIPE_COUNT; t.pipes++) {
t.screen = SCREEN_PRIM;
t.plane = PLANE_PRI;
@@ -3120,52 +3117,58 @@ int main(int argc, char *argv[])
/* Make sure nothing is using this value. */
t.method = -1;
- igt_subtest_f("%s-%s-rte",
- feature_str(t.feature),
- pipes_str(t.pipes))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-%s-rte",
+ feature_str(t.feature),
+ pipes_str(t.pipes))
rte_subtest(&t);
}
}
TEST_MODE_ITER_BEGIN(t)
- igt_subtest_f("%s-%s-%s-%s-%s-draw-%s",
- feature_str(t.feature),
- pipes_str(t.pipes),
- screen_str(t.screen),
- plane_str(t.plane),
- fbs_str(t.fbs),
- igt_draw_get_method_name(t.method))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-%s-%s-%s-%s-draw-%s",
+ feature_str(t.feature),
+ pipes_str(t.pipes),
+ screen_str(t.screen),
+ plane_str(t.plane),
+ fbs_str(t.fbs),
+ igt_draw_get_method_name(t.method))
draw_subtest(&t);
TEST_MODE_ITER_END
TEST_MODE_ITER_BEGIN(t)
if (t.plane != PLANE_PRI ||
- t.screen == SCREEN_OFFSCREEN ||
- (!opt.show_hidden && t.method != IGT_DRAW_BLT))
+ t.screen == SCREEN_OFFSCREEN)
continue;
-
- igt_subtest_f("%s-%s-%s-%s-flip-%s",
- feature_str(t.feature),
- pipes_str(t.pipes),
- screen_str(t.screen),
- fbs_str(t.fbs),
- igt_draw_get_method_name(t.method))
+ if (t.method != IGT_DRAW_BLT)
+ t.subtest_flags = SUBTEST_TYPE_SLOW;
+
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-%s-%s-%s-flip-%s",
+ feature_str(t.feature),
+ pipes_str(t.pipes),
+ screen_str(t.screen),
+ fbs_str(t.fbs),
+ igt_draw_get_method_name(t.method))
flip_subtest(&t, FLIP_PAGEFLIP);
- igt_subtest_f("%s-%s-%s-%s-evflip-%s",
- feature_str(t.feature),
- pipes_str(t.pipes),
- screen_str(t.screen),
- fbs_str(t.fbs),
- igt_draw_get_method_name(t.method))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-%s-%s-%s-evflip-%s",
+ feature_str(t.feature),
+ pipes_str(t.pipes),
+ screen_str(t.screen),
+ fbs_str(t.fbs),
+ igt_draw_get_method_name(t.method))
flip_subtest(&t, FLIP_PAGEFLIP_EVENT);
- igt_subtest_f("%s-%s-%s-%s-msflip-%s",
- feature_str(t.feature),
- pipes_str(t.pipes),
- screen_str(t.screen),
- fbs_str(t.fbs),
- igt_draw_get_method_name(t.method))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-%s-%s-%s-msflip-%s",
+ feature_str(t.feature),
+ pipes_str(t.pipes),
+ screen_str(t.screen),
+ fbs_str(t.fbs),
+ igt_draw_get_method_name(t.method))
flip_subtest(&t, FLIP_MODESET);
TEST_MODE_ITER_END
@@ -3177,10 +3180,11 @@ int main(int argc, char *argv[])
(t.feature & FEATURE_FBC) == 0)
continue;
- igt_subtest_f("%s-%s-%s-fliptrack",
- feature_str(t.feature),
- pipes_str(t.pipes),
- fbs_str(t.fbs))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-%s-%s-fliptrack",
+ feature_str(t.feature),
+ pipes_str(t.pipes),
+ fbs_str(t.fbs))
fliptrack_subtest(&t, FLIP_PAGEFLIP);
TEST_MODE_ITER_END
@@ -3190,20 +3194,22 @@ int main(int argc, char *argv[])
t.plane == PLANE_PRI)
continue;
- igt_subtest_f("%s-%s-%s-%s-%s-move",
- feature_str(t.feature),
- pipes_str(t.pipes),
- screen_str(t.screen),
- plane_str(t.plane),
- fbs_str(t.fbs))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-%s-%s-%s-%s-move",
+ feature_str(t.feature),
+ pipes_str(t.pipes),
+ screen_str(t.screen),
+ plane_str(t.plane),
+ fbs_str(t.fbs))
move_subtest(&t);
- igt_subtest_f("%s-%s-%s-%s-%s-onoff",
- feature_str(t.feature),
- pipes_str(t.pipes),
- screen_str(t.screen),
- plane_str(t.plane),
- fbs_str(t.fbs))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-%s-%s-%s-%s-onoff",
+ feature_str(t.feature),
+ pipes_str(t.pipes),
+ screen_str(t.screen),
+ plane_str(t.plane),
+ fbs_str(t.fbs))
onoff_subtest(&t);
TEST_MODE_ITER_END
@@ -3213,27 +3219,30 @@ int main(int argc, char *argv[])
t.plane != PLANE_SPR)
continue;
- igt_subtest_f("%s-%s-%s-%s-%s-fullscreen",
- feature_str(t.feature),
- pipes_str(t.pipes),
- screen_str(t.screen),
- plane_str(t.plane),
- fbs_str(t.fbs))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-%s-%s-%s-%s-fullscreen",
+ feature_str(t.feature),
+ pipes_str(t.pipes),
+ screen_str(t.screen),
+ plane_str(t.plane),
+ fbs_str(t.fbs))
fullscreen_plane_subtest(&t);
TEST_MODE_ITER_END
TEST_MODE_ITER_BEGIN(t)
if (t.screen != SCREEN_PRIM ||
- t.method != IGT_DRAW_BLT ||
- (!opt.show_hidden && t.plane != PLANE_PRI) ||
- (!opt.show_hidden && t.fbs != FBS_INDIVIDUAL))
+ t.method != IGT_DRAW_BLT)
continue;
-
- igt_subtest_f("%s-%s-%s-%s-multidraw",
- feature_str(t.feature),
- pipes_str(t.pipes),
- plane_str(t.plane),
- fbs_str(t.fbs))
+ if (t.plane != PLANE_PRI ||
+ t.fbs != FBS_INDIVIDUAL)
+ t.subtest_flags = SUBTEST_TYPE_SLOW;
+
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-%s-%s-%s-multidraw",
+ feature_str(t.feature),
+ pipes_str(t.pipes),
+ plane_str(t.plane),
+ fbs_str(t.fbs))
multidraw_subtest(&t);
TEST_MODE_ITER_END
@@ -3245,7 +3254,9 @@ int main(int argc, char *argv[])
t.method != IGT_DRAW_MMAP_GTT)
continue;
- igt_subtest_f("%s-farfromfence", feature_str(t.feature))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-farfromfence",
+ feature_str(t.feature))
farfromfence_subtest(&t);
TEST_MODE_ITER_END
@@ -3261,10 +3272,11 @@ int main(int argc, char *argv[])
if (t.format == FORMAT_DEFAULT)
continue;
- igt_subtest_f("%s-%s-draw-%s",
- feature_str(t.feature),
- format_str(t.format),
- igt_draw_get_method_name(t.method))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-%s-draw-%s",
+ feature_str(t.feature),
+ format_str(t.format),
+ igt_draw_get_method_name(t.method))
format_draw_subtest(&t);
}
TEST_MODE_ITER_END
@@ -3275,9 +3287,10 @@ int main(int argc, char *argv[])
t.plane != PLANE_PRI ||
t.method != IGT_DRAW_MMAP_CPU)
continue;
- igt_subtest_f("%s-%s-scaledprimary",
- feature_str(t.feature),
- fbs_str(t.fbs))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-%s-scaledprimary",
+ feature_str(t.feature),
+ fbs_str(t.fbs))
scaledprimary_subtest(&t);
TEST_MODE_ITER_END
@@ -3289,22 +3302,32 @@ int main(int argc, char *argv[])
t.method != IGT_DRAW_MMAP_CPU)
continue;
- igt_subtest_f("%s-modesetfrombusy", feature_str(t.feature))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-modesetfrombusy",
+ feature_str(t.feature))
modesetfrombusy_subtest(&t);
if (t.feature & FEATURE_FBC) {
- igt_subtest_f("%s-badstride", feature_str(t.feature))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-badstride",
+ feature_str(t.feature))
badstride_subtest(&t);
- igt_subtest_f("%s-stridechange", feature_str(t.feature))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-stridechange",
+ feature_str(t.feature))
stridechange_subtest(&t);
}
if (t.feature & FEATURE_PSR)
- igt_subtest_f("%s-slowdraw", feature_str(t.feature))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-slowdraw",
+ feature_str(t.feature))
slow_draw_subtest(&t);
- igt_subtest_f("%s-suspend", feature_str(t.feature))
+ igt_subtest_flags_f(t.subtest_flags,
+ "%s-suspend",
+ feature_str(t.feature))
suspend_subtest(&t);
TEST_MODE_ITER_END
--
2.6.2
* [PATCH i-g-t 3/3 v3] Remove superfluous gem_concurrent_all.c
2015-10-30 13:18 ` [PATCH i-g-t 0/3 v3] Unify slow/combinatorial test handling David Weinehall
2015-10-30 13:18 ` [PATCH i-g-t 1/3 v3] Copy gem_concurrent_all to gem_concurrent_blit David Weinehall
2015-10-30 13:18 ` [PATCH i-g-t 2/3 v3] Unify handling of slow/combinatorial tests David Weinehall
@ 2015-10-30 13:18 ` David Weinehall
2 siblings, 0 replies; 41+ messages in thread
From: David Weinehall @ 2015-10-30 13:18 UTC (permalink / raw)
To: intel-gfx
When gem_concurrent_blit was converted to use the new common framework
for choosing whether or not to include slow/combinatorial tests,
gem_concurrent_all became superfluous. This patch removes it.
Signed-off-by: David Weinehall <david.weinehall@linux.intel.com>
---
tests/.gitignore | 1 -
tests/Makefile.sources | 1 -
tests/gem_concurrent_all.c | 1108 --------------------------------------------
3 files changed, 1110 deletions(-)
delete mode 100644 tests/gem_concurrent_all.c
diff --git a/tests/.gitignore b/tests/.gitignore
index beda5117da5c..da4f9961fc60 100644
--- a/tests/.gitignore
+++ b/tests/.gitignore
@@ -23,7 +23,6 @@ gem_bad_reloc
gem_basic
gem_caching
gem_close_race
-gem_concurrent_all
gem_concurrent_blit
gem_cpu_reloc
gem_cs_prefetch
diff --git a/tests/Makefile.sources b/tests/Makefile.sources
index ac731f90dcb2..321c7f33e4d3 100644
--- a/tests/Makefile.sources
+++ b/tests/Makefile.sources
@@ -14,7 +14,6 @@ TESTS_progs_M = \
gem_caching \
gem_close_race \
gem_concurrent_blit \
- gem_concurrent_all \
gem_cs_tlb \
gem_ctx_param_basic \
gem_ctx_bad_exec \
diff --git a/tests/gem_concurrent_all.c b/tests/gem_concurrent_all.c
deleted file mode 100644
index 1d2d787202df..000000000000
--- a/tests/gem_concurrent_all.c
+++ /dev/null
@@ -1,1108 +0,0 @@
-/*
- * Copyright © 2009,2012,2013 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- * Eric Anholt <eric@anholt.net>
- * Chris Wilson <chris@chris-wilson.co.uk>
- * Daniel Vetter <daniel.vetter@ffwll.ch>
- *
- */
-
-/** @file gem_concurrent.c
- *
- * This is a test of pread/pwrite/mmap behavior when writing to active
- * buffers.
- *
- * Based on gem_gtt_concurrent_blt.
- */
-
-#include "igt.h"
-#include <stdlib.h>
-#include <stdio.h>
-#include <string.h>
-#include <fcntl.h>
-#include <inttypes.h>
-#include <errno.h>
-#include <sys/stat.h>
-#include <sys/time.h>
-#include <sys/wait.h>
-
-#include <drm.h>
-
-#include "intel_bufmgr.h"
-
-IGT_TEST_DESCRIPTION("Test of pread/pwrite/mmap behavior when writing to active"
- " buffers.");
-
-int fd, devid, gen;
-struct intel_batchbuffer *batch;
-int all;
-
-static void
-nop_release_bo(drm_intel_bo *bo)
-{
- drm_intel_bo_unreference(bo);
-}
-
-static void
-prw_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
-{
- int size = width * height, i;
- uint32_t *tmp;
-
- tmp = malloc(4*size);
- if (tmp) {
- for (i = 0; i < size; i++)
- tmp[i] = val;
- drm_intel_bo_subdata(bo, 0, 4*size, tmp);
- free(tmp);
- } else {
- for (i = 0; i < size; i++)
- drm_intel_bo_subdata(bo, 4*i, 4, &val);
- }
-}
-
-static void
-prw_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
-{
- int size = width * height, i;
- uint32_t *vaddr;
-
- do_or_die(drm_intel_bo_map(tmp, true));
- do_or_die(drm_intel_bo_get_subdata(bo, 0, 4*size, tmp->virtual));
- vaddr = tmp->virtual;
- for (i = 0; i < size; i++)
- igt_assert_eq_u32(vaddr[i], val);
- drm_intel_bo_unmap(tmp);
-}
-
-static drm_intel_bo *
-unmapped_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- drm_intel_bo *bo;
-
- bo = drm_intel_bo_alloc(bufmgr, "bo", 4*width*height, 0);
- igt_assert(bo);
-
- return bo;
-}
-
-static drm_intel_bo *
-snoop_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- drm_intel_bo *bo;
-
- igt_skip_on(gem_has_llc(fd));
-
- bo = unmapped_create_bo(bufmgr, width, height);
- gem_set_caching(fd, bo->handle, I915_CACHING_CACHED);
- drm_intel_bo_disable_reuse(bo);
-
- return bo;
-}
-
-static void
-gtt_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
-{
- uint32_t *vaddr = bo->virtual;
- int size = width * height;
-
- drm_intel_gem_bo_start_gtt_access(bo, true);
- while (size--)
- *vaddr++ = val;
-}
-
-static void
-gtt_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
-{
- uint32_t *vaddr = bo->virtual;
- int y;
-
- /* GTT access is slow. So we just compare a few points */
- drm_intel_gem_bo_start_gtt_access(bo, false);
- for (y = 0; y < height; y++)
- igt_assert_eq_u32(vaddr[y*width+y], val);
-}
-
-static drm_intel_bo *
-map_bo(drm_intel_bo *bo)
-{
- /* gtt map doesn't have a write parameter, so just keep the mapping
- * around (to avoid the set_domain with the gtt write domain set) and
- * manually tell the kernel when we start access the gtt. */
- do_or_die(drm_intel_gem_bo_map_gtt(bo));
-
- return bo;
-}
-
-static drm_intel_bo *
-tile_bo(drm_intel_bo *bo, int width)
-{
- uint32_t tiling = I915_TILING_X;
- uint32_t stride = width * 4;
-
- do_or_die(drm_intel_bo_set_tiling(bo, &tiling, stride));
-
- return bo;
-}
-
-static drm_intel_bo *
-gtt_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- return map_bo(unmapped_create_bo(bufmgr, width, height));
-}
-
-static drm_intel_bo *
-gttX_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- return tile_bo(gtt_create_bo(bufmgr, width, height), width);
-}
-
-static drm_intel_bo *
-wc_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- drm_intel_bo *bo;
-
- gem_require_mmap_wc(fd);
-
- bo = unmapped_create_bo(bufmgr, width, height);
- bo->virtual = __gem_mmap__wc(fd, bo->handle, 0, bo->size, PROT_READ | PROT_WRITE);
- return bo;
-}
-
-static void
-wc_release_bo(drm_intel_bo *bo)
-{
- munmap(bo->virtual, bo->size);
- bo->virtual = NULL;
-
- nop_release_bo(bo);
-}
-
-static drm_intel_bo *
-gpu_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- return unmapped_create_bo(bufmgr, width, height);
-}
-
-
-static drm_intel_bo *
-gpuX_create_bo(drm_intel_bufmgr *bufmgr, int width, int height)
-{
- return tile_bo(gpu_create_bo(bufmgr, width, height), width);
-}
-
-static void
-cpu_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
-{
- int size = width * height;
- uint32_t *vaddr;
-
- do_or_die(drm_intel_bo_map(bo, true));
- vaddr = bo->virtual;
- while (size--)
- *vaddr++ = val;
- drm_intel_bo_unmap(bo);
-}
-
-static void
-cpu_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
-{
- int size = width * height;
- uint32_t *vaddr;
-
- do_or_die(drm_intel_bo_map(bo, false));
- vaddr = bo->virtual;
- while (size--)
- igt_assert_eq_u32(*vaddr++, val);
- drm_intel_bo_unmap(bo);
-}
-
-static void
-gpu_set_bo(drm_intel_bo *bo, uint32_t val, int width, int height)
-{
- struct drm_i915_gem_relocation_entry reloc[1];
- struct drm_i915_gem_exec_object2 gem_exec[2];
- struct drm_i915_gem_execbuffer2 execbuf;
- struct drm_i915_gem_pwrite gem_pwrite;
- struct drm_i915_gem_create create;
- uint32_t buf[10], *b;
- uint32_t tiling, swizzle;
-
- drm_intel_bo_get_tiling(bo, &tiling, &swizzle);
-
- memset(reloc, 0, sizeof(reloc));
- memset(gem_exec, 0, sizeof(gem_exec));
- memset(&execbuf, 0, sizeof(execbuf));
-
- b = buf;
- *b++ = XY_COLOR_BLT_CMD_NOLEN |
- ((gen >= 8) ? 5 : 4) |
- COLOR_BLT_WRITE_ALPHA | XY_COLOR_BLT_WRITE_RGB;
- if (gen >= 4 && tiling) {
- b[-1] |= XY_COLOR_BLT_TILED;
- *b = width;
- } else
- *b = width << 2;
- *b++ |= 0xf0 << 16 | 1 << 25 | 1 << 24;
- *b++ = 0;
- *b++ = height << 16 | width;
- reloc[0].offset = (b - buf) * sizeof(uint32_t);
- reloc[0].target_handle = bo->handle;
- reloc[0].read_domains = I915_GEM_DOMAIN_RENDER;
- reloc[0].write_domain = I915_GEM_DOMAIN_RENDER;
- *b++ = 0;
- if (gen >= 8)
- *b++ = 0;
- *b++ = val;
- *b++ = MI_BATCH_BUFFER_END;
- if ((b - buf) & 1)
- *b++ = 0;
-
- gem_exec[0].handle = bo->handle;
- gem_exec[0].flags = EXEC_OBJECT_NEEDS_FENCE;
-
- create.handle = 0;
- create.size = 4096;
- drmIoctl(fd, DRM_IOCTL_I915_GEM_CREATE, &create);
- gem_exec[1].handle = create.handle;
- gem_exec[1].relocation_count = 1;
- gem_exec[1].relocs_ptr = (uintptr_t)reloc;
-
- execbuf.buffers_ptr = (uintptr_t)gem_exec;
- execbuf.buffer_count = 2;
- execbuf.batch_len = (b - buf) * sizeof(buf[0]);
- if (gen >= 6)
- execbuf.flags = I915_EXEC_BLT;
-
- gem_pwrite.handle = gem_exec[1].handle;
- gem_pwrite.offset = 0;
- gem_pwrite.size = execbuf.batch_len;
- gem_pwrite.data_ptr = (uintptr_t)buf;
- do_or_die(drmIoctl(fd, DRM_IOCTL_I915_GEM_PWRITE, &gem_pwrite));
- do_or_die(drmIoctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, &execbuf));
-
- drmIoctl(fd, DRM_IOCTL_GEM_CLOSE, &create.handle);
-}
-
-static void
-gpu_cmp_bo(drm_intel_bo *bo, uint32_t val, int width, int height, drm_intel_bo *tmp)
-{
- intel_blt_copy(batch,
- bo, 0, 0, 4*width,
- tmp, 0, 0, 4*width,
- width, height, 32);
- cpu_cmp_bo(tmp, val, width, height, NULL);
-}
-
-const struct access_mode {
- const char *name;
- void (*set_bo)(drm_intel_bo *bo, uint32_t val, int w, int h);
- void (*cmp_bo)(drm_intel_bo *bo, uint32_t val, int w, int h, drm_intel_bo *tmp);
- drm_intel_bo *(*create_bo)(drm_intel_bufmgr *bufmgr, int width, int height);
- void (*release_bo)(drm_intel_bo *bo);
-} access_modes[] = {
- {
- .name = "prw",
- .set_bo = prw_set_bo,
- .cmp_bo = prw_cmp_bo,
- .create_bo = unmapped_create_bo,
- .release_bo = nop_release_bo,
- },
- {
- .name = "cpu",
- .set_bo = cpu_set_bo,
- .cmp_bo = cpu_cmp_bo,
- .create_bo = unmapped_create_bo,
- .release_bo = nop_release_bo,
- },
- {
- .name = "snoop",
- .set_bo = cpu_set_bo,
- .cmp_bo = cpu_cmp_bo,
- .create_bo = snoop_create_bo,
- .release_bo = nop_release_bo,
- },
- {
- .name = "gtt",
- .set_bo = gtt_set_bo,
- .cmp_bo = gtt_cmp_bo,
- .create_bo = gtt_create_bo,
- .release_bo = nop_release_bo,
- },
- {
- .name = "gttX",
- .set_bo = gtt_set_bo,
- .cmp_bo = gtt_cmp_bo,
- .create_bo = gttX_create_bo,
- .release_bo = nop_release_bo,
- },
- {
- .name = "wc",
- .set_bo = gtt_set_bo,
- .cmp_bo = gtt_cmp_bo,
- .create_bo = wc_create_bo,
- .release_bo = wc_release_bo,
- },
- {
- .name = "gpu",
- .set_bo = gpu_set_bo,
- .cmp_bo = gpu_cmp_bo,
- .create_bo = gpu_create_bo,
- .release_bo = nop_release_bo,
- },
- {
- .name = "gpuX",
- .set_bo = gpu_set_bo,
- .cmp_bo = gpu_cmp_bo,
- .create_bo = gpuX_create_bo,
- .release_bo = nop_release_bo,
- },
-};
-
-#define MAX_NUM_BUFFERS 1024
-int num_buffers = MAX_NUM_BUFFERS;
-const int width = 512, height = 512;
-igt_render_copyfunc_t rendercopy;
-
-struct buffers {
- const struct access_mode *mode;
- drm_intel_bufmgr *bufmgr;
- drm_intel_bo *src[MAX_NUM_BUFFERS], *dst[MAX_NUM_BUFFERS];
- drm_intel_bo *dummy, *spare;
- int count;
-};
-
-static void *buffers_init(struct buffers *data,
- const struct access_mode *mode,
- int _fd)
-{
- data->mode = mode;
- data->count = 0;
-
- data->bufmgr = drm_intel_bufmgr_gem_init(_fd, 4096);
- igt_assert(data->bufmgr);
-
- drm_intel_bufmgr_gem_enable_reuse(data->bufmgr);
- return intel_batchbuffer_alloc(data->bufmgr, devid);
-}
-
-static void buffers_destroy(struct buffers *data)
-{
- if (data->count == 0)
- return;
-
- for (int i = 0; i < data->count; i++) {
- data->mode->release_bo(data->src[i]);
- data->mode->release_bo(data->dst[i]);
- }
- data->mode->release_bo(data->dummy);
- data->mode->release_bo(data->spare);
- data->count = 0;
-}
-
-static void buffers_create(struct buffers *data,
- int count)
-{
- igt_assert(data->bufmgr);
-
- buffers_destroy(data);
-
- for (int i = 0; i < count; i++) {
- data->src[i] =
- data->mode->create_bo(data->bufmgr, width, height);
- data->dst[i] =
- data->mode->create_bo(data->bufmgr, width, height);
- }
- data->dummy = data->mode->create_bo(data->bufmgr, width, height);
- data->spare = data->mode->create_bo(data->bufmgr, width, height);
- data->count = count;
-}
-
-static void buffers_fini(struct buffers *data)
-{
- if (data->bufmgr == NULL)
- return;
-
- buffers_destroy(data);
-
- intel_batchbuffer_free(batch);
- drm_intel_bufmgr_destroy(data->bufmgr);
- data->bufmgr = NULL;
-}
-
-typedef void (*do_copy)(drm_intel_bo *dst, drm_intel_bo *src);
-typedef struct igt_hang_ring (*do_hang)(void);
-
-static void render_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
-{
- struct igt_buf d = {
- .bo = dst,
- .size = width * height * 4,
- .num_tiles = width * height * 4,
- .stride = width * 4,
- }, s = {
- .bo = src,
- .size = width * height * 4,
- .num_tiles = width * height * 4,
- .stride = width * 4,
- };
- uint32_t swizzle;
-
- drm_intel_bo_get_tiling(dst, &d.tiling, &swizzle);
- drm_intel_bo_get_tiling(src, &s.tiling, &swizzle);
-
- rendercopy(batch, NULL,
- &s, 0, 0,
- width, height,
- &d, 0, 0);
-}
-
-static void blt_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
-{
- intel_blt_copy(batch,
- src, 0, 0, 4*width,
- dst, 0, 0, 4*width,
- width, height, 32);
-}
-
-static void cpu_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
-{
- const int size = width * height * sizeof(uint32_t);
- void *d, *s;
-
- gem_set_domain(fd, src->handle, I915_GEM_DOMAIN_CPU, 0);
- gem_set_domain(fd, dst->handle, I915_GEM_DOMAIN_CPU, I915_GEM_DOMAIN_CPU);
- s = gem_mmap__cpu(fd, src->handle, 0, size, PROT_READ);
- d = gem_mmap__cpu(fd, dst->handle, 0, size, PROT_WRITE);
-
- memcpy(d, s, size);
-
- munmap(d, size);
- munmap(s, size);
-}
-
-static void gtt_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
-{
- const int size = width * height * sizeof(uint32_t);
- void *d, *s;
-
- gem_set_domain(fd, src->handle, I915_GEM_DOMAIN_GTT, 0);
- gem_set_domain(fd, dst->handle, I915_GEM_DOMAIN_GTT, I915_GEM_DOMAIN_GTT);
-
- s = gem_mmap__gtt(fd, src->handle, size, PROT_READ);
- d = gem_mmap__gtt(fd, dst->handle, size, PROT_WRITE);
-
- memcpy(d, s, size);
-
- munmap(d, size);
- munmap(s, size);
-}
-
-static void wc_copy_bo(drm_intel_bo *dst, drm_intel_bo *src)
-{
- const int size = width * height * sizeof(uint32_t);
- void *d, *s;
-
- gem_set_domain(fd, src->handle, I915_GEM_DOMAIN_GTT, 0);
- gem_set_domain(fd, dst->handle, I915_GEM_DOMAIN_GTT, I915_GEM_DOMAIN_GTT);
-
- s = gem_mmap__wc(fd, src->handle, 0, size, PROT_READ);
- d = gem_mmap__wc(fd, dst->handle, 0, size, PROT_WRITE);
-
- memcpy(d, s, size);
-
- munmap(d, size);
- munmap(s, size);
-}
-
-static struct igt_hang_ring no_hang(void)
-{
- return (struct igt_hang_ring){0, 0};
-}
-
-static struct igt_hang_ring bcs_hang(void)
-{
- return igt_hang_ring(fd, I915_EXEC_BLT);
-}
-
-static struct igt_hang_ring rcs_hang(void)
-{
- return igt_hang_ring(fd, I915_EXEC_RENDER);
-}
-
-static void hang_require(void)
-{
- igt_require_hang_ring(fd, -1);
-}
-
-static void do_overwrite_source(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = 0; i < buffers->count; i++) {
- buffers->mode->set_bo(buffers->src[i], i, width, height);
- buffers->mode->set_bo(buffers->dst[i], ~i, width, height);
- }
- for (i = 0; i < buffers->count; i++)
- do_copy_func(buffers->dst[i], buffers->src[i]);
- hang = do_hang_func();
- for (i = buffers->count; i--; )
- buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
- for (i = 0; i < buffers->count; i++)
- buffers->mode->cmp_bo(buffers->dst[i], i, width, height, buffers->dummy);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_overwrite_source_read(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func,
- int do_rcs)
-{
- const int half = buffers->count/2;
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = 0; i < half; i++) {
- buffers->mode->set_bo(buffers->src[i], i, width, height);
- buffers->mode->set_bo(buffers->dst[i], ~i, width, height);
- buffers->mode->set_bo(buffers->dst[i+half], ~i, width, height);
- }
- for (i = 0; i < half; i++) {
- do_copy_func(buffers->dst[i], buffers->src[i]);
- if (do_rcs)
- render_copy_bo(buffers->dst[i+half], buffers->src[i]);
- else
- blt_copy_bo(buffers->dst[i+half], buffers->src[i]);
- }
- hang = do_hang_func();
- for (i = half; i--; )
- buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
- for (i = 0; i < half; i++) {
- buffers->mode->cmp_bo(buffers->dst[i], i, width, height, buffers->dummy);
- buffers->mode->cmp_bo(buffers->dst[i+half], i, width, height, buffers->dummy);
- }
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_overwrite_source_read_bcs(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- do_overwrite_source_read(buffers, do_copy_func, do_hang_func, 0);
-}
-
-static void do_overwrite_source_read_rcs(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- do_overwrite_source_read(buffers, do_copy_func, do_hang_func, 1);
-}
-
-static void do_overwrite_source__rev(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = 0; i < buffers->count; i++) {
- buffers->mode->set_bo(buffers->src[i], i, width, height);
- buffers->mode->set_bo(buffers->dst[i], ~i, width, height);
- }
- for (i = 0; i < buffers->count; i++)
- do_copy_func(buffers->dst[i], buffers->src[i]);
- hang = do_hang_func();
- for (i = 0; i < buffers->count; i++)
- buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
- for (i = buffers->count; i--; )
- buffers->mode->cmp_bo(buffers->dst[i], i, width, height, buffers->dummy);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_overwrite_source__one(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
-
- gem_quiescent_gpu(fd);
- buffers->mode->set_bo(buffers->src[0], 0, width, height);
- buffers->mode->set_bo(buffers->dst[0], ~0, width, height);
- do_copy_func(buffers->dst[0], buffers->src[0]);
- hang = do_hang_func();
- buffers->mode->set_bo(buffers->src[0], 0xdeadbeef, width, height);
- buffers->mode->cmp_bo(buffers->dst[0], 0, width, height, buffers->dummy);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_intermix(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func,
- int do_rcs)
-{
- const int half = buffers->count/2;
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = 0; i < buffers->count; i++) {
- buffers->mode->set_bo(buffers->src[i], 0xdeadbeef^~i, width, height);
- buffers->mode->set_bo(buffers->dst[i], i, width, height);
- }
- for (i = 0; i < half; i++) {
- if (do_rcs == 1 || (do_rcs == -1 && i & 1))
- render_copy_bo(buffers->dst[i], buffers->src[i]);
- else
- blt_copy_bo(buffers->dst[i], buffers->src[i]);
-
- do_copy_func(buffers->dst[i+half], buffers->src[i]);
-
- if (do_rcs == 1 || (do_rcs == -1 && (i & 1) == 0))
- render_copy_bo(buffers->dst[i], buffers->dst[i+half]);
- else
- blt_copy_bo(buffers->dst[i], buffers->dst[i+half]);
-
- do_copy_func(buffers->dst[i+half], buffers->src[i+half]);
- }
- hang = do_hang_func();
- for (i = 0; i < 2*half; i++)
- buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef^~i, width, height, buffers->dummy);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_intermix_rcs(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- do_intermix(buffers, do_copy_func, do_hang_func, 1);
-}
-
-static void do_intermix_bcs(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- do_intermix(buffers, do_copy_func, do_hang_func, 0);
-}
-
-static void do_intermix_both(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- do_intermix(buffers, do_copy_func, do_hang_func, -1);
-}
-
-static void do_early_read(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = buffers->count; i--; )
- buffers->mode->set_bo(buffers->src[i], 0xdeadbeef, width, height);
- for (i = 0; i < buffers->count; i++)
- do_copy_func(buffers->dst[i], buffers->src[i]);
- hang = do_hang_func();
- for (i = buffers->count; i--; )
- buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef, width, height, buffers->dummy);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_read_read_bcs(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = buffers->count; i--; )
- buffers->mode->set_bo(buffers->src[i], 0xdeadbeef ^ i, width, height);
- for (i = 0; i < buffers->count; i++) {
- do_copy_func(buffers->dst[i], buffers->src[i]);
- blt_copy_bo(buffers->spare, buffers->src[i]);
- }
- cpu_cmp_bo(buffers->spare, 0xdeadbeef^(buffers->count-1), width, height, NULL);
- hang = do_hang_func();
- for (i = buffers->count; i--; )
- buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef ^ i, width, height, buffers->dummy);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_read_read_rcs(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = buffers->count; i--; )
- buffers->mode->set_bo(buffers->src[i], 0xdeadbeef ^ i, width, height);
- for (i = 0; i < buffers->count; i++) {
- do_copy_func(buffers->dst[i], buffers->src[i]);
- render_copy_bo(buffers->spare, buffers->src[i]);
- }
- cpu_cmp_bo(buffers->spare, 0xdeadbeef^(buffers->count-1), width, height, NULL);
- hang = do_hang_func();
- for (i = buffers->count; i--; )
- buffers->mode->cmp_bo(buffers->dst[i], 0xdeadbeef ^ i, width, height, buffers->dummy);
- igt_post_hang_ring(fd, hang);
-}
-
-static void do_gpu_read_after_write(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- struct igt_hang_ring hang;
- int i;
-
- gem_quiescent_gpu(fd);
- for (i = buffers->count; i--; )
- buffers->mode->set_bo(buffers->src[i], 0xabcdabcd, width, height);
- for (i = 0; i < buffers->count; i++)
- do_copy_func(buffers->dst[i], buffers->src[i]);
- for (i = buffers->count; i--; )
- do_copy_func(buffers->dummy, buffers->dst[i]);
- hang = do_hang_func();
- for (i = buffers->count; i--; )
- buffers->mode->cmp_bo(buffers->dst[i], 0xabcdabcd, width, height, buffers->dummy);
- igt_post_hang_ring(fd, hang);
-}
-
-typedef void (*do_test)(struct buffers *buffers,
- do_copy do_copy_func,
- do_hang do_hang_func);
-
-typedef void (*run_wrap)(struct buffers *buffers,
- do_test do_test_func,
- do_copy do_copy_func,
- do_hang do_hang_func);
-
-static void run_single(struct buffers *buffers,
- do_test do_test_func,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- do_test_func(buffers, do_copy_func, do_hang_func);
-}
-
-static void run_interruptible(struct buffers *buffers,
- do_test do_test_func,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- int loop;
-
- for (loop = 0; loop < 10; loop++)
- do_test_func(buffers, do_copy_func, do_hang_func);
-}
-
-static void run_forked(struct buffers *buffers,
- do_test do_test_func,
- do_copy do_copy_func,
- do_hang do_hang_func)
-{
- const int old_num_buffers = num_buffers;
-
- num_buffers /= 16;
- num_buffers += 2;
-
- igt_fork(child, 16) {
- /* recreate process local variables */
- buffers->count = 0;
- fd = drm_open_driver(DRIVER_INTEL);
-
- batch = buffers_init(buffers, buffers->mode, fd);
-
- buffers_create(buffers, num_buffers);
- for (int loop = 0; loop < 10; loop++)
- do_test_func(buffers, do_copy_func, do_hang_func);
-
- buffers_fini(buffers);
- }
-
- igt_waitchildren();
-
- num_buffers = old_num_buffers;
-}
-
-static void bit17_require(void)
-{
- struct drm_i915_gem_get_tiling2 {
- uint32_t handle;
- uint32_t tiling_mode;
- uint32_t swizzle_mode;
- uint32_t phys_swizzle_mode;
- } arg;
-#define DRM_IOCTL_I915_GEM_GET_TILING2 DRM_IOWR (DRM_COMMAND_BASE + DRM_I915_GEM_GET_TILING, struct drm_i915_gem_get_tiling2)
-
- memset(&arg, 0, sizeof(arg));
- arg.handle = gem_create(fd, 4096);
- gem_set_tiling(fd, arg.handle, I915_TILING_X, 512);
-
- do_or_die(drmIoctl(fd, DRM_IOCTL_I915_GEM_GET_TILING2, &arg));
- gem_close(fd, arg.handle);
- igt_require(arg.phys_swizzle_mode == arg.swizzle_mode);
-}
-
-static void cpu_require(void)
-{
- bit17_require();
-}
-
-static void gtt_require(void)
-{
-}
-
-static void wc_require(void)
-{
- bit17_require();
- gem_require_mmap_wc(fd);
-}
-
-static void bcs_require(void)
-{
-}
-
-static void rcs_require(void)
-{
- igt_require(rendercopy);
-}
-
-static void no_require(void)
-{
-}
-
-static void
-run_basic_modes(const struct access_mode *mode,
- const char *suffix,
- run_wrap run_wrap_func)
-{
- const struct {
- const char *prefix;
- do_copy copy;
- void (*require)(void);
- } pipelines[] = {
- { "cpu", cpu_copy_bo, cpu_require },
- { "gtt", gtt_copy_bo, gtt_require },
- { "wc", wc_copy_bo, wc_require },
- { "blt", blt_copy_bo, bcs_require },
- { "render", render_copy_bo, rcs_require },
- { NULL, NULL }
- }, *pskip = pipelines + 3, *p;
- const struct {
- const char *suffix;
- do_hang hang;
- void (*require)(void);
- } hangs[] = {
- { "", no_hang, no_require },
- { "-hang-blt", bcs_hang, hang_require },
- { "-hang-render", rcs_hang, hang_require },
- { NULL, NULL },
- }, *h;
- struct buffers buffers;
-
- for (h = hangs; h->suffix; h++) {
- if (!all && *h->suffix)
- continue;
-
- for (p = all ? pipelines : pskip; p->prefix; p++) {
- igt_fixture {
- batch = buffers_init(&buffers, mode, fd);
- }
-
- /* try to overwrite the source values */
- igt_subtest_f("%s-%s-overwrite-source-one%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_overwrite_source__one,
- p->copy, h->hang);
- }
-
- igt_subtest_f("%s-%s-overwrite-source%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_overwrite_source,
- p->copy, h->hang);
- }
-
- igt_subtest_f("%s-%s-overwrite-source-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_overwrite_source_read_bcs,
- p->copy, h->hang);
- }
-
- igt_subtest_f("%s-%s-overwrite-source-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- igt_require(rendercopy);
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_overwrite_source_read_rcs,
- p->copy, h->hang);
- }
-
- igt_subtest_f("%s-%s-overwrite-source-rev%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_overwrite_source__rev,
- p->copy, h->hang);
- }
-
- /* try to intermix copies with GPU copies*/
- igt_subtest_f("%s-%s-intermix-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- igt_require(rendercopy);
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_intermix_rcs,
- p->copy, h->hang);
- }
- igt_subtest_f("%s-%s-intermix-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- igt_require(rendercopy);
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_intermix_bcs,
- p->copy, h->hang);
- }
- igt_subtest_f("%s-%s-intermix-both%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- igt_require(rendercopy);
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_intermix_both,
- p->copy, h->hang);
- }
-
- /* try to read the results before the copy completes */
- igt_subtest_f("%s-%s-early-read%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_early_read,
- p->copy, h->hang);
- }
-
- /* concurrent reads */
- igt_subtest_f("%s-%s-read-read-bcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_read_read_bcs,
- p->copy, h->hang);
- }
- igt_subtest_f("%s-%s-read-read-rcs%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- igt_require(rendercopy);
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_read_read_rcs,
- p->copy, h->hang);
- }
-
- /* and finally try to trick the kernel into loosing the pending write */
- igt_subtest_f("%s-%s-gpu-read-after-write%s%s", mode->name, p->prefix, suffix, h->suffix) {
- h->require();
- p->require();
- buffers_create(&buffers, num_buffers);
- run_wrap_func(&buffers,
- do_gpu_read_after_write,
- p->copy, h->hang);
- }
-
- igt_fixture {
- buffers_fini(&buffers);
- }
- }
- }
-}
-
-static void
-run_modes(const struct access_mode *mode)
-{
- if (all) {
- run_basic_modes(mode, "", run_single);
-
- igt_fork_signal_helper();
- run_basic_modes(mode, "-interruptible", run_interruptible);
- igt_stop_signal_helper();
- }
-
- igt_fork_signal_helper();
- run_basic_modes(mode, "-forked", run_forked);
- igt_stop_signal_helper();
-}
-
-igt_main
-{
- int max, i;
-
- igt_skip_on_simulation();
-
- if (strstr(igt_test_name(), "all"))
- all = true;
-
- igt_fixture {
- fd = drm_open_driver(DRIVER_INTEL);
- devid = intel_get_drm_devid(fd);
- gen = intel_gen(devid);
- rendercopy = igt_get_render_copyfunc(devid);
-
- max = gem_aperture_size (fd) / (1024 * 1024) / 2;
- if (num_buffers > max)
- num_buffers = max;
-
- max = intel_get_total_ram_mb() * 3 / 4;
- if (num_buffers > max)
- num_buffers = max;
- num_buffers /= 2;
- igt_info("using 2x%d buffers, each 1MiB\n", num_buffers);
- }
-
- for (i = 0; i < ARRAY_SIZE(access_modes); i++)
- run_modes(&access_modes[i]);
-}
--
2.6.2
* Re: [PATCH i-g-t 2/3 v3] Unify handling of slow/combinatorial tests
2015-10-30 13:18 ` [PATCH i-g-t 2/3 v3] Unify handling of slow/combinatorial tests David Weinehall
@ 2015-10-30 13:52 ` Chris Wilson
2015-11-12 11:00 ` David Weinehall
0 siblings, 1 reply; 41+ messages in thread
From: Chris Wilson @ 2015-10-30 13:52 UTC (permalink / raw)
To: David Weinehall; +Cc: intel-gfx
On Fri, Oct 30, 2015 at 03:18:30PM +0200, David Weinehall wrote:
> @@ -931,16 +930,20 @@ run_basic_modes(const struct access_mode *mode,
> struct buffers buffers;
>
> for (h = hangs; h->suffix; h++) {
> - if (!all && *h->suffix)
> - continue;
> + unsigned int subtest_flags;
>
> - for (p = all ? pipelines : pskip; p->prefix; p++) {
> + if (*h->suffix)
> + subtest_flags = SUBTEST_TYPE_SLOW;
They aren't all slow though. The hang tests are (because it takes a long
time for a hang to occur and we need to race many times for reasonable
coverage). Many of the tests here were being skipped because QA couldn't
handle the full set.
Now that you have flags, adding h->flags would be better than a heuristic
based on h->suffix. Similarly we can use p->flags; then we get
igt_subtest_flags_f(h->flags | p->flags, "foo"); And if we have more
faith in QA being able to do the right thing, we really only need to
mark the known slow cases (mainly the hang injection ones) as being
truly slow.
-Chris
--
Chris Wilson, Intel Open Source Technology Centre
* Re: [PATCH i-g-t 2/3 v3] Unify handling of slow/combinatorial tests
2015-10-30 13:52 ` Chris Wilson
@ 2015-11-12 11:00 ` David Weinehall
0 siblings, 0 replies; 41+ messages in thread
From: David Weinehall @ 2015-11-12 11:00 UTC (permalink / raw)
To: Chris Wilson, intel-gfx
On Fri, Oct 30, 2015 at 01:52:48PM +0000, Chris Wilson wrote:
> On Fri, Oct 30, 2015 at 03:18:30PM +0200, David Weinehall wrote:
> > @@ -931,16 +930,20 @@ run_basic_modes(const struct access_mode *mode,
> > struct buffers buffers;
> >
> > for (h = hangs; h->suffix; h++) {
> > - if (!all && *h->suffix)
> > - continue;
> > + unsigned int subtest_flags;
> >
> > - for (p = all ? pipelines : pskip; p->prefix; p++) {
> > + if (*h->suffix)
> > + subtest_flags = SUBTEST_TYPE_SLOW;
>
> They aren't all slow though. The hang tests are (because it takes a long
> time for a hang to occur and we need to race many times for reasonable
> coverage). Many of the tests here were being skipped because QA couldn't
> handle the full set.
Of course. But unlike the creators of these tests, I have very
little knowledge about why they weren't included in the
standard test set. Sorting tests into slow/cornercase/whatever is
something that should be done by people who actually know which
categories the tests best belong to.
Kind regards, David
* Re: [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests
2015-10-27 6:47 ` David Weinehall
@ 2015-11-17 15:33 ` Daniel Vetter
0 siblings, 0 replies; 41+ messages in thread
From: Daniel Vetter @ 2015-11-17 15:33 UTC (permalink / raw)
To: Paulo Zanoni, Intel Graphics Development, Daniel Vetter
On Tue, Oct 27, 2015 at 08:47:28AM +0200, David Weinehall wrote:
> On Mon, Oct 26, 2015 at 03:59:24PM -0200, Paulo Zanoni wrote:
> > 2015-10-26 15:30 GMT-02:00 David Weinehall <david.weinehall@linux.intel.com>:
> > > On Mon, Oct 26, 2015 at 02:44:18PM -0200, Paulo Zanoni wrote:
> > >> 2015-10-26 12:59 GMT-02:00 David Weinehall <david.weinehall@linux.intel.com>:
> > >> > On Fri, Oct 23, 2015 at 11:50:46AM -0200, Paulo Zanoni wrote:
> > >> >
> > >> > [snip]
> > >> >
> > >> >> It's not clear to me, please clarify: now the tests that were
> > >> >> previously completely hidden will be listed in --list-subtests and
> > >> >> will be shown as skipped during normal runs?
> > >> >
> > >> > Yes. Daniel and I discussed this and he thought listing all test
> > >> > cases, even the slow ones, would not be an issue, since QA should
> > >> > be running the default set not the full list
> > >> > (and for that matter, shouldn't QA know what they are doing too? :P).
> > >>
> > >> If that's the case, I really think your patch should not touch
> > >> kms_frontbuffer_tracking.c. The hidden subtests should not appear on
> > >> the list. People shouldn't even have to ask themselves why they are
> > >> getting 800 skips from a single testcase. Those are only for debugging
> > >> purposes.
> > >
> > > Fair enough. I'll try to come up with a reasonable way to exclude them
> > > from the list in a generic manner. Because that's the whole point of
> > > this exercise -- to standardise this rather than have every test case
> > > implement its own method of choosing whether or not to run all tests.
> >
> > Maybe instead of marking these tests as SKIP we could use some other
> > flag. That would avoid the confusion between "skipped because some
> > condition was not met but the test is useful" vs "skipped because
> > the test is unnecessary".
>
> I'd prefer a method that wouldn't require patching piglit.
The entire "why was this skipped" question is currently unsolved, since
there's also "skipped because old kernel" and "skipped because wrong
platform" and "skipped because not enough ram" and "skipped because wrong
outputs connected" and "skipped because ...". In short, it's a much bigger
problem imo.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
* Re: [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests
2015-10-26 17:59 ` Paulo Zanoni
2015-10-27 6:47 ` David Weinehall
@ 2015-11-17 15:34 ` Daniel Vetter
2015-11-17 15:49 ` Paulo Zanoni
1 sibling, 1 reply; 41+ messages in thread
From: Daniel Vetter @ 2015-11-17 15:34 UTC (permalink / raw)
To: Paulo Zanoni; +Cc: Daniel Vetter, Intel Graphics Development
On Mon, Oct 26, 2015 at 03:59:24PM -0200, Paulo Zanoni wrote:
> 2015-10-26 15:30 GMT-02:00 David Weinehall <david.weinehall@linux.intel.com>:
> > On Mon, Oct 26, 2015 at 02:44:18PM -0200, Paulo Zanoni wrote:
> >> 2015-10-26 12:59 GMT-02:00 David Weinehall <david.weinehall@linux.intel.com>:
> >> > On Fri, Oct 23, 2015 at 11:50:46AM -0200, Paulo Zanoni wrote:
> >> >
> >> > [snip]
> >> >
> >> >> It's not clear to me, please clarify: now the tests that were
> >> >> previously completely hidden will be listed in --list-subtests and
> >> >> will be shown as skipped during normal runs?
> >> >
> >> > Yes. Daniel and I discussed this and he thought listing all test
> >> > cases, even the slow ones, would not be an issue, since QA should
> >> > be running the default set not the full list
> >> > (and for that matter, shouldn't QA know what they are doing too? :P).
> >>
> >> If that's the case, I really think your patch should not touch
> >> kms_frontbuffer_tracking.c. The hidden subtests should not appear on
> >> the list. People shouldn't even have to ask themselves why they are
> >> getting 800 skips from a single testcase. Those are only for debugging
> >> purposes.
> >
> > > Fair enough. I'll try to come up with a reasonable way to exclude them
> > from the list in a generic manner. Because that's the whole point of
> > this exercise -- to standardise this rather than have every test case
> > implement its own method of choosing whether or not to run all tests.
>
> Maybe instead of marking these tests as SKIP we could use some other
> flag. That would avoid the confusion between "skipped because some
> > condition was not met but the test is useful" vs "skipped because
> the test is unnecessary".
>
> >
> >> >
> >> >> For kms_frontbuffer_tracking, hidden tests are supposed to be just for
> >> >> developers who know what they are doing. I hide them behind a special
> >> >> command-line switch that's not used by QA because I don't want QA
> >> >> wasting time running those tests. One third of the
> >> >> kms_frontbuffer_tracking hidden tests only serve the purpose of
> >> >> checking whether there's a bug in kms_frontbuffer_tracking itself or not.
> >> >> For some other hidden tests, they are there just to help better debug
> >> >> in case some other non-hidden tests fail. Some other hidden tests are
> >> >> 100% useless and superfluous.
> >> >
> >> > Shouldn't 100% useless and superfluous tests be excised completely?
> >>
> >> The change would be from "if (case && hidden) continue;" to "if (case)
> >> continue;". But that's not the focus. There are still tests that are
> >> useful for debugging but useless for QA.
> >
> > It's not the focus of my change, no. But if there are tests that are
> > useless and/or superfluous, then they should be dropped.
> > Note that
> > I'm not suggesting that all non-default tests be dropped, just that
> > if there indeed are tests that don't make sense, they shouldn't be
> > in the test case in the first place.
> >
> >> >
> >> >> QA should only run the non-hidden tests.
> >> >
> >> > Which is the default behaviour, AFAICT.
> >>
> >> Then why do you want to expose those tests that you're not even
> >> planning to run??
> >
> > To allow developers to see the options they have?
> >
> >> You're kinda implying that QA - or someone else -
> >> will run those tests at some point, and I say that, for
> >> kms_frontbuffer_tracking, that's a waste of time. Maybe this is the
> >> case for the other tests you're touching, but not here.
> >
> > No, I'm not implying that -- you're putting those words in my mouth.
> >
> > Anyway, the choice to expose all cases, not just those run without
> > specifying --all, was a suggestion by Daniel -- you'll have to prod him
> > to hear what his reasoning was.
>
> CC'ing Daniel.
I thought the hidden tests in kms_frontbuffer_tracking would be useful,
just really slow, but it seems I'm mistaken. In general we have a bunch of
stress tests which we want to run, but at a lower priority. The idea is to
eventually use this knob to resurface them, but right now, with only the BAT
igt set running in our CI, we can't do that, and we still need to do a lot of other
things first.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
* Re: [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests
2015-11-17 15:34 ` Daniel Vetter
@ 2015-11-17 15:49 ` Paulo Zanoni
2015-11-18 10:19 ` David Weinehall
0 siblings, 1 reply; 41+ messages in thread
From: Paulo Zanoni @ 2015-11-17 15:49 UTC (permalink / raw)
To: Daniel Vetter; +Cc: Daniel Vetter, Intel Graphics Development
2015-11-17 13:34 GMT-02:00 Daniel Vetter <daniel@ffwll.ch>:
> On Mon, Oct 26, 2015 at 03:59:24PM -0200, Paulo Zanoni wrote:
>> 2015-10-26 15:30 GMT-02:00 David Weinehall <david.weinehall@linux.intel.com>:
>> > On Mon, Oct 26, 2015 at 02:44:18PM -0200, Paulo Zanoni wrote:
>> >> 2015-10-26 12:59 GMT-02:00 David Weinehall <david.weinehall@linux.intel.com>:
>> >> > On Fri, Oct 23, 2015 at 11:50:46AM -0200, Paulo Zanoni wrote:
>> >> >
>> >> > [snip]
>> >> >
>> >> >> It's not clear to me, please clarify: now the tests that were
>> >> >> previously completely hidden will be listed in --list-subtests and
>> >> >> will be shown as skipped during normal runs?
>> >> >
>> >> > Yes. Daniel and I discussed this and he thought listing all test
>> >> > cases, even the slow ones, would not be an issue, since QA should
>> >> > be running the default set not the full list
>> >> > (and for that matter, shouldn't QA know what they are doing too? :P).
>> >>
>> >> If that's the case, I really think your patch should not touch
>> >> kms_frontbuffer_tracking.c. The hidden subtests should not appear on
>> >> the list. People shouldn't even have to ask themselves why they are
>> >> getting 800 skips from a single testcase. Those are only for debugging
>> >> purposes.
>> >
>> > Fair enough. I'll try to come up with a reasonable way to exclude them
>> > from the list in a generic manner. Because that's the whole point of
>> > this exercise -- to standardise this rather than have every test case
>> > implement its own method of choosing whether or not to run all tests.
>>
>> Maybe instead of marking these tests as SKIP we could use some other
>> flag. That would avoid the confusion between "skipped because some
>> condition was not met but the test is useful" vs "skipped because
>> the test is unnecessary".
>>
>> >
>> >> >
>> >> >> For kms_frontbuffer_tracking, hidden tests are supposed to be just for
>> >> >> developers who know what they are doing. I hide them behind a special
>> >> >> command-line switch that's not used by QA because I don't want QA
>> >> >> wasting time running those tests. One third of the
>> >> >> kms_frontbuffer_tracking hidden tests only serve the purpose of
>> >> >> checking whether there's a bug in kms_frontbuffer_tracking itself or not.
>> >> >> For some other hidden tests, they are there just to help better debug
>> >> >> in case some other non-hidden tests fail. Some other hidden tests are
>> >> >> 100% useless and superfluous.
>> >> >
>> >> > Shouldn't 100% useless and superfluous tests be excised completely?
>> >>
>> >> The change would be from "if (case && hidden) continue;" to "if (case)
>> >> continue;". But that's not the focus. There are still tests that are
>> >> useful for debugging but useless for QA.
>> >
>> > It's not the focus of my change, no. But if there are tests that are
>> > useless and/or superfluous, then they should be dropped.
>> > Note that
>> > I'm not suggesting that all non-default tests be dropped, just that
>> > if there indeed are tests that don't make sense, they shouldn't be
>> > in the test case in the first place.
>> >
>> >> >
>> >> >> QA should only run the non-hidden tests.
>> >> >
>> >> > Which is the default behaviour, AFAICT.
>> >>
>> >> Then why do you want to expose those tests that you're not even
>> >> planning to run??
>> >
>> > To allow developers to see the options they have?
>> >
>> >> You're kinda implying that QA - or someone else -
>> >> will run those tests at some point, and I say that, for
>> >> kms_frontbuffer_tracking, that's a waste of time. Maybe this is the
>> >> case for the other tests you're touching, but not here.
>> >
>> > No, I'm not implying that -- you're putting those words in my mouth.
>> >
>> > Anyway, the choice to expose all cases, not just those run without
>> > specifying --all, was a suggestion by Daniel -- you'll have to prod him
>> > to hear what his reasoning was.
>>
>> CC'ing Daniel.
>
> I thought the hidden tests in kms_frontbuffer_tracking would be useful,
> just really slow, but it seems I'm mistaken. In general we have a bunch of
> stress tests which we want to run, but at a lower priority.
So it doesn't sound good to put both the kms_frontbuffer_tracking tests and
the slow-but-useful ones behind the same knob. Anyway, I think the "flags"
idea can solve the problem.
> The idea is to
> eventually use this knob to resurface them, but right now with only BAT
> igt running in our CI we can't do that and still need to do a lot of other
> things first.
> -Daniel
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
--
Paulo Zanoni
* Re: [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests
2015-11-17 15:49 ` Paulo Zanoni
@ 2015-11-18 10:19 ` David Weinehall
0 siblings, 0 replies; 41+ messages in thread
From: David Weinehall @ 2015-11-18 10:19 UTC (permalink / raw)
To: Paulo Zanoni; +Cc: Daniel Vetter, Intel Graphics Development
On Tue, Nov 17, 2015 at 01:49:06PM -0200, Paulo Zanoni wrote:
> 2015-11-17 13:34 GMT-02:00 Daniel Vetter <daniel@ffwll.ch>:
[snip]
> > I thought the hidden tests in kms_frontbuffer_tracking would be useful,
> > just really slow, but seems I'm mistaken. In general we have a bunch of
> > stress tests which we want to run, but at a lower priority.
>
> So it doesn't sound good to put both the kms_frontbuffer_tracking tests and
> the slow-but-useful ones behind the same knob. Anyway, I think the "flags"
> idea can solve the problem.
Indeed it should be able to solve that problem. Obviously it
cannot solve the "skipped because the feature isn't available on this
platform", "blacklisted because this feature hasn't been implemented
yet", and what not, but that is, I think, out of scope here.
So, does anyone have any objections (philosophical, colour of bikeshed,
or technical) against the current "flags" implementation?
Kind regards, David
end of thread, other threads:[~2015-11-18 10:19 UTC | newest]
Thread overview: 41+ messages
2015-10-23 11:42 [PATCH i-g-t 0/3] Unify slow/combinatorial test handling David Weinehall
2015-10-23 11:42 ` [PATCH i-g-t 1/3] Rename gem_concurren_all over gem_concurrent_blit David Weinehall
2015-10-23 14:32 ` Thomas Wood
2015-10-26 15:03 ` David Weinehall
2015-10-23 11:42 ` [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests David Weinehall
2015-10-23 11:56 ` Chris Wilson
2015-10-23 13:50 ` Paulo Zanoni
2015-10-26 14:59 ` David Weinehall
2015-10-26 16:44 ` Paulo Zanoni
2015-10-26 17:30 ` David Weinehall
2015-10-26 17:59 ` Paulo Zanoni
2015-10-27 6:47 ` David Weinehall
2015-11-17 15:33 ` Daniel Vetter
2015-11-17 15:34 ` Daniel Vetter
2015-11-17 15:49 ` Paulo Zanoni
2015-11-18 10:19 ` David Weinehall
2015-10-23 14:55 ` Thomas Wood
2015-10-26 15:28 ` David Weinehall
2015-10-26 16:28 ` Thomas Wood
2015-10-26 17:34 ` David Weinehall
2015-10-26 18:15 ` Paulo Zanoni
2015-10-23 11:42 ` [PATCH i-g-t 3/3] Remove gem_concurrent_all, since it is now superfluous David Weinehall
2015-10-23 11:58 ` [PATCH i-g-t 0/3] Unify slow/combinatorial test handling Chris Wilson
2015-10-23 12:47 ` Daniel Vetter
2015-10-26 13:55 ` David Weinehall
2015-10-28 11:29 ` [PATCH i-g-t 0/3 v2] " David Weinehall
2015-10-28 11:29 ` [PATCH i-g-t 1/3] Copy gem_concurrent_all to gem_concurrent_blit David Weinehall
2015-10-28 11:29 ` [PATCH i-g-t 2/3] Unify handling of slow/combinatorial tests David Weinehall
2015-10-28 16:12 ` Paulo Zanoni
2015-10-30 7:56 ` David Weinehall
2015-10-30 11:55 ` Paulo Zanoni
2015-10-30 11:59 ` Chris Wilson
2015-10-28 17:14 ` Thomas Wood
2015-10-30 7:44 ` David Weinehall
2015-10-28 11:29 ` [PATCH i-g-t 3/3] Remove superfluous gem_concurrent_all.c David Weinehall
2015-10-30 13:18 ` [PATCH i-g-t 0/3 v3] Unify slow/combinatorial test handling David Weinehall
2015-10-30 13:18 ` [PATCH i-g-t 1/3 v3] Copy gem_concurrent_all to gem_concurrent_blit David Weinehall
2015-10-30 13:18 ` [PATCH i-g-t 2/3 v3] Unify handling of slow/combinatorial tests David Weinehall
2015-10-30 13:52 ` Chris Wilson
2015-11-12 11:00 ` David Weinehall
2015-10-30 13:18 ` [PATCH i-g-t 3/3 v3] Remove superfluous gem_concurrent_all.c David Weinehall