From: Caz Yokoyama <Caz.Yokoyama@intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>, intel-gfx@lists.freedesktop.org
Cc: igt-dev@lists.freedesktop.org
Subject: Re: [igt-dev] [PATCH i-g-t] i915/gem_ctx_switch: Use minimum qlen over all engines and measure switches
Date: Mon, 25 Feb 2019 10:28:34 -0800 [thread overview]
Message-ID: <f6efd7ca287d86f112d8afe1c63ca1cdf9d37d42.camel@intel.com> (raw)
In-Reply-To: <20190223013405.14667-1-chris@chris-wilson.co.uk>
Chris,
With your patch, measure_qlen() reports how many gem_execbuf() calls (the
queue length) can be executed within the timeout on the slowest engine,
correct? Run time drops to 95 sec, less than half of what it was.
-caz
On Sat, 2019-02-23 at 01:34 +0000, Chris Wilson wrote:
> Not all engines are created equal, and our weighting ends up favouring
> the many faster xCS rings at the expense of RCS. Our qlen estimation
> also failed to factor in the context switch overhead, which is a
> significant factor for nop batches. So we oversubscribe the number of
> batches submitted to RCS and end up waiting for those to complete at
> the end of our subtest timeslice.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Caz Yokoyama <caz.yokoyama@intel.com>
> ---
> tests/i915/gem_ctx_switch.c | 39 +++++++++++++++++++++++++++++--------
> 1 file changed, 31 insertions(+), 8 deletions(-)
>
> diff --git a/tests/i915/gem_ctx_switch.c b/tests/i915/gem_ctx_switch.c
> index 1208cb8d7..87e13b915 100644
> --- a/tests/i915/gem_ctx_switch.c
> +++ b/tests/i915/gem_ctx_switch.c
> @@ -26,6 +26,7 @@
>   */
>
>  #include "igt.h"
> +#include <limits.h>
>  #include <unistd.h>
>  #include <stdlib.h>
>  #include <stdint.h>
> @@ -58,29 +59,50 @@ static int measure_qlen(int fd,
>  {
>  	const struct drm_i915_gem_exec_object2 * const obj =
>  		(struct drm_i915_gem_exec_object2 *)(uintptr_t)execbuf->buffers_ptr;
> -	int qlen = 64;
> +	uint32_t ctx[64];
> +	int min = INT_MAX, max = 0;
> +
> +	for (int i = 0; i < ARRAY_SIZE(ctx); i++)
> +		ctx[i] = gem_context_create(fd);
>
>  	for (unsigned int n = 0; n < nengine; n++) {
>  		uint64_t saved = execbuf->flags;
>  		struct timespec tv = {};
> +		int q;
>
>  		execbuf->flags |= engine[n];
>
> -		igt_nsec_elapsed(&tv);
> -		for (int loop = 0; loop < qlen; loop++)
> +		for (int i = 0; i < ARRAY_SIZE(ctx); i++) {
> +			execbuf->rsvd1 = ctx[i];
>  			gem_execbuf(fd, execbuf);
> +		}
>  		gem_sync(fd, obj->handle);
>
> -		execbuf->flags = saved;
> +		igt_nsec_elapsed(&tv);
> +		for (int i = 0; i < ARRAY_SIZE(ctx); i++) {
> +			execbuf->rsvd1 = ctx[i];
> +			gem_execbuf(fd, execbuf);
> +		}
> +		gem_sync(fd, obj->handle);
>
>  		/*
>  		 * Be conservative and aim not to overshoot timeout, so scale
>  		 * down by 8 for hopefully a max of 12.5% error.
>  		 */
> -		qlen = qlen * timeout * 1e9 / igt_nsec_elapsed(&tv) / 8 + 1;
> +		q = ARRAY_SIZE(ctx) * timeout * 1e9 / igt_nsec_elapsed(&tv) / 8 + 1;
> +		if (q < min)
> +			min = q;
> +		if (q > max)
> +			max = q;
> +
> +		execbuf->flags = saved;
>  	}
>
> -	return qlen;
> +	for (int i = 0; i < ARRAY_SIZE(ctx); i++)
> +		gem_context_destroy(fd, ctx[i]);
> +
> +	igt_debug("Estimated qlen: {min:%d, max:%d}\n", min, max);
> +	return min;
>  }
>
>  static void single(int fd, uint32_t handle,
> @@ -259,9 +281,10 @@ static void all(int fd, uint32_t handle, unsigned flags, int timeout)
>  			clock_gettime(CLOCK_MONOTONIC, &now);
>  			gem_close(fd, obj[0].handle);
>
> -			igt_info("[%d:%d] %s: %'u cycles: %.3fus%s\n",
> +			igt_info("[%d:%d] %s: %'u cycles: %.3fus%s (elapsed: %.3fs)\n",
>  				 nctx, child, name[child],
>  				 count, elapsed(&start, &now)*1e6 / count,
> -				 flags & INTERRUPTIBLE ? " (interruptible)" : "");
> +				 flags & INTERRUPTIBLE ? " (interruptible)" : "",
> +				 elapsed(&start, &now));
>  		}
>  	}
>  	igt_waitchildren();
>  }
Thread overview: 13+ messages
2019-02-23 6:54 [igt-dev] [igt PATCH v1 1/1] i915/gem_ctx_switch: evenly run 4 child processes Caz Yokoyama
2019-02-23 0:25 ` [igt-dev] ✓ Fi.CI.BAT: success for series starting with [v1,1/1] " Patchwork
2019-02-23 0:41 ` [igt-dev] [igt PATCH v1 1/1] " Chris Wilson
2019-02-26 16:07 ` Caz Yokoyama
2019-02-23 1:09 ` Antonio Argenziano
2019-02-23 1:34 ` [igt-dev] [PATCH i-g-t] i915/gem_ctx_switch: Use minimum qlen over all engines and measure switches Chris Wilson
2019-02-25 18:28 ` Caz Yokoyama [this message]
2019-02-25 18:29 ` [Intel-gfx] " Chris Wilson
2019-02-26 16:14 ` [igt-dev] " Caz Yokoyama
2019-02-26 16:15 ` Chris Wilson
2019-02-23 2:00 ` [igt-dev] ✓ Fi.CI.BAT: success for series starting with [i-g-t] i915/gem_ctx_switch: Use minimum qlen over all engines and measure switches (rev2) Patchwork
2019-02-23 6:42 ` [igt-dev] ✓ Fi.CI.IGT: success for series starting with [v1,1/1] i915/gem_ctx_switch: evenly run 4 child processes Patchwork
2019-02-23 9:31 ` [igt-dev] ✓ Fi.CI.IGT: success for series starting with [i-g-t] i915/gem_ctx_switch: Use minimum qlen over all engines and measure switches (rev2) Patchwork