From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>, intel-gfx@lists.freedesktop.org
Cc: Dmitry Ermilov <dmitry.ermilov@intel.com>
Subject: Re: [PATCH 4/5] drm/i915: Disable semaphore busywaits on saturated systems
Date: Tue, 30 Apr 2019 09:55:59 +0100 [thread overview]
Message-ID: <b2e49bdb-af3d-962e-314d-9a59ff4d3e0e@linux.intel.com> (raw)
In-Reply-To: <20190429180020.27274-4-chris@chris-wilson.co.uk>
On 29/04/2019 19:00, Chris Wilson wrote:
> Asking the GPU to busywait on a memory address, perhaps not unexpectedly
> in hindsight for a shared system, leads to bus contention that affects
> CPU programs trying to concurrently access memory. This can manifest as
> a drop in transcode throughput on highly over-saturated workloads.
>
> The only clue offered by perf is that the bus-cycles (perf stat -e
> bus-cycles) jumped by 50% when enabling semaphores. This corresponds
> with extra CPU active cycles being attributed to intel_idle's mwait.
>
> This patch introduces a heuristic to try to detect when more than one
> client is submitting to the GPU, pushing it into an oversaturated state.
> As we already keep track of when the semaphores are signaled, we can
> inspect their state on submitting the busywait batch and if we planned
> to use a semaphore but were too late, conclude that the GPU is
> overloaded and do not try to use semaphores in future requests. In
> practice, this means we optimistically try to use semaphores for the
> first frame of a transcode job split over multiple engines, fail if
> there are multiple clients active, and continue not to use semaphores
> for the subsequent frames in the sequence. Periodically, we try to
> optimistically switch semaphores back on whenever the client waits to
> catch up with the transcode results.
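[Interleaved aside, for onlookers: the heuristic above reduces to a few
lines of state tracking. A minimal user-space sketch in C -- illustrative
only, not the driver code; the names mirror the patch but the types and
helpers are simplified stand-ins:]

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t engine_mask_t;

/* Simplified stand-in for struct intel_context: remembers which
 * engines' semaphores fired too late to be useful. */
struct ctx {
	engine_mask_t saturated;
};

/* On submit: if we installed a semaphore but the signaler had already
 * completed, the busywait bought us nothing -- record the engines
 * involved as saturated. */
static void note_late_semaphore(struct ctx *ce,
				engine_mask_t sema_engines,
				int signaler_already_done)
{
	if (sema_engines && signaler_already_done)
		ce->saturated |= sema_engines;
}

/* On a new cross-engine dependency: only emit a busywait semaphore if
 * we are not already busywaiting on, or saturated against, the
 * signaling engine; otherwise fall back to a plain fence wait. */
static int should_emit_semaphore(const struct ctx *ce,
				 engine_mask_t in_flight_semas,
				 engine_mask_t from_engine)
{
	return !((in_flight_semas | ce->saturated) & from_engine);
}

/* On idle (cf. intel_context_exit_engine): optimistically retry. */
static void context_idle(struct ctx *ce)
{
	ce->saturated = 0;
}
```

[So the "fail" path is sticky until the context idles, at which point
semaphores are optimistically re-enabled, matching the first-frame
behaviour described above.]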
>
[snipped long benchmark results]
> Indicating that we've recovered the regression from enabling semaphores
> on this saturated setup, with a hint towards an overall improvement.
>
> Very similar, but of smaller magnitude, results are observed on both
> Skylake (gt2) and Kabylake (gt4). This may be due to the reduced impact
> of bus-cycles: where we see a 50% hit on Broxton, it is only 10% on the
> big core, in this particular test.
>
> One observation to make here is that for a greedy client trying to
> maximise its own throughput, using semaphores is the right choice. It is
> only from the holistic, system-wide view, where the semaphores of one
> client impact another and reduce the overall throughput, that we would
> choose to disable semaphores.
Since we acknowledge the problem is the shared nature of the iGPU, my
concern is that we still cannot account for both partners here when
deciding to omit semaphore emission. In other words, we trade bus
throughput for submission latency.
Assuming a light GPU task (in the sense of not oversubscribing, but with
ping-pong inter-engine dependencies), running simultaneously with a
heavier CPU task, our latency improvement still imposes a performance
penalty on the latter.
For instance, a consumer-level single-stream transcoding session with a
CPU-heavy part of the pipeline, or a CPU-intensive game.
(Ideally we would need a bus saturation signal to feed into our logic,
not just engine saturation, which I don't think is possible.)
So I am still leaning towards being cautious and just abandoning
semaphores for now.
Regards,
Tvrtko
> The most noticeable negative impact this has is on the no-op
> microbenchmarks, which are also notable for having no CPU bus load.
> In particular, this increases the runtime and energy consumption of
> gem_exec_whisper.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Cc: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
> Cc: Dmitry Ermilov <dmitry.ermilov@intel.com>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> ---
> drivers/gpu/drm/i915/gt/intel_context.c | 2 ++
> drivers/gpu/drm/i915/gt/intel_context_types.h | 3 ++
> drivers/gpu/drm/i915/i915_request.c | 28 ++++++++++++++++++-
> 3 files changed, 32 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
> index 1f1761fc6597..5b31e1e05ddd 100644
> --- a/drivers/gpu/drm/i915/gt/intel_context.c
> +++ b/drivers/gpu/drm/i915/gt/intel_context.c
> @@ -116,6 +116,7 @@ intel_context_init(struct intel_context *ce,
> ce->engine = engine;
> ce->ops = engine->cops;
> ce->sseu = engine->sseu;
> + ce->saturated = 0;
>
> INIT_LIST_HEAD(&ce->signal_link);
> INIT_LIST_HEAD(&ce->signals);
> @@ -158,6 +159,7 @@ void intel_context_enter_engine(struct intel_context *ce)
>
> void intel_context_exit_engine(struct intel_context *ce)
> {
> + ce->saturated = 0;
> intel_engine_pm_put(ce->engine);
> }
>
> diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h
> index d5a7dbd0daee..963a312430e6 100644
> --- a/drivers/gpu/drm/i915/gt/intel_context_types.h
> +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
> @@ -13,6 +13,7 @@
> #include <linux/types.h>
>
> #include "i915_active_types.h"
> +#include "intel_engine_types.h"
> #include "intel_sseu.h"
>
> struct i915_gem_context;
> @@ -52,6 +53,8 @@ struct intel_context {
> atomic_t pin_count;
> struct mutex pin_mutex; /* guards pinning and associated on-gpuing */
>
> + intel_engine_mask_t saturated; /* submitting semaphores too late? */
> +
> /**
> * active_tracker: Active tracker for the external rq activity
> * on this intel_context object.
> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> index 8cb3ed5531e3..2d429967f403 100644
> --- a/drivers/gpu/drm/i915/i915_request.c
> +++ b/drivers/gpu/drm/i915/i915_request.c
> @@ -410,6 +410,26 @@ void __i915_request_submit(struct i915_request *request)
> if (i915_gem_context_is_banned(request->gem_context))
> i915_request_skip(request, -EIO);
>
> + /*
> + * Are we using semaphores when the gpu is already saturated?
> + *
> + * Using semaphores incurs a cost in having the GPU poll a
> + * memory location, busywaiting for it to change. The continual
> + * memory reads can have a noticeable impact on the rest of the
> + * system with the extra bus traffic, stalling the cpu as it too
> + * tries to access memory across the bus (perf stat -e bus-cycles).
> + *
> + * If we installed a semaphore on this request and we only submit
> + * the request after the signaler completed, that indicates the
> + * system is overloaded and using semaphores at this time only
> + * increases the amount of work we are doing. If so, we disable
> + * further use of semaphores until we are idle again, whence we
> + * optimistically try again.
> + */
> + if (request->sched.semaphores &&
> + i915_sw_fence_signaled(&request->semaphore))
> + request->hw_context->saturated |= request->sched.semaphores;
> +
> /* We may be recursing from the signal callback of another i915 fence */
> spin_lock_nested(&request->lock, SINGLE_DEPTH_NESTING);
>
> @@ -785,6 +805,12 @@ i915_request_await_start(struct i915_request *rq, struct i915_request *signal)
> I915_FENCE_GFP);
> }
>
> +static intel_engine_mask_t
> +already_busywaiting(struct i915_request *rq)
> +{
> + return rq->sched.semaphores | rq->hw_context->saturated;
> +}
> +
> static int
> emit_semaphore_wait(struct i915_request *to,
> struct i915_request *from,
> @@ -798,7 +824,7 @@ emit_semaphore_wait(struct i915_request *to,
> GEM_BUG_ON(INTEL_GEN(to->i915) < 8);
>
> /* Just emit the first semaphore we see as request space is limited. */
> - if (to->sched.semaphores & from->engine->mask)
> + if (already_busywaiting(to) & from->engine->mask)
> return i915_sw_fence_await_dma_fence(&to->submit,
> &from->fence, 0,
> I915_FENCE_GFP);
>
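[One more aside to make the gating concrete: with per-engine bit masks,
the check in emit_semaphore_wait() degenerates to a single OR and AND.
An illustrative sketch -- the engine ids here are invented for the
example, and the function is a user-space model of already_busywaiting()
in the patch, not the driver symbol:]

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n) (1u << (n))

/* Hypothetical engine ids, for illustration only. */
enum { RCS0, VCS0, VECS0 };

/* already_busywaiting() ORs two per-engine masks:
 *  - semaphores: engines this request already busywaits on (request
 *    space only allows one semaphore per signaling engine), and
 *  - saturated:  engines whose semaphores previously signalled before
 *    we could even submit.
 * A new semaphore is emitted only if the signaling engine's bit is
 * clear in both; otherwise we take the plain dma-fence wait. */
static uint32_t already_busywaiting(uint32_t semaphores, uint32_t saturated)
{
	return semaphores | saturated;
}
```

[Which also shows why the fallback is per-engine rather than global: a
saturated VCS0 does not stop us from busywaiting on VECS0.]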
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx