* [PATCH] drm/i915: Move CSB MMIO reads out of the execlists lock
@ 2016-03-01 13:25 Tvrtko Ursulin
2016-03-01 14:30 ` ✗ Fi.CI.BAT: warning for " Patchwork
` (2 more replies)
0 siblings, 3 replies; 9+ messages in thread
From: Tvrtko Ursulin @ 2016-03-01 13:25 UTC (permalink / raw)
To: Intel-gfx
From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
By reading the CSB (slow MMIO accesses) into a temporary local
buffer we can decrease the duration of holding the execlist
lock.
Main advantage is that during heavy batch buffer submission we
reduce the execlist lock contention, which should decrease the
latency and CPU usage between the submitting userspace process
and interrupt handling.
Downside is that we need to grab and release the forcewake twice,
but as the numbers below will show this is completely hidden
by the primary gains.
Testing with "gem_latency -n 100" (submit batch buffers with a
hundred nops each) shows more than doubling of the throughput
and more than halving of the dispatch latency, overall latency
and CPU time spent in the submitting process.
Submitting empty batches ("gem_latency -n 0") does not seem
significantly affected by this change, with throughput and CPU
time improving by half a percent and overall latency worsening
by the same amount.
Above tests were done across a hundred runs on a big core Broadwell.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/intel_lrc.c | 73 ++++++++++++++++++++--------------------
1 file changed, 36 insertions(+), 37 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 27c9ee3f7372..9000d6ac8d65 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -416,15 +416,23 @@ static void execlists_update_context(struct drm_i915_gem_request *rq)
static void execlists_submit_requests(struct drm_i915_gem_request *rq0,
struct drm_i915_gem_request *rq1)
{
+ struct drm_i915_private *dev_priv = rq0->ring->dev->dev_private;
+
execlists_update_context(rq0);
if (rq1)
execlists_update_context(rq1);
+ spin_lock(&dev_priv->uncore.lock);
+ intel_uncore_forcewake_get__locked(dev_priv, FORCEWAKE_ALL);
+
execlists_elsp_write(rq0, rq1);
+
+ intel_uncore_forcewake_put__locked(dev_priv, FORCEWAKE_ALL);
+ spin_unlock(&dev_priv->uncore.lock);
}
-static void execlists_context_unqueue__locked(struct intel_engine_cs *ring)
+static void execlists_context_unqueue(struct intel_engine_cs *ring)
{
struct drm_i915_gem_request *req0 = NULL, *req1 = NULL;
struct drm_i915_gem_request *cursor, *tmp;
@@ -478,19 +486,6 @@ static void execlists_context_unqueue__locked(struct intel_engine_cs *ring)
execlists_submit_requests(req0, req1);
}
-static void execlists_context_unqueue(struct intel_engine_cs *ring)
-{
- struct drm_i915_private *dev_priv = ring->dev->dev_private;
-
- spin_lock(&dev_priv->uncore.lock);
- intel_uncore_forcewake_get__locked(dev_priv, FORCEWAKE_ALL);
-
- execlists_context_unqueue__locked(ring);
-
- intel_uncore_forcewake_put__locked(dev_priv, FORCEWAKE_ALL);
- spin_unlock(&dev_priv->uncore.lock);
-}
-
static unsigned int
execlists_check_remove_request(struct intel_engine_cs *ring, u32 request_id)
{
@@ -551,12 +546,10 @@ void intel_lrc_irq_handler(struct intel_engine_cs *ring)
struct drm_i915_private *dev_priv = ring->dev->dev_private;
u32 status_pointer;
unsigned int read_pointer, write_pointer;
- u32 status = 0;
- u32 status_id;
+ u32 csb[GEN8_CSB_ENTRIES][2];
+ unsigned int csb_read = 0, i;
unsigned int submit_contexts = 0;
- spin_lock(&ring->execlist_lock);
-
spin_lock(&dev_priv->uncore.lock);
intel_uncore_forcewake_get__locked(dev_priv, FORCEWAKE_ALL);
@@ -568,39 +561,45 @@ void intel_lrc_irq_handler(struct intel_engine_cs *ring)
write_pointer += GEN8_CSB_ENTRIES;
while (read_pointer < write_pointer) {
- status = get_context_status(ring, ++read_pointer, &status_id);
+ csb[csb_read][0] = get_context_status(ring, ++read_pointer,
+ &csb[csb_read][1]);
+ csb_read++;
+ }
- if (unlikely(status & GEN8_CTX_STATUS_PREEMPTED)) {
- if (status & GEN8_CTX_STATUS_LITE_RESTORE) {
- if (execlists_check_remove_request(ring, status_id))
+ ring->next_context_status_buffer = write_pointer % GEN8_CSB_ENTRIES;
+
+ /* Update the read pointer to the old write pointer. Manual ringbuffer
+ * management ftw </sarcasm> */
+ I915_WRITE_FW(RING_CONTEXT_STATUS_PTR(ring),
+ _MASKED_FIELD(GEN8_CSB_READ_PTR_MASK,
+ ring->next_context_status_buffer << 8));
+
+ intel_uncore_forcewake_put__locked(dev_priv, FORCEWAKE_ALL);
+ spin_unlock(&dev_priv->uncore.lock);
+
+ spin_lock(&ring->execlist_lock);
+
+ for (i = 0; i < csb_read; i++) {
+ if (unlikely(csb[i][0] & GEN8_CTX_STATUS_PREEMPTED)) {
+ if (csb[i][0] & GEN8_CTX_STATUS_LITE_RESTORE) {
+ if (execlists_check_remove_request(ring, csb[i][1]))
WARN(1, "Lite Restored request removed from queue\n");
} else
WARN(1, "Preemption without Lite Restore\n");
}
- if (status & (GEN8_CTX_STATUS_ACTIVE_IDLE |
+ if (csb[i][0] & (GEN8_CTX_STATUS_ACTIVE_IDLE |
GEN8_CTX_STATUS_ELEMENT_SWITCH))
submit_contexts +=
- execlists_check_remove_request(ring, status_id);
+ execlists_check_remove_request(ring, csb[i][1]);
}
if (submit_contexts) {
if (!ring->disable_lite_restore_wa ||
- (status & GEN8_CTX_STATUS_ACTIVE_IDLE))
- execlists_context_unqueue__locked(ring);
+ (csb[i][0] & GEN8_CTX_STATUS_ACTIVE_IDLE))
+ execlists_context_unqueue(ring);
}
- ring->next_context_status_buffer = write_pointer % GEN8_CSB_ENTRIES;
-
- /* Update the read pointer to the old write pointer. Manual ringbuffer
- * management ftw </sarcasm> */
- I915_WRITE_FW(RING_CONTEXT_STATUS_PTR(ring),
- _MASKED_FIELD(GEN8_CSB_READ_PTR_MASK,
- ring->next_context_status_buffer << 8));
-
- intel_uncore_forcewake_put__locked(dev_priv, FORCEWAKE_ALL);
- spin_unlock(&dev_priv->uncore.lock);
-
spin_unlock(&ring->execlist_lock);
if (unlikely(submit_contexts > 2))
--
1.9.1
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply related [flat|nested] 9+ messages in thread
* ✗ Fi.CI.BAT: warning for drm/i915: Move CSB MMIO reads out of the execlists lock
2016-03-01 13:25 [PATCH] drm/i915: Move CSB MMIO reads out of the execlists lock Tvrtko Ursulin
@ 2016-03-01 14:30 ` Patchwork
2016-03-03 10:39 ` [PATCH v2] " Tvrtko Ursulin
2016-03-03 11:02 ` ✗ Fi.CI.BAT: warning for drm/i915: Move CSB MMIO reads out of the execlists lock (rev2) Patchwork
2 siblings, 0 replies; 9+ messages in thread
From: Patchwork @ 2016-03-01 14:30 UTC (permalink / raw)
To: Tvrtko Ursulin; +Cc: intel-gfx
== Series Details ==
Series: drm/i915: Move CSB MMIO reads out of the execlists lock
URL : https://patchwork.freedesktop.org/series/3973/
State : warning
== Summary ==
Series 3973v1 drm/i915: Move CSB MMIO reads out of the execlists lock
http://patchwork.freedesktop.org/api/1.0/series/3973/revisions/1/mbox/
Test kms_flip:
Subgroup basic-flip-vs-dpms:
pass -> DMESG-WARN (skl-i5k-2)
Subgroup basic-flip-vs-wf_vblank:
fail -> PASS (snb-x220t)
Test kms_force_connector_basic:
Subgroup force-connector-state:
pass -> SKIP (snb-x220t)
Test kms_pipe_crc_basic:
Subgroup read-crc-pipe-a-frame-sequence:
pass -> DMESG-WARN (bdw-ultra)
Subgroup suspend-read-crc-pipe-a:
incomplete -> PASS (hsw-gt2)
Subgroup suspend-read-crc-pipe-b:
pass -> DMESG-WARN (snb-x220t)
Subgroup suspend-read-crc-pipe-c:
pass -> DMESG-WARN (bsw-nuc-2)
Test pm_rpm:
Subgroup basic-pci-d3-state:
fail -> DMESG-FAIL (snb-x220t)
pass -> DMESG-WARN (snb-dellxps)
Subgroup basic-rte:
dmesg-warn -> PASS (snb-x220t)
bdw-nuci7 total:169 pass:158 dwarn:0 dfail:0 fail:0 skip:11
bdw-ultra total:169 pass:154 dwarn:1 dfail:0 fail:0 skip:14
bsw-nuc-2 total:169 pass:137 dwarn:1 dfail:0 fail:1 skip:30
byt-nuc total:169 pass:144 dwarn:0 dfail:0 fail:0 skip:25
hsw-brixbox total:169 pass:153 dwarn:1 dfail:0 fail:0 skip:15
hsw-gt2 total:169 pass:158 dwarn:0 dfail:0 fail:1 skip:10
ilk-hp8440p total:169 pass:118 dwarn:0 dfail:0 fail:1 skip:50
ivb-t430s total:169 pass:153 dwarn:0 dfail:0 fail:1 skip:15
skl-i5k-2 total:169 pass:152 dwarn:1 dfail:0 fail:0 skip:16
skl-i7k-2 total:169 pass:153 dwarn:0 dfail:0 fail:0 skip:16
snb-dellxps total:169 pass:143 dwarn:2 dfail:0 fail:1 skip:23
snb-x220t total:169 pass:143 dwarn:1 dfail:1 fail:1 skip:23
Results at /archive/results/CI_IGT_test/Patchwork_1503/
deb861ea08c6d5bbf682fb40ea1f0471a448aafd drm-intel-nightly: 2016y-03m-01d-11h-12m-16s UTC integration manifest
29e7863fedc6f7f99916ee5f5a8286a1d490963e drm/i915: Move CSB MMIO reads out of the execlists lock
* [PATCH v2] drm/i915: Move CSB MMIO reads out of the execlists lock
2016-03-01 13:25 [PATCH] drm/i915: Move CSB MMIO reads out of the execlists lock Tvrtko Ursulin
2016-03-01 14:30 ` ✗ Fi.CI.BAT: warning for " Patchwork
@ 2016-03-03 10:39 ` Tvrtko Ursulin
2016-03-03 11:02 ` ✗ Fi.CI.BAT: warning for drm/i915: Move CSB MMIO reads out of the execlists lock (rev2) Patchwork
2 siblings, 0 replies; 9+ messages in thread
From: Tvrtko Ursulin @ 2016-03-03 10:39 UTC (permalink / raw)
To: Intel-gfx
From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
By reading the CSB (slow MMIO accesses) into a temporary local
buffer we can decrease the duration of holding the execlist
lock.
Main advantage is that during heavy batch buffer submission we
reduce the execlist lock contention, which should decrease the
latency and CPU usage between the submitting userspace process
and interrupt handling.
Downside is that we need to grab and release the forcewake twice,
but as the numbers below will show this is completely hidden
by the primary gains.
Testing with "gem_latency -n 100" (submit batch buffers with a
hundred nops each) shows more than doubling of the throughput
and more than halving of the dispatch latency, overall latency
and CPU time spent in the submitting process.
Submitting empty batches ("gem_latency -n 0") does not seem
significantly affected by this change, with throughput and CPU
time improving by half a percent and overall latency worsening
by the same amount.
Above tests were done across a hundred runs on a big core Broadwell.
v2:
* Overflow protection to local CSB buffer.
* Use closer dev_priv in execlists_submit_requests. (Chris Wilson)
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/intel_lrc.c | 75 ++++++++++++++++++++--------------------
1 file changed, 38 insertions(+), 37 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 27c9ee3f7372..4f19ca7490c4 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -416,15 +416,23 @@ static void execlists_update_context(struct drm_i915_gem_request *rq)
static void execlists_submit_requests(struct drm_i915_gem_request *rq0,
struct drm_i915_gem_request *rq1)
{
+ struct drm_i915_private *dev_priv = rq0->i915;
+
execlists_update_context(rq0);
if (rq1)
execlists_update_context(rq1);
+ spin_lock(&dev_priv->uncore.lock);
+ intel_uncore_forcewake_get__locked(dev_priv, FORCEWAKE_ALL);
+
execlists_elsp_write(rq0, rq1);
+
+ intel_uncore_forcewake_put__locked(dev_priv, FORCEWAKE_ALL);
+ spin_unlock(&dev_priv->uncore.lock);
}
-static void execlists_context_unqueue__locked(struct intel_engine_cs *ring)
+static void execlists_context_unqueue(struct intel_engine_cs *ring)
{
struct drm_i915_gem_request *req0 = NULL, *req1 = NULL;
struct drm_i915_gem_request *cursor, *tmp;
@@ -478,19 +486,6 @@ static void execlists_context_unqueue__locked(struct intel_engine_cs *ring)
execlists_submit_requests(req0, req1);
}
-static void execlists_context_unqueue(struct intel_engine_cs *ring)
-{
- struct drm_i915_private *dev_priv = ring->dev->dev_private;
-
- spin_lock(&dev_priv->uncore.lock);
- intel_uncore_forcewake_get__locked(dev_priv, FORCEWAKE_ALL);
-
- execlists_context_unqueue__locked(ring);
-
- intel_uncore_forcewake_put__locked(dev_priv, FORCEWAKE_ALL);
- spin_unlock(&dev_priv->uncore.lock);
-}
-
static unsigned int
execlists_check_remove_request(struct intel_engine_cs *ring, u32 request_id)
{
@@ -551,12 +546,10 @@ void intel_lrc_irq_handler(struct intel_engine_cs *ring)
struct drm_i915_private *dev_priv = ring->dev->dev_private;
u32 status_pointer;
unsigned int read_pointer, write_pointer;
- u32 status = 0;
- u32 status_id;
+ u32 csb[GEN8_CSB_ENTRIES][2];
+ unsigned int csb_read = 0, i;
unsigned int submit_contexts = 0;
- spin_lock(&ring->execlist_lock);
-
spin_lock(&dev_priv->uncore.lock);
intel_uncore_forcewake_get__locked(dev_priv, FORCEWAKE_ALL);
@@ -568,39 +561,47 @@ void intel_lrc_irq_handler(struct intel_engine_cs *ring)
write_pointer += GEN8_CSB_ENTRIES;
while (read_pointer < write_pointer) {
- status = get_context_status(ring, ++read_pointer, &status_id);
+ if (WARN_ON_ONCE(csb_read == GEN8_CSB_ENTRIES))
+ break;
+ csb[csb_read][0] = get_context_status(ring, ++read_pointer,
+ &csb[csb_read][1]);
+ csb_read++;
+ }
- if (unlikely(status & GEN8_CTX_STATUS_PREEMPTED)) {
- if (status & GEN8_CTX_STATUS_LITE_RESTORE) {
- if (execlists_check_remove_request(ring, status_id))
+ ring->next_context_status_buffer = write_pointer % GEN8_CSB_ENTRIES;
+
+ /* Update the read pointer to the old write pointer. Manual ringbuffer
+ * management ftw </sarcasm> */
+ I915_WRITE_FW(RING_CONTEXT_STATUS_PTR(ring),
+ _MASKED_FIELD(GEN8_CSB_READ_PTR_MASK,
+ ring->next_context_status_buffer << 8));
+
+ intel_uncore_forcewake_put__locked(dev_priv, FORCEWAKE_ALL);
+ spin_unlock(&dev_priv->uncore.lock);
+
+ spin_lock(&ring->execlist_lock);
+
+ for (i = 0; i < csb_read; i++) {
+ if (unlikely(csb[i][0] & GEN8_CTX_STATUS_PREEMPTED)) {
+ if (csb[i][0] & GEN8_CTX_STATUS_LITE_RESTORE) {
+ if (execlists_check_remove_request(ring, csb[i][1]))
WARN(1, "Lite Restored request removed from queue\n");
} else
WARN(1, "Preemption without Lite Restore\n");
}
- if (status & (GEN8_CTX_STATUS_ACTIVE_IDLE |
+ if (csb[i][0] & (GEN8_CTX_STATUS_ACTIVE_IDLE |
GEN8_CTX_STATUS_ELEMENT_SWITCH))
submit_contexts +=
- execlists_check_remove_request(ring, status_id);
+ execlists_check_remove_request(ring, csb[i][1]);
}
if (submit_contexts) {
if (!ring->disable_lite_restore_wa ||
- (status & GEN8_CTX_STATUS_ACTIVE_IDLE))
- execlists_context_unqueue__locked(ring);
+ (csb[i][0] & GEN8_CTX_STATUS_ACTIVE_IDLE))
+ execlists_context_unqueue(ring);
}
- ring->next_context_status_buffer = write_pointer % GEN8_CSB_ENTRIES;
-
- /* Update the read pointer to the old write pointer. Manual ringbuffer
- * management ftw </sarcasm> */
- I915_WRITE_FW(RING_CONTEXT_STATUS_PTR(ring),
- _MASKED_FIELD(GEN8_CSB_READ_PTR_MASK,
- ring->next_context_status_buffer << 8));
-
- intel_uncore_forcewake_put__locked(dev_priv, FORCEWAKE_ALL);
- spin_unlock(&dev_priv->uncore.lock);
-
spin_unlock(&ring->execlist_lock);
if (unlikely(submit_contexts > 2))
--
1.9.1
* ✗ Fi.CI.BAT: warning for drm/i915: Move CSB MMIO reads out of the execlists lock (rev2)
2016-03-01 13:25 [PATCH] drm/i915: Move CSB MMIO reads out of the execlists lock Tvrtko Ursulin
2016-03-01 14:30 ` ✗ Fi.CI.BAT: warning for " Patchwork
2016-03-03 10:39 ` [PATCH v2] " Tvrtko Ursulin
@ 2016-03-03 11:02 ` Patchwork
2016-03-03 17:47 ` Tvrtko Ursulin
2 siblings, 1 reply; 9+ messages in thread
From: Patchwork @ 2016-03-03 11:02 UTC (permalink / raw)
To: Tvrtko Ursulin; +Cc: intel-gfx
== Series Details ==
Series: drm/i915: Move CSB MMIO reads out of the execlists lock (rev2)
URL : https://patchwork.freedesktop.org/series/3973/
State : warning
== Summary ==
Series 3973v2 drm/i915: Move CSB MMIO reads out of the execlists lock
http://patchwork.freedesktop.org/api/1.0/series/3973/revisions/2/mbox/
Test drv_module_reload_basic:
incomplete -> PASS (snb-dellxps)
Test gem_storedw_loop:
Subgroup basic-blt:
pass -> DMESG-WARN (hsw-brixbox)
Test kms_flip:
Subgroup basic-flip-vs-wf_vblank:
pass -> DMESG-WARN (hsw-gt2)
Test pm_rpm:
Subgroup basic-pci-d3-state:
pass -> DMESG-WARN (bsw-nuc-2)
bdw-nuci7 total:169 pass:158 dwarn:0 dfail:0 fail:0 skip:11
bdw-ultra total:169 pass:155 dwarn:0 dfail:0 fail:0 skip:14
bsw-nuc-2 total:169 pass:136 dwarn:2 dfail:0 fail:1 skip:30
byt-nuc total:169 pass:144 dwarn:0 dfail:0 fail:0 skip:25
hsw-brixbox total:169 pass:153 dwarn:1 dfail:0 fail:0 skip:15
hsw-gt2 total:169 pass:158 dwarn:1 dfail:0 fail:0 skip:10
ilk-hp8440p total:169 pass:119 dwarn:0 dfail:0 fail:0 skip:50
ivb-t430s total:169 pass:154 dwarn:0 dfail:0 fail:0 skip:15
skl-i5k-2 total:169 pass:153 dwarn:0 dfail:0 fail:0 skip:16
skl-i7k-2 total:169 pass:153 dwarn:0 dfail:0 fail:0 skip:16
snb-dellxps total:169 pass:145 dwarn:1 dfail:0 fail:0 skip:23
snb-x220t total:169 pass:145 dwarn:0 dfail:1 fail:1 skip:22
Results at /archive/results/CI_IGT_test/Patchwork_1521/
99ba2daf15b76f01916dc645b8d1b03dbbabf13c drm-intel-nightly: 2016y-03m-03d-08h-27m-26s UTC integration manifest
999801d1943ae1642f4908e7ded8ee20a5699bcf drm/i915: Move CSB MMIO reads out of the execlists lock
* Re: ✗ Fi.CI.BAT: warning for drm/i915: Move CSB MMIO reads out of the execlists lock (rev2)
2016-03-03 11:02 ` ✗ Fi.CI.BAT: warning for drm/i915: Move CSB MMIO reads out of the execlists lock (rev2) Patchwork
@ 2016-03-03 17:47 ` Tvrtko Ursulin
2016-03-03 20:50 ` Chris Wilson
0 siblings, 1 reply; 9+ messages in thread
From: Tvrtko Ursulin @ 2016-03-03 17:47 UTC (permalink / raw)
To: intel-gfx
On 03/03/16 11:02, Patchwork wrote:
> == Series Details ==
>
> Series: drm/i915: Move CSB MMIO reads out of the execlists lock (rev2)
> URL : https://patchwork.freedesktop.org/series/3973/
> State : warning
>
> == Summary ==
>
> Series 3973v2 drm/i915: Move CSB MMIO reads out of the execlists lock
> http://patchwork.freedesktop.org/api/1.0/series/3973/revisions/2/mbox/
>
> Test drv_module_reload_basic:
> incomplete -> PASS (snb-dellxps)
> Test gem_storedw_loop:
> Subgroup basic-blt:
> pass -> DMESG-WARN (hsw-brixbox)
Known unrelated: https://bugs.freedesktop.org/show_bug.cgi?id=94307
> Test kms_flip:
> Subgroup basic-flip-vs-wf_vblank:
> pass -> DMESG-WARN (hsw-gt2)
Known unrelated: https://bugs.freedesktop.org/show_bug.cgi?id=94384
> Test pm_rpm:
> Subgroup basic-pci-d3-state:
> pass -> DMESG-WARN (bsw-nuc-2)
Known unrelated: https://bugs.freedesktop.org/show_bug.cgi?id=94164
>
> bdw-nuci7 total:169 pass:158 dwarn:0 dfail:0 fail:0 skip:11
> bdw-ultra total:169 pass:155 dwarn:0 dfail:0 fail:0 skip:14
> bsw-nuc-2 total:169 pass:136 dwarn:2 dfail:0 fail:1 skip:30
> byt-nuc total:169 pass:144 dwarn:0 dfail:0 fail:0 skip:25
> hsw-brixbox total:169 pass:153 dwarn:1 dfail:0 fail:0 skip:15
> hsw-gt2 total:169 pass:158 dwarn:1 dfail:0 fail:0 skip:10
> ilk-hp8440p total:169 pass:119 dwarn:0 dfail:0 fail:0 skip:50
> ivb-t430s total:169 pass:154 dwarn:0 dfail:0 fail:0 skip:15
> skl-i5k-2 total:169 pass:153 dwarn:0 dfail:0 fail:0 skip:16
> skl-i7k-2 total:169 pass:153 dwarn:0 dfail:0 fail:0 skip:16
> snb-dellxps total:169 pass:145 dwarn:1 dfail:0 fail:0 skip:23
> snb-x220t total:169 pass:145 dwarn:0 dfail:1 fail:1 skip:22
>
> Results at /archive/results/CI_IGT_test/Patchwork_1521/
>
> 99ba2daf15b76f01916dc645b8d1b03dbbabf13c drm-intel-nightly: 2016y-03m-03d-08h-27m-26s UTC integration manifest
> 999801d1943ae1642f4908e7ded8ee20a5699bcf drm/i915: Move CSB MMIO reads out of the execlists lock
So just the usual unrelated random fails.
Otherwise patch seems stable when I bash it with GLmark, SynMark2, OCL
and some random Linux games.
Regards,
Tvrtko
* Re: ✗ Fi.CI.BAT: warning for drm/i915: Move CSB MMIO reads out of the execlists lock (rev2)
2016-03-03 17:47 ` Tvrtko Ursulin
@ 2016-03-03 20:50 ` Chris Wilson
2016-03-08 10:16 ` Chris Wilson
0 siblings, 1 reply; 9+ messages in thread
From: Chris Wilson @ 2016-03-03 20:50 UTC (permalink / raw)
To: Tvrtko Ursulin; +Cc: intel-gfx
On Thu, Mar 03, 2016 at 05:47:14PM +0000, Tvrtko Ursulin wrote:
>
> On 03/03/16 11:02, Patchwork wrote:
> >== Series Details ==
> >
> >Series: drm/i915: Move CSB MMIO reads out of the execlists lock (rev2)
> >URL : https://patchwork.freedesktop.org/series/3973/
> >State : warning
> >
> >== Summary ==
> >
> >Series 3973v2 drm/i915: Move CSB MMIO reads out of the execlists lock
> >http://patchwork.freedesktop.org/api/1.0/series/3973/revisions/2/mbox/
> >
> >Test drv_module_reload_basic:
> > incomplete -> PASS (snb-dellxps)
> >Test gem_storedw_loop:
> > Subgroup basic-blt:
> > pass -> DMESG-WARN (hsw-brixbox)
>
> Known unrelated: https://bugs.freedesktop.org/show_bug.cgi?id=94307
>
> >Test kms_flip:
> > Subgroup basic-flip-vs-wf_vblank:
> > pass -> DMESG-WARN (hsw-gt2)
>
> Known unrelated: https://bugs.freedesktop.org/show_bug.cgi?id=94384
>
> >Test pm_rpm:
> > Subgroup basic-pci-d3-state:
> > pass -> DMESG-WARN (bsw-nuc-2)
>
> Known unrelated: https://bugs.freedesktop.org/show_bug.cgi?id=94164
>
> >
> >bdw-nuci7 total:169 pass:158 dwarn:0 dfail:0 fail:0 skip:11
> >bdw-ultra total:169 pass:155 dwarn:0 dfail:0 fail:0 skip:14
> >bsw-nuc-2 total:169 pass:136 dwarn:2 dfail:0 fail:1 skip:30
> >byt-nuc total:169 pass:144 dwarn:0 dfail:0 fail:0 skip:25
> >hsw-brixbox total:169 pass:153 dwarn:1 dfail:0 fail:0 skip:15
> >hsw-gt2 total:169 pass:158 dwarn:1 dfail:0 fail:0 skip:10
> >ilk-hp8440p total:169 pass:119 dwarn:0 dfail:0 fail:0 skip:50
> >ivb-t430s total:169 pass:154 dwarn:0 dfail:0 fail:0 skip:15
> >skl-i5k-2 total:169 pass:153 dwarn:0 dfail:0 fail:0 skip:16
> >skl-i7k-2 total:169 pass:153 dwarn:0 dfail:0 fail:0 skip:16
> >snb-dellxps total:169 pass:145 dwarn:1 dfail:0 fail:0 skip:23
> >snb-x220t total:169 pass:145 dwarn:0 dfail:1 fail:1 skip:22
> >
> >Results at /archive/results/CI_IGT_test/Patchwork_1521/
> >
> >99ba2daf15b76f01916dc645b8d1b03dbbabf13c drm-intel-nightly: 2016y-03m-03d-08h-27m-26s UTC integration manifest
> >999801d1943ae1642f4908e7ded8ee20a5699bcf drm/i915: Move CSB MMIO reads out of the execlists lock
>
> So just the usual unrelated random fails.
>
> Otherwise patch seems stable when I bash it with GLmark, SynMark2,
> OCL and some random Linux games.
Yes, patch is sane, I'm just messing around with my Braswell at the
moment and then I'll try again at getting some numbers. First glance
said 10% in reducing latency (with a 100% throughput improvement in one
particular small copy scenario, that I want to reproduce and do some back
of the envelope calculations to check that it is sane), but the machine
(thanks to execlists) just dies as soon as I try some more interesting
benchmarks.
-Chris
--
Chris Wilson, Intel Open Source Technology Centre
* Re: ✗ Fi.CI.BAT: warning for drm/i915: Move CSB MMIO reads out of the execlists lock (rev2)
2016-03-03 20:50 ` Chris Wilson
@ 2016-03-08 10:16 ` Chris Wilson
2016-03-08 11:08 ` Tvrtko Ursulin
0 siblings, 1 reply; 9+ messages in thread
From: Chris Wilson @ 2016-03-08 10:16 UTC (permalink / raw)
To: Tvrtko Ursulin, intel-gfx
On Thu, Mar 03, 2016 at 08:50:46PM +0000, Chris Wilson wrote:
> Yes, patch is sane, I'm just messing around with my Braswell at the
> moment and then I'll try again at getting some numbers. First glance
> said 10% in reducing latency (with a 100% throughput improvement in one
> particular small copy scenario, that I want to reprdouce and do some back
> of the envelope calculations to check that it is sane), but the machine
> (thanks to execlists) just dies as soon as I try some more interesting
> benchmarks.
I rebased the patch ontop of the execlists thread so that I could
actually use the machine...
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 4d005dd..5c0a4e0 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -549,13 +549,10 @@ static int intel_execlists_submit(void *arg)
intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
do {
- u32 status;
- u32 status_id;
- u32 submit_contexts;
- u32 status_pointer;
unsigned read_pointer, write_pointer;
-
- spin_lock(&ring->execlist_lock);
+ u32 csb[GEN8_CSB_ENTRIES][2];
+ u32 status_pointer;
+ unsigned i, read, submit_contexts;
set_current_state(TASK_INTERRUPTIBLE);
status_pointer = I915_READ_FW(RING_CONTEXT_STATUS_PTR(ring));
@@ -563,8 +560,6 @@ static int intel_execlists_submit(void *arg)
write_pointer = GEN8_CSB_WRITE_PTR(status_pointer);
if (read_pointer == write_pointer) {
intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
- spin_unlock(&ring->execlist_lock);
-
if (kthread_should_stop())
return 0;
@@ -577,37 +572,41 @@ static int intel_execlists_submit(void *arg)
if (read_pointer > write_pointer)
write_pointer += GEN8_CSB_ENTRIES;
- submit_contexts = 0;
+ read = 0;
while (read_pointer < write_pointer) {
- status = get_context_status(ring, ++read_pointer, &status_id);
+ csb[read][0] = get_context_status(ring, ++read_pointer,
+ &csb[read][1]);
+ read++;
+ }
- if (unlikely(status & GEN8_CTX_STATUS_PREEMPTED)) {
- if (status & GEN8_CTX_STATUS_LITE_RESTORE) {
- if (execlists_check_remove_request(ring, status_id))
+ I915_WRITE_FW(RING_CONTEXT_STATUS_PTR(ring),
+ _MASKED_FIELD(GEN8_CSB_READ_PTR_MASK,
+ (write_pointer % GEN8_CSB_ENTRIES) << 8));
+
+ spin_lock(&ring->execlist_lock);
+
+ submit_contexts = 0;
+ for (i = 0; i < read; i++) {
+ if (unlikely(csb[i][0] & GEN8_CTX_STATUS_PREEMPTED)) {
+ if (csb[i][0] & GEN8_CTX_STATUS_LITE_RESTORE) {
+ if (execlists_check_remove_request(ring, csb[i][1]))
WARN(1, "Lite Restored request removed from queue\n");
} else
WARN(1, "Preemption without Lite Restore\n");
}
- if (status & (GEN8_CTX_STATUS_ACTIVE_IDLE |
- GEN8_CTX_STATUS_ELEMENT_SWITCH))
+ if (csb[i][0] & (GEN8_CTX_STATUS_ACTIVE_IDLE |
+ GEN8_CTX_STATUS_ELEMENT_SWITCH))
submit_contexts +=
- execlists_check_remove_request(ring, status_id);
+ execlists_check_remove_request(ring, csb[i][1]);
}
if (submit_contexts) {
if (!ring->disable_lite_restore_wa ||
- (status & GEN8_CTX_STATUS_ACTIVE_IDLE))
+ (csb[i][0] & GEN8_CTX_STATUS_ACTIVE_IDLE))
execlists_context_unqueue__locked(ring);
}
-
- /* Update the read pointer to the old write pointer. Manual ringbuffer
- * management ftw </sarcasm> */
- I915_WRITE_FW(RING_CONTEXT_STATUS_PTR(ring),
- _MASKED_FIELD(GEN8_CSB_READ_PTR_MASK,
- (write_pointer % GEN8_CSB_ENTRIES) << 8));
-
spin_unlock(&ring->execlist_lock);
if (unlikely(submit_contexts > 2))
On Braswell that improves the nop dispatch latency by 20%
gem:exec:latency:0: -0.35%
gem:exec:latency:1: +4.57%
gem:exec:latency:2: +0.07%
gem:exec:latency:4: +18.05%
gem:exec:latency:8: +26.97%
gem:exec:latency:16: +20.37%
gem:exec:latency:32: +19.91%
gem:exec:latency:64: +24.06%
gem:exec:latency:128: +23.75%
gem:exec:latency:256: +24.54%
gem:exec:latency:512: +24.30%
gem:exec:latency:1024: +24.43%
Outside of that scenario, the changes are more or less in the noise.
Even if we look at the full round-trip latency for synchronous execution.
gem:exec:nop:rcs:single: -2.68%
gem:exec:nop:rcs:continuous: -2.28%
gem:exec:nop:bcs:single: -2.31%
gem:exec:nop:bcs:continuous: +16.64%
gem:exec:nop:vcs:single: -6.24%
gem:exec:nop:vcs:continuous: +3.76%
gem:exec:nop:vecs:single: +2.56%
gem:exec:nop:vecs:continuous: +1.83%
And with any busywork on top, we lose the effect:
gem:exec:trace:Atlantis: -0.12%
gem:exec:trace:glamor: +2.08%
gem:exec:trace:glxgears: +0.79%
gem:exec:trace:OglBatch7: +0.45%
gem:exec:trace:sna: +0.57%
gem:exec:trace:unigine-valley:+0.01%
gem:exec:trace:uxa: +5.63%
I am hesitant to r-b something that cannot be tested since the relevant
igt tests simply explode on my machine both before and after the patch.
-Chris
--
Chris Wilson, Intel Open Source Technology Centre
* Re: ✗ Fi.CI.BAT: warning for drm/i915: Move CSB MMIO reads out of the execlists lock (rev2)
2016-03-08 10:16 ` Chris Wilson
@ 2016-03-08 11:08 ` Tvrtko Ursulin
2016-03-08 12:50 ` Chris Wilson
0 siblings, 1 reply; 9+ messages in thread
From: Tvrtko Ursulin @ 2016-03-08 11:08 UTC (permalink / raw)
To: Chris Wilson, intel-gfx
On 08/03/16 10:16, Chris Wilson wrote:
> On Thu, Mar 03, 2016 at 08:50:46PM +0000, Chris Wilson wrote:
>> Yes, patch is sane, I'm just messing around with my Braswell at the
>> moment and then I'll try again at getting some numbers. First glance
>> said 10% in reducing latency (with a 100% throughput improvement in one
> particular small copy scenario, that I want to reproduce and do some back
>> of the envelope calculations to check that it is sane), but the machine
>> (thanks to execlists) just dies as soon as I try some more interesting
>> benchmarks.
>
> I rebased the patch ontop of the execlists thread so that I could
> actually use the machine...
>
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index 4d005dd..5c0a4e0 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -549,13 +549,10 @@ static int intel_execlists_submit(void *arg)
>
> intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
> do {
> - u32 status;
> - u32 status_id;
> - u32 submit_contexts;
> - u32 status_pointer;
> unsigned read_pointer, write_pointer;
> -
> - spin_lock(&ring->execlist_lock);
> + u32 csb[GEN8_CSB_ENTRIES][2];
> + u32 status_pointer;
> + unsigned i, read, submit_contexts;
>
> set_current_state(TASK_INTERRUPTIBLE);
> status_pointer = I915_READ_FW(RING_CONTEXT_STATUS_PTR(ring));
> @@ -563,8 +560,6 @@ static int intel_execlists_submit(void *arg)
> write_pointer = GEN8_CSB_WRITE_PTR(status_pointer);
> if (read_pointer == write_pointer) {
> intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
> - spin_unlock(&ring->execlist_lock);
> -
> if (kthread_should_stop())
> return 0;
>
> @@ -577,37 +572,41 @@ static int intel_execlists_submit(void *arg)
> if (read_pointer > write_pointer)
> write_pointer += GEN8_CSB_ENTRIES;
>
> - submit_contexts = 0;
> + read = 0;
> while (read_pointer < write_pointer) {
> - status = get_context_status(ring, ++read_pointer, &status_id);
> + csb[read][0] = get_context_status(ring, ++read_pointer,
> + &csb[read][1]);
> + read++;
> + }
>
> - if (unlikely(status & GEN8_CTX_STATUS_PREEMPTED)) {
> - if (status & GEN8_CTX_STATUS_LITE_RESTORE) {
> - if (execlists_check_remove_request(ring, status_id))
> + I915_WRITE_FW(RING_CONTEXT_STATUS_PTR(ring),
> + _MASKED_FIELD(GEN8_CSB_READ_PTR_MASK,
> + (write_pointer % GEN8_CSB_ENTRIES) << 8));
> +
> + spin_lock(&ring->execlist_lock);
> +
> + submit_contexts = 0;
> + for (i = 0; i < read; i++) {
> + if (unlikely(csb[i][0] & GEN8_CTX_STATUS_PREEMPTED)) {
> + if (csb[i][0] & GEN8_CTX_STATUS_LITE_RESTORE) {
> + if (execlists_check_remove_request(ring, csb[i][1]))
> WARN(1, "Lite Restored request removed from queue\n");
> } else
> WARN(1, "Preemption without Lite Restore\n");
> }
>
> - if (status & (GEN8_CTX_STATUS_ACTIVE_IDLE |
> - GEN8_CTX_STATUS_ELEMENT_SWITCH))
> + if (csb[i][0] & (GEN8_CTX_STATUS_ACTIVE_IDLE |
> + GEN8_CTX_STATUS_ELEMENT_SWITCH))
> submit_contexts +=
> - execlists_check_remove_request(ring, status_id);
> + execlists_check_remove_request(ring, csb[i][1]);
> }
>
> if (submit_contexts) {
> if (!ring->disable_lite_restore_wa ||
> - (status & GEN8_CTX_STATUS_ACTIVE_IDLE))
> + (csb[i][0] & GEN8_CTX_STATUS_ACTIVE_IDLE))
> execlists_context_unqueue__locked(ring);
> }
>
> -
> - /* Update the read pointer to the old write pointer. Manual ringbuffer
> - * management ftw </sarcasm> */
> - I915_WRITE_FW(RING_CONTEXT_STATUS_PTR(ring),
> - _MASKED_FIELD(GEN8_CSB_READ_PTR_MASK,
> - (write_pointer % GEN8_CSB_ENTRIES) << 8));
> -
> spin_unlock(&ring->execlist_lock);
>
> if (unlikely(submit_contexts > 2))
>
> On Braswell that improves the nop dispatch latency by around 20%:
>
> gem:exec:latency:0: -0.35%
> gem:exec:latency:1: +4.57%
> gem:exec:latency:2: +0.07%
> gem:exec:latency:4: +18.05%
> gem:exec:latency:8: +26.97%
> gem:exec:latency:16: +20.37%
> gem:exec:latency:32: +19.91%
> gem:exec:latency:64: +24.06%
> gem:exec:latency:128: +23.75%
> gem:exec:latency:256: +24.54%
> gem:exec:latency:512: +24.30%
> gem:exec:latency:1024: +24.43%
Cool, so it looks like the faster the CPU is relative to the GPU, the
bigger the gain.
> Outside of that scenario, the changes are more or less in the noise.
> Even if we look at the full round-trip latency for synchronous execution.
>
> gem:exec:nop:rcs:single: -2.68%
> gem:exec:nop:rcs:continuous: -2.28%
> gem:exec:nop:bcs:single: -2.31%
> gem:exec:nop:bcs:continuous: +16.64%
> gem:exec:nop:vcs:single: -6.24%
> gem:exec:nop:vcs:continuous: +3.76%
> gem:exec:nop:vecs:single: +2.56%
> gem:exec:nop:vecs:continuous: +1.83%
Yes, I would not expect synchronous execution to benefit a lot. My
thinking is that lock contention is highest (and hence the benefit from
reducing it is largest) when someone keeps submitting batches just below
the threshold of getting blocked by filling the ring buffer, while still
keeping it well filled.
> And with any busywork on top, we lose the effect:
>
> gem:exec:trace:Atlantis: -0.12%
> gem:exec:trace:glamor: +2.08%
> gem:exec:trace:glxgears: +0.79%
> gem:exec:trace:OglBatch7: +0.45%
> gem:exec:trace:sna: +0.57%
> gem:exec:trace:unigine-valley: +0.01%
> gem:exec:trace:uxa: +5.63%
Hey, at least it is not regressing anything and still has potential for
gains if the factors align.
> I am hesitant to r-b something that cannot be tested, since the relevant
> igt tests simply explode on my machine both before and after the patch.
Well strictly speaking you don't need to do any platform matrix testing
for an r-b. :) But yes, the situation you are seeing on BSW needs to be
resolved.
Regards,
Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
* Re: ✗ Fi.CI.BAT: warning for drm/i915: Move CSB MMIO reads out of the execlists lock (rev2)
2016-03-08 11:08 ` Tvrtko Ursulin
@ 2016-03-08 12:50 ` Chris Wilson
0 siblings, 0 replies; 9+ messages in thread
From: Chris Wilson @ 2016-03-08 12:50 UTC (permalink / raw)
To: Tvrtko Ursulin; +Cc: intel-gfx
On Tue, Mar 08, 2016 at 11:08:18AM +0000, Tvrtko Ursulin wrote:
> Well strictly speaking you don't need to do any platform matrix
> testing for an r-b. :) But yes, the situation you are seeing on BSW
> needs to be resolved.
As discussed on IRC, it does seem to be unreproducible on !bsw (the
presumption being that we cannot saturate the CPU with the interrupt
handler), but the fix may just be to remove the loop from
cherryview_irq_handler. I'm kicking off a test run to see if that loop
has any merit for GEM...
-Chris
--
Chris Wilson, Intel Open Source Technology Centre
2016-03-01 13:25 [PATCH] drm/i915: Move CSB MMIO reads out of the execlists lock Tvrtko Ursulin
2016-03-01 14:30 ` ✗ Fi.CI.BAT: warning for " Patchwork
2016-03-03 10:39 ` [PATCH v2] " Tvrtko Ursulin
2016-03-03 11:02 ` ✗ Fi.CI.BAT: warning for drm/i915: Move CSB MMIO reads out of the execlists lock (rev2) Patchwork
2016-03-03 17:47 ` Tvrtko Ursulin
2016-03-03 20:50 ` Chris Wilson
2016-03-08 10:16 ` Chris Wilson
2016-03-08 11:08 ` Tvrtko Ursulin
2016-03-08 12:50 ` Chris Wilson