* [Intel-gfx] [PATCH 0/3] drm/i915/gt: RPS tuning for light media playback
@ 2021-11-17 22:49 Vinay Belgaumkar
2021-11-17 22:49 ` [Intel-gfx] [PATCH 1/3] drm/i915/gt: Spread virtual engines over idle engines Vinay Belgaumkar
` (9 more replies)
0 siblings, 10 replies; 21+ messages in thread
From: Vinay Belgaumkar @ 2021-11-17 22:49 UTC (permalink / raw)
To: intel-gfx, dri-devel; +Cc: Chris Wilson
Switching from tgl to adl, we see one particular media decode pipeline fit
into a single vcs engine on adl, whereas it took two on tgl. However, it
was observed that the power consumption on adl remained higher than on
tgl. One contribution is that each engine is treated individually for rps
evaluation; another is that we appear to prefer avoiding low frequencies
(with no rc6) in favour of slightly higher frequencies (with lots of
rc6). So let's try tweaking the balancer to smear busy virtual
contexts across multiple engines (trying to make adl look more like
tgl), and tweak the rps evaluation to "race to idle" harder.
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Chris Wilson (3):
drm/i915/gt: Spread virtual engines over idle engines
drm/i915/gt: Compare average group occupancy for RPS evaluation
drm/i915/gt: Improve "race-to-idle" at low frequencies
.../drm/i915/gt/intel_execlists_submission.c | 80 ++++++++++++-------
drivers/gpu/drm/i915/gt/intel_rps.c | 79 +++++++++++++-----
2 files changed, 112 insertions(+), 47 deletions(-)
--
2.34.0
^ permalink raw reply [flat|nested] 21+ messages in thread
* [Intel-gfx] [PATCH 1/3] drm/i915/gt: Spread virtual engines over idle engines
2021-11-17 22:49 [Intel-gfx] [PATCH 0/3] drm/i915/gt: RPS tuning for light media playback Vinay Belgaumkar
@ 2021-11-17 22:49 ` Vinay Belgaumkar
2021-11-23 9:39 ` Tvrtko Ursulin
2021-11-17 22:49 ` [Intel-gfx] [PATCH 2/3] drm/i915/gt: Compare average group occupancy for RPS evaluation Vinay Belgaumkar
` (8 subsequent siblings)
9 siblings, 1 reply; 21+ messages in thread
From: Vinay Belgaumkar @ 2021-11-17 22:49 UTC (permalink / raw)
To: intel-gfx, dri-devel; +Cc: Chris Wilson
From: Chris Wilson <chris@chris-wilson.co.uk>
Every time we come to the end of a virtual engine's context, re-randomise
its siblings[]. As we schedule the siblings' tasklets in the order they
appear in the array, earlier entries are executed first (when idle) and so
will be preferred when scheduling the next virtual request. Currently,
we only update the array when switching onto a new idle engine, so we
prefer to stick to the last executed engine, keeping the work compact.
However, it can be beneficial to spread the work out across idle
engines, so choose another sibling as our preferred target at the end of
the context's execution.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
---
.../drm/i915/gt/intel_execlists_submission.c | 80 ++++++++++++-------
1 file changed, 52 insertions(+), 28 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index ca03880fa7e4..b95bbc8fb91a 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -539,6 +539,41 @@ static void execlists_schedule_in(struct i915_request *rq, int idx)
GEM_BUG_ON(intel_context_inflight(ce) != rq->engine);
}
+static void virtual_xfer_context(struct virtual_engine *ve,
+ struct intel_engine_cs *engine)
+{
+ unsigned int n;
+
+ if (likely(engine == ve->siblings[0]))
+ return;
+
+ if (!intel_engine_has_relative_mmio(engine))
+ lrc_update_offsets(&ve->context, engine);
+
+ /*
+ * Move the bound engine to the top of the list for
+ * future execution. We then kick this tasklet first
+ * before checking others, so that we preferentially
+ * reuse this set of bound registers.
+ */
+ for (n = 1; n < ve->num_siblings; n++) {
+ if (ve->siblings[n] == engine) {
+ swap(ve->siblings[n], ve->siblings[0]);
+ break;
+ }
+ }
+}
+
+static int ve_random_sibling(struct virtual_engine *ve)
+{
+ return prandom_u32_max(ve->num_siblings);
+}
+
+static int ve_random_other_sibling(struct virtual_engine *ve)
+{
+ return 1 + prandom_u32_max(ve->num_siblings - 1);
+}
+
static void
resubmit_virtual_request(struct i915_request *rq, struct virtual_engine *ve)
{
@@ -578,8 +613,23 @@ static void kick_siblings(struct i915_request *rq, struct intel_context *ce)
rq->execution_mask != engine->mask)
resubmit_virtual_request(rq, ve);
- if (READ_ONCE(ve->request))
+ /*
+ * Reschedule with a new "preferred" sibling.
+ *
+ * The tasklets are executed in the order of ve->siblings[], so
+ * siblings[0] receives preferential treatment of greedily checking
+ * for execution of the virtual engine. At this point, the virtual
+ * engine is no longer in the current GPU cache due to idleness or
+ * contention, so it can be executed on any engine without penalty. We
+ * re-randomise at this point in order to spread light loads across
+ * the system, heavy overlapping loads will continue to be greedily
+ * executed by the first available engine.
+ */
+ if (READ_ONCE(ve->request)) {
+ virtual_xfer_context(ve,
+ ve->siblings[ve_random_other_sibling(ve)]);
tasklet_hi_schedule(&ve->base.sched_engine->tasklet);
+ }
}
static void __execlists_schedule_out(struct i915_request * const rq,
@@ -1030,32 +1080,6 @@ first_virtual_engine(struct intel_engine_cs *engine)
return NULL;
}
-static void virtual_xfer_context(struct virtual_engine *ve,
- struct intel_engine_cs *engine)
-{
- unsigned int n;
-
- if (likely(engine == ve->siblings[0]))
- return;
-
- GEM_BUG_ON(READ_ONCE(ve->context.inflight));
- if (!intel_engine_has_relative_mmio(engine))
- lrc_update_offsets(&ve->context, engine);
-
- /*
- * Move the bound engine to the top of the list for
- * future execution. We then kick this tasklet first
- * before checking others, so that we preferentially
- * reuse this set of bound registers.
- */
- for (n = 1; n < ve->num_siblings; n++) {
- if (ve->siblings[n] == engine) {
- swap(ve->siblings[n], ve->siblings[0]);
- break;
- }
- }
-}
-
static void defer_request(struct i915_request *rq, struct list_head * const pl)
{
LIST_HEAD(list);
@@ -3590,7 +3614,7 @@ static void virtual_engine_initial_hint(struct virtual_engine *ve)
* NB This does not force us to execute on this engine, it will just
* typically be the first we inspect for submission.
*/
- swp = prandom_u32_max(ve->num_siblings);
+ swp = ve_random_sibling(ve);
if (swp)
swap(ve->siblings[swp], ve->siblings[0]);
}
--
2.34.0
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [Intel-gfx] [PATCH 2/3] drm/i915/gt: Compare average group occupancy for RPS evaluation
2021-11-17 22:49 [Intel-gfx] [PATCH 0/3] drm/i915/gt: RPS tuning for light media playback Vinay Belgaumkar
2021-11-17 22:49 ` [Intel-gfx] [PATCH 1/3] drm/i915/gt: Spread virtual engines over idle engines Vinay Belgaumkar
@ 2021-11-17 22:49 ` Vinay Belgaumkar
2021-11-23 17:35 ` Belgaumkar, Vinay
2021-11-17 22:49 ` [Intel-gfx] [PATCH 3/3] drm/i915/gt: Improve "race-to-idle" at low frequencies Vinay Belgaumkar
` (7 subsequent siblings)
9 siblings, 1 reply; 21+ messages in thread
From: Vinay Belgaumkar @ 2021-11-17 22:49 UTC (permalink / raw)
To: intel-gfx, dri-devel; +Cc: Chris Wilson
From: Chris Wilson <chris.p.wilson@intel.com>
Currently, we inspect each engine individually and measure the occupancy
of that engine over the last evaluation interval. If that exceeds our
busyness thresholds, we decide to increase the GPU frequency. However,
under a load balancer, we should consider the occupancy of entire engine
groups, as work may be spread out across the group. In doing so, we
prefer wide over fast, as power consumption is approximately proportional
to the square of the frequency. However, since the load balancer is
greedy, the first idle engine gets all the work, and preferentially
reuses the last active engine; under light loads all work is assigned to
one engine, and so that engine appears very busy. But if the work
happened to overlap slightly, the workload would spread across multiple
engines, reducing each individual engine's runtime, and so reducing the
rps contribution, keeping the frequency low. Instead, when considering
the contribution, consider the contribution over the entire engine group
(capacity).
Signed-off-by: Chris Wilson <chris.p.wilson@intel.com>
Cc: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
---
drivers/gpu/drm/i915/gt/intel_rps.c | 48 ++++++++++++++++++++---------
1 file changed, 34 insertions(+), 14 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/intel_rps.c b/drivers/gpu/drm/i915/gt/intel_rps.c
index 07ff7ba7b2b7..3675ac93ded0 100644
--- a/drivers/gpu/drm/i915/gt/intel_rps.c
+++ b/drivers/gpu/drm/i915/gt/intel_rps.c
@@ -7,6 +7,7 @@
#include "i915_drv.h"
#include "intel_breadcrumbs.h"
+#include "intel_engine_pm.h"
#include "intel_gt.h"
#include "intel_gt_clock_utils.h"
#include "intel_gt_irq.h"
@@ -65,26 +66,45 @@ static void set(struct intel_uncore *uncore, i915_reg_t reg, u32 val)
static void rps_timer(struct timer_list *t)
{
struct intel_rps *rps = from_timer(rps, t, timer);
- struct intel_engine_cs *engine;
- ktime_t dt, last, timestamp;
- enum intel_engine_id id;
+ struct intel_gt *gt = rps_to_gt(rps);
+ ktime_t dt, last, timestamp = 0;
s64 max_busy[3] = {};
+ int i, j;
- timestamp = 0;
- for_each_engine(engine, rps_to_gt(rps), id) {
- s64 busy;
- int i;
+ /* Compare average occupancy over each engine group */
+ for (i = 0; i < ARRAY_SIZE(gt->engine_class); i++) {
+ s64 busy = 0;
+ int count = 0;
+
+ for (j = 0; j < ARRAY_SIZE(gt->engine_class[i]); j++) {
+ struct intel_engine_cs *engine;
- dt = intel_engine_get_busy_time(engine, &timestamp);
- last = engine->stats.rps;
- engine->stats.rps = dt;
+ engine = gt->engine_class[i][j];
+ if (!engine)
+ continue;
- busy = ktime_to_ns(ktime_sub(dt, last));
- for (i = 0; i < ARRAY_SIZE(max_busy); i++) {
- if (busy > max_busy[i])
- swap(busy, max_busy[i]);
+ dt = intel_engine_get_busy_time(engine, &timestamp);
+ last = engine->stats.rps;
+ engine->stats.rps = dt;
+
+ if (!intel_engine_pm_is_awake(engine))
+ continue;
+
+ busy += ktime_to_ns(ktime_sub(dt, last));
+ count++;
+ }
+
+ if (count > 1)
+ busy = div_u64(busy, count);
+ if (busy <= max_busy[ARRAY_SIZE(max_busy) - 1])
+ continue;
+
+ for (j = 0; j < ARRAY_SIZE(max_busy); j++) {
+ if (busy > max_busy[j])
+ swap(busy, max_busy[j]);
}
}
+
last = rps->pm_timestamp;
rps->pm_timestamp = timestamp;
--
2.34.0
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [Intel-gfx] [PATCH 3/3] drm/i915/gt: Improve "race-to-idle" at low frequencies
2021-11-17 22:49 [Intel-gfx] [PATCH 0/3] drm/i915/gt: RPS tuning for light media playback Vinay Belgaumkar
2021-11-17 22:49 ` [Intel-gfx] [PATCH 1/3] drm/i915/gt: Spread virtual engines over idle engines Vinay Belgaumkar
2021-11-17 22:49 ` [Intel-gfx] [PATCH 2/3] drm/i915/gt: Compare average group occupancy for RPS evaluation Vinay Belgaumkar
@ 2021-11-17 22:49 ` Vinay Belgaumkar
2021-11-22 18:44 ` Rodrigo Vivi
2021-11-23 17:37 ` Belgaumkar, Vinay
2021-11-17 23:59 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for drm/i915/gt: RPS tuning for light media playback Patchwork
` (6 subsequent siblings)
9 siblings, 2 replies; 21+ messages in thread
From: Vinay Belgaumkar @ 2021-11-17 22:49 UTC (permalink / raw)
To: intel-gfx, dri-devel; +Cc: Chris Wilson
From: Chris Wilson <chris@chris-wilson.co.uk>
While the power consumption is proportional to the frequency, there is
also a static draw for active gates. The longer we are able to powergate
(rc6), the lower the static draw. Thus there is a sweet spot in the
frequency/power curve where we run at a higher frequency in order to sleep
longer, aka race-to-idle. This is more evident at lower frequencies, so
let's look to bump the frequency if we think we will benefit by sleeping
longer at the higher frequency and so conserving power.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
---
drivers/gpu/drm/i915/gt/intel_rps.c | 31 ++++++++++++++++++++++++-----
1 file changed, 26 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/intel_rps.c b/drivers/gpu/drm/i915/gt/intel_rps.c
index 3675ac93ded0..6af3231982af 100644
--- a/drivers/gpu/drm/i915/gt/intel_rps.c
+++ b/drivers/gpu/drm/i915/gt/intel_rps.c
@@ -63,6 +63,22 @@ static void set(struct intel_uncore *uncore, i915_reg_t reg, u32 val)
intel_uncore_write_fw(uncore, reg, val);
}
+static bool race_to_idle(struct intel_rps *rps, u64 busy, u64 dt)
+{
+ unsigned int this = rps->cur_freq;
+ unsigned int next = rps->cur_freq + 1;
+ u64 next_dt = next * max(busy, dt);
+
+ /*
+ * Compare estimated time spent in rc6 at the next power bin. If
+ * we expect to sleep longer than the estimated increased power
+ * cost of running at a higher frequency, it will be reduced power
+ * consumption overall.
+ */
+ return (((next_dt - this * busy) >> 10) * this * this >
+ ((next_dt - next * busy) >> 10) * next * next);
+}
+
static void rps_timer(struct timer_list *t)
{
struct intel_rps *rps = from_timer(rps, t, timer);
@@ -133,7 +149,7 @@ static void rps_timer(struct timer_list *t)
if (!max_busy[i])
break;
- busy += div_u64(max_busy[i], 1 << i);
+ busy += max_busy[i] >> i;
}
GT_TRACE(rps_to_gt(rps),
"busy:%lld [%d%%], max:[%lld, %lld, %lld], interval:%d\n",
@@ -141,13 +157,18 @@ static void rps_timer(struct timer_list *t)
max_busy[0], max_busy[1], max_busy[2],
rps->pm_interval);
- if (100 * busy > rps->power.up_threshold * dt &&
- rps->cur_freq < rps->max_freq_softlimit) {
+ if (rps->cur_freq < rps->max_freq_softlimit &&
+ race_to_idle(rps, max_busy[0], dt)) {
+ rps->pm_iir |= GEN6_PM_RP_UP_THRESHOLD;
+ rps->pm_interval = 1;
+ schedule_work(&rps->work);
+ } else if (rps->cur_freq < rps->max_freq_softlimit &&
+ 100 * busy > rps->power.up_threshold * dt) {
rps->pm_iir |= GEN6_PM_RP_UP_THRESHOLD;
rps->pm_interval = 1;
schedule_work(&rps->work);
- } else if (100 * busy < rps->power.down_threshold * dt &&
- rps->cur_freq > rps->min_freq_softlimit) {
+ } else if (rps->cur_freq > rps->min_freq_softlimit &&
+ 100 * busy < rps->power.down_threshold * dt) {
rps->pm_iir |= GEN6_PM_RP_DOWN_THRESHOLD;
rps->pm_interval = 1;
schedule_work(&rps->work);
--
2.34.0
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [Intel-gfx] ✗ Fi.CI.SPARSE: warning for drm/i915/gt: RPS tuning for light media playback
2021-11-17 22:49 [Intel-gfx] [PATCH 0/3] drm/i915/gt: RPS tuning for light media playback Vinay Belgaumkar
` (2 preceding siblings ...)
2021-11-17 22:49 ` [Intel-gfx] [PATCH 3/3] drm/i915/gt: Improve "race-to-idle" at low frequencies Vinay Belgaumkar
@ 2021-11-17 23:59 ` Patchwork
2021-11-18 0:31 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
` (5 subsequent siblings)
9 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2021-11-17 23:59 UTC (permalink / raw)
To: Vinay Belgaumkar; +Cc: intel-gfx
== Series Details ==
Series: drm/i915/gt: RPS tuning for light media playback
URL : https://patchwork.freedesktop.org/series/97043/
State : warning
== Summary ==
$ dim sparse --fast origin/drm-tip
Sparse version: v0.6.2
Fast mode used, each commit won't be checked separately.
-
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:28:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:28:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:28:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:33:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:33:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:51:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:51:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:51:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:57:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:57:9: warning: trying to copy expression type 31
^ permalink raw reply [flat|nested] 21+ messages in thread
* [Intel-gfx] ✗ Fi.CI.BAT: failure for drm/i915/gt: RPS tuning for light media playback
2021-11-17 22:49 [Intel-gfx] [PATCH 0/3] drm/i915/gt: RPS tuning for light media playback Vinay Belgaumkar
` (3 preceding siblings ...)
2021-11-17 23:59 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for drm/i915/gt: RPS tuning for light media playback Patchwork
@ 2021-11-18 0:31 ` Patchwork
2021-11-18 20:30 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for drm/i915/gt: RPS tuning for light media playback (rev2) Patchwork
` (4 subsequent siblings)
9 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2021-11-18 0:31 UTC (permalink / raw)
To: Vinay Belgaumkar; +Cc: intel-gfx
== Series Details ==
Series: drm/i915/gt: RPS tuning for light media playback
URL : https://patchwork.freedesktop.org/series/97043/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_10897 -> Patchwork_21624
====================================================
Summary
-------
**FAILURE**
Serious unknown changes coming with Patchwork_21624 absolutely need to be
verified manually.
If you think the reported changes have nothing to do with the changes
introduced in Patchwork_21624, please notify your bug team to allow them
to document this new failure mode, which will reduce false positives in CI.
External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21624/index.html
Participating hosts (39 -> 35)
------------------------------
Additional (1): fi-tgl-u2
Missing (5): bat-dg1-6 fi-hsw-4200u fi-bsw-cyan fi-ctg-p8600 bat-jsl-1
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in Patchwork_21624:
### IGT changes ###
#### Possible regressions ####
* igt@gem_lmem_swapping@verify-random:
- fi-tgl-u2: NOTRUN -> [SKIP][1] +3 similar issues
[1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21624/fi-tgl-u2/igt@gem_lmem_swapping@verify-random.html
* igt@kms_psr@primary_page_flip:
- fi-skl-6600u: [PASS][2] -> [INCOMPLETE][3]
[2]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10897/fi-skl-6600u/igt@kms_psr@primary_page_flip.html
[3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21624/fi-skl-6600u/igt@kms_psr@primary_page_flip.html
Known issues
------------
Here are the changes found in Patchwork_21624 that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@gem_huc_copy@huc-copy:
- fi-tgl-u2: NOTRUN -> [SKIP][4] ([i915#2190])
[4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21624/fi-tgl-u2/igt@gem_huc_copy@huc-copy.html
* igt@kms_chamelium@dp-hpd-fast:
- fi-tgl-u2: NOTRUN -> [SKIP][5] ([fdo#109284] / [fdo#111827]) +8 similar issues
[5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21624/fi-tgl-u2/igt@kms_chamelium@dp-hpd-fast.html
* igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic:
- fi-tgl-u2: NOTRUN -> [SKIP][6] ([i915#4103]) +1 similar issue
[6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21624/fi-tgl-u2/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic.html
* igt@kms_force_connector_basic@force-load-detect:
- fi-tgl-u2: NOTRUN -> [SKIP][7] ([fdo#109285])
[7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21624/fi-tgl-u2/igt@kms_force_connector_basic@force-load-detect.html
* igt@prime_vgem@basic-userptr:
- fi-tgl-u2: NOTRUN -> [SKIP][8] ([i915#3301])
[8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21624/fi-tgl-u2/igt@prime_vgem@basic-userptr.html
* igt@runner@aborted:
- fi-skl-6600u: NOTRUN -> [FAIL][9] ([i915#2722] / [i915#3363] / [i915#4312])
[9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21624/fi-skl-6600u/igt@runner@aborted.html
#### Possible fixes ####
* igt@i915_module_load@reload:
- fi-kbl-soraka: [DMESG-WARN][10] ([i915#1982]) -> [PASS][11]
[10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10897/fi-kbl-soraka/igt@i915_module_load@reload.html
[11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21624/fi-kbl-soraka/igt@i915_module_load@reload.html
* igt@i915_selftest@live@gt_heartbeat:
- {fi-tgl-dsi}: [DMESG-FAIL][12] ([i915#541]) -> [PASS][13]
[12]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10897/fi-tgl-dsi/igt@i915_selftest@live@gt_heartbeat.html
[13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21624/fi-tgl-dsi/igt@i915_selftest@live@gt_heartbeat.html
{name}: This element is suppressed. This means it is ignored when computing
the status of the difference (SUCCESS, WARNING, or FAILURE).
[fdo#109284]: https://bugs.freedesktop.org/show_bug.cgi?id=109284
[fdo#109285]: https://bugs.freedesktop.org/show_bug.cgi?id=109285
[fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
[i915#1982]: https://gitlab.freedesktop.org/drm/intel/issues/1982
[i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190
[i915#2722]: https://gitlab.freedesktop.org/drm/intel/issues/2722
[i915#3301]: https://gitlab.freedesktop.org/drm/intel/issues/3301
[i915#3303]: https://gitlab.freedesktop.org/drm/intel/issues/3303
[i915#3363]: https://gitlab.freedesktop.org/drm/intel/issues/3363
[i915#4103]: https://gitlab.freedesktop.org/drm/intel/issues/4103
[i915#4312]: https://gitlab.freedesktop.org/drm/intel/issues/4312
[i915#541]: https://gitlab.freedesktop.org/drm/intel/issues/541
Build changes
-------------
* Linux: CI_DRM_10897 -> Patchwork_21624
CI-20190529: 20190529
CI_DRM_10897: 6afd200919ae894c775326ea99e607e5a8adf51b @ git://anongit.freedesktop.org/gfx-ci/linux
IGT_6284: 2971051d07d02da90c20ccb842e76ee711b02ecb @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
Patchwork_21624: a62e0f39f79fadcd7c6d1a38363ad8b283e6fb42 @ git://anongit.freedesktop.org/gfx-ci/linux
== Linux commits ==
a62e0f39f79f drm/i915/gt: Improve "race-to-idle" at low frequencies
f81a7adf2f72 drm/i915/gt: Compare average group occupancy for RPS evaluation
0aab8003aedf drm/i915/gt: Spread virtual engines over idle engines
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21624/index.html
^ permalink raw reply [flat|nested] 21+ messages in thread
* [Intel-gfx] ✗ Fi.CI.SPARSE: warning for drm/i915/gt: RPS tuning for light media playback (rev2)
2021-11-17 22:49 [Intel-gfx] [PATCH 0/3] drm/i915/gt: RPS tuning for light media playback Vinay Belgaumkar
` (4 preceding siblings ...)
2021-11-18 0:31 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
@ 2021-11-18 20:30 ` Patchwork
2021-11-18 20:59 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
` (3 subsequent siblings)
9 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2021-11-18 20:30 UTC (permalink / raw)
To: Vinay Belgaumkar; +Cc: intel-gfx
== Series Details ==
Series: drm/i915/gt: RPS tuning for light media playback (rev2)
URL : https://patchwork.freedesktop.org/series/97043/
State : warning
== Summary ==
$ dim sparse --fast origin/drm-tip
Sparse version: v0.6.2
Fast mode used, each commit won't be checked separately.
-
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:28:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:28:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:28:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:33:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:33:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:51:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:51:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:51:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:57:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:57:9: warning: trying to copy expression type 31
^ permalink raw reply [flat|nested] 21+ messages in thread
* [Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/gt: RPS tuning for light media playback (rev2)
2021-11-17 22:49 [Intel-gfx] [PATCH 0/3] drm/i915/gt: RPS tuning for light media playback Vinay Belgaumkar
` (5 preceding siblings ...)
2021-11-18 20:30 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for drm/i915/gt: RPS tuning for light media playback (rev2) Patchwork
@ 2021-11-18 20:59 ` Patchwork
2021-11-20 0:37 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for drm/i915/gt: RPS tuning for light media playback (rev3) Patchwork
` (2 subsequent siblings)
9 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2021-11-18 20:59 UTC (permalink / raw)
To: Vinay Belgaumkar; +Cc: intel-gfx
== Series Details ==
Series: drm/i915/gt: RPS tuning for light media playback (rev2)
URL : https://patchwork.freedesktop.org/series/97043/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_10900 -> Patchwork_21634
====================================================
Summary
-------
**SUCCESS**
No regressions found.
External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21634/index.html
Participating hosts (38 -> 32)
------------------------------
Additional (1): fi-tgl-1115g4
Missing (7): fi-kbl-soraka fi-hsw-4200u fi-bsw-cyan fi-ctg-p8600 fi-kbl-x1275 bat-jsl-2 bat-jsl-1
Known issues
------------
Here are the changes found in Patchwork_21634 that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@amdgpu/amd_basic@query-info:
- fi-tgl-1115g4: NOTRUN -> [SKIP][1] ([fdo#109315])
[1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21634/fi-tgl-1115g4/igt@amdgpu/amd_basic@query-info.html
* igt@amdgpu/amd_cs_nop@nop-gfx0:
- fi-tgl-1115g4: NOTRUN -> [SKIP][2] ([fdo#109315] / [i915#2575]) +16 similar issues
[2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21634/fi-tgl-1115g4/igt@amdgpu/amd_cs_nop@nop-gfx0.html
* igt@amdgpu/amd_cs_nop@sync-fork-gfx0:
- fi-skl-6600u: NOTRUN -> [SKIP][3] ([fdo#109271]) +25 similar issues
[3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21634/fi-skl-6600u/igt@amdgpu/amd_cs_nop@sync-fork-gfx0.html
* igt@core_hotunplug@unbind-rebind:
- fi-tgl-u2: [PASS][4] -> [INCOMPLETE][5] ([i915#4006])
[4]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10900/fi-tgl-u2/igt@core_hotunplug@unbind-rebind.html
[5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21634/fi-tgl-u2/igt@core_hotunplug@unbind-rebind.html
* igt@gem_huc_copy@huc-copy:
- fi-skl-6600u: NOTRUN -> [SKIP][6] ([fdo#109271] / [i915#2190])
[6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21634/fi-skl-6600u/igt@gem_huc_copy@huc-copy.html
- fi-tgl-1115g4: NOTRUN -> [SKIP][7] ([i915#2190])
[7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21634/fi-tgl-1115g4/igt@gem_huc_copy@huc-copy.html
* igt@gem_lmem_swapping@basic:
- fi-tgl-1115g4: NOTRUN -> [SKIP][8] ([i915#4555])
[8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21634/fi-tgl-1115g4/igt@gem_lmem_swapping@basic.html
* igt@gem_lmem_swapping@verify-random:
- fi-tgl-1115g4: NOTRUN -> [SKIP][9] ([i915#4555] / [i915#4565]) +2 similar issues
[9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21634/fi-tgl-1115g4/igt@gem_lmem_swapping@verify-random.html
* igt@i915_pm_backlight@basic-brightness:
- fi-tgl-1115g4: NOTRUN -> [SKIP][10] ([i915#1155])
[10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21634/fi-tgl-1115g4/igt@i915_pm_backlight@basic-brightness.html
* igt@kms_chamelium@common-hpd-after-suspend:
- fi-tgl-1115g4: NOTRUN -> [SKIP][11] ([fdo#111827]) +8 similar issues
[11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21634/fi-tgl-1115g4/igt@kms_chamelium@common-hpd-after-suspend.html
* igt@kms_chamelium@vga-edid-read:
- fi-skl-6600u: NOTRUN -> [SKIP][12] ([fdo#109271] / [fdo#111827]) +8 similar issues
[12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21634/fi-skl-6600u/igt@kms_chamelium@vga-edid-read.html
* igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic:
- fi-tgl-1115g4: NOTRUN -> [SKIP][13] ([i915#4103]) +1 similar issue
[13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21634/fi-tgl-1115g4/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic.html
* igt@kms_force_connector_basic@force-load-detect:
- fi-tgl-1115g4: NOTRUN -> [SKIP][14] ([fdo#109285])
[14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21634/fi-tgl-1115g4/igt@kms_force_connector_basic@force-load-detect.html
* igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d:
- fi-skl-6600u: NOTRUN -> [SKIP][15] ([fdo#109271] / [i915#533])
[15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21634/fi-skl-6600u/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d.html
* igt@kms_psr@primary_mmap_gtt:
- fi-tgl-1115g4: NOTRUN -> [SKIP][16] ([i915#1072]) +3 similar issues
[16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21634/fi-tgl-1115g4/igt@kms_psr@primary_mmap_gtt.html
* igt@prime_vgem@basic-userptr:
- fi-tgl-1115g4: NOTRUN -> [SKIP][17] ([i915#3301])
[17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21634/fi-tgl-1115g4/igt@prime_vgem@basic-userptr.html
* igt@runner@aborted:
- fi-bdw-5557u: NOTRUN -> [FAIL][18] ([i915#1602] / [i915#2426] / [i915#4312])
[18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21634/fi-bdw-5557u/igt@runner@aborted.html
- fi-tgl-u2: NOTRUN -> [FAIL][19] ([i915#1602] / [i915#2722] / [i915#4312])
[19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21634/fi-tgl-u2/igt@runner@aborted.html
#### Possible fixes ####
* igt@gem_exec_suspend@basic-s3:
- fi-bdw-5557u: [INCOMPLETE][20] ([i915#146]) -> [PASS][21]
[20]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10900/fi-bdw-5557u/igt@gem_exec_suspend@basic-s3.html
[21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21634/fi-bdw-5557u/igt@gem_exec_suspend@basic-s3.html
* igt@gem_flink_basic@bad-flink:
- fi-skl-6600u: [FAIL][22] ([i915#4547]) -> [PASS][23]
[22]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10900/fi-skl-6600u/igt@gem_flink_basic@bad-flink.html
[23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21634/fi-skl-6600u/igt@gem_flink_basic@bad-flink.html
#### Warnings ####
* igt@gem_lmem_swapping@verify-random:
- fi-tgl-u2: [SKIP][24] ([i915#4555]) -> [SKIP][25] ([i915#4555] / [i915#4565]) +2 similar issues
[24]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10900/fi-tgl-u2/igt@gem_lmem_swapping@verify-random.html
[25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21634/fi-tgl-u2/igt@gem_lmem_swapping@verify-random.html
{name}: This element is suppressed. This means it is ignored when computing
the status of the difference (SUCCESS, WARNING, or FAILURE).
[fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
[fdo#109285]: https://bugs.freedesktop.org/show_bug.cgi?id=109285
[fdo#109315]: https://bugs.freedesktop.org/show_bug.cgi?id=109315
[fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
[i915#1072]: https://gitlab.freedesktop.org/drm/intel/issues/1072
[i915#1155]: https://gitlab.freedesktop.org/drm/intel/issues/1155
[i915#146]: https://gitlab.freedesktop.org/drm/intel/issues/146
[i915#1602]: https://gitlab.freedesktop.org/drm/intel/issues/1602
[i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190
[i915#2426]: https://gitlab.freedesktop.org/drm/intel/issues/2426
[i915#2575]: https://gitlab.freedesktop.org/drm/intel/issues/2575
[i915#2722]: https://gitlab.freedesktop.org/drm/intel/issues/2722
[i915#3301]: https://gitlab.freedesktop.org/drm/intel/issues/3301
[i915#4006]: https://gitlab.freedesktop.org/drm/intel/issues/4006
[i915#4103]: https://gitlab.freedesktop.org/drm/intel/issues/4103
[i915#4312]: https://gitlab.freedesktop.org/drm/intel/issues/4312
[i915#4547]: https://gitlab.freedesktop.org/drm/intel/issues/4547
[i915#4555]: https://gitlab.freedesktop.org/drm/intel/issues/4555
[i915#4565]: https://gitlab.freedesktop.org/drm/intel/issues/4565
[i915#533]: https://gitlab.freedesktop.org/drm/intel/issues/533
Build changes
-------------
* Linux: CI_DRM_10900 -> Patchwork_21634
CI-20190529: 20190529
CI_DRM_10900: b50839f33180500c64a505623ab77829b869a57c @ git://anongit.freedesktop.org/gfx-ci/linux
IGT_6285: 2e0355faad5c2e81cd6705b76e529ce526c7c9bf @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
Patchwork_21634: df72fb71a991e5048bb055a1d4990622e1b01240 @ git://anongit.freedesktop.org/gfx-ci/linux
== Linux commits ==
df72fb71a991 drm/i915/gt: Improve "race-to-idle" at low frequencies
cb8653337d0c drm/i915/gt: Compare average group occupancy for RPS evaluation
ac85c6fff4b5 drm/i915/gt: Spread virtual engines over idle engines
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21634/index.html
^ permalink raw reply [flat|nested] 21+ messages in thread
* [Intel-gfx] ✗ Fi.CI.SPARSE: warning for drm/i915/gt: RPS tuning for light media playback (rev3)
2021-11-17 22:49 [Intel-gfx] [PATCH 0/3] drm/i915/gt: RPS tuning for light media playback Vinay Belgaumkar
` (6 preceding siblings ...)
2021-11-18 20:59 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
@ 2021-11-20 0:37 ` Patchwork
2021-11-20 1:09 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2021-11-20 5:13 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
9 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2021-11-20 0:37 UTC (permalink / raw)
To: Vinay Belgaumkar; +Cc: intel-gfx
== Series Details ==
Series: drm/i915/gt: RPS tuning for light media playback (rev3)
URL : https://patchwork.freedesktop.org/series/97043/
State : warning
== Summary ==
$ dim sparse --fast origin/drm-tip
Sparse version: v0.6.2
Fast mode used; each commit won't be checked separately.
-
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:28:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:28:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:28:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:33:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:33:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:51:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:51:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:51:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:57:9: warning: trying to copy expression type 31
+drivers/gpu/drm/i915/gt/intel_engine_stats.h:57:9: warning: trying to copy expression type 31
* [Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/gt: RPS tuning for light media playback (rev3)
2021-11-17 22:49 [Intel-gfx] [PATCH 0/3] drm/i915/gt: RPS tuning for light media playback Vinay Belgaumkar
` (7 preceding siblings ...)
2021-11-20 0:37 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for drm/i915/gt: RPS tuning for light media playback (rev3) Patchwork
@ 2021-11-20 1:09 ` Patchwork
2021-11-20 5:13 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
9 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2021-11-20 1:09 UTC (permalink / raw)
To: Vinay Belgaumkar; +Cc: intel-gfx
== Series Details ==
Series: drm/i915/gt: RPS tuning for light media playback (rev3)
URL : https://patchwork.freedesktop.org/series/97043/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_10908 -> Patchwork_21643
====================================================
Summary
-------
**SUCCESS**
No regressions found.
External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/index.html
Participating hosts (41 -> 36)
------------------------------
Additional (3): fi-kbl-soraka fi-tgl-1115g4 fi-icl-u2
Missing (8): bat-dg1-6 bat-dg1-5 fi-hsw-4200u fi-bsw-cyan bat-adlp-6 fi-ctg-p8600 bat-jsl-2 bat-jsl-1
Known issues
------------
Here are the changes found in Patchwork_21643 that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@amdgpu/amd_basic@query-info:
- fi-bsw-kefka: NOTRUN -> [SKIP][1] ([fdo#109271]) +17 similar issues
[1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-bsw-kefka/igt@amdgpu/amd_basic@query-info.html
- fi-tgl-1115g4: NOTRUN -> [SKIP][2] ([fdo#109315])
[2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-tgl-1115g4/igt@amdgpu/amd_basic@query-info.html
* igt@amdgpu/amd_cs_nop@fork-gfx0:
- fi-icl-u2: NOTRUN -> [SKIP][3] ([fdo#109315]) +17 similar issues
[3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-icl-u2/igt@amdgpu/amd_cs_nop@fork-gfx0.html
* igt@amdgpu/amd_cs_nop@nop-gfx0:
- fi-tgl-1115g4: NOTRUN -> [SKIP][4] ([fdo#109315] / [i915#2575]) +16 similar issues
[4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-tgl-1115g4/igt@amdgpu/amd_cs_nop@nop-gfx0.html
* igt@gem_exec_fence@basic-busy@bcs0:
- fi-kbl-soraka: NOTRUN -> [SKIP][5] ([fdo#109271]) +12 similar issues
[5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-kbl-soraka/igt@gem_exec_fence@basic-busy@bcs0.html
* igt@gem_exec_suspend@basic-s3:
- fi-bdw-5557u: [PASS][6] -> [INCOMPLETE][7] ([i915#146])
[6]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/fi-bdw-5557u/igt@gem_exec_suspend@basic-s3.html
[7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-bdw-5557u/igt@gem_exec_suspend@basic-s3.html
- fi-bdw-samus: NOTRUN -> [INCOMPLETE][8] ([i915#2539])
[8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-bdw-samus/igt@gem_exec_suspend@basic-s3.html
* igt@gem_huc_copy@huc-copy:
- fi-kbl-soraka: NOTRUN -> [SKIP][9] ([fdo#109271] / [i915#2190])
[9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-kbl-soraka/igt@gem_huc_copy@huc-copy.html
- fi-tgl-1115g4: NOTRUN -> [SKIP][10] ([i915#2190])
[10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-tgl-1115g4/igt@gem_huc_copy@huc-copy.html
- fi-icl-u2: NOTRUN -> [SKIP][11] ([i915#2190])
[11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-icl-u2/igt@gem_huc_copy@huc-copy.html
* igt@gem_lmem_swapping@basic:
- fi-tgl-1115g4: NOTRUN -> [SKIP][12] ([i915#4555])
[12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-tgl-1115g4/igt@gem_lmem_swapping@basic.html
* igt@gem_lmem_swapping@parallel-random-engines:
- fi-icl-u2: NOTRUN -> [SKIP][13] ([i915#4555]) +3 similar issues
[13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-icl-u2/igt@gem_lmem_swapping@parallel-random-engines.html
* igt@gem_lmem_swapping@verify-random:
- fi-tgl-1115g4: NOTRUN -> [SKIP][14] ([i915#4555] / [i915#4565]) +2 similar issues
[14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-tgl-1115g4/igt@gem_lmem_swapping@verify-random.html
* igt@i915_pm_backlight@basic-brightness:
- fi-tgl-1115g4: NOTRUN -> [SKIP][15] ([i915#1155])
[15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-tgl-1115g4/igt@i915_pm_backlight@basic-brightness.html
* igt@i915_selftest@live@gt_pm:
- fi-kbl-soraka: NOTRUN -> [DMESG-FAIL][16] ([i915#1886] / [i915#2291])
[16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-kbl-soraka/igt@i915_selftest@live@gt_pm.html
* igt@kms_chamelium@common-hpd-after-suspend:
- fi-tgl-1115g4: NOTRUN -> [SKIP][17] ([fdo#111827]) +8 similar issues
[17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-tgl-1115g4/igt@kms_chamelium@common-hpd-after-suspend.html
- fi-kbl-soraka: NOTRUN -> [SKIP][18] ([fdo#109271] / [fdo#111827]) +8 similar issues
[18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-kbl-soraka/igt@kms_chamelium@common-hpd-after-suspend.html
* igt@kms_chamelium@hdmi-hpd-fast:
- fi-icl-u2: NOTRUN -> [SKIP][19] ([fdo#111827]) +8 similar issues
[19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-icl-u2/igt@kms_chamelium@hdmi-hpd-fast.html
* igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic:
- fi-tgl-1115g4: NOTRUN -> [SKIP][20] ([i915#4103]) +1 similar issue
[20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-tgl-1115g4/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic.html
* igt@kms_cursor_legacy@basic-busy-flip-before-cursor-legacy:
- fi-icl-u2: NOTRUN -> [SKIP][21] ([fdo#109278]) +2 similar issues
[21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-icl-u2/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-legacy.html
* igt@kms_force_connector_basic@force-load-detect:
- fi-tgl-1115g4: NOTRUN -> [SKIP][22] ([fdo#109285])
[22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-tgl-1115g4/igt@kms_force_connector_basic@force-load-detect.html
- fi-icl-u2: NOTRUN -> [SKIP][23] ([fdo#109285])
[23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-icl-u2/igt@kms_force_connector_basic@force-load-detect.html
* igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d:
- fi-kbl-soraka: NOTRUN -> [SKIP][24] ([fdo#109271] / [i915#533])
[24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-kbl-soraka/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d.html
* igt@kms_psr@primary_mmap_gtt:
- fi-tgl-1115g4: NOTRUN -> [SKIP][25] ([i915#1072]) +3 similar issues
[25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-tgl-1115g4/igt@kms_psr@primary_mmap_gtt.html
* igt@prime_vgem@basic-userptr:
- fi-icl-u2: NOTRUN -> [SKIP][26] ([i915#3301])
[26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-icl-u2/igt@prime_vgem@basic-userptr.html
- fi-tgl-1115g4: NOTRUN -> [SKIP][27] ([i915#3301])
[27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-tgl-1115g4/igt@prime_vgem@basic-userptr.html
#### Possible fixes ####
* igt@core_hotunplug@unbind-rebind:
- fi-tgl-u2: [INCOMPLETE][28] ([i915#4006]) -> [PASS][29]
[28]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/fi-tgl-u2/igt@core_hotunplug@unbind-rebind.html
[29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-tgl-u2/igt@core_hotunplug@unbind-rebind.html
* igt@gem_exec_suspend@basic-s0:
- fi-bdw-samus: [INCOMPLETE][30] ([i915#146] / [i915#2539]) -> [PASS][31]
[30]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/fi-bdw-samus/igt@gem_exec_suspend@basic-s0.html
[31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-bdw-samus/igt@gem_exec_suspend@basic-s0.html
* igt@i915_selftest@live@execlists:
- fi-bsw-kefka: [INCOMPLETE][32] ([i915#2940]) -> [PASS][33]
[32]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/fi-bsw-kefka/igt@i915_selftest@live@execlists.html
[33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/fi-bsw-kefka/igt@i915_selftest@live@execlists.html
[fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
[fdo#109278]: https://bugs.freedesktop.org/show_bug.cgi?id=109278
[fdo#109285]: https://bugs.freedesktop.org/show_bug.cgi?id=109285
[fdo#109315]: https://bugs.freedesktop.org/show_bug.cgi?id=109315
[fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
[i915#1072]: https://gitlab.freedesktop.org/drm/intel/issues/1072
[i915#1155]: https://gitlab.freedesktop.org/drm/intel/issues/1155
[i915#146]: https://gitlab.freedesktop.org/drm/intel/issues/146
[i915#1886]: https://gitlab.freedesktop.org/drm/intel/issues/1886
[i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190
[i915#2291]: https://gitlab.freedesktop.org/drm/intel/issues/2291
[i915#2539]: https://gitlab.freedesktop.org/drm/intel/issues/2539
[i915#2575]: https://gitlab.freedesktop.org/drm/intel/issues/2575
[i915#2940]: https://gitlab.freedesktop.org/drm/intel/issues/2940
[i915#3301]: https://gitlab.freedesktop.org/drm/intel/issues/3301
[i915#4006]: https://gitlab.freedesktop.org/drm/intel/issues/4006
[i915#4103]: https://gitlab.freedesktop.org/drm/intel/issues/4103
[i915#4555]: https://gitlab.freedesktop.org/drm/intel/issues/4555
[i915#4565]: https://gitlab.freedesktop.org/drm/intel/issues/4565
[i915#533]: https://gitlab.freedesktop.org/drm/intel/issues/533
Build changes
-------------
* Linux: CI_DRM_10908 -> Patchwork_21643
CI-20190529: 20190529
CI_DRM_10908: 2b9df9dd40af61e0178fef5b88b1e8baae4194c6 @ git://anongit.freedesktop.org/gfx-ci/linux
IGT_6285: 2e0355faad5c2e81cd6705b76e529ce526c7c9bf @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
Patchwork_21643: 705bb4bd7554eb2d5b7b10e9e0b1faf97ab76499 @ git://anongit.freedesktop.org/gfx-ci/linux
== Linux commits ==
705bb4bd7554 drm/i915/gt: Improve "race-to-idle" at low frequencies
f71886e10eff drm/i915/gt: Compare average group occupancy for RPS evaluation
81eb5129f87e drm/i915/gt: Spread virtual engines over idle engines
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/index.html
* [Intel-gfx] ✗ Fi.CI.IGT: failure for drm/i915/gt: RPS tuning for light media playback (rev3)
2021-11-17 22:49 [Intel-gfx] [PATCH 0/3] drm/i915/gt: RPS tuning for light media playback Vinay Belgaumkar
` (8 preceding siblings ...)
2021-11-20 1:09 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
@ 2021-11-20 5:13 ` Patchwork
9 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2021-11-20 5:13 UTC (permalink / raw)
To: Vinay Belgaumkar; +Cc: intel-gfx
== Series Details ==
Series: drm/i915/gt: RPS tuning for light media playback (rev3)
URL : https://patchwork.freedesktop.org/series/97043/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_10908_full -> Patchwork_21643_full
====================================================
Summary
-------
**FAILURE**
Serious unknown changes introduced with Patchwork_21643_full need to be
verified manually.
If you think the reported changes have nothing to do with the changes
introduced in Patchwork_21643_full, please notify your bug team to allow them
to document this new failure mode, which will reduce false positives in CI.
Participating hosts (11 -> 11)
------------------------------
No changes in participating hosts
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in Patchwork_21643_full:
### IGT changes ###
#### Possible regressions ####
* igt@kms_plane_alpha_blend@pipe-c-coverage-7efc:
- shard-tglb: [PASS][1] -> [INCOMPLETE][2]
[1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-tglb7/igt@kms_plane_alpha_blend@pipe-c-coverage-7efc.html
[2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-tglb8/igt@kms_plane_alpha_blend@pipe-c-coverage-7efc.html
* igt@prime_mmap_coherency@write:
- shard-kbl: [PASS][3] -> [INCOMPLETE][4] +1 similar issue
[3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-kbl6/igt@prime_mmap_coherency@write.html
[4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-kbl7/igt@prime_mmap_coherency@write.html
#### Warnings ####
* igt@kms_content_protection@legacy:
- shard-kbl: [TIMEOUT][5] ([i915#1319]) -> [INCOMPLETE][6]
[5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-kbl2/igt@kms_content_protection@legacy.html
[6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-kbl7/igt@kms_content_protection@legacy.html
Known issues
------------
Here are the changes found in Patchwork_21643_full that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@gem_create@create-massive:
- shard-kbl: NOTRUN -> [DMESG-WARN][7] ([i915#3002])
[7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-kbl4/igt@gem_create@create-massive.html
* igt@gem_eio@unwedge-stress:
- shard-iclb: [PASS][8] -> [TIMEOUT][9] ([i915#2369] / [i915#2481] / [i915#3070])
[8]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-iclb8/igt@gem_eio@unwedge-stress.html
[9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-iclb3/igt@gem_eio@unwedge-stress.html
* igt@gem_exec_capture@pi@bcs0:
- shard-iclb: [PASS][10] -> [INCOMPLETE][11] ([i915#2369] / [i915#3371])
[10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-iclb6/igt@gem_exec_capture@pi@bcs0.html
[11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-iclb4/igt@gem_exec_capture@pi@bcs0.html
* igt@gem_exec_fair@basic-none@vcs0:
- shard-kbl: NOTRUN -> [FAIL][12] ([i915#2842]) +2 similar issues
[12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-kbl2/igt@gem_exec_fair@basic-none@vcs0.html
* igt@gem_exec_fair@basic-pace@rcs0:
- shard-kbl: [PASS][13] -> [FAIL][14] ([i915#2842])
[13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-kbl2/igt@gem_exec_fair@basic-pace@rcs0.html
[14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-kbl6/igt@gem_exec_fair@basic-pace@rcs0.html
* igt@gem_exec_fair@basic-pace@vecs0:
- shard-kbl: [PASS][15] -> [SKIP][16] ([fdo#109271])
[15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-kbl2/igt@gem_exec_fair@basic-pace@vecs0.html
[16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-kbl6/igt@gem_exec_fair@basic-pace@vecs0.html
- shard-tglb: [PASS][17] -> [FAIL][18] ([i915#2842])
[17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-tglb2/igt@gem_exec_fair@basic-pace@vecs0.html
[18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-tglb2/igt@gem_exec_fair@basic-pace@vecs0.html
* igt@gem_exec_fair@basic-throttle@rcs0:
- shard-glk: [PASS][19] -> [FAIL][20] ([i915#2842]) +1 similar issue
[19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-glk5/igt@gem_exec_fair@basic-throttle@rcs0.html
[20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-glk2/igt@gem_exec_fair@basic-throttle@rcs0.html
- shard-iclb: [PASS][21] -> [FAIL][22] ([i915#2849])
[21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-iclb1/igt@gem_exec_fair@basic-throttle@rcs0.html
[22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-iclb4/igt@gem_exec_fair@basic-throttle@rcs0.html
* igt@gem_exec_params@no-vebox:
- shard-tglb: NOTRUN -> [SKIP][23] ([fdo#109283])
[23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-tglb3/igt@gem_exec_params@no-vebox.html
* igt@gem_pxp@create-regular-context-2:
- shard-tglb: NOTRUN -> [SKIP][24] ([i915#4270])
[24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-tglb3/igt@gem_pxp@create-regular-context-2.html
* igt@gem_userptr_blits@input-checking:
- shard-apl: NOTRUN -> [DMESG-WARN][25] ([i915#3002])
[25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-apl4/igt@gem_userptr_blits@input-checking.html
* igt@i915_pm_backlight@fade_with_dpms:
- shard-kbl: NOTRUN -> [SKIP][26] ([fdo#109271]) +34 similar issues
[26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-kbl2/igt@i915_pm_backlight@fade_with_dpms.html
* igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-180-async-flip:
- shard-skl: NOTRUN -> [FAIL][27] ([i915#3743])
[27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-skl9/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html
* igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-0:
- shard-tglb: NOTRUN -> [SKIP][28] ([fdo#111615]) +1 similar issue
[28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-tglb2/igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-0.html
* igt@kms_ccs@pipe-a-missing-ccs-buffer-y_tiled_gen12_rc_ccs_cc:
- shard-apl: NOTRUN -> [SKIP][29] ([fdo#109271] / [i915#3886]) +3 similar issues
[29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-apl4/igt@kms_ccs@pipe-a-missing-ccs-buffer-y_tiled_gen12_rc_ccs_cc.html
* igt@kms_ccs@pipe-b-crc-primary-rotation-180-y_tiled_gen12_mc_ccs:
- shard-tglb: NOTRUN -> [SKIP][30] ([i915#3689] / [i915#3886])
[30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-tglb3/igt@kms_ccs@pipe-b-crc-primary-rotation-180-y_tiled_gen12_mc_ccs.html
* igt@kms_ccs@pipe-b-random-ccs-data-yf_tiled_ccs:
- shard-tglb: NOTRUN -> [SKIP][31] ([fdo#111615] / [i915#3689]) +1 similar issue
[31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-tglb2/igt@kms_ccs@pipe-b-random-ccs-data-yf_tiled_ccs.html
* igt@kms_ccs@pipe-c-missing-ccs-buffer-y_tiled_gen12_mc_ccs:
- shard-kbl: NOTRUN -> [SKIP][32] ([fdo#109271] / [i915#3886]) +1 similar issue
[32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-kbl2/igt@kms_ccs@pipe-c-missing-ccs-buffer-y_tiled_gen12_mc_ccs.html
* igt@kms_ccs@pipe-d-random-ccs-data-y_tiled_gen12_mc_ccs:
- shard-tglb: NOTRUN -> [SKIP][33] ([i915#3689])
[33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-tglb3/igt@kms_ccs@pipe-d-random-ccs-data-y_tiled_gen12_mc_ccs.html
* igt@kms_chamelium@hdmi-audio-edid:
- shard-kbl: NOTRUN -> [SKIP][34] ([fdo#109271] / [fdo#111827]) +1 similar issue
[34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-kbl2/igt@kms_chamelium@hdmi-audio-edid.html
* igt@kms_chamelium@vga-hpd:
- shard-tglb: NOTRUN -> [SKIP][35] ([fdo#109284] / [fdo#111827]) +1 similar issue
[35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-tglb3/igt@kms_chamelium@vga-hpd.html
* igt@kms_content_protection@mei_interface:
- shard-tglb: NOTRUN -> [SKIP][36] ([fdo#111828])
[36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-tglb3/igt@kms_content_protection@mei_interface.html
* igt@kms_cursor_crc@pipe-a-cursor-max-size-offscreen:
- shard-tglb: NOTRUN -> [SKIP][37] ([i915#3359])
[37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-tglb3/igt@kms_cursor_crc@pipe-a-cursor-max-size-offscreen.html
* igt@kms_cursor_crc@pipe-b-cursor-suspend:
- shard-tglb: [PASS][38] -> [INCOMPLETE][39] ([i915#2411] / [i915#456])
[38]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-tglb3/igt@kms_cursor_crc@pipe-b-cursor-suspend.html
[39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-tglb7/igt@kms_cursor_crc@pipe-b-cursor-suspend.html
* igt@kms_cursor_crc@pipe-c-cursor-32x32-rapid-movement:
- shard-tglb: NOTRUN -> [SKIP][40] ([i915#3319])
[40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-tglb2/igt@kms_cursor_crc@pipe-c-cursor-32x32-rapid-movement.html
* igt@kms_cursor_crc@pipe-c-cursor-512x170-onscreen:
- shard-tglb: NOTRUN -> [SKIP][41] ([fdo#109279] / [i915#3359])
[41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-tglb3/igt@kms_cursor_crc@pipe-c-cursor-512x170-onscreen.html
* igt@kms_cursor_legacy@cursor-vs-flip-atomic-transitions-varying-size:
- shard-iclb: [PASS][42] -> [FAIL][43] ([i915#2370])
[42]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-iclb7/igt@kms_cursor_legacy@cursor-vs-flip-atomic-transitions-varying-size.html
[43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-iclb7/igt@kms_cursor_legacy@cursor-vs-flip-atomic-transitions-varying-size.html
* igt@kms_flip@flip-vs-suspend-interruptible@a-dp1:
- shard-kbl: [PASS][44] -> [DMESG-WARN][45] ([i915#180]) +4 similar issues
[44]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-kbl2/igt@kms_flip@flip-vs-suspend-interruptible@a-dp1.html
[45]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-kbl7/igt@kms_flip@flip-vs-suspend-interruptible@a-dp1.html
* igt@kms_flip@flip-vs-suspend@a-edp1:
- shard-skl: [PASS][46] -> [INCOMPLETE][47] ([i915#198]) +1 similar issue
[46]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-skl4/igt@kms_flip@flip-vs-suspend@a-edp1.html
[47]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-skl1/igt@kms_flip@flip-vs-suspend@a-edp1.html
* igt@kms_frontbuffer_tracking@fbc-1p-primscrn-cur-indfb-draw-blt:
- shard-skl: NOTRUN -> [SKIP][48] ([fdo#109271]) +5 similar issues
[48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-skl8/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-cur-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@psr-2p-primscrn-spr-indfb-draw-blt:
- shard-apl: NOTRUN -> [SKIP][49] ([fdo#109271]) +20 similar issues
[49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-apl4/igt@kms_frontbuffer_tracking@psr-2p-primscrn-spr-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-indfb-draw-mmap-gtt:
- shard-tglb: NOTRUN -> [SKIP][50] ([fdo#111825]) +15 similar issues
[50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-tglb3/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-indfb-draw-mmap-gtt.html
* igt@kms_hdr@static-swap:
- shard-tglb: NOTRUN -> [SKIP][51] ([i915#1187])
[51]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-tglb2/igt@kms_hdr@static-swap.html
* igt@kms_pipe_crc_basic@hang-read-crc-pipe-d:
- shard-apl: NOTRUN -> [SKIP][52] ([fdo#109271] / [i915#533])
[52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-apl4/igt@kms_pipe_crc_basic@hang-read-crc-pipe-d.html
* igt@kms_plane_alpha_blend@pipe-b-constant-alpha-min:
- shard-skl: [PASS][53] -> [FAIL][54] ([fdo#108145] / [i915#265])
[53]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-skl10/igt@kms_plane_alpha_blend@pipe-b-constant-alpha-min.html
[54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-skl1/igt@kms_plane_alpha_blend@pipe-b-constant-alpha-min.html
* igt@kms_plane_lowres@pipe-c-tiling-yf:
- shard-tglb: NOTRUN -> [SKIP][55] ([fdo#111615] / [fdo#112054])
[55]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-tglb3/igt@kms_plane_lowres@pipe-c-tiling-yf.html
* igt@kms_plane_lowres@pipe-d-tiling-none:
- shard-tglb: NOTRUN -> [SKIP][56] ([i915#3536])
[56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-tglb3/igt@kms_plane_lowres@pipe-d-tiling-none.html
* igt@kms_psr2_sf@cursor-plane-update-sf:
- shard-apl: NOTRUN -> [SKIP][57] ([fdo#109271] / [i915#658])
[57]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-apl4/igt@kms_psr2_sf@cursor-plane-update-sf.html
* igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-4:
- shard-tglb: NOTRUN -> [SKIP][58] ([i915#2920]) +2 similar issues
[58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-tglb3/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-4.html
* igt@kms_psr@psr2_cursor_render:
- shard-iclb: [PASS][59] -> [SKIP][60] ([fdo#109441]) +2 similar issues
[59]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-iclb2/igt@kms_psr@psr2_cursor_render.html
[60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-iclb5/igt@kms_psr@psr2_cursor_render.html
* igt@kms_psr@psr2_sprite_blt:
- shard-tglb: NOTRUN -> [FAIL][61] ([i915#132] / [i915#3467])
[61]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-tglb3/igt@kms_psr@psr2_sprite_blt.html
* igt@kms_writeback@writeback-fb-id:
- shard-kbl: NOTRUN -> [SKIP][62] ([fdo#109271] / [i915#2437])
[62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-kbl2/igt@kms_writeback@writeback-fb-id.html
* igt@nouveau_crc@pipe-c-source-outp-complete:
- shard-tglb: NOTRUN -> [SKIP][63] ([i915#2530]) +1 similar issue
[63]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-tglb2/igt@nouveau_crc@pipe-c-source-outp-complete.html
* igt@perf_pmu@rc6-suspend:
- shard-apl: [PASS][64] -> [DMESG-WARN][65] ([i915#180]) +1 similar issue
[64]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-apl6/igt@perf_pmu@rc6-suspend.html
[65]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-apl2/igt@perf_pmu@rc6-suspend.html
* igt@prime_udl:
- shard-tglb: NOTRUN -> [SKIP][66] ([fdo#109291])
[66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-tglb3/igt@prime_udl.html
* igt@sysfs_clients@fair-3:
- shard-kbl: NOTRUN -> [SKIP][67] ([fdo#109271] / [i915#2994])
[67]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-kbl2/igt@sysfs_clients@fair-3.html
#### Possible fixes ####
* igt@gem_ctx_persistence@many-contexts:
- {shard-rkl}: ([PASS][68], [FAIL][69]) ([i915#2410]) -> [PASS][70]
[68]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-4/igt@gem_ctx_persistence@many-contexts.html
[69]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-2/igt@gem_ctx_persistence@many-contexts.html
[70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-rkl-1/igt@gem_ctx_persistence@many-contexts.html
* igt@gem_eio@in-flight-contexts-immediate:
- {shard-rkl}: ([PASS][71], [TIMEOUT][72]) ([i915#3063]) -> [PASS][73]
[71]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-2/igt@gem_eio@in-flight-contexts-immediate.html
[72]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-4/igt@gem_eio@in-flight-contexts-immediate.html
[73]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-rkl-2/igt@gem_eio@in-flight-contexts-immediate.html
* igt@gem_eio@unwedge-stress:
- shard-tglb: [TIMEOUT][74] ([i915#2369] / [i915#3063] / [i915#3648]) -> [PASS][75]
[74]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-tglb3/igt@gem_eio@unwedge-stress.html
[75]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-tglb8/igt@gem_eio@unwedge-stress.html
* igt@gem_exec_fair@basic-none-solo@rcs0:
- {shard-rkl}: [FAIL][76] ([i915#2842]) -> [PASS][77]
[76]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-1/igt@gem_exec_fair@basic-none-solo@rcs0.html
[77]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-rkl-1/igt@gem_exec_fair@basic-none-solo@rcs0.html
* igt@gem_exec_fair@basic-pace@vcs0:
- shard-iclb: [FAIL][78] ([i915#2842]) -> [PASS][79] +1 similar issue
[78]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-iclb2/igt@gem_exec_fair@basic-pace@vcs0.html
[79]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-iclb5/igt@gem_exec_fair@basic-pace@vcs0.html
* igt@gem_exec_fair@basic-pace@vcs1:
- shard-kbl: [FAIL][80] ([i915#2842]) -> [PASS][81] +1 similar issue
[80]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-kbl2/igt@gem_exec_fair@basic-pace@vcs1.html
[81]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-kbl6/igt@gem_exec_fair@basic-pace@vcs1.html
* igt@gem_exec_whisper@basic-forked-all:
- shard-glk: [DMESG-WARN][82] ([i915#118]) -> [PASS][83] +1 similar issue
[82]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-glk8/igt@gem_exec_whisper@basic-forked-all.html
[83]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-glk8/igt@gem_exec_whisper@basic-forked-all.html
* igt@gem_huc_copy@huc-copy:
- shard-tglb: [SKIP][84] ([i915#2190]) -> [PASS][85]
[84]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-tglb6/igt@gem_huc_copy@huc-copy.html
[85]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-tglb5/igt@gem_huc_copy@huc-copy.html
* igt@i915_pm_backlight@basic-brightness:
- {shard-rkl}: [SKIP][86] ([i915#3012]) -> [PASS][87]
[86]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-2/igt@i915_pm_backlight@basic-brightness.html
[87]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-rkl-6/igt@i915_pm_backlight@basic-brightness.html
* igt@i915_pm_dc@dc6-dpms:
- shard-iclb: [FAIL][88] ([i915#454]) -> [PASS][89]
[88]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-iclb3/igt@i915_pm_dc@dc6-dpms.html
[89]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-iclb8/igt@i915_pm_dc@dc6-dpms.html
* igt@i915_pm_rpm@modeset-lpsp-stress-no-wait:
- {shard-rkl}: [SKIP][90] ([i915#1397]) -> [PASS][91] +1 similar issue
[90]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-2/igt@i915_pm_rpm@modeset-lpsp-stress-no-wait.html
[91]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-rkl-6/igt@i915_pm_rpm@modeset-lpsp-stress-no-wait.html
* igt@i915_pm_rps@min-max-config-idle:
- {shard-rkl}: [FAIL][92] ([i915#4016]) -> [PASS][93]
[92]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-6/igt@i915_pm_rps@min-max-config-idle.html
[93]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-rkl-1/igt@i915_pm_rps@min-max-config-idle.html
* igt@i915_suspend@fence-restore-untiled:
- shard-kbl: [DMESG-WARN][94] ([i915#180]) -> [PASS][95]
[94]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-kbl7/igt@i915_suspend@fence-restore-untiled.html
[95]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-kbl2/igt@i915_suspend@fence-restore-untiled.html
* igt@kms_async_flips@alternate-sync-async-flip:
- shard-skl: [FAIL][96] ([i915#2521]) -> [PASS][97]
[96]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-skl4/igt@kms_async_flips@alternate-sync-async-flip.html
[97]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-skl7/igt@kms_async_flips@alternate-sync-async-flip.html
* igt@kms_ccs@pipe-a-crc-primary-rotation-180-y_tiled_gen12_rc_ccs_cc:
- {shard-rkl}: [SKIP][98] ([i915#1845]) -> [PASS][99] +10 similar issues
[98]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-4/igt@kms_ccs@pipe-a-crc-primary-rotation-180-y_tiled_gen12_rc_ccs_cc.html
[99]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-rkl-6/igt@kms_ccs@pipe-a-crc-primary-rotation-180-y_tiled_gen12_rc_ccs_cc.html
* igt@kms_ccs@pipe-b-random-ccs-data-y_tiled_gen12_rc_ccs_cc:
- {shard-rkl}: ([SKIP][100], [SKIP][101]) ([i915#1845]) -> [PASS][102] +1 similar issue
[100]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-2/igt@kms_ccs@pipe-b-random-ccs-data-y_tiled_gen12_rc_ccs_cc.html
[101]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-4/igt@kms_ccs@pipe-b-random-ccs-data-y_tiled_gen12_rc_ccs_cc.html
[102]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-rkl-6/igt@kms_ccs@pipe-b-random-ccs-data-y_tiled_gen12_rc_ccs_cc.html
* igt@kms_color@pipe-a-ctm-max:
- {shard-rkl}: [SKIP][103] ([i915#1149] / [i915#1849] / [i915#4070]) -> [PASS][104]
[103]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-2/igt@kms_color@pipe-a-ctm-max.html
[104]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-rkl-6/igt@kms_color@pipe-a-ctm-max.html
* igt@kms_cursor_crc@pipe-b-cursor-128x128-rapid-movement:
- {shard-rkl}: ([PASS][105], [SKIP][106]) ([fdo#112022]) -> [PASS][107]
[105]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-6/igt@kms_cursor_crc@pipe-b-cursor-128x128-rapid-movement.html
[106]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-4/igt@kms_cursor_crc@pipe-b-cursor-128x128-rapid-movement.html
[107]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-rkl-6/igt@kms_cursor_crc@pipe-b-cursor-128x128-rapid-movement.html
* igt@kms_cursor_crc@pipe-b-cursor-dpms:
- {shard-rkl}: [SKIP][108] ([fdo#112022] / [i915#4070]) -> [PASS][109] +2 similar issues
[108]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-2/igt@kms_cursor_crc@pipe-b-cursor-dpms.html
[109]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-rkl-6/igt@kms_cursor_crc@pipe-b-cursor-dpms.html
* igt@kms_cursor_edge_walk@pipe-a-64x64-bottom-edge:
- shard-tglb: [INCOMPLETE][110] -> [PASS][111]
[110]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-tglb8/igt@kms_cursor_edge_walk@pipe-a-64x64-bottom-edge.html
[111]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-tglb3/igt@kms_cursor_edge_walk@pipe-a-64x64-bottom-edge.html
* igt@kms_cursor_edge_walk@pipe-b-256x256-right-edge:
- {shard-rkl}: ([PASS][112], [SKIP][113]) ([i915#4098]) -> [PASS][114] +10 similar issues
[112]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-6/igt@kms_cursor_edge_walk@pipe-b-256x256-right-edge.html
[113]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-4/igt@kms_cursor_edge_walk@pipe-b-256x256-right-edge.html
[114]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-rkl-6/igt@kms_cursor_edge_walk@pipe-b-256x256-right-edge.html
* igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions:
- shard-skl: [FAIL][115] ([i915#2346]) -> [PASS][116]
[115]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-skl10/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions.html
[116]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-skl9/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions.html
* igt@kms_cursor_legacy@short-flip-after-cursor-atomic-transitions-varying-size:
- {shard-rkl}: [SKIP][117] ([fdo#111825] / [i915#4070]) -> [PASS][118]
[117]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-2/igt@kms_cursor_legacy@short-flip-after-cursor-atomic-transitions-varying-size.html
[118]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-rkl-6/igt@kms_cursor_legacy@short-flip-after-cursor-atomic-transitions-varying-size.html
* igt@kms_draw_crc@draw-method-xrgb2101010-mmap-wc-ytiled:
- {shard-rkl}: [SKIP][119] ([fdo#111314]) -> [PASS][120] +2 similar issues
[119]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-2/igt@kms_draw_crc@draw-method-xrgb2101010-mmap-wc-ytiled.html
[120]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-rkl-6/igt@kms_draw_crc@draw-method-xrgb2101010-mmap-wc-ytiled.html
* igt@kms_flip@flip-vs-expired-vblank-interruptible@a-dp1:
- shard-apl: [FAIL][121] ([i915#79]) -> [PASS][122]
[121]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-apl4/igt@kms_flip@flip-vs-expired-vblank-interruptible@a-dp1.html
[122]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-apl3/igt@kms_flip@flip-vs-expired-vblank-interruptible@a-dp1.html
* igt@kms_flip@flip-vs-suspend-interruptible@c-dp1:
- shard-apl: [DMESG-WARN][123] ([i915#180]) -> [PASS][124]
[123]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-apl8/igt@kms_flip@flip-vs-suspend-interruptible@c-dp1.html
[124]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-apl4/igt@kms_flip@flip-vs-suspend-interruptible@c-dp1.html
* igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs:
- shard-iclb: [SKIP][125] ([i915#3701]) -> [PASS][126]
[125]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-iclb2/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs.html
[126]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-iclb5/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs.html
* igt@kms_frontbuffer_tracking@fbcpsr-1p-offscren-pri-shrfb-draw-mmap-wc:
- {shard-rkl}: [SKIP][127] ([i915#1849]) -> [PASS][128] +5 similar issues
[127]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-2/igt@kms_frontbuffer_tracking@fbcpsr-1p-offscren-pri-shrfb-draw-mmap-wc.html
[128]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-rkl-6/igt@kms_frontbuffer_tracking@fbcpsr-1p-offscren-pri-shrfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@psr-1p-rte:
- shard-skl: [DMESG-WARN][129] ([i915#1982]) -> [PASS][130]
[129]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-skl10/igt@kms_frontbuffer_tracking@psr-1p-rte.html
[130]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-skl9/igt@kms_frontbuffer_tracking@psr-1p-rte.html
* igt@kms_invalid_mode@bad-vtotal:
- {shard-rkl}: [SKIP][131] ([i915#4278]) -> [PASS][132]
[131]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-4/igt@kms_invalid_mode@bad-vtotal.html
[132]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-rkl-6/igt@kms_invalid_mode@bad-vtotal.html
* igt@kms_pipe_crc_basic@nonblocking-crc-pipe-a-frame-sequence:
- {shard-rkl}: ([SKIP][133], [SKIP][134]) ([i915#1849] / [i915#4098]) -> [PASS][135]
[133]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-4/igt@kms_pipe_crc_basic@nonblocking-crc-pipe-a-frame-sequence.html
[134]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-2/igt@kms_pipe_crc_basic@nonblocking-crc-pipe-a-frame-sequence.html
[135]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-rkl-6/igt@kms_pipe_crc_basic@nonblocking-crc-pipe-a-frame-sequence.html
* igt@kms_plane_alpha_blend@pipe-a-coverage-vs-premult-vs-constant:
- {shard-rkl}: [SKIP][136] ([i915#4098]) -> [PASS][137] +1 similar issue
[136]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-4/igt@kms_plane_alpha_blend@pipe-a-coverage-vs-premult-vs-constant.html
[137]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-rkl-6/igt@kms_plane_alpha_blend@pipe-a-coverage-vs-premult-vs-constant.html
* igt@kms_plane_alpha_blend@pipe-b-constant-alpha-max:
- {shard-rkl}: [SKIP][138] ([i915#1849] / [i915#4070]) -> [PASS][139] +1 similar issue
[138]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-2/igt@kms_plane_alpha_blend@pipe-b-constant-alpha-max.html
[139]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-rkl-6/igt@kms_plane_alpha_blend@pipe-b-constant-alpha-max.html
* igt@kms_plane_cursor@pipe-a-overlay-size-64:
- {shard-rkl}: ([PASS][140], [SKIP][141]) ([i915#1845]) -> [PASS][142] +2 similar issues
[140]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-6/igt@kms_plane_cursor@pipe-a-overlay-size-64.html
[141]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-4/igt@kms_plane_cursor@pipe-a-overlay-size-64.html
[142]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-rkl-6/igt@kms_plane_cursor@pipe-a-overlay-size-64.html
* igt@kms_psr@cursor_plane_onoff:
- {shard-rkl}: [SKIP][143] ([i915#1072]) -> [PASS][144]
[143]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-rkl-2/igt@kms_psr@cursor_plane_onoff.html
[144]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/shard-rkl-6/igt@kms_psr@cursor_plane_onoff.html
* igt@kms_psr@psr2_cursor_plane_move:
- shard-iclb: [SKIP][145] ([fdo#109441]) -> [PASS][146] +2 similar issues
[145]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10908/shard-iclb4/igt@kms_psr@psr2_cursor_plane_move.html
[146]: https://intel-gfx-ci.01.o
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21643/index.html
[-- Attachment #2: Type: text/html, Size: 33170 bytes --]
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [Intel-gfx] [PATCH 3/3] drm/i915/gt: Improve "race-to-idle" at low frequencies
2021-11-17 22:49 ` [Intel-gfx] [PATCH 3/3] drm/i915/gt: Improve "race-to-idle" at low frequencies Vinay Belgaumkar
@ 2021-11-22 18:44 ` Rodrigo Vivi
2021-11-23 9:17 ` Tvrtko Ursulin
2021-11-23 17:37 ` Belgaumkar, Vinay
1 sibling, 1 reply; 21+ messages in thread
From: Rodrigo Vivi @ 2021-11-22 18:44 UTC (permalink / raw)
To: Vinay Belgaumkar; +Cc: intel-gfx, dri-devel, Chris Wilson
On Wed, Nov 17, 2021 at 02:49:55PM -0800, Vinay Belgaumkar wrote:
> From: Chris Wilson <chris@chris-wilson.co.uk>
>
> While the power consumption is proportional to the frequency, there is
> also a static draw for active gates. The longer we are able to powergate
> (rc6), the lower the static draw. Thus there is a sweetspot in the
> frequency/power curve where we run at higher frequency in order to sleep
> longer, aka race-to-idle. This is more evident at lower frequencies, so
> let's look to bump the frequency if we think we will benefit by sleeping
> longer at the higher frequency and so conserving power.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Please let's not increase the complexity here, unless we have a very good
and documented reason.
Before trying to implement anything smart like this in the driver I'd like
to see data, power and performance results in different platforms and with
different workloads.
Thanks,
Rodrigo.
> ---
> drivers/gpu/drm/i915/gt/intel_rps.c | 31 ++++++++++++++++++++++++-----
> 1 file changed, 26 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/intel_rps.c b/drivers/gpu/drm/i915/gt/intel_rps.c
> index 3675ac93ded0..6af3231982af 100644
> --- a/drivers/gpu/drm/i915/gt/intel_rps.c
> +++ b/drivers/gpu/drm/i915/gt/intel_rps.c
> @@ -63,6 +63,22 @@ static void set(struct intel_uncore *uncore, i915_reg_t reg, u32 val)
> intel_uncore_write_fw(uncore, reg, val);
> }
>
> +static bool race_to_idle(struct intel_rps *rps, u64 busy, u64 dt)
> +{
> + unsigned int this = rps->cur_freq;
> + unsigned int next = rps->cur_freq + 1;
> + u64 next_dt = next * max(busy, dt);
> +
> + /*
> + * Compare estimated time spent in rc6 at the next power bin. If
> + * we expect to sleep longer than the estimated increased power
> + * cost of running at a higher frequency, it will be reduced power
> + * consumption overall.
> + */
> + return (((next_dt - this * busy) >> 10) * this * this >
> + ((next_dt - next * busy) >> 10) * next * next);
> +}
> +
> static void rps_timer(struct timer_list *t)
> {
> struct intel_rps *rps = from_timer(rps, t, timer);
> @@ -133,7 +149,7 @@ static void rps_timer(struct timer_list *t)
> if (!max_busy[i])
> break;
>
> - busy += div_u64(max_busy[i], 1 << i);
> + busy += max_busy[i] >> i;
> }
> GT_TRACE(rps_to_gt(rps),
> "busy:%lld [%d%%], max:[%lld, %lld, %lld], interval:%d\n",
> @@ -141,13 +157,18 @@ static void rps_timer(struct timer_list *t)
> max_busy[0], max_busy[1], max_busy[2],
> rps->pm_interval);
>
> - if (100 * busy > rps->power.up_threshold * dt &&
> - rps->cur_freq < rps->max_freq_softlimit) {
> + if (rps->cur_freq < rps->max_freq_softlimit &&
> + race_to_idle(rps, max_busy[0], dt)) {
> + rps->pm_iir |= GEN6_PM_RP_UP_THRESHOLD;
> + rps->pm_interval = 1;
> + schedule_work(&rps->work);
> + } else if (rps->cur_freq < rps->max_freq_softlimit &&
> + 100 * busy > rps->power.up_threshold * dt) {
> rps->pm_iir |= GEN6_PM_RP_UP_THRESHOLD;
> rps->pm_interval = 1;
> schedule_work(&rps->work);
> - } else if (100 * busy < rps->power.down_threshold * dt &&
> - rps->cur_freq > rps->min_freq_softlimit) {
> + } else if (rps->cur_freq > rps->min_freq_softlimit &&
> + 100 * busy < rps->power.down_threshold * dt) {
> rps->pm_iir |= GEN6_PM_RP_DOWN_THRESHOLD;
> rps->pm_interval = 1;
> schedule_work(&rps->work);
> --
> 2.34.0
>
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [Intel-gfx] [PATCH 3/3] drm/i915/gt: Improve "race-to-idle" at low frequencies
2021-11-22 18:44 ` Rodrigo Vivi
@ 2021-11-23 9:17 ` Tvrtko Ursulin
2021-11-23 16:53 ` Vivi, Rodrigo
0 siblings, 1 reply; 21+ messages in thread
From: Tvrtko Ursulin @ 2021-11-23 9:17 UTC (permalink / raw)
To: Rodrigo Vivi, Vinay Belgaumkar; +Cc: intel-gfx, dri-devel, Chris Wilson
On 22/11/2021 18:44, Rodrigo Vivi wrote:
> On Wed, Nov 17, 2021 at 02:49:55PM -0800, Vinay Belgaumkar wrote:
>> From: Chris Wilson <chris@chris-wilson.co.uk>
>>
>> While the power consumption is proportional to the frequency, there is
>> also a static draw for active gates. The longer we are able to powergate
>> (rc6), the lower the static draw. Thus there is a sweetspot in the
>> frequency/power curve where we run at higher frequency in order to sleep
>> longer, aka race-to-idle. This is more evident at lower frequencies, so
>> let's look to bump the frequency if we think we will benefit by sleeping
>> longer at the higher frequency and so conserving power.
>>
>> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
>> Cc: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
>> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
>
> Please let's not increase the complexity here, unless we have a very good
> and documented reason.
>
> Before trying to implement anything smart like this in the driver I'd like
> to see data, power and performance results in different platforms and with
> different workloads.
Who has such a test suite and test farm which isn't focused on
workloads from a single customer? ;(
Regards,
Tvrtko
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [Intel-gfx] [PATCH 1/3] drm/i915/gt: Spread virtual engines over idle engines
2021-11-17 22:49 ` [Intel-gfx] [PATCH 1/3] drm/i915/gt: Spread virtual engines over idle engines Vinay Belgaumkar
@ 2021-11-23 9:39 ` Tvrtko Ursulin
2021-11-23 19:52 ` Rodrigo Vivi
0 siblings, 1 reply; 21+ messages in thread
From: Tvrtko Ursulin @ 2021-11-23 9:39 UTC (permalink / raw)
To: Vinay Belgaumkar, intel-gfx, dri-devel; +Cc: Chris Wilson
On 17/11/2021 22:49, Vinay Belgaumkar wrote:
> From: Chris Wilson <chris@chris-wilson.co.uk>
>
> Every time we come to the end of a virtual engine's context, re-randomise
> its siblings[]. As we schedule the siblings' tasklets in the order they
> are in the array, earlier entries are executed first (when idle) and so
> will be preferred when scheduling the next virtual request. Currently,
> we only update the array when switching onto a new idle engine, so we
> prefer to stick on the last executed engine, keeping the work compact.
> However, it can be beneficial to spread the work out across idle
> engines, so choose another sibling as our preferred target at the end of
> the context's execution.
This partially brings back, from a different angle, the more dynamic
scheduling behavior which has been lost since bugfix 90a987205c6c
("drm/i915/gt: Only swap to a random sibling once upon creation").
One day we could experiment with using engine busyness as the criterion
(instead of random). Back in the day busyness was kind of the best
strategy, although sampled at submit, not at the trailing edge like
here, but it may still be able to settle down to a better engine
configuration in some scenarios. Only testing could say.
Still, from memory random also wasn't that bad so this should be okay
for now.
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Regards,
Tvrtko
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
> ---
> .../drm/i915/gt/intel_execlists_submission.c | 80 ++++++++++++-------
> 1 file changed, 52 insertions(+), 28 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> index ca03880fa7e4..b95bbc8fb91a 100644
> --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> @@ -539,6 +539,41 @@ static void execlists_schedule_in(struct i915_request *rq, int idx)
> GEM_BUG_ON(intel_context_inflight(ce) != rq->engine);
> }
>
> +static void virtual_xfer_context(struct virtual_engine *ve,
> + struct intel_engine_cs *engine)
> +{
> + unsigned int n;
> +
> + if (likely(engine == ve->siblings[0]))
> + return;
> +
> + if (!intel_engine_has_relative_mmio(engine))
> + lrc_update_offsets(&ve->context, engine);
> +
> + /*
> + * Move the bound engine to the top of the list for
> + * future execution. We then kick this tasklet first
> + * before checking others, so that we preferentially
> + * reuse this set of bound registers.
> + */
> + for (n = 1; n < ve->num_siblings; n++) {
> + if (ve->siblings[n] == engine) {
> + swap(ve->siblings[n], ve->siblings[0]);
> + break;
> + }
> + }
> +}
> +
> +static int ve_random_sibling(struct virtual_engine *ve)
> +{
> + return prandom_u32_max(ve->num_siblings);
> +}
> +
> +static int ve_random_other_sibling(struct virtual_engine *ve)
> +{
> + return 1 + prandom_u32_max(ve->num_siblings - 1);
> +}
> +
> static void
> resubmit_virtual_request(struct i915_request *rq, struct virtual_engine *ve)
> {
> @@ -578,8 +613,23 @@ static void kick_siblings(struct i915_request *rq, struct intel_context *ce)
> rq->execution_mask != engine->mask)
> resubmit_virtual_request(rq, ve);
>
> - if (READ_ONCE(ve->request))
> + /*
> + * Reschedule with a new "preferred" sibling.
> + *
> + * The tasklets are executed in the order of ve->siblings[], so
> + * siblings[0] receives preferrential treatment of greedily checking
> + * for execution of the virtual engine. At this point, the virtual
> + * engine is no longer in the current GPU cache due to idleness or
> + * contention, so it can be executed on any without penalty. We
> + * re-randomise at this point in order to spread light loads across
> + * the system, heavy overlapping loads will continue to be greedily
> + * executed by the first available engine.
> + */
> + if (READ_ONCE(ve->request)) {
> + virtual_xfer_context(ve,
> + ve->siblings[ve_random_other_sibling(ve)]);
> tasklet_hi_schedule(&ve->base.sched_engine->tasklet);
> + }
> }
>
> static void __execlists_schedule_out(struct i915_request * const rq,
> @@ -1030,32 +1080,6 @@ first_virtual_engine(struct intel_engine_cs *engine)
> return NULL;
> }
>
> -static void virtual_xfer_context(struct virtual_engine *ve,
> - struct intel_engine_cs *engine)
> -{
> - unsigned int n;
> -
> - if (likely(engine == ve->siblings[0]))
> - return;
> -
> - GEM_BUG_ON(READ_ONCE(ve->context.inflight));
> - if (!intel_engine_has_relative_mmio(engine))
> - lrc_update_offsets(&ve->context, engine);
> -
> - /*
> - * Move the bound engine to the top of the list for
> - * future execution. We then kick this tasklet first
> - * before checking others, so that we preferentially
> - * reuse this set of bound registers.
> - */
> - for (n = 1; n < ve->num_siblings; n++) {
> - if (ve->siblings[n] == engine) {
> - swap(ve->siblings[n], ve->siblings[0]);
> - break;
> - }
> - }
> -}
> -
> static void defer_request(struct i915_request *rq, struct list_head * const pl)
> {
> LIST_HEAD(list);
> @@ -3590,7 +3614,7 @@ static void virtual_engine_initial_hint(struct virtual_engine *ve)
> * NB This does not force us to execute on this engine, it will just
> * typically be the first we inspect for submission.
> */
> - swp = prandom_u32_max(ve->num_siblings);
> + swp = ve_random_sibling(ve);
> if (swp)
> swap(ve->siblings[swp], ve->siblings[0]);
> }
>
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [Intel-gfx] [PATCH 3/3] drm/i915/gt: Improve "race-to-idle" at low frequencies
2021-11-23 9:17 ` Tvrtko Ursulin
@ 2021-11-23 16:53 ` Vivi, Rodrigo
0 siblings, 0 replies; 21+ messages in thread
From: Vivi, Rodrigo @ 2021-11-23 16:53 UTC (permalink / raw)
To: tvrtko.ursulin@linux.intel.com, Belgaumkar, Vinay
Cc: intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
chris@chris-wilson.co.uk
On Tue, 2021-11-23 at 09:17 +0000, Tvrtko Ursulin wrote:
>
> On 22/11/2021 18:44, Rodrigo Vivi wrote:
> > On Wed, Nov 17, 2021 at 02:49:55PM -0800, Vinay Belgaumkar wrote:
> > > From: Chris Wilson <chris@chris-wilson.co.uk>
> > >
> > > While the power consumption is proportional to the frequency,
> > > there is
> > > also a static draw for active gates. The longer we are able to
> > > powergate
> > > (rc6), the lower the static draw. Thus there is a sweetspot in
> > > the
> > > frequency/power curve where we run at higher frequency in order
> > > to sleep
> > > longer, aka race-to-idle. This is more evident at lower
> > > frequencies, so
> > > let's look to bump the frequency if we think we will benefit by
> > > sleeping
> > > longer at the higher frequency and so conserving power.
> > >
> > > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > > Cc: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
> > > Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
> >
> > Please let's not increase the complexity here, unless we have a
> > very good
> > and documented reason.
> >
> > Before trying to implement anything smart like this in the driver
> > I'd like
> > to see data, power and performance results in different platforms
> > and with
> > different workloads.
>
> Who has such test suite and test farm which isn't focused to
> workloads
> from a single customer? ;(
Okay, maybe we don't need to cover the world here. But without seeing
any data at all it is hard to make this call.
>
> Regards,
>
> Tvrtko
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [Intel-gfx] [PATCH 2/3] drm/i915/gt: Compare average group occupancy for RPS evaluation
2021-11-17 22:49 ` [Intel-gfx] [PATCH 2/3] drm/i915/gt: Compare average group occupancy for RPS evaluation Vinay Belgaumkar
@ 2021-11-23 17:35 ` Belgaumkar, Vinay
0 siblings, 0 replies; 21+ messages in thread
From: Belgaumkar, Vinay @ 2021-11-23 17:35 UTC (permalink / raw)
To: intel-gfx, dri-devel; +Cc: Chris Wilson
On 11/17/2021 2:49 PM, Vinay Belgaumkar wrote:
> From: Chris Wilson <chris.p.wilson@intel.com>
>
> Currently, we inspect each engine individually and measure the occupancy
> of that engine over the last evaluation interval. If that exceeds our
> busyness thresholds, we decide to increase the GPU frequency. However,
> under a load balancer, we should consider the occupancy of entire engine
> groups, as work may be spread out across the group. In doing so, we
> prefer wide over fast, as power consumption is approximately proportional to
> the square of the frequency. However, since the load balancer is greedy,
> the first idle engine gets all the work, and preferentially reuses the
> last active engine, under light loads all work is assigned to one
> engine, and so that engine appears very busy. But if the work happened
> to overlap slightly, the workload would spread across multiple engines,
> reducing each individual engine's runtime, and so reducing the rps
> contribution, keeping the frequency low. Instead, when considering the
> contribution, consider the contribution over the entire engine group
> (capacity).
>
> Signed-off-by: Chris Wilson <chris.p.wilson@intel.com>
> Cc: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Reviewed-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
> ---
> drivers/gpu/drm/i915/gt/intel_rps.c | 48 ++++++++++++++++++++---------
> 1 file changed, 34 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/intel_rps.c b/drivers/gpu/drm/i915/gt/intel_rps.c
> index 07ff7ba7b2b7..3675ac93ded0 100644
> --- a/drivers/gpu/drm/i915/gt/intel_rps.c
> +++ b/drivers/gpu/drm/i915/gt/intel_rps.c
> @@ -7,6 +7,7 @@
>
> #include "i915_drv.h"
> #include "intel_breadcrumbs.h"
> +#include "intel_engine_pm.h"
> #include "intel_gt.h"
> #include "intel_gt_clock_utils.h"
> #include "intel_gt_irq.h"
> @@ -65,26 +66,45 @@ static void set(struct intel_uncore *uncore, i915_reg_t reg, u32 val)
> static void rps_timer(struct timer_list *t)
> {
> struct intel_rps *rps = from_timer(rps, t, timer);
> - struct intel_engine_cs *engine;
> - ktime_t dt, last, timestamp;
> - enum intel_engine_id id;
> + struct intel_gt *gt = rps_to_gt(rps);
> + ktime_t dt, last, timestamp = 0;
> s64 max_busy[3] = {};
> + int i, j;
>
> - timestamp = 0;
> - for_each_engine(engine, rps_to_gt(rps), id) {
> - s64 busy;
> - int i;
> + /* Compare average occupancy over each engine group */
> + for (i = 0; i < ARRAY_SIZE(gt->engine_class); i++) {
> + s64 busy = 0;
> + int count = 0;
> +
> + for (j = 0; j < ARRAY_SIZE(gt->engine_class[i]); j++) {
> + struct intel_engine_cs *engine;
>
> - dt = intel_engine_get_busy_time(engine, ×tamp);
> - last = engine->stats.rps;
> - engine->stats.rps = dt;
> + engine = gt->engine_class[i][j];
> + if (!engine)
> + continue;
>
> - busy = ktime_to_ns(ktime_sub(dt, last));
> - for (i = 0; i < ARRAY_SIZE(max_busy); i++) {
> - if (busy > max_busy[i])
> - swap(busy, max_busy[i]);
> + dt = intel_engine_get_busy_time(engine, ×tamp);
> + last = engine->stats.rps;
> + engine->stats.rps = dt;
> +
> + if (!intel_engine_pm_is_awake(engine))
> + continue;
> +
> + busy += ktime_to_ns(ktime_sub(dt, last));
> + count++;
> + }
> +
> + if (count > 1)
> + busy = div_u64(busy, count);
> + if (busy <= max_busy[ARRAY_SIZE(max_busy) - 1])
> + continue;
> +
> + for (j = 0; j < ARRAY_SIZE(max_busy); j++) {
> + if (busy > max_busy[j])
> + swap(busy, max_busy[j]);
> }
> }
> +
> last = rps->pm_timestamp;
> rps->pm_timestamp = timestamp;
>
>
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [Intel-gfx] [PATCH 3/3] drm/i915/gt: Improve "race-to-idle" at low frequencies
2021-11-17 22:49 ` [Intel-gfx] [PATCH 3/3] drm/i915/gt: Improve "race-to-idle" at low frequencies Vinay Belgaumkar
2021-11-22 18:44 ` Rodrigo Vivi
@ 2021-11-23 17:37 ` Belgaumkar, Vinay
1 sibling, 0 replies; 21+ messages in thread
From: Belgaumkar, Vinay @ 2021-11-23 17:37 UTC (permalink / raw)
To: intel-gfx, dri-devel; +Cc: Chris Wilson
On 11/17/2021 2:49 PM, Vinay Belgaumkar wrote:
> From: Chris Wilson <chris@chris-wilson.co.uk>
>
> While the power consumption is proportional to the frequency, there is
> also a static draw for active gates. The longer we are able to powergate
> (rc6), the lower the static draw. Thus there is a sweetspot in the
> frequency/power curve where we run at higher frequency in order to sleep
> longer, aka race-to-idle. This is more evident at lower frequencies, so
> let's look to bump the frequency if we think we will benefit by sleeping
> longer at the higher frequency and so conserving power.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Data collected does show some power savings.
Reviewed-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
> ---
> drivers/gpu/drm/i915/gt/intel_rps.c | 31 ++++++++++++++++++++++++-----
> 1 file changed, 26 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/intel_rps.c b/drivers/gpu/drm/i915/gt/intel_rps.c
> index 3675ac93ded0..6af3231982af 100644
> --- a/drivers/gpu/drm/i915/gt/intel_rps.c
> +++ b/drivers/gpu/drm/i915/gt/intel_rps.c
> @@ -63,6 +63,22 @@ static void set(struct intel_uncore *uncore, i915_reg_t reg, u32 val)
> intel_uncore_write_fw(uncore, reg, val);
> }
>
> +static bool race_to_idle(struct intel_rps *rps, u64 busy, u64 dt)
> +{
> + unsigned int this = rps->cur_freq;
> + unsigned int next = rps->cur_freq + 1;
> + u64 next_dt = next * max(busy, dt);
> +
> + /*
> + * Compare estimated time spent in rc6 at the next power bin. If
> + * we expect to sleep longer than the estimated increased power
> + * cost of running at a higher frequency, it will be reduced power
> + * consumption overall.
> + */
> + return (((next_dt - this * busy) >> 10) * this * this >
> + ((next_dt - next * busy) >> 10) * next * next);
> +}
> +
> static void rps_timer(struct timer_list *t)
> {
> struct intel_rps *rps = from_timer(rps, t, timer);
> @@ -133,7 +149,7 @@ static void rps_timer(struct timer_list *t)
> if (!max_busy[i])
> break;
>
> - busy += div_u64(max_busy[i], 1 << i);
> + busy += max_busy[i] >> i;
> }
> GT_TRACE(rps_to_gt(rps),
> "busy:%lld [%d%%], max:[%lld, %lld, %lld], interval:%d\n",
> @@ -141,13 +157,18 @@ static void rps_timer(struct timer_list *t)
> max_busy[0], max_busy[1], max_busy[2],
> rps->pm_interval);
>
> - if (100 * busy > rps->power.up_threshold * dt &&
> - rps->cur_freq < rps->max_freq_softlimit) {
> + if (rps->cur_freq < rps->max_freq_softlimit &&
> + race_to_idle(rps, max_busy[0], dt)) {
> + rps->pm_iir |= GEN6_PM_RP_UP_THRESHOLD;
> + rps->pm_interval = 1;
> + schedule_work(&rps->work);
> + } else if (rps->cur_freq < rps->max_freq_softlimit &&
> + 100 * busy > rps->power.up_threshold * dt) {
> rps->pm_iir |= GEN6_PM_RP_UP_THRESHOLD;
> rps->pm_interval = 1;
> schedule_work(&rps->work);
> - } else if (100 * busy < rps->power.down_threshold * dt &&
> - rps->cur_freq > rps->min_freq_softlimit) {
> + } else if (rps->cur_freq > rps->min_freq_softlimit &&
> + 100 * busy < rps->power.down_threshold * dt) {
> rps->pm_iir |= GEN6_PM_RP_DOWN_THRESHOLD;
> rps->pm_interval = 1;
> schedule_work(&rps->work);
>
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [Intel-gfx] [PATCH 1/3] drm/i915/gt: Spread virtual engines over idle engines
2021-11-23 9:39 ` Tvrtko Ursulin
@ 2021-11-23 19:52 ` Rodrigo Vivi
2021-11-24 8:56 ` Tvrtko Ursulin
0 siblings, 1 reply; 21+ messages in thread
From: Rodrigo Vivi @ 2021-11-23 19:52 UTC (permalink / raw)
To: Tvrtko Ursulin; +Cc: intel-gfx, dri-devel, Chris Wilson
On Tue, Nov 23, 2021 at 09:39:25AM +0000, Tvrtko Ursulin wrote:
>
> On 17/11/2021 22:49, Vinay Belgaumkar wrote:
> > From: Chris Wilson <chris@chris-wilson.co.uk>
> >
> > Every time we come to the end of a virtual engine's context, re-randomise
> > its siblings[]. As we schedule the siblings' tasklets in the order they
> > are in the array, earlier entries are executed first (when idle) and so
> > will be preferred when scheduling the next virtual request. Currently,
> > we only update the array when switching onto a new idle engine, so we
> > prefer to stick on the last executed engine, keeping the work compact.
> > However, it can be beneficial to spread the work out across idle
> > engines, so choose another sibling as our preferred target at the end of
> > the context's execution.
>
> This partially brings back, from a different angle, the more dynamic
> scheduling behavior which has been lost since bugfix 90a987205c6c
> ("drm/i915/gt: Only swap to a random sibling once upon creation").
Shouldn't we use the Fixes: tag here, since this aims to fix one
of the performance regressions introduced by that patch?
>
> One day we could experiment with using engine busyness as criteria (instead
> of random). Back in the day busyness was kind of the best strategy, although
> sampled at submit, not at the trailing edge like here, but it still may be
> able to settle down to engine configuration better in some scenarios. Only
> testing could say.
>
> Still, from memory random also wasn't that bad so this should be okay for
> now.
>
> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Since you have reviewed it, and it looks to be a middle ground in terms
of when to rebalance (always, as in the initial implementation, vs
only once, as in 90a987205c6c).
If this one is really fixing the regression by itself:
Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
on this patch here.
But I still don't want to take the risk of touching the freq with
race to idle until I'm convinced that it is absolutely needed and
that we are not breaking the world out there.
>
> Regards,
>
> Tvrtko
>
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > Cc: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
> > Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
> > ---
> > .../drm/i915/gt/intel_execlists_submission.c | 80 ++++++++++++-------
> > 1 file changed, 52 insertions(+), 28 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> > index ca03880fa7e4..b95bbc8fb91a 100644
> > --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> > +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> > @@ -539,6 +539,41 @@ static void execlists_schedule_in(struct i915_request *rq, int idx)
> > GEM_BUG_ON(intel_context_inflight(ce) != rq->engine);
> > }
> > +static void virtual_xfer_context(struct virtual_engine *ve,
> > + struct intel_engine_cs *engine)
> > +{
> > + unsigned int n;
> > +
> > + if (likely(engine == ve->siblings[0]))
> > + return;
> > +
> > + if (!intel_engine_has_relative_mmio(engine))
> > + lrc_update_offsets(&ve->context, engine);
> > +
> > + /*
> > + * Move the bound engine to the top of the list for
> > + * future execution. We then kick this tasklet first
> > + * before checking others, so that we preferentially
> > + * reuse this set of bound registers.
> > + */
> > + for (n = 1; n < ve->num_siblings; n++) {
> > + if (ve->siblings[n] == engine) {
> > + swap(ve->siblings[n], ve->siblings[0]);
> > + break;
> > + }
> > + }
> > +}
> > +
> > +static int ve_random_sibling(struct virtual_engine *ve)
> > +{
> > + return prandom_u32_max(ve->num_siblings);
> > +}
> > +
> > +static int ve_random_other_sibling(struct virtual_engine *ve)
> > +{
> > + return 1 + prandom_u32_max(ve->num_siblings - 1);
> > +}
> > +
> > static void
> > resubmit_virtual_request(struct i915_request *rq, struct virtual_engine *ve)
> > {
> > @@ -578,8 +613,23 @@ static void kick_siblings(struct i915_request *rq, struct intel_context *ce)
> > rq->execution_mask != engine->mask)
> > resubmit_virtual_request(rq, ve);
> > - if (READ_ONCE(ve->request))
> > + /*
> > + * Reschedule with a new "preferred" sibling.
> > + *
> > + * The tasklets are executed in the order of ve->siblings[], so
> > + * siblings[0] receives preferrential treatment of greedily checking
> > + * for execution of the virtual engine. At this point, the virtual
> > + * engine is no longer in the current GPU cache due to idleness or
> > + * contention, so it can be executed on any without penalty. We
> > + * re-randomise at this point in order to spread light loads across
> > + * the system, heavy overlapping loads will continue to be greedily
> > + * executed by the first available engine.
> > + */
> > + if (READ_ONCE(ve->request)) {
> > + virtual_xfer_context(ve,
> > + ve->siblings[ve_random_other_sibling(ve)]);
> > tasklet_hi_schedule(&ve->base.sched_engine->tasklet);
> > + }
> > }
> > static void __execlists_schedule_out(struct i915_request * const rq,
> > @@ -1030,32 +1080,6 @@ first_virtual_engine(struct intel_engine_cs *engine)
> > return NULL;
> > }
> > -static void virtual_xfer_context(struct virtual_engine *ve,
> > - struct intel_engine_cs *engine)
> > -{
> > - unsigned int n;
> > -
> > - if (likely(engine == ve->siblings[0]))
> > - return;
> > -
> > - GEM_BUG_ON(READ_ONCE(ve->context.inflight));
> > - if (!intel_engine_has_relative_mmio(engine))
> > - lrc_update_offsets(&ve->context, engine);
> > -
> > - /*
> > - * Move the bound engine to the top of the list for
> > - * future execution. We then kick this tasklet first
> > - * before checking others, so that we preferentially
> > - * reuse this set of bound registers.
> > - */
> > - for (n = 1; n < ve->num_siblings; n++) {
> > - if (ve->siblings[n] == engine) {
> > - swap(ve->siblings[n], ve->siblings[0]);
> > - break;
> > - }
> > - }
> > -}
> > -
> > static void defer_request(struct i915_request *rq, struct list_head * const pl)
> > {
> > LIST_HEAD(list);
> > @@ -3590,7 +3614,7 @@ static void virtual_engine_initial_hint(struct virtual_engine *ve)
> > * NB This does not force us to execute on this engine, it will just
> > * typically be the first we inspect for submission.
> > */
> > - swp = prandom_u32_max(ve->num_siblings);
> > + swp = ve_random_sibling(ve);
> > if (swp)
> > swap(ve->siblings[swp], ve->siblings[0]);
> > }
> >
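The effect of the re-randomisation in kick_siblings() can be illustrated with a small Python simulation. The engine names and the counting model below are illustrative, not from the patch: after each context completes, a random *other* sibling is promoted to the front of the array, so light loads rotate across engines instead of sticking to one.

```python
import random

def ve_random_other_sibling(num_siblings):
    # Mirrors the quoted helper: pick any index except 0.
    return 1 + random.randrange(num_siblings - 1)

def spread(siblings, rounds):
    """Count how often each engine sits in the preferred slot when we
    re-randomise siblings[0] at the end of every context execution."""
    counts = {e: 0 for e in siblings}
    for _ in range(rounds):
        counts[siblings[0]] += 1  # work lands on the preferred engine
        swp = ve_random_other_sibling(len(siblings))
        siblings[0], siblings[swp] = siblings[swp], siblings[0]
    return counts
```

With two siblings the "other" sibling is always index 1, so consecutive contexts simply alternate between the engines; with more siblings the work still spreads, only less predictably. Heavy overlapping loads are unaffected, since a busy engine keeps executing whatever is queued regardless of the array order.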
* Re: [Intel-gfx] [PATCH 1/3] drm/i915/gt: Spread virtual engines over idle engines
2021-11-23 19:52 ` Rodrigo Vivi
@ 2021-11-24 8:56 ` Tvrtko Ursulin
2021-11-24 13:55 ` Rodrigo Vivi
0 siblings, 1 reply; 21+ messages in thread
From: Tvrtko Ursulin @ 2021-11-24 8:56 UTC (permalink / raw)
To: Rodrigo Vivi; +Cc: intel-gfx, dri-devel, Chris Wilson
On 23/11/2021 19:52, Rodrigo Vivi wrote:
> On Tue, Nov 23, 2021 at 09:39:25AM +0000, Tvrtko Ursulin wrote:
>>
>> On 17/11/2021 22:49, Vinay Belgaumkar wrote:
>>> From: Chris Wilson <chris@chris-wilson.co.uk>
>>>
>>> Every time we come to the end of a virtual engine's context, re-randomise
>>> its siblings[]. As we schedule the siblings' tasklets in the order they
>>> are in the array, earlier entries are executed first (when idle) and so
>>> will be preferred when scheduling the next virtual request. Currently,
>>> we only update the array when switching onto a new idle engine, so we
>>> prefer to stick on the last executed engine, keeping the work compact.
>>> However, it can be beneficial to spread the work out across idle
>>> engines, so choose another sibling as our preferred target at the end of
>>> the context's execution.
>>
>> This partially brings back, from a different angle, the more dynamic
>> scheduling behavior which has been lost since bugfix 90a987205c6c
>> ("drm/i915/gt: Only swap to a random sibling once upon creation").
>
> Shouldn't we use the Fixes: tag here, since this aims to fix one
> of the performance regressions introduced by that patch?
Probably not but hard to say. Note that it wasn't a performance
regression that was reported but power.
And to go back to what we said elsewhere in the thread, I am actually
with you in thinking that in the ideal world we need PnP testing across
a variety of workloads and platforms. And "in the ideal world" should
really be in the normal world. It is not professional to be reactive to
isolated bug reports from users, without being able to see the overall
picture.
>> One day we could experiment with using engine busyness as criteria (instead
>> of random). Back in the day busyness was kind of the best strategy, although
>> sampled at submit, not at the trailing edge like here, but it still may be
>> able to settle down to engine configuration better in some scenarios. Only
>> testing could say.
>>
>> Still, from memory random also wasn't that bad so this should be okay for
>> now.
>>
>> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>
> Since you have reviewed it, and it looks to be a middle ground in terms
> of when to rebalance (always, as in the initial implementation, vs
> only once, as in 90a987205c6c).
>
> If this one is really fixing the regression by itself:
> Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> on this patch here.
>
> But I still don't want to take the risk of touching the freq with
> race to idle until I'm convinced that it is absolutely needed and
> that we are not breaking the world out there.
Yes agreed in principle, we have users with different priorities.
However the RPS patches in the series, definitely the 1st one which
looks at classes versus individual engines, sound plausible to me. Given
the absence of automated PnP testing mentioned above, in the past it was
usually Chris who was making the above and beyond effort to evaluate
changes like these on as many platforms as he could, and with different
workloads. Not sure who has the mandate and drive to fill that space but
something will need to happen.
Regards,
Tvrtko
>>
>> Regards,
>>
>> Tvrtko
>>
>>> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
>>> Cc: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
>>> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
>>> ---
>>> .../drm/i915/gt/intel_execlists_submission.c | 80 ++++++++++++-------
>>> 1 file changed, 52 insertions(+), 28 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
>>> index ca03880fa7e4..b95bbc8fb91a 100644
>>> --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
>>> +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
>>> @@ -539,6 +539,41 @@ static void execlists_schedule_in(struct i915_request *rq, int idx)
>>> GEM_BUG_ON(intel_context_inflight(ce) != rq->engine);
>>> }
>>> +static void virtual_xfer_context(struct virtual_engine *ve,
>>> + struct intel_engine_cs *engine)
>>> +{
>>> + unsigned int n;
>>> +
>>> + if (likely(engine == ve->siblings[0]))
>>> + return;
>>> +
>>> + if (!intel_engine_has_relative_mmio(engine))
>>> + lrc_update_offsets(&ve->context, engine);
>>> +
>>> + /*
>>> + * Move the bound engine to the top of the list for
>>> + * future execution. We then kick this tasklet first
>>> + * before checking others, so that we preferentially
>>> + * reuse this set of bound registers.
>>> + */
>>> + for (n = 1; n < ve->num_siblings; n++) {
>>> + if (ve->siblings[n] == engine) {
>>> + swap(ve->siblings[n], ve->siblings[0]);
>>> + break;
>>> + }
>>> + }
>>> +}
>>> +
>>> +static int ve_random_sibling(struct virtual_engine *ve)
>>> +{
>>> + return prandom_u32_max(ve->num_siblings);
>>> +}
>>> +
>>> +static int ve_random_other_sibling(struct virtual_engine *ve)
>>> +{
>>> + return 1 + prandom_u32_max(ve->num_siblings - 1);
>>> +}
>>> +
>>> static void
>>> resubmit_virtual_request(struct i915_request *rq, struct virtual_engine *ve)
>>> {
>>> @@ -578,8 +613,23 @@ static void kick_siblings(struct i915_request *rq, struct intel_context *ce)
>>> rq->execution_mask != engine->mask)
>>> resubmit_virtual_request(rq, ve);
>>> - if (READ_ONCE(ve->request))
>>> + /*
>>> + * Reschedule with a new "preferred" sibling.
>>> + *
>>> + * The tasklets are executed in the order of ve->siblings[], so
>>> + * siblings[0] receives preferrential treatment of greedily checking
>>> + * for execution of the virtual engine. At this point, the virtual
>>> + * engine is no longer in the current GPU cache due to idleness or
>>> + * contention, so it can be executed on any without penalty. We
>>> + * re-randomise at this point in order to spread light loads across
>>> + * the system, heavy overlapping loads will continue to be greedily
>>> + * executed by the first available engine.
>>> + */
>>> + if (READ_ONCE(ve->request)) {
>>> + virtual_xfer_context(ve,
>>> + ve->siblings[ve_random_other_sibling(ve)]);
>>> tasklet_hi_schedule(&ve->base.sched_engine->tasklet);
>>> + }
>>> }
>>> static void __execlists_schedule_out(struct i915_request * const rq,
>>> @@ -1030,32 +1080,6 @@ first_virtual_engine(struct intel_engine_cs *engine)
>>> return NULL;
>>> }
>>> -static void virtual_xfer_context(struct virtual_engine *ve,
>>> - struct intel_engine_cs *engine)
>>> -{
>>> - unsigned int n;
>>> -
>>> - if (likely(engine == ve->siblings[0]))
>>> - return;
>>> -
>>> - GEM_BUG_ON(READ_ONCE(ve->context.inflight));
>>> - if (!intel_engine_has_relative_mmio(engine))
>>> - lrc_update_offsets(&ve->context, engine);
>>> -
>>> - /*
>>> - * Move the bound engine to the top of the list for
>>> - * future execution. We then kick this tasklet first
>>> - * before checking others, so that we preferentially
>>> - * reuse this set of bound registers.
>>> - */
>>> - for (n = 1; n < ve->num_siblings; n++) {
>>> - if (ve->siblings[n] == engine) {
>>> - swap(ve->siblings[n], ve->siblings[0]);
>>> - break;
>>> - }
>>> - }
>>> -}
>>> -
>>> static void defer_request(struct i915_request *rq, struct list_head * const pl)
>>> {
>>> LIST_HEAD(list);
>>> @@ -3590,7 +3614,7 @@ static void virtual_engine_initial_hint(struct virtual_engine *ve)
>>> * NB This does not force us to execute on this engine, it will just
>>> * typically be the first we inspect for submission.
>>> */
>>> - swp = prandom_u32_max(ve->num_siblings);
>>> + swp = ve_random_sibling(ve);
>>> if (swp)
>>> swap(ve->siblings[swp], ve->siblings[0]);
>>> }
>>>
* Re: [Intel-gfx] [PATCH 1/3] drm/i915/gt: Spread virtual engines over idle engines
2021-11-24 8:56 ` Tvrtko Ursulin
@ 2021-11-24 13:55 ` Rodrigo Vivi
2021-11-24 15:09 ` Rodrigo Vivi
0 siblings, 1 reply; 21+ messages in thread
From: Rodrigo Vivi @ 2021-11-24 13:55 UTC (permalink / raw)
To: Tvrtko Ursulin; +Cc: intel-gfx, dri-devel, Chris Wilson
On Wed, Nov 24, 2021 at 08:56:52AM +0000, Tvrtko Ursulin wrote:
>
> On 23/11/2021 19:52, Rodrigo Vivi wrote:
> > On Tue, Nov 23, 2021 at 09:39:25AM +0000, Tvrtko Ursulin wrote:
> > >
> > > On 17/11/2021 22:49, Vinay Belgaumkar wrote:
> > > > From: Chris Wilson <chris@chris-wilson.co.uk>
> > > >
> > > > Every time we come to the end of a virtual engine's context, re-randomise
> > > > its siblings[]. As we schedule the siblings' tasklets in the order they
> > > > are in the array, earlier entries are executed first (when idle) and so
> > > > will be preferred when scheduling the next virtual request. Currently,
> > > > we only update the array when switching onto a new idle engine, so we
> > > > prefer to stick on the last executed engine, keeping the work compact.
> > > > However, it can be beneficial to spread the work out across idle
> > > > engines, so choose another sibling as our preferred target at the end of
> > > > the context's execution.
> > >
> > > This partially brings back, from a different angle, the more dynamic
> > > scheduling behavior which has been lost since bugfix 90a987205c6c
> > > ("drm/i915/gt: Only swap to a random sibling once upon creation").
> >
> > Shouldn't we use the Fixes: tag here, since this aims to fix one
> > of the performance regressions introduced by that patch?
>
> Probably not but hard to say. Note that it wasn't a performance regression
> that was reported but power.
>
> And to go back to what we said elsewhere in the thread, I am actually with
> you in thinking that in the ideal world we need PnP testing across a variety
> of workloads and platforms. And "in the ideal world" should really be in the
> normal world. It is not professional to be reactive to isolated bug reports
> from users, without being able to see the overall picture.
We surely need to address the bug reports from users. I'm just asking that we
address them with the smallest fix that we can backport and fit into the
product milestones.
Instead, we are creating another optimization feature in a rush, without
proper validation.
I believe it is too risky to add an algorithm like that without broader
testing. I see a big risk of introducing corner cases that will result in
more bug reports from other users in the near future.
So, let's all be professionals: provide a smaller fix for the regression in
the load-balancing scenario, along with better validation and more data
to justify this new feature.
Thanks,
Rodrigo.
>
> > > One day we could experiment with using engine busyness as criteria (instead
> > > of random). Back in the day busyness was kind of the best strategy, although
> > > sampled at submit, not at the trailing edge like here, but it still may be
> > > able to settle down to engine configuration better in some scenarios. Only
> > > testing could say.
> > >
> > > Still, from memory random also wasn't that bad so this should be okay for
> > > now.
> > >
> > > Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> >
> > Since you have reviewed it, and it looks to be a middle ground in terms
> > of when to rebalance (always, as in the initial implementation, vs
> > only once, as in 90a987205c6c).
> >
> > If this one is really fixing the regression by itself:
> > Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > on this patch here.
> >
> > But I still don't want to take the risk of touching the freq with
> > race to idle until I'm convinced that it is absolutely needed and
> > that we are not breaking the world out there.
>
> Yes agreed in principle, we have users with different priorities.
>
> However the RPS patches in the series, definitely the 1st one which looks at
> classes versus individual engines, sound plausible to me. Given the absence
> of automated PnP testing mentioned above, in the past it was usually Chris
> who was making the above and beyond effort to evaluate changes like these on
> as many platforms as he could, and with different workloads. Not sure who
> has the mandate and drive to fill that space but something will need to
> happen.
>
> Regards,
>
> Tvrtko
>
> > >
> > > Regards,
> > >
> > > Tvrtko
> > >
> > > > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > > > Cc: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
> > > > Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
> > > > ---
> > > > .../drm/i915/gt/intel_execlists_submission.c | 80 ++++++++++++-------
> > > > 1 file changed, 52 insertions(+), 28 deletions(-)
> > > >
> > > > diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> > > > index ca03880fa7e4..b95bbc8fb91a 100644
> > > > --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> > > > +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> > > > @@ -539,6 +539,41 @@ static void execlists_schedule_in(struct i915_request *rq, int idx)
> > > > GEM_BUG_ON(intel_context_inflight(ce) != rq->engine);
> > > > }
> > > > +static void virtual_xfer_context(struct virtual_engine *ve,
> > > > + struct intel_engine_cs *engine)
> > > > +{
> > > > + unsigned int n;
> > > > +
> > > > + if (likely(engine == ve->siblings[0]))
> > > > + return;
> > > > +
> > > > + if (!intel_engine_has_relative_mmio(engine))
> > > > + lrc_update_offsets(&ve->context, engine);
> > > > +
> > > > + /*
> > > > + * Move the bound engine to the top of the list for
> > > > + * future execution. We then kick this tasklet first
> > > > + * before checking others, so that we preferentially
> > > > + * reuse this set of bound registers.
> > > > + */
> > > > + for (n = 1; n < ve->num_siblings; n++) {
> > > > + if (ve->siblings[n] == engine) {
> > > > + swap(ve->siblings[n], ve->siblings[0]);
> > > > + break;
> > > > + }
> > > > + }
> > > > +}
> > > > +
> > > > +static int ve_random_sibling(struct virtual_engine *ve)
> > > > +{
> > > > + return prandom_u32_max(ve->num_siblings);
> > > > +}
> > > > +
> > > > +static int ve_random_other_sibling(struct virtual_engine *ve)
> > > > +{
> > > > + return 1 + prandom_u32_max(ve->num_siblings - 1);
> > > > +}
> > > > +
> > > > static void
> > > > resubmit_virtual_request(struct i915_request *rq, struct virtual_engine *ve)
> > > > {
> > > > @@ -578,8 +613,23 @@ static void kick_siblings(struct i915_request *rq, struct intel_context *ce)
> > > > rq->execution_mask != engine->mask)
> > > > resubmit_virtual_request(rq, ve);
> > > > - if (READ_ONCE(ve->request))
> > > > + /*
> > > > + * Reschedule with a new "preferred" sibling.
> > > > + *
> > > > + * The tasklets are executed in the order of ve->siblings[], so
> > > > + * siblings[0] receives preferrential treatment of greedily checking
> > > > + * for execution of the virtual engine. At this point, the virtual
> > > > + * engine is no longer in the current GPU cache due to idleness or
> > > > + * contention, so it can be executed on any without penalty. We
> > > > + * re-randomise at this point in order to spread light loads across
> > > > + * the system, heavy overlapping loads will continue to be greedily
> > > > + * executed by the first available engine.
> > > > + */
> > > > + if (READ_ONCE(ve->request)) {
> > > > + virtual_xfer_context(ve,
> > > > + ve->siblings[ve_random_other_sibling(ve)]);
> > > > tasklet_hi_schedule(&ve->base.sched_engine->tasklet);
> > > > + }
> > > > }
> > > > static void __execlists_schedule_out(struct i915_request * const rq,
> > > > @@ -1030,32 +1080,6 @@ first_virtual_engine(struct intel_engine_cs *engine)
> > > > return NULL;
> > > > }
> > > > -static void virtual_xfer_context(struct virtual_engine *ve,
> > > > - struct intel_engine_cs *engine)
> > > > -{
> > > > - unsigned int n;
> > > > -
> > > > - if (likely(engine == ve->siblings[0]))
> > > > - return;
> > > > -
> > > > - GEM_BUG_ON(READ_ONCE(ve->context.inflight));
> > > > - if (!intel_engine_has_relative_mmio(engine))
> > > > - lrc_update_offsets(&ve->context, engine);
> > > > -
> > > > - /*
> > > > - * Move the bound engine to the top of the list for
> > > > - * future execution. We then kick this tasklet first
> > > > - * before checking others, so that we preferentially
> > > > - * reuse this set of bound registers.
> > > > - */
> > > > - for (n = 1; n < ve->num_siblings; n++) {
> > > > - if (ve->siblings[n] == engine) {
> > > > - swap(ve->siblings[n], ve->siblings[0]);
> > > > - break;
> > > > - }
> > > > - }
> > > > -}
> > > > -
> > > > static void defer_request(struct i915_request *rq, struct list_head * const pl)
> > > > {
> > > > LIST_HEAD(list);
> > > > @@ -3590,7 +3614,7 @@ static void virtual_engine_initial_hint(struct virtual_engine *ve)
> > > > * NB This does not force us to execute on this engine, it will just
> > > > * typically be the first we inspect for submission.
> > > > */
> > > > - swp = prandom_u32_max(ve->num_siblings);
> > > > + swp = ve_random_sibling(ve);
> > > > if (swp)
> > > > swap(ve->siblings[swp], ve->siblings[0]);
> > > > }
> > > >
* Re: [Intel-gfx] [PATCH 1/3] drm/i915/gt: Spread virtual engines over idle engines
2021-11-24 13:55 ` Rodrigo Vivi
@ 2021-11-24 15:09 ` Rodrigo Vivi
0 siblings, 0 replies; 21+ messages in thread
From: Rodrigo Vivi @ 2021-11-24 15:09 UTC (permalink / raw)
To: Tvrtko Ursulin; +Cc: intel-gfx, dri-devel, Chris Wilson
On Wed, Nov 24, 2021 at 08:55:39AM -0500, Rodrigo Vivi wrote:
> On Wed, Nov 24, 2021 at 08:56:52AM +0000, Tvrtko Ursulin wrote:
> >
> > On 23/11/2021 19:52, Rodrigo Vivi wrote:
> > > On Tue, Nov 23, 2021 at 09:39:25AM +0000, Tvrtko Ursulin wrote:
> > > >
> > > > On 17/11/2021 22:49, Vinay Belgaumkar wrote:
> > > > > From: Chris Wilson <chris@chris-wilson.co.uk>
> > > > >
> > > > > Every time we come to the end of a virtual engine's context, re-randomise
> > > > > its siblings[]. As we schedule the siblings' tasklets in the order they
> > > > > are in the array, earlier entries are executed first (when idle) and so
> > > > > will be preferred when scheduling the next virtual request. Currently,
> > > > > we only update the array when switching onto a new idle engine, so we
> > > > > prefer to stick on the last executed engine, keeping the work compact.
> > > > > However, it can be beneficial to spread the work out across idle
> > > > > engines, so choose another sibling as our preferred target at the end of
> > > > > the context's execution.
> > > >
> > > > This partially brings back, from a different angle, the more dynamic
> > > > scheduling behavior which has been lost since bugfix 90a987205c6c
> > > > ("drm/i915/gt: Only swap to a random sibling once upon creation").
> > >
> > > Shouldn't we use the Fixes: tag here, since this aims to fix one
> > > of the performance regressions introduced by that patch?
> >
> > Probably not but hard to say. Note that it wasn't a performance regression
> > that was reported but power.
> >
> > And to go back to what we said elsewhere in the thread, I am actually with
> > you in thinking that in the ideal world we need PnP testing across a variety
> > of workloads and platforms. And "in the ideal world" should really be in the
> > normal world. It is not professional to be reactive to isolated bug reports
> > from users, without being able to see the overall picture.
>
> We surely need to address the bug reports from users. I'm just asking that we
> address them with the smallest fix that we can backport and fit into the
> product milestones.
>
> Instead, we are creating another optimization feature in a rush, without
> proper validation.
>
> I believe it is too risky to add an algorithm like that without broader
> testing. I see a big risk of introducing corner cases that will result in
> more bug reports from other users in the near future.
>
> So, let's all be professionals: provide a smaller fix for the regression in
> the load-balancing scenario, along with better validation and more data
> to justify this new feature.
Okay, after more IRC discussion I see that patch 2 is also part of the solution
and is probably safe.
Let me be clear: my biggest complaint, and the risk, is with race-to-idle in
patch 3, which tries to predict the rc6 behavior by increasing the freq to
complete the job faster and then get to rc6 sooner... That one would need a lot
more validation.
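The tradeoff being questioned here (low frequency with little rc6 residency vs a higher frequency with deep rc6) can be made concrete with a toy model. All numbers and the power model below are hypothetical: whether race-to-idle saves energy hinges on how much power the engine burns while awake but idle, which is exactly the platform-dependent behaviour that would need validation.

```python
def avg_power(freq, busy_frac, rc6_frac, awake_base=1.0):
    """Hypothetical average power over a window (arbitrary units):
    f^2 dynamic power while busy, a fixed 'awake' baseline whenever the
    engine is not in rc6, and approximately zero while in rc6."""
    awake = 1.0 - rc6_frac
    return busy_frac * freq ** 2 + awake * awake_base

# Same amount of work: 80% busy at f=1 with idle gaps too short for rc6,
# vs 40% busy at f=2 with the remaining 60% of the window in rc6.
slow = avg_power(1.0, 0.80, rc6_frac=0.0)   # 0.8 dynamic + 1.0 baseline
fast = avg_power(2.0, 0.40, rc6_frac=0.60)  # 1.6 dynamic + 0.4 baseline
```

With a low awake baseline the slower frequency wins; raise the baseline (e.g. `awake_base=3.0`) and race-to-idle wins instead. The sign of the comparison flips on a parameter nobody in this thread has measured across platforms, which supports the request for broader data before merging patch 3.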
>
> Thanks,
> Rodrigo.
>
> >
> > > > One day we could experiment with using engine busyness as criteria (instead
> > > > of random). Back in the day busyness was kind of the best strategy, although
> > > > sampled at submit, not at the trailing edge like here, but it still may be
> > > > able to settle down to engine configuration better in some scenarios. Only
> > > > testing could say.
> > > >
> > > > Still, from memory random also wasn't that bad so this should be okay for
> > > > now.
> > > >
> > > > Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > >
> > > Since you reviewed and it looks to be a middle ground point in terms
> > > of when to balancing (always like in the initial implementation vs
> > > only once like the in 90a987205c6c).
> > >
> > > If this one is really fixing the regression by itself:
> > > Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > > on this patch here.
> > >
> > > But I still don't want to take the risk of touching the freq with
> > > race to idle until I'm convinced that it is absolutely needed and
> > > that we are not breaking the world out there.
> >
> > Yes agreed in principle, we have users with different priorities.
> >
> > However the RPS patches in the series, definitely the 1st one which looks at
> > classes versus individual engines, sound plausible to me. Given the absence
> > of automated PnP testing mentioned above, in the past it was usually Chris
> > who was making the above and beyond effort to evaluate changes like these on
> > as many platforms as he could, and with different workloads. Not sure who
> > has the mandate and drive to fill that space but something will need to
> > happen.
> >
> > Regards,
> >
> > Tvrtko
> >
> > > >
> > > > Regards,
> > > >
> > > > Tvrtko
> > > >
> > > > > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > > > > Cc: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
> > > > > Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
> > > > > ---
> > > > > .../drm/i915/gt/intel_execlists_submission.c | 80 ++++++++++++-------
> > > > > 1 file changed, 52 insertions(+), 28 deletions(-)
> > > > >
> > > > > diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> > > > > index ca03880fa7e4..b95bbc8fb91a 100644
> > > > > --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> > > > > +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> > > > > @@ -539,6 +539,41 @@ static void execlists_schedule_in(struct i915_request *rq, int idx)
> > > > > GEM_BUG_ON(intel_context_inflight(ce) != rq->engine);
> > > > > }
> > > > > +static void virtual_xfer_context(struct virtual_engine *ve,
> > > > > + struct intel_engine_cs *engine)
> > > > > +{
> > > > > + unsigned int n;
> > > > > +
> > > > > + if (likely(engine == ve->siblings[0]))
> > > > > + return;
> > > > > +
> > > > > + if (!intel_engine_has_relative_mmio(engine))
> > > > > + lrc_update_offsets(&ve->context, engine);
> > > > > +
> > > > > + /*
> > > > > + * Move the bound engine to the top of the list for
> > > > > + * future execution. We then kick this tasklet first
> > > > > + * before checking others, so that we preferentially
> > > > > + * reuse this set of bound registers.
> > > > > + */
> > > > > + for (n = 1; n < ve->num_siblings; n++) {
> > > > > + if (ve->siblings[n] == engine) {
> > > > > + swap(ve->siblings[n], ve->siblings[0]);
> > > > > + break;
> > > > > + }
> > > > > + }
> > > > > +}
> > > > > +
> > > > > +static int ve_random_sibling(struct virtual_engine *ve)
> > > > > +{
> > > > > + return prandom_u32_max(ve->num_siblings);
> > > > > +}
> > > > > +
> > > > > +static int ve_random_other_sibling(struct virtual_engine *ve)
> > > > > +{
> > > > > + return 1 + prandom_u32_max(ve->num_siblings - 1);
> > > > > +}
> > > > > +
> > > > > static void
> > > > > resubmit_virtual_request(struct i915_request *rq, struct virtual_engine *ve)
> > > > > {
> > > > > @@ -578,8 +613,23 @@ static void kick_siblings(struct i915_request *rq, struct intel_context *ce)
> > > > > rq->execution_mask != engine->mask)
> > > > > resubmit_virtual_request(rq, ve);
> > > > > - if (READ_ONCE(ve->request))
> > > > > + /*
> > > > > + * Reschedule with a new "preferred" sibling.
> > > > > + *
> > > > > + * The tasklets are executed in the order of ve->siblings[], so
> > > > > + * siblings[0] receives preferrential treatment of greedily checking
> > > > > + * for execution of the virtual engine. At this point, the virtual
> > > > > + * engine is no longer in the current GPU cache due to idleness or
> > > > > + * contention, so it can be executed on any without penalty. We
> > > > > + * re-randomise at this point in order to spread light loads across
> > > > > + * the system, heavy overlapping loads will continue to be greedily
> > > > > + * executed by the first available engine.
> > > > > + */
> > > > > + if (READ_ONCE(ve->request)) {
> > > > > + virtual_xfer_context(ve,
> > > > > + ve->siblings[ve_random_other_sibling(ve)]);
> > > > > tasklet_hi_schedule(&ve->base.sched_engine->tasklet);
> > > > > + }
> > > > > }
> > > > > static void __execlists_schedule_out(struct i915_request * const rq,
> > > > > @@ -1030,32 +1080,6 @@ first_virtual_engine(struct intel_engine_cs *engine)
> > > > > return NULL;
> > > > > }
> > > > > -static void virtual_xfer_context(struct virtual_engine *ve,
> > > > > - struct intel_engine_cs *engine)
> > > > > -{
> > > > > - unsigned int n;
> > > > > -
> > > > > - if (likely(engine == ve->siblings[0]))
> > > > > - return;
> > > > > -
> > > > > - GEM_BUG_ON(READ_ONCE(ve->context.inflight));
> > > > > - if (!intel_engine_has_relative_mmio(engine))
> > > > > - lrc_update_offsets(&ve->context, engine);
> > > > > -
> > > > > - /*
> > > > > - * Move the bound engine to the top of the list for
> > > > > - * future execution. We then kick this tasklet first
> > > > > - * before checking others, so that we preferentially
> > > > > - * reuse this set of bound registers.
> > > > > - */
> > > > > - for (n = 1; n < ve->num_siblings; n++) {
> > > > > - if (ve->siblings[n] == engine) {
> > > > > - swap(ve->siblings[n], ve->siblings[0]);
> > > > > - break;
> > > > > - }
> > > > > - }
> > > > > -}
> > > > > -
> > > > > static void defer_request(struct i915_request *rq, struct list_head * const pl)
> > > > > {
> > > > > LIST_HEAD(list);
> > > > > @@ -3590,7 +3614,7 @@ static void virtual_engine_initial_hint(struct virtual_engine *ve)
> > > > > * NB This does not force us to execute on this engine, it will just
> > > > > * typically be the first we inspect for submission.
> > > > > */
> > > > > - swp = prandom_u32_max(ve->num_siblings);
> > > > > + swp = ve_random_sibling(ve);
> > > > > if (swp)
> > > > > swap(ve->siblings[swp], ve->siblings[0]);
> > > > > }
> > > > >
Thread overview: 21+ messages
2021-11-17 22:49 [Intel-gfx] [PATCH 0/3] drm/i915/gt: RPS tuning for light media playback Vinay Belgaumkar
2021-11-17 22:49 ` [Intel-gfx] [PATCH 1/3] drm/i915/gt: Spread virtual engines over idle engines Vinay Belgaumkar
2021-11-23 9:39 ` Tvrtko Ursulin
2021-11-23 19:52 ` Rodrigo Vivi
2021-11-24 8:56 ` Tvrtko Ursulin
2021-11-24 13:55 ` Rodrigo Vivi
2021-11-24 15:09 ` Rodrigo Vivi
2021-11-17 22:49 ` [Intel-gfx] [PATCH 2/3] drm/i915/gt: Compare average group occupancy for RPS evaluation Vinay Belgaumkar
2021-11-23 17:35 ` Belgaumkar, Vinay
2021-11-17 22:49 ` [Intel-gfx] [PATCH 3/3] drm/i915/gt: Improve "race-to-idle" at low frequencies Vinay Belgaumkar
2021-11-22 18:44 ` Rodrigo Vivi
2021-11-23 9:17 ` Tvrtko Ursulin
2021-11-23 16:53 ` Vivi, Rodrigo
2021-11-23 17:37 ` Belgaumkar, Vinay
2021-11-17 23:59 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for drm/i915/gt: RPS tuning for light media playback Patchwork
2021-11-18 0:31 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
2021-11-18 20:30 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for drm/i915/gt: RPS tuning for light media playback (rev2) Patchwork
2021-11-18 20:59 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2021-11-20 0:37 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for drm/i915/gt: RPS tuning for light media playback (rev3) Patchwork
2021-11-20 1:09 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2021-11-20 5:13 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork