* [Intel-gfx] [PATCH 1/7] drm/i915/guc: Use correct context lock when calling clr_context_registered
2021-12-11 0:56 [Intel-gfx] [PATCH 0/7] Fix stealing guc_ids + test Matthew Brost
@ 2021-12-11 0:56 ` Matthew Brost
2021-12-11 0:56 ` [Intel-gfx] [PATCH 2/7] drm/i915/guc: Only assign guc_id.id when stealing guc_id Matthew Brost
` (9 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: Matthew Brost @ 2021-12-11 0:56 UTC (permalink / raw)
To: intel-gfx, dri-devel
s/ce/cn/ when grabbing guc_state.lock before calling
clr_context_registered.
Fixes: 0f7976506de61 ("drm/i915/guc: Rework and simplify locking")
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
---
drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 1f9d4fde421f..9b7b4f4e0d91 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -1937,9 +1937,9 @@ static int steal_guc_id(struct intel_guc *guc, struct intel_context *ce)
list_del_init(&cn->guc_id.link);
ce->guc_id = cn->guc_id;
- spin_lock(&ce->guc_state.lock);
+ spin_lock(&cn->guc_state.lock);
clr_context_registered(cn);
- spin_unlock(&ce->guc_state.lock);
+ spin_unlock(&cn->guc_state.lock);
set_context_guc_id_invalid(cn);
--
2.33.1
* [Intel-gfx] [PATCH 2/7] drm/i915/guc: Only assign guc_id.id when stealing guc_id
2021-12-11 0:56 [Intel-gfx] [PATCH 0/7] Fix stealing guc_ids + test Matthew Brost
2021-12-11 0:56 ` [Intel-gfx] [PATCH 1/7] drm/i915/guc: Use correct context lock when calling clr_context_registered Matthew Brost
@ 2021-12-11 0:56 ` Matthew Brost
2021-12-11 1:08 ` John Harrison
2021-12-11 0:56 ` [Intel-gfx] [PATCH 3/7] drm/i915/guc: Remove racey GEM_BUG_ON Matthew Brost
` (8 subsequent siblings)
10 siblings, 1 reply; 27+ messages in thread
From: Matthew Brost @ 2021-12-11 0:56 UTC (permalink / raw)
To: intel-gfx, dri-devel
Previously the whole guc_id structure (list, spin lock) was assigned,
which is incorrect; only assign guc_id.id.
Fixes: 0f7976506de61 ("drm/i915/guc: Rework and simplify locking")
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 9b7b4f4e0d91..0fb2eeff0262 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -1935,7 +1935,7 @@ static int steal_guc_id(struct intel_guc *guc, struct intel_context *ce)
GEM_BUG_ON(intel_context_is_parent(cn));
list_del_init(&cn->guc_id.link);
- ce->guc_id = cn->guc_id;
+ ce->guc_id.id = cn->guc_id.id;
spin_lock(&cn->guc_state.lock);
clr_context_registered(cn);
--
2.33.1
* Re: [Intel-gfx] [PATCH 2/7] drm/i915/guc: Only assign guc_id.id when stealing guc_id
2021-12-11 0:56 ` [Intel-gfx] [PATCH 2/7] drm/i915/guc: Only assign guc_id.id when stealing guc_id Matthew Brost
@ 2021-12-11 1:08 ` John Harrison
0 siblings, 0 replies; 27+ messages in thread
From: John Harrison @ 2021-12-11 1:08 UTC (permalink / raw)
To: Matthew Brost, intel-gfx, dri-devel
On 12/10/2021 16:56, Matthew Brost wrote:
> Previously assigned whole guc_id structure (list, spin lock) which is
> incorrect, only assign the guc_id.id.
>
> Fixes: 0f7976506de61 ("drm/i915/guc: Rework and simplify locking")
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>
> ---
> drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> index 9b7b4f4e0d91..0fb2eeff0262 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> @@ -1935,7 +1935,7 @@ static int steal_guc_id(struct intel_guc *guc, struct intel_context *ce)
> GEM_BUG_ON(intel_context_is_parent(cn));
>
> list_del_init(&cn->guc_id.link);
> - ce->guc_id = cn->guc_id;
> + ce->guc_id.id = cn->guc_id.id;
>
> spin_lock(&cn->guc_state.lock);
> clr_context_registered(cn);
* [Intel-gfx] [PATCH 3/7] drm/i915/guc: Remove racey GEM_BUG_ON
2021-12-11 0:56 [Intel-gfx] [PATCH 0/7] Fix stealing guc_ids + test Matthew Brost
2021-12-11 0:56 ` [Intel-gfx] [PATCH 1/7] drm/i915/guc: Use correct context lock when calling clr_context_registered Matthew Brost
2021-12-11 0:56 ` [Intel-gfx] [PATCH 2/7] drm/i915/guc: Only assign guc_id.id when stealing guc_id Matthew Brost
@ 2021-12-11 0:56 ` Matthew Brost
2021-12-11 0:56 ` [Intel-gfx] [PATCH 4/7] drm/i915/guc: Don't hog IRQs when destroying contexts Matthew Brost
` (7 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: Matthew Brost @ 2021-12-11 0:56 UTC (permalink / raw)
To: intel-gfx, dri-devel
A full GT reset can race with the last context put resulting in the
context ref count being zero but the destroyed bit not yet being set.
Remove the GEM_BUG_ON in scrub_guc_desc_for_outstanding_g2h that asserts
the destroyed bit must be set if the ref count is zero.
Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 0fb2eeff0262..36c2965db49b 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -1040,8 +1040,6 @@ static void scrub_guc_desc_for_outstanding_g2h(struct intel_guc *guc)
spin_unlock(&ce->guc_state.lock);
- GEM_BUG_ON(!do_put && !destroyed);
-
if (pending_enable || destroyed || deregister) {
decr_outstanding_submission_g2h(guc);
if (deregister)
--
2.33.1
* [Intel-gfx] [PATCH 4/7] drm/i915/guc: Don't hog IRQs when destroying contexts
2021-12-11 0:56 [Intel-gfx] [PATCH 0/7] Fix stealing guc_ids + test Matthew Brost
` (2 preceding siblings ...)
2021-12-11 0:56 ` [Intel-gfx] [PATCH 3/7] drm/i915/guc: Remove racey GEM_BUG_ON Matthew Brost
@ 2021-12-11 0:56 ` Matthew Brost
2021-12-11 1:07 ` John Harrison
2021-12-11 0:56 ` [Intel-gfx] [PATCH 5/7] drm/i915/guc: Add extra debug on CT deadlock Matthew Brost
` (6 subsequent siblings)
10 siblings, 1 reply; 27+ messages in thread
From: Matthew Brost @ 2021-12-11 0:56 UTC (permalink / raw)
To: intel-gfx, dri-devel
From: John Harrison <John.C.Harrison@Intel.com>
While attempting to debug a CT deadlock issue in various CI failures
(most easily reproduced with gem_ctx_create/basic-files), I was seeing
CPU deadlock errors being reported. This was because the context
destroy loop was blocking waiting on H2G space from inside an IRQ
spinlock. There was deadlock as such, it's just that the H2G queue was
full of context destroy commands and GuC was taking a long time to
process them. However, the kernel was seeing the large amount of time
spent inside the IRQ lock as a dead CPU. Various Bad Things(tm) would
then happen (heartbeat failures, CT deadlock errors, outstanding H2G
WARNs, etc.).
Re-working the loop to only acquire the spinlock around the list
management (which is all it is meant to protect) rather than the
entire destroy operation seems to fix all the above issues.
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
---
.../gpu/drm/i915/gt/uc/intel_guc_submission.c | 45 ++++++++++++-------
1 file changed, 28 insertions(+), 17 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 36c2965db49b..96fcf869e3ff 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -2644,7 +2644,6 @@ static inline void guc_lrc_desc_unpin(struct intel_context *ce)
unsigned long flags;
bool disabled;
- lockdep_assert_held(&guc->submission_state.lock);
GEM_BUG_ON(!intel_gt_pm_is_awake(gt));
GEM_BUG_ON(!lrc_desc_registered(guc, ce->guc_id.id));
GEM_BUG_ON(ce != __get_context(guc, ce->guc_id.id));
@@ -2660,7 +2659,7 @@ static inline void guc_lrc_desc_unpin(struct intel_context *ce)
}
spin_unlock_irqrestore(&ce->guc_state.lock, flags);
if (unlikely(disabled)) {
- __release_guc_id(guc, ce);
+ release_guc_id(guc, ce);
__guc_context_destroy(ce);
return;
}
@@ -2694,36 +2693,48 @@ static void __guc_context_destroy(struct intel_context *ce)
static void guc_flush_destroyed_contexts(struct intel_guc *guc)
{
- struct intel_context *ce, *cn;
+ struct intel_context *ce;
unsigned long flags;
GEM_BUG_ON(!submission_disabled(guc) &&
guc_submission_initialized(guc));
- spin_lock_irqsave(&guc->submission_state.lock, flags);
- list_for_each_entry_safe(ce, cn,
- &guc->submission_state.destroyed_contexts,
- destroyed_link) {
- list_del_init(&ce->destroyed_link);
- __release_guc_id(guc, ce);
+ while (!list_empty(&guc->submission_state.destroyed_contexts)) {
+ spin_lock_irqsave(&guc->submission_state.lock, flags);
+ ce = list_first_entry_or_null(&guc->submission_state.destroyed_contexts,
+ struct intel_context,
+ destroyed_link);
+ if (ce)
+ list_del_init(&ce->destroyed_link);
+ spin_unlock_irqrestore(&guc->submission_state.lock, flags);
+
+ if (!ce)
+ break;
+
+ release_guc_id(guc, ce);
__guc_context_destroy(ce);
}
- spin_unlock_irqrestore(&guc->submission_state.lock, flags);
}
static void deregister_destroyed_contexts(struct intel_guc *guc)
{
- struct intel_context *ce, *cn;
+ struct intel_context *ce;
unsigned long flags;
- spin_lock_irqsave(&guc->submission_state.lock, flags);
- list_for_each_entry_safe(ce, cn,
- &guc->submission_state.destroyed_contexts,
- destroyed_link) {
- list_del_init(&ce->destroyed_link);
+ while (!list_empty(&guc->submission_state.destroyed_contexts)) {
+ spin_lock_irqsave(&guc->submission_state.lock, flags);
+ ce = list_first_entry_or_null(&guc->submission_state.destroyed_contexts,
+ struct intel_context,
+ destroyed_link);
+ if (ce)
+ list_del_init(&ce->destroyed_link);
+ spin_unlock_irqrestore(&guc->submission_state.lock, flags);
+
+ if (!ce)
+ break;
+
guc_lrc_desc_unpin(ce);
}
- spin_unlock_irqrestore(&guc->submission_state.lock, flags);
}
static void destroyed_worker_func(struct work_struct *w)
--
2.33.1
* Re: [Intel-gfx] [PATCH 4/7] drm/i915/guc: Don't hog IRQs when destroying contexts
2021-12-11 0:56 ` [Intel-gfx] [PATCH 4/7] drm/i915/guc: Don't hog IRQs when destroying contexts Matthew Brost
@ 2021-12-11 1:07 ` John Harrison
2021-12-11 1:10 ` Matthew Brost
0 siblings, 1 reply; 27+ messages in thread
From: John Harrison @ 2021-12-11 1:07 UTC (permalink / raw)
To: Matthew Brost, intel-gfx, dri-devel
On 12/10/2021 16:56, Matthew Brost wrote:
> From: John Harrison <John.C.Harrison@Intel.com>
>
> While attempting to debug a CT deadlock issue in various CI failures
> (most easily reproduced with gem_ctx_create/basic-files), I was seeing
> CPU deadlock errors being reported. This were because the context
> destroy loop was blocking waiting on H2G space from inside an IRQ
> spinlock. There was deadlock as such, it's just that the H2G queue was
There was *no* deadlock as such
John.
> full of context destroy commands and GuC was taking a long time to
> process them. However, the kernel was seeing the large amount of time
> spent inside the IRQ lock as a dead CPU. Various Bad Things(tm) would
> then happen (heartbeat failures, CT deadlock errors, outstanding H2G
> WARNs, etc.).
>
> Re-working the loop to only acquire the spinlock around the list
> management (which is all it is meant to protect) rather than the
> entire destroy operation seems to fix all the above issues.
>
> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
> ---
> .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 45 ++++++++++++-------
> 1 file changed, 28 insertions(+), 17 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> index 36c2965db49b..96fcf869e3ff 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> @@ -2644,7 +2644,6 @@ static inline void guc_lrc_desc_unpin(struct intel_context *ce)
> unsigned long flags;
> bool disabled;
>
> - lockdep_assert_held(&guc->submission_state.lock);
> GEM_BUG_ON(!intel_gt_pm_is_awake(gt));
> GEM_BUG_ON(!lrc_desc_registered(guc, ce->guc_id.id));
> GEM_BUG_ON(ce != __get_context(guc, ce->guc_id.id));
> @@ -2660,7 +2659,7 @@ static inline void guc_lrc_desc_unpin(struct intel_context *ce)
> }
> spin_unlock_irqrestore(&ce->guc_state.lock, flags);
> if (unlikely(disabled)) {
> - __release_guc_id(guc, ce);
> + release_guc_id(guc, ce);
> __guc_context_destroy(ce);
> return;
> }
> @@ -2694,36 +2693,48 @@ static void __guc_context_destroy(struct intel_context *ce)
>
> static void guc_flush_destroyed_contexts(struct intel_guc *guc)
> {
> - struct intel_context *ce, *cn;
> + struct intel_context *ce;
> unsigned long flags;
>
> GEM_BUG_ON(!submission_disabled(guc) &&
> guc_submission_initialized(guc));
>
> - spin_lock_irqsave(&guc->submission_state.lock, flags);
> - list_for_each_entry_safe(ce, cn,
> - &guc->submission_state.destroyed_contexts,
> - destroyed_link) {
> - list_del_init(&ce->destroyed_link);
> - __release_guc_id(guc, ce);
> + while (!list_empty(&guc->submission_state.destroyed_contexts)) {
> + spin_lock_irqsave(&guc->submission_state.lock, flags);
> + ce = list_first_entry_or_null(&guc->submission_state.destroyed_contexts,
> + struct intel_context,
> + destroyed_link);
> + if (ce)
> + list_del_init(&ce->destroyed_link);
> + spin_unlock_irqrestore(&guc->submission_state.lock, flags);
> +
> + if (!ce)
> + break;
> +
> + release_guc_id(guc, ce);
> __guc_context_destroy(ce);
> }
> - spin_unlock_irqrestore(&guc->submission_state.lock, flags);
> }
>
> static void deregister_destroyed_contexts(struct intel_guc *guc)
> {
> - struct intel_context *ce, *cn;
> + struct intel_context *ce;
> unsigned long flags;
>
> - spin_lock_irqsave(&guc->submission_state.lock, flags);
> - list_for_each_entry_safe(ce, cn,
> - &guc->submission_state.destroyed_contexts,
> - destroyed_link) {
> - list_del_init(&ce->destroyed_link);
> + while (!list_empty(&guc->submission_state.destroyed_contexts)) {
> + spin_lock_irqsave(&guc->submission_state.lock, flags);
> + ce = list_first_entry_or_null(&guc->submission_state.destroyed_contexts,
> + struct intel_context,
> + destroyed_link);
> + if (ce)
> + list_del_init(&ce->destroyed_link);
> + spin_unlock_irqrestore(&guc->submission_state.lock, flags);
> +
> + if (!ce)
> + break;
> +
> guc_lrc_desc_unpin(ce);
> }
> - spin_unlock_irqrestore(&guc->submission_state.lock, flags);
> }
>
> static void destroyed_worker_func(struct work_struct *w)
* Re: [Intel-gfx] [PATCH 4/7] drm/i915/guc: Don't hog IRQs when destroying contexts
2021-12-11 1:07 ` John Harrison
@ 2021-12-11 1:10 ` Matthew Brost
0 siblings, 0 replies; 27+ messages in thread
From: Matthew Brost @ 2021-12-11 1:10 UTC (permalink / raw)
To: John Harrison; +Cc: intel-gfx, dri-devel
On Fri, Dec 10, 2021 at 05:07:12PM -0800, John Harrison wrote:
> On 12/10/2021 16:56, Matthew Brost wrote:
> > From: John Harrison <John.C.Harrison@Intel.com>
> >
> > While attempting to debug a CT deadlock issue in various CI failures
> > (most easily reproduced with gem_ctx_create/basic-files), I was seeing
> > CPU deadlock errors being reported. This were because the context
> > destroy loop was blocking waiting on H2G space from inside an IRQ
> > spinlock. There was deadlock as such, it's just that the H2G queue was
> There was *no* deadlock as such
>
Let's fix this up when applying the series.
With that:
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> John.
>
> > full of context destroy commands and GuC was taking a long time to
> > process them. However, the kernel was seeing the large amount of time
> > spent inside the IRQ lock as a dead CPU. Various Bad Things(tm) would
> > then happen (heartbeat failures, CT deadlock errors, outstanding H2G
> > WARNs, etc.).
> >
> > Re-working the loop to only acquire the spinlock around the list
> > management (which is all it is meant to protect) rather than the
> > entire destroy operation seems to fix all the above issues.
> >
> > Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
> > ---
> > .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 45 ++++++++++++-------
> > 1 file changed, 28 insertions(+), 17 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > index 36c2965db49b..96fcf869e3ff 100644
> > --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > @@ -2644,7 +2644,6 @@ static inline void guc_lrc_desc_unpin(struct intel_context *ce)
> > unsigned long flags;
> > bool disabled;
> > - lockdep_assert_held(&guc->submission_state.lock);
> > GEM_BUG_ON(!intel_gt_pm_is_awake(gt));
> > GEM_BUG_ON(!lrc_desc_registered(guc, ce->guc_id.id));
> > GEM_BUG_ON(ce != __get_context(guc, ce->guc_id.id));
> > @@ -2660,7 +2659,7 @@ static inline void guc_lrc_desc_unpin(struct intel_context *ce)
> > }
> > spin_unlock_irqrestore(&ce->guc_state.lock, flags);
> > if (unlikely(disabled)) {
> > - __release_guc_id(guc, ce);
> > + release_guc_id(guc, ce);
> > __guc_context_destroy(ce);
> > return;
> > }
> > @@ -2694,36 +2693,48 @@ static void __guc_context_destroy(struct intel_context *ce)
> > static void guc_flush_destroyed_contexts(struct intel_guc *guc)
> > {
> > - struct intel_context *ce, *cn;
> > + struct intel_context *ce;
> > unsigned long flags;
> > GEM_BUG_ON(!submission_disabled(guc) &&
> > guc_submission_initialized(guc));
> > - spin_lock_irqsave(&guc->submission_state.lock, flags);
> > - list_for_each_entry_safe(ce, cn,
> > - &guc->submission_state.destroyed_contexts,
> > - destroyed_link) {
> > - list_del_init(&ce->destroyed_link);
> > - __release_guc_id(guc, ce);
> > + while (!list_empty(&guc->submission_state.destroyed_contexts)) {
> > + spin_lock_irqsave(&guc->submission_state.lock, flags);
> > + ce = list_first_entry_or_null(&guc->submission_state.destroyed_contexts,
> > + struct intel_context,
> > + destroyed_link);
> > + if (ce)
> > + list_del_init(&ce->destroyed_link);
> > + spin_unlock_irqrestore(&guc->submission_state.lock, flags);
> > +
> > + if (!ce)
> > + break;
> > +
> > + release_guc_id(guc, ce);
> > __guc_context_destroy(ce);
> > }
> > - spin_unlock_irqrestore(&guc->submission_state.lock, flags);
> > }
> > static void deregister_destroyed_contexts(struct intel_guc *guc)
> > {
> > - struct intel_context *ce, *cn;
> > + struct intel_context *ce;
> > unsigned long flags;
> > - spin_lock_irqsave(&guc->submission_state.lock, flags);
> > - list_for_each_entry_safe(ce, cn,
> > - &guc->submission_state.destroyed_contexts,
> > - destroyed_link) {
> > - list_del_init(&ce->destroyed_link);
> > + while (!list_empty(&guc->submission_state.destroyed_contexts)) {
> > + spin_lock_irqsave(&guc->submission_state.lock, flags);
> > + ce = list_first_entry_or_null(&guc->submission_state.destroyed_contexts,
> > + struct intel_context,
> > + destroyed_link);
> > + if (ce)
> > + list_del_init(&ce->destroyed_link);
> > + spin_unlock_irqrestore(&guc->submission_state.lock, flags);
> > +
> > + if (!ce)
> > + break;
> > +
> > guc_lrc_desc_unpin(ce);
> > }
> > - spin_unlock_irqrestore(&guc->submission_state.lock, flags);
> > }
> > static void destroyed_worker_func(struct work_struct *w)
>
* [Intel-gfx] [PATCH 5/7] drm/i915/guc: Add extra debug on CT deadlock
2021-12-11 0:56 [Intel-gfx] [PATCH 0/7] Fix stealing guc_ids + test Matthew Brost
` (3 preceding siblings ...)
2021-12-11 0:56 ` [Intel-gfx] [PATCH 4/7] drm/i915/guc: Don't hog IRQs when destroying contexts Matthew Brost
@ 2021-12-11 0:56 ` Matthew Brost
2021-12-11 1:43 ` John Harrison
2021-12-11 0:56 ` [Intel-gfx] [PATCH 6/7] drm/i915/guc: Kick G2H tasklet if no credits Matthew Brost
` (5 subsequent siblings)
10 siblings, 1 reply; 27+ messages in thread
From: Matthew Brost @ 2021-12-11 0:56 UTC (permalink / raw)
To: intel-gfx, dri-devel
Print CT state (H2G + G2H head / tail pointers, credits) on CT
deadlock.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index a0cc34be7b56..ee5525c6f79b 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -523,6 +523,15 @@ static inline bool ct_deadlocked(struct intel_guc_ct *ct)
CT_ERROR(ct, "Communication stalled for %lld ms, desc status=%#x,%#x\n",
ktime_ms_delta(ktime_get(), ct->stall_time),
send->status, recv->status);
+ CT_ERROR(ct, "H2G Space: %u\n",
+ atomic_read(&ct->ctbs.send.space) * 4);
+ CT_ERROR(ct, "Head: %u\n", ct->ctbs.send.desc->head);
+ CT_ERROR(ct, "Tail: %u\n", ct->ctbs.send.desc->tail);
+ CT_ERROR(ct, "G2H Space: %u\n",
+ atomic_read(&ct->ctbs.recv.space) * 4);
+ CT_ERROR(ct, "Head: %u\n", ct->ctbs.recv.desc->head);
+ CT_ERROR(ct, "Tail: %u\n", ct->ctbs.recv.desc->tail);
+
ct->ctbs.send.broken = true;
}
--
2.33.1
* Re: [Intel-gfx] [PATCH 5/7] drm/i915/guc: Add extra debug on CT deadlock
2021-12-11 0:56 ` [Intel-gfx] [PATCH 5/7] drm/i915/guc: Add extra debug on CT deadlock Matthew Brost
@ 2021-12-11 1:43 ` John Harrison
2021-12-11 1:45 ` John Harrison
0 siblings, 1 reply; 27+ messages in thread
From: John Harrison @ 2021-12-11 1:43 UTC (permalink / raw)
To: Matthew Brost, intel-gfx, dri-devel
On 12/10/2021 16:56, Matthew Brost wrote:
> Print CT state (H2G + G2H head / tail pointers, credits) on CT
> deadlock.
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>
> ---
> drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 9 +++++++++
> 1 file changed, 9 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
> index a0cc34be7b56..ee5525c6f79b 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
> @@ -523,6 +523,15 @@ static inline bool ct_deadlocked(struct intel_guc_ct *ct)
> CT_ERROR(ct, "Communication stalled for %lld ms, desc status=%#x,%#x\n",
> ktime_ms_delta(ktime_get(), ct->stall_time),
> send->status, recv->status);
> + CT_ERROR(ct, "H2G Space: %u\n",
> + atomic_read(&ct->ctbs.send.space) * 4);
> + CT_ERROR(ct, "Head: %u\n", ct->ctbs.send.desc->head);
> + CT_ERROR(ct, "Tail: %u\n", ct->ctbs.send.desc->tail);
> + CT_ERROR(ct, "G2H Space: %u\n",
> + atomic_read(&ct->ctbs.recv.space) * 4);
> + CT_ERROR(ct, "Head: %u\n", ct->ctbs.recv.desc->head);
> + CT_ERROR(ct, "Tail: %u\n", ct->ctbs.recv.desc->tail);
> +
> ct->ctbs.send.broken = true;
> }
>
* Re: [Intel-gfx] [PATCH 5/7] drm/i915/guc: Add extra debug on CT deadlock
2021-12-11 1:43 ` John Harrison
@ 2021-12-11 1:45 ` John Harrison
2021-12-11 3:24 ` Matthew Brost
0 siblings, 1 reply; 27+ messages in thread
From: John Harrison @ 2021-12-11 1:45 UTC (permalink / raw)
To: Matthew Brost, intel-gfx, dri-devel
On 12/10/2021 17:43, John Harrison wrote:
> On 12/10/2021 16:56, Matthew Brost wrote:
>> Print CT state (H2G + G2H head / tail pointers, credits) on CT
>> deadlock.
>>
>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> Reviewed-by: John Harrison <John.C.Harrison@Intel.com>
>
>> ---
>> drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 9 +++++++++
>> 1 file changed, 9 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
>> index a0cc34be7b56..ee5525c6f79b 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
>> @@ -523,6 +523,15 @@ static inline bool ct_deadlocked(struct
>> intel_guc_ct *ct)
>> CT_ERROR(ct, "Communication stalled for %lld ms, desc
>> status=%#x,%#x\n",
>> ktime_ms_delta(ktime_get(), ct->stall_time),
>> send->status, recv->status);
>> + CT_ERROR(ct, "H2G Space: %u\n",
>> + atomic_read(&ct->ctbs.send.space) * 4);
>> + CT_ERROR(ct, "Head: %u\n", ct->ctbs.send.desc->head);
>> + CT_ERROR(ct, "Tail: %u\n", ct->ctbs.send.desc->tail);
Actually, aren't these offsets in dwords? So, scaling the dword space to
bytes but leaving this as dwords would produce a confusing numbers.
John.
>> + CT_ERROR(ct, "G2H Space: %u\n",
>> + atomic_read(&ct->ctbs.recv.space) * 4);
>> + CT_ERROR(ct, "Head: %u\n", ct->ctbs.recv.desc->head);
>> + CT_ERROR(ct, "Tail: %u\n", ct->ctbs.recv.desc->tail);
>> +
>> ct->ctbs.send.broken = true;
>> }
>
* Re: [Intel-gfx] [PATCH 5/7] drm/i915/guc: Add extra debug on CT deadlock
2021-12-11 1:45 ` John Harrison
@ 2021-12-11 3:24 ` Matthew Brost
0 siblings, 0 replies; 27+ messages in thread
From: Matthew Brost @ 2021-12-11 3:24 UTC (permalink / raw)
To: John Harrison; +Cc: intel-gfx, dri-devel
On Fri, Dec 10, 2021 at 05:45:05PM -0800, John Harrison wrote:
> On 12/10/2021 17:43, John Harrison wrote:
> > On 12/10/2021 16:56, Matthew Brost wrote:
> > > Print CT state (H2G + G2H head / tail pointers, credits) on CT
> > > deadlock.
> > >
> > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > Reviewed-by: John Harrison <John.C.Harrison@Intel.com>
> >
> > > ---
> > > drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 9 +++++++++
> > > 1 file changed, 9 insertions(+)
> > >
> > > diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
> > > b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
> > > index a0cc34be7b56..ee5525c6f79b 100644
> > > --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
> > > +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
> > > @@ -523,6 +523,15 @@ static inline bool ct_deadlocked(struct
> > > intel_guc_ct *ct)
> > > CT_ERROR(ct, "Communication stalled for %lld ms, desc
> > > status=%#x,%#x\n",
> > > ktime_ms_delta(ktime_get(), ct->stall_time),
> > > send->status, recv->status);
> > > + CT_ERROR(ct, "H2G Space: %u\n",
> > > + atomic_read(&ct->ctbs.send.space) * 4);
> > > + CT_ERROR(ct, "Head: %u\n", ct->ctbs.send.desc->head);
> > > + CT_ERROR(ct, "Tail: %u\n", ct->ctbs.send.desc->tail);
> Actually, aren't these offsets in dwords? So, scaling the dword space to
> bytes but leaving this as dwords would produce a confusing numbers.
>
Copy + pasted from CT debugfs but yes it is slightly confusing. I'd
rather leave the head / tail in native format but I'll add info to both
the space / pointers print indicating the units.
Matt
> John.
>
> > > + CT_ERROR(ct, "G2H Space: %u\n",
> > > + atomic_read(&ct->ctbs.recv.space) * 4);
> > > + CT_ERROR(ct, "Head: %u\n", ct->ctbs.recv.desc->head);
> > > + CT_ERROR(ct, "Tail: %u\n", ct->ctbs.recv.desc->tail);
> > > +
> > > ct->ctbs.send.broken = true;
> > > }
> >
>
* [Intel-gfx] [PATCH 6/7] drm/i915/guc: Kick G2H tasklet if no credits
2021-12-11 0:56 [Intel-gfx] [PATCH 0/7] Fix stealing guc_ids + test Matthew Brost
` (4 preceding siblings ...)
2021-12-11 0:56 ` [Intel-gfx] [PATCH 5/7] drm/i915/guc: Add extra debug on CT deadlock Matthew Brost
@ 2021-12-11 0:56 ` Matthew Brost
2021-12-11 0:56 ` [Intel-gfx] [PATCH 7/7] drm/i915/guc: Selftest for stealing of guc ids Matthew Brost
` (4 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: Matthew Brost @ 2021-12-11 0:56 UTC (permalink / raw)
To: intel-gfx, dri-devel
Let's be paranoid and kick the G2H tasklet, which dequeues messages, if
G2H credits are exhausted.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index ee5525c6f79b..334c5ab1c7b1 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -591,12 +591,19 @@ static inline bool h2g_has_room(struct intel_guc_ct *ct, u32 len_dw)
static int has_room_nb(struct intel_guc_ct *ct, u32 h2g_dw, u32 g2h_dw)
{
+ bool h2g = h2g_has_room(ct, h2g_dw);
+ bool g2h = g2h_has_room(ct, g2h_dw);
+
lockdep_assert_held(&ct->ctbs.send.lock);
- if (unlikely(!h2g_has_room(ct, h2g_dw) || !g2h_has_room(ct, g2h_dw))) {
+ if (unlikely(!h2g || !g2h)) {
if (ct->stall_time == KTIME_MAX)
ct->stall_time = ktime_get();
+ /* Be paranoid and kick G2H tasklet to free credits */
+ if (!g2h)
+ tasklet_hi_schedule(&ct->receive_tasklet);
+
if (unlikely(ct_deadlocked(ct)))
return -EPIPE;
else
--
2.33.1
* [Intel-gfx] [PATCH 7/7] drm/i915/guc: Selftest for stealing of guc ids
2021-12-11 0:56 [Intel-gfx] [PATCH 0/7] Fix stealing guc_ids + test Matthew Brost
` (5 preceding siblings ...)
2021-12-11 0:56 ` [Intel-gfx] [PATCH 6/7] drm/i915/guc: Kick G2H tasklet if no credits Matthew Brost
@ 2021-12-11 0:56 ` Matthew Brost
2021-12-11 1:33 ` John Harrison
2021-12-11 2:28 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Fix stealing guc_ids + test Patchwork
` (3 subsequent siblings)
10 siblings, 1 reply; 27+ messages in thread
From: Matthew Brost @ 2021-12-11 0:56 UTC (permalink / raw)
To: intel-gfx, dri-devel
Testing the stealing of guc_ids is hard from user space as we have 64k
guc_ids. Add a selftest which artificially reduces the number of guc_ids
and forces a steal. The details of the test are described in a comment
in the code, so they are not repeated here.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/i915/gt/uc/intel_guc.h | 12 ++
.../gpu/drm/i915/gt/uc/intel_guc_submission.c | 15 +-
drivers/gpu/drm/i915/gt/uc/selftest_guc.c | 171 ++++++++++++++++++
3 files changed, 193 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
index 1cb46098030d..307380a2e2ff 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
@@ -94,6 +94,11 @@ struct intel_guc {
* @guc_ids: used to allocate new guc_ids, single-lrc
*/
struct ida guc_ids;
+ /**
+ * @num_guc_ids: Number of guc_ids, selftest feature to be able
+ * to reduce this number for testing.
+ */
+ int num_guc_ids;
/**
* @guc_ids_bitmap: used to allocate new guc_ids, multi-lrc
*/
@@ -202,6 +207,13 @@ struct intel_guc {
*/
struct delayed_work work;
} timestamp;
+
+#ifdef CONFIG_DRM_I915_SELFTEST
+ /**
+ * @number_guc_id_stole: The number of guc_ids that have been stolen
+ */
+ int number_guc_id_stole;
+#endif
};
static inline struct intel_guc *log_to_guc(struct intel_guc_log *log)
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 96fcf869e3ff..57019b190bfb 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -145,7 +145,7 @@ guc_create_parallel(struct intel_engine_cs **engines,
* use should be low and 1/16 should be sufficient. Minimum of 32 guc_ids for
* multi-lrc.
*/
-#define NUMBER_MULTI_LRC_GUC_ID (GUC_MAX_LRC_DESCRIPTORS / 16)
+#define NUMBER_MULTI_LRC_GUC_ID(guc) (guc->submission_state.num_guc_ids / 16)
/*
* Below is a set of functions which control the GuC scheduling state which
@@ -1775,7 +1775,7 @@ int intel_guc_submission_init(struct intel_guc *guc)
destroyed_worker_func);
guc->submission_state.guc_ids_bitmap =
- bitmap_zalloc(NUMBER_MULTI_LRC_GUC_ID, GFP_KERNEL);
+ bitmap_zalloc(NUMBER_MULTI_LRC_GUC_ID(guc), GFP_KERNEL);
if (!guc->submission_state.guc_ids_bitmap)
return -ENOMEM;
@@ -1869,13 +1869,13 @@ static int new_guc_id(struct intel_guc *guc, struct intel_context *ce)
if (intel_context_is_parent(ce))
ret = bitmap_find_free_region(guc->submission_state.guc_ids_bitmap,
- NUMBER_MULTI_LRC_GUC_ID,
+ NUMBER_MULTI_LRC_GUC_ID(guc),
order_base_2(ce->parallel.number_children
+ 1));
else
ret = ida_simple_get(&guc->submission_state.guc_ids,
- NUMBER_MULTI_LRC_GUC_ID,
- GUC_MAX_LRC_DESCRIPTORS,
+ NUMBER_MULTI_LRC_GUC_ID(guc),
+ guc->submission_state.num_guc_ids,
GFP_KERNEL | __GFP_RETRY_MAYFAIL |
__GFP_NOWARN);
if (unlikely(ret < 0))
@@ -1941,6 +1941,10 @@ static int steal_guc_id(struct intel_guc *guc, struct intel_context *ce)
set_context_guc_id_invalid(cn);
+#ifdef CONFIG_DRM_I915_SELFTEST
+ guc->number_guc_id_stole++;
+#endif
+
return 0;
} else {
return -EAGAIN;
@@ -3779,6 +3783,7 @@ static bool __guc_submission_selected(struct intel_guc *guc)
void intel_guc_submission_init_early(struct intel_guc *guc)
{
+ guc->submission_state.num_guc_ids = GUC_MAX_LRC_DESCRIPTORS;
guc->submission_supported = __guc_submission_supported(guc);
guc->submission_selected = __guc_submission_selected(guc);
}
diff --git a/drivers/gpu/drm/i915/gt/uc/selftest_guc.c b/drivers/gpu/drm/i915/gt/uc/selftest_guc.c
index fb0e4a7bd8ca..9ab355e64b3f 100644
--- a/drivers/gpu/drm/i915/gt/uc/selftest_guc.c
+++ b/drivers/gpu/drm/i915/gt/uc/selftest_guc.c
@@ -3,8 +3,21 @@
* Copyright © 2021 Intel Corporation
*/
+#include "selftests/igt_spinner.h"
#include "selftests/intel_scheduler_helpers.h"
+static int request_add_spin(struct i915_request *rq, struct igt_spinner *spin)
+{
+ int err = 0;
+
+ i915_request_get(rq);
+ i915_request_add(rq);
+ if (spin && !igt_wait_for_spinner(spin, rq))
+ err = -ETIMEDOUT;
+
+ return err;
+}
+
static struct i915_request *nop_user_request(struct intel_context *ce,
struct i915_request *from)
{
@@ -110,10 +123,168 @@ static int intel_guc_scrub_ctbs(void *arg)
return ret;
}
+/*
+ * intel_guc_steal_guc_ids - Test to exhaust all guc_ids and then steal one
+ *
+ * This test creates a spinner to which is used as to block all subsequent
+ * submissions until it completes. Next, a loop creates a context and a NOP
+ * request each iteration until the guc_ids are exhausted (request creation
+ * returns -EAGAIN). The spinner is completed unblocking all requests created in
+ * the loop. At this point all guc_ids are exhausted but are available to steal.
+ * Try to create another request which should successfully steal a guc_id. Wait
+ * on last request to complete, idle GPU, verify guc_id was stole via counter,
+ * and exit test. Test also artificially reduces the number of guc_ids so the
+ * test runs in a timely manner.
+ */
+static int intel_guc_steal_guc_ids(void *arg)
+{
+ struct intel_gt *gt = arg;
+ struct intel_guc *guc = &gt->uc.guc;
+ int ret, sv, i = 0;
+ intel_wakeref_t wakeref;
+ struct intel_engine_cs *engine;
+ struct intel_context **ce;
+ struct igt_spinner spin;
+ struct i915_request *spin_rq = NULL, *rq, *last = NULL;
+ int number_guc_id_stole = guc->number_guc_id_stole;
+
+ ce = kzalloc(sizeof(*ce) * GUC_MAX_LRC_DESCRIPTORS, GFP_KERNEL);
+ if (!ce) {
+ pr_err("Context array allocation failed\n");
+ return -ENOMEM;
+ }
+
+ wakeref = intel_runtime_pm_get(gt->uncore->rpm);
+ engine = intel_selftest_find_any_engine(gt);
+ sv = guc->submission_state.num_guc_ids;
+ guc->submission_state.num_guc_ids = 4096;
+
+ /* Create spinner to block requests in below loop */
+ ce[i++] = intel_context_create(engine);
+ if (IS_ERR(ce[i - 1])) {
+ ce[i - 1] = NULL;
+ ret = PTR_ERR(ce[i - 1]);
+ pr_err("Failed to create context: %d\n", ret);
+ goto err_wakeref;
+ }
+ ret = igt_spinner_init(&spin, engine->gt);
+ if (ret) {
+ pr_err("Failed to create spinner: %d\n", ret);
+ goto err_contexts;
+ }
+ spin_rq = igt_spinner_create_request(&spin, ce[i - 1], MI_ARB_CHECK);
+ if (IS_ERR(spin_rq)) {
+ ret = PTR_ERR(spin_rq);
+ pr_err("Failed to create spinner request: %d\n", ret);
+ goto err_contexts;
+ }
+ ret = request_add_spin(spin_rq, &spin);
+ if (ret) {
+ pr_err("Failed to add Spinner request: %d\n", ret);
+ goto err_spin_rq;
+ }
+
+ /* Use all guc_ids */
+ while (ret != -EAGAIN) {
+ ce[i++] = intel_context_create(engine);
+ if (IS_ERR(ce[i - 1])) {
+ ce[i - 1] = NULL;
+ ret = PTR_ERR(ce[i - 1]);
+ pr_err("Failed to create context: %d\n", ret);
+ goto err_spin_rq;
+ }
+
+ rq = nop_user_request(ce[i - 1], spin_rq);
+ if (IS_ERR(rq)) {
+ ret = PTR_ERR(rq);
+ rq = NULL;
+ if (ret != -EAGAIN) {
+ pr_err("Failed to create request, %d: %d\n", i,
+ ret);
+ goto err_spin_rq;
+ }
+ } else {
+ if (last)
+ i915_request_put(last);
+ last = rq;
+ }
+ }
+
+ /* Release blocked requests */
+ igt_spinner_end(&spin);
+ ret = intel_selftest_wait_for_rq(spin_rq);
+ if (ret) {
+ pr_err("Spin request failed to complete: %d\n", ret);
+ i915_request_put(last);
+ goto err_spin_rq;
+ }
+ i915_request_put(spin_rq);
+ igt_spinner_fini(&spin);
+ spin_rq = NULL;
+
+ /* Wait for last request */
+ ret = i915_request_wait(last, 0, HZ * 30);
+ i915_request_put(last);
+ if (ret < 0) {
+ pr_err("Last request failed to complete: %d\n", ret);
+ goto err_spin_rq;
+ }
+
+ /* Try to steal guc_id */
+ rq = nop_user_request(ce[i - 1], NULL);
+ if (IS_ERR(rq)) {
+ ret = PTR_ERR(rq);
+ pr_err("Failed to steal guc_id, %d: %d\n", i, ret);
+ goto err_spin_rq;
+ }
+
+ /* Wait for last request */
+ ret = i915_request_wait(rq, 0, HZ);
+ i915_request_put(rq);
+ if (ret < 0) {
+ pr_err("Last request failed to complete: %d\n", ret);
+ goto err_spin_rq;
+ }
+
+ /* Wait for idle */
+ ret = intel_gt_wait_for_idle(gt, HZ * 30);
+ if (ret < 0) {
+ pr_err("GT failed to idle: %d\n", ret);
+ goto err_spin_rq;
+ }
+
+ /* Verify a guc_id got stole */
+ if (guc->number_guc_id_stole == number_guc_id_stole) {
+ pr_err("No guc_ids stolen");
+ ret = -EINVAL;
+ } else {
+ ret = 0;
+ }
+
+err_spin_rq:
+ if (spin_rq) {
+ igt_spinner_end(&spin);
+ intel_selftest_wait_for_rq(spin_rq);
+ i915_request_put(spin_rq);
+ igt_spinner_fini(&spin);
+ intel_gt_wait_for_idle(gt, HZ * 30);
+ }
+err_contexts:
+ while (i && ce[--i])
+ intel_context_put(ce[i]);
+err_wakeref:
+ intel_runtime_pm_put(gt->uncore->rpm, wakeref);
+ kfree(ce);
+ guc->submission_state.num_guc_ids = sv;
+
+ return ret;
+}
+
int intel_guc_live_selftests(struct drm_i915_private *i915)
{
static const struct i915_subtest tests[] = {
SUBTEST(intel_guc_scrub_ctbs),
+ SUBTEST(intel_guc_steal_guc_ids),
};
struct intel_gt *gt = &i915->gt;
--
2.33.1
^ permalink raw reply related [flat|nested] 27+ messages in thread
* Re: [Intel-gfx] [PATCH 7/7] drm/i915/guc: Selftest for stealing of guc ids
2021-12-11 0:56 ` [Intel-gfx] [PATCH 7/7] drm/i915/guc: Selftest for stealing of guc ids Matthew Brost
@ 2021-12-11 1:33 ` John Harrison
2021-12-11 3:31 ` Matthew Brost
2021-12-11 3:32 ` Matthew Brost
0 siblings, 2 replies; 27+ messages in thread
From: John Harrison @ 2021-12-11 1:33 UTC (permalink / raw)
To: Matthew Brost, intel-gfx, dri-devel
On 12/10/2021 16:56, Matthew Brost wrote:
> Testing the stealing of guc ids is hard from user spaec as we have 64k
spaec -> space
> guc_ids. Add a selftest, which artificially reduces the number of guc
> ids, and forces a steal. Details of test has comment in code so will not
has -> are
But would a copy&paste really be so hard? It is useful to be able to
read what a patch does from the commit log and not have to delve inside
every time.
> repeat here.
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/i915/gt/uc/intel_guc.h | 12 ++
> .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 15 +-
> drivers/gpu/drm/i915/gt/uc/selftest_guc.c | 171 ++++++++++++++++++
> 3 files changed, 193 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
> index 1cb46098030d..307380a2e2ff 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
> @@ -94,6 +94,11 @@ struct intel_guc {
> * @guc_ids: used to allocate new guc_ids, single-lrc
> */
> struct ida guc_ids;
> + /**
> + * @num_guc_ids: Number of guc_ids, selftest feature to be able
> + * to reduce this number of test.
of test -> while testing
Should have a CONFIG_SELFTEST around it? And define a wrapper that is
GUC_MAX_LRC_DESCRIPTORS or num_guc_ids as appropriate.
> + */
> + int num_guc_ids;
> /**
> * @guc_ids_bitmap: used to allocate new guc_ids, multi-lrc
> */
> @@ -202,6 +207,13 @@ struct intel_guc {
> */
> struct delayed_work work;
> } timestamp;
> +
> +#ifdef CONFIG_DRM_I915_SELFTEST
> + /**
> + * @number_guc_id_stole: The number of guc_ids that have been stole
> + */
> + int number_guc_id_stole;
stole -> stolen (in all three cases)
> +#endif
> };
>
> static inline struct intel_guc *log_to_guc(struct intel_guc_log *log)
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> index 96fcf869e3ff..57019b190bfb 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> @@ -145,7 +145,7 @@ guc_create_parallel(struct intel_engine_cs **engines,
> * use should be low and 1/16 should be sufficient. Minimum of 32 guc_ids for
> * multi-lrc.
> */
> -#define NUMBER_MULTI_LRC_GUC_ID (GUC_MAX_LRC_DESCRIPTORS / 16)
> +#define NUMBER_MULTI_LRC_GUC_ID(guc) (guc->submission_state.num_guc_ids / 16)
And keep the original definition for the non CONFIG_SELFTEST case?
>
> /*
> * Below is a set of functions which control the GuC scheduling state which
> @@ -1775,7 +1775,7 @@ int intel_guc_submission_init(struct intel_guc *guc)
> destroyed_worker_func);
>
> guc->submission_state.guc_ids_bitmap =
> - bitmap_zalloc(NUMBER_MULTI_LRC_GUC_ID, GFP_KERNEL);
> + bitmap_zalloc(NUMBER_MULTI_LRC_GUC_ID(guc), GFP_KERNEL);
> if (!guc->submission_state.guc_ids_bitmap)
> return -ENOMEM;
>
> @@ -1869,13 +1869,13 @@ static int new_guc_id(struct intel_guc *guc, struct intel_context *ce)
>
> if (intel_context_is_parent(ce))
> ret = bitmap_find_free_region(guc->submission_state.guc_ids_bitmap,
> - NUMBER_MULTI_LRC_GUC_ID,
> + NUMBER_MULTI_LRC_GUC_ID(guc),
> order_base_2(ce->parallel.number_children
> + 1));
> else
> ret = ida_simple_get(&guc->submission_state.guc_ids,
> - NUMBER_MULTI_LRC_GUC_ID,
> - GUC_MAX_LRC_DESCRIPTORS,
> + NUMBER_MULTI_LRC_GUC_ID(guc),
> + guc->submission_state.num_guc_ids,
> GFP_KERNEL | __GFP_RETRY_MAYFAIL |
> __GFP_NOWARN);
> if (unlikely(ret < 0))
> @@ -1941,6 +1941,10 @@ static int steal_guc_id(struct intel_guc *guc, struct intel_context *ce)
>
> set_context_guc_id_invalid(cn);
>
> +#ifdef CONFIG_DRM_I915_SELFTEST
> + guc->number_guc_id_stole++;
> +#endif
> +
> return 0;
> } else {
> return -EAGAIN;
> @@ -3779,6 +3783,7 @@ static bool __guc_submission_selected(struct intel_guc *guc)
>
> void intel_guc_submission_init_early(struct intel_guc *guc)
> {
> + guc->submission_state.num_guc_ids = GUC_MAX_LRC_DESCRIPTORS;
> guc->submission_supported = __guc_submission_supported(guc);
> guc->submission_selected = __guc_submission_selected(guc);
> }
> diff --git a/drivers/gpu/drm/i915/gt/uc/selftest_guc.c b/drivers/gpu/drm/i915/gt/uc/selftest_guc.c
> index fb0e4a7bd8ca..9ab355e64b3f 100644
> --- a/drivers/gpu/drm/i915/gt/uc/selftest_guc.c
> +++ b/drivers/gpu/drm/i915/gt/uc/selftest_guc.c
> @@ -3,8 +3,21 @@
> > * Copyright © 2021 Intel Corporation
> */
>
> +#include "selftests/igt_spinner.h"
> #include "selftests/intel_scheduler_helpers.h"
>
> +static int request_add_spin(struct i915_request *rq, struct igt_spinner *spin)
> +{
> + int err = 0;
> +
> + i915_request_get(rq);
> + i915_request_add(rq);
> + if (spin && !igt_wait_for_spinner(spin, rq))
> + err = -ETIMEDOUT;
> +
> + return err;
> +}
> +
> static struct i915_request *nop_user_request(struct intel_context *ce,
> struct i915_request *from)
> {
> @@ -110,10 +123,168 @@ static int intel_guc_scrub_ctbs(void *arg)
> return ret;
> }
>
> +/*
> + * intel_guc_steal_guc_ids - Test to exhaust all guc_ids and then steal one
> + *
> + * This test creates a spinner to which is used as to block all subsequent
to which -> which
as to block -> to block
> + * submissions until it completes. Next, a loop creates a context and a NOP
> + * request each iteration until the guc_ids are exhausted (request creation
> + * returns -EAGAIN). The spinner is completed unblocking all requests created in
spinner is ended,
> + * the loop. At this point all guc_ids are exhausted but are available to steal.
> + * Try to create another request which should successfully steal a guc_id. Wait
> + * on last request to complete, idle GPU, verify guc_id was stole via counter,
stole -> stolen
> + * and exit test. Test also artificially reduces the number of guc_ids so the
> + * test runs in a timely manner.
> + */
> +static int intel_guc_steal_guc_ids(void *arg)
> +{
> + struct intel_gt *gt = arg;
> > + struct intel_guc *guc = &gt->uc.guc;
> + int ret, sv, i = 0;
> + intel_wakeref_t wakeref;
> + struct intel_engine_cs *engine;
> + struct intel_context **ce;
> + struct igt_spinner spin;
> + struct i915_request *spin_rq = NULL, *rq, *last = NULL;
> + int number_guc_id_stole = guc->number_guc_id_stole;
stole -> stolen
> +
> + ce = kzalloc(sizeof(*ce) * GUC_MAX_LRC_DESCRIPTORS, GFP_KERNEL);
> + if (!ce) {
> + pr_err("Context array allocation failed\n");
> + return -ENOMEM;
> + }
> +
> + wakeref = intel_runtime_pm_get(gt->uncore->rpm);
> + engine = intel_selftest_find_any_engine(gt);
> + sv = guc->submission_state.num_guc_ids;
> + guc->submission_state.num_guc_ids = 4096;
> +
> + /* Create spinner to block requests in below loop */
> + ce[i++] = intel_context_create(engine);
> + if (IS_ERR(ce[i - 1])) {
> + ce[i - 1] = NULL;
> + ret = PTR_ERR(ce[i - 1]);
Would be less peculiar looking to do the i++ after the if statement.
> + pr_err("Failed to create context: %d\n", ret);
> + goto err_wakeref;
> + }
> + ret = igt_spinner_init(&spin, engine->gt);
> + if (ret) {
> + pr_err("Failed to create spinner: %d\n", ret);
> + goto err_contexts;
> + }
> + spin_rq = igt_spinner_create_request(&spin, ce[i - 1], MI_ARB_CHECK);
> + if (IS_ERR(spin_rq)) {
> + ret = PTR_ERR(spin_rq);
> + pr_err("Failed to create spinner request: %d\n", ret);
> + goto err_contexts;
> + }
> + ret = request_add_spin(spin_rq, &spin);
> + if (ret) {
> + pr_err("Failed to add Spinner request: %d\n", ret);
> + goto err_spin_rq;
> + }
> +
> + /* Use all guc_ids */
> + while (ret != -EAGAIN) {
> + ce[i++] = intel_context_create(engine);
> + if (IS_ERR(ce[i - 1])) {
> + ce[i - 1] = NULL;
> + ret = PTR_ERR(ce[i - 1]);
> + pr_err("Failed to create context: %d\n", ret);
> + goto err_spin_rq;
Won't this try to put the null context? Or rather will see the null
context and immediately abort the clean up loop. Need to do the i++
after the if statement. Or after the nop_user_request call to get rid of
all the ce[i - 1] things.
John.
> + }
> +
> + rq = nop_user_request(ce[i - 1], spin_rq);
> + if (IS_ERR(rq)) {
> + ret = PTR_ERR(rq);
> + rq = NULL;
> + if (ret != -EAGAIN) {
> + pr_err("Failed to create request, %d: %d\n", i,
> + ret);
> + goto err_spin_rq;
> + }
> + } else {
> + if (last)
> + i915_request_put(last);
> + last = rq;
> + }
> + }
> +
> + /* Release blocked requests */
> + igt_spinner_end(&spin);
> + ret = intel_selftest_wait_for_rq(spin_rq);
> + if (ret) {
> + pr_err("Spin request failed to complete: %d\n", ret);
> + i915_request_put(last);
> + goto err_spin_rq;
> + }
> + i915_request_put(spin_rq);
> + igt_spinner_fini(&spin);
> + spin_rq = NULL;
> +
> + /* Wait for last request */
> + ret = i915_request_wait(last, 0, HZ * 30);
> + i915_request_put(last);
> + if (ret < 0) {
> + pr_err("Last request failed to complete: %d\n", ret);
> + goto err_spin_rq;
> + }
> +
> + /* Try to steal guc_id */
> + rq = nop_user_request(ce[i - 1], NULL);
> + if (IS_ERR(rq)) {
> + ret = PTR_ERR(rq);
> + pr_err("Failed to steal guc_id, %d: %d\n", i, ret);
> + goto err_spin_rq;
> + }
> +
> + /* Wait for last request */
> + ret = i915_request_wait(rq, 0, HZ);
> + i915_request_put(rq);
> + if (ret < 0) {
> + pr_err("Last request failed to complete: %d\n", ret);
> + goto err_spin_rq;
> + }
> +
> + /* Wait for idle */
> + ret = intel_gt_wait_for_idle(gt, HZ * 30);
> + if (ret < 0) {
> + pr_err("GT failed to idle: %d\n", ret);
> + goto err_spin_rq;
> + }
> +
> + /* Verify a guc_id got stole */
> + if (guc->number_guc_id_stole == number_guc_id_stole) {
> + pr_err("No guc_ids stolen");
> + ret = -EINVAL;
> + } else {
> + ret = 0;
> + }
> +
> +err_spin_rq:
> + if (spin_rq) {
> + igt_spinner_end(&spin);
> + intel_selftest_wait_for_rq(spin_rq);
> + i915_request_put(spin_rq);
> + igt_spinner_fini(&spin);
> + intel_gt_wait_for_idle(gt, HZ * 30);
> + }
> +err_contexts:
> + while (i && ce[--i])
> + intel_context_put(ce[i]);
> +err_wakeref:
> + intel_runtime_pm_put(gt->uncore->rpm, wakeref);
> + kfree(ce);
> + guc->submission_state.num_guc_ids = sv;
> +
> + return ret;
> +}
> +
> int intel_guc_live_selftests(struct drm_i915_private *i915)
> {
> static const struct i915_subtest tests[] = {
> SUBTEST(intel_guc_scrub_ctbs),
> + SUBTEST(intel_guc_steal_guc_ids),
> };
> struct intel_gt *gt = &i915->gt;
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [Intel-gfx] [PATCH 7/7] drm/i915/guc: Selftest for stealing of guc ids
2021-12-11 1:33 ` John Harrison
@ 2021-12-11 3:31 ` Matthew Brost
2021-12-11 3:32 ` Matthew Brost
1 sibling, 0 replies; 27+ messages in thread
From: Matthew Brost @ 2021-12-11 3:31 UTC (permalink / raw)
To: John Harrison; +Cc: intel-gfx, dri-devel
On Fri, Dec 10, 2021 at 05:33:02PM -0800, John Harrison wrote:
> On 12/10/2021 16:56, Matthew Brost wrote:
> > Testing the stealing of guc ids is hard from user spaec as we have 64k
> spaec -> space
>
> > guc_ids. Add a selftest, which artificially reduces the number of guc
> > ids, and forces a steal. Details of test has comment in code so will not
> has -> are
>
Yep.
> But would a copy&paste really be so hard? It is useful to be able to read
> what a patch does from the commit log and not have to delve inside every
> time.
>
Will c & p.
>
> > repeat here.
> >
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > ---
> > drivers/gpu/drm/i915/gt/uc/intel_guc.h | 12 ++
> > .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 15 +-
> > drivers/gpu/drm/i915/gt/uc/selftest_guc.c | 171 ++++++++++++++++++
> > 3 files changed, 193 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
> > index 1cb46098030d..307380a2e2ff 100644
> > --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
> > +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
> > @@ -94,6 +94,11 @@ struct intel_guc {
> > * @guc_ids: used to allocate new guc_ids, single-lrc
> > */
> > struct ida guc_ids;
> > + /**
> > + * @num_guc_ids: Number of guc_ids, selftest feature to be able
> > + * to reduce this number of test.
> of test -> while testing
>
> Should have a CONFIG_SELFTEST around it? And define a wrapper that is
> GUC_MAX_LRC_DESCRIPTORS or num_guc_ids as appropriate.
>
>
> > + */
> > + int num_guc_ids;
> > /**
> > * @guc_ids_bitmap: used to allocate new guc_ids, multi-lrc
> > */
> > @@ -202,6 +207,13 @@ struct intel_guc {
> > */
> > struct delayed_work work;
> > } timestamp;
> > +
> > +#ifdef CONFIG_DRM_I915_SELFTEST
> > + /**
> > + * @number_guc_id_stole: The number of guc_ids that have been stole
> > + */
> > + int number_guc_id_stole;
> stole -> stolen (in all three cases)
>
Sure.
> > +#endif
> > };
> > static inline struct intel_guc *log_to_guc(struct intel_guc_log *log)
> > diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > index 96fcf869e3ff..57019b190bfb 100644
> > --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > @@ -145,7 +145,7 @@ guc_create_parallel(struct intel_engine_cs **engines,
> > * use should be low and 1/16 should be sufficient. Minimum of 32 guc_ids for
> > * multi-lrc.
> > */
> > -#define NUMBER_MULTI_LRC_GUC_ID (GUC_MAX_LRC_DESCRIPTORS / 16)
> > +#define NUMBER_MULTI_LRC_GUC_ID(guc) (guc->submission_state.num_guc_ids / 16)
> And keep the original definition for the non CONFIG_SELFTEST case?
>
Probably could hide submission_state.num_guc_ids behind a
CONFIG_SELFTEST define, but SRIOV needs this anyway so I didn't bother. If
you insist, I can hide this.
> > /*
> > * Below is a set of functions which control the GuC scheduling state which
> > @@ -1775,7 +1775,7 @@ int intel_guc_submission_init(struct intel_guc *guc)
> > destroyed_worker_func);
> > guc->submission_state.guc_ids_bitmap =
> > - bitmap_zalloc(NUMBER_MULTI_LRC_GUC_ID, GFP_KERNEL);
> > + bitmap_zalloc(NUMBER_MULTI_LRC_GUC_ID(guc), GFP_KERNEL);
> > if (!guc->submission_state.guc_ids_bitmap)
> > return -ENOMEM;
> > @@ -1869,13 +1869,13 @@ static int new_guc_id(struct intel_guc *guc, struct intel_context *ce)
> > if (intel_context_is_parent(ce))
> > ret = bitmap_find_free_region(guc->submission_state.guc_ids_bitmap,
> > - NUMBER_MULTI_LRC_GUC_ID,
> > + NUMBER_MULTI_LRC_GUC_ID(guc),
> > order_base_2(ce->parallel.number_children
> > + 1));
> > else
> > ret = ida_simple_get(&guc->submission_state.guc_ids,
> > - NUMBER_MULTI_LRC_GUC_ID,
> > - GUC_MAX_LRC_DESCRIPTORS,
> > + NUMBER_MULTI_LRC_GUC_ID(guc),
> > + guc->submission_state.num_guc_ids,
> > GFP_KERNEL | __GFP_RETRY_MAYFAIL |
> > __GFP_NOWARN);
> > if (unlikely(ret < 0))
> > @@ -1941,6 +1941,10 @@ static int steal_guc_id(struct intel_guc *guc, struct intel_context *ce)
> > set_context_guc_id_invalid(cn);
> > +#ifdef CONFIG_DRM_I915_SELFTEST
> > + guc->number_guc_id_stole++;
> > +#endif
> > +
> > return 0;
> > } else {
> > return -EAGAIN;
> > @@ -3779,6 +3783,7 @@ static bool __guc_submission_selected(struct intel_guc *guc)
> > void intel_guc_submission_init_early(struct intel_guc *guc)
> > {
> > + guc->submission_state.num_guc_ids = GUC_MAX_LRC_DESCRIPTORS;
> > guc->submission_supported = __guc_submission_supported(guc);
> > guc->submission_selected = __guc_submission_selected(guc);
> > }
> > diff --git a/drivers/gpu/drm/i915/gt/uc/selftest_guc.c b/drivers/gpu/drm/i915/gt/uc/selftest_guc.c
> > index fb0e4a7bd8ca..9ab355e64b3f 100644
> > --- a/drivers/gpu/drm/i915/gt/uc/selftest_guc.c
> > +++ b/drivers/gpu/drm/i915/gt/uc/selftest_guc.c
> > @@ -3,8 +3,21 @@
> > * Copyright © 2021 Intel Corporation
> > */
> > +#include "selftests/igt_spinner.h"
> > #include "selftests/intel_scheduler_helpers.h"
> > +static int request_add_spin(struct i915_request *rq, struct igt_spinner *spin)
> > +{
> > + int err = 0;
> > +
> > + i915_request_get(rq);
> > + i915_request_add(rq);
> > + if (spin && !igt_wait_for_spinner(spin, rq))
> > + err = -ETIMEDOUT;
> > +
> > + return err;
> > +}
> > +
> > static struct i915_request *nop_user_request(struct intel_context *ce,
> > struct i915_request *from)
> > {
> > @@ -110,10 +123,168 @@ static int intel_guc_scrub_ctbs(void *arg)
> > return ret;
> > }
> > +/*
> > + * intel_guc_steal_guc_ids - Test to exhaust all guc_ids and then steal one
> > + *
> > + * This test creates a spinner to which is used as to block all subsequent
> to which -> which
> as to block -> to block
>
> > + * submissions until it completes. Next, a loop creates a context and a NOP
> > + * request each iteration until the guc_ids are exhausted (request creation
> > + * returns -EAGAIN). The spinner is completed unblocking all requests created in
> spinner is ended,
>
> > + * the loop. At this point all guc_ids are exhausted but are available to steal.
> > + * Try to create another request which should successfully steal a guc_id. Wait
> > + * on last request to complete, idle GPU, verify guc_id was stole via counter,
> stole -> stolen
>
> > + * and exit test. Test also artificially reduces the number of guc_ids so the
> > + * test runs in a timely manner.
> > + */
> > +static int intel_guc_steal_guc_ids(void *arg)
> > +{
> > + struct intel_gt *gt = arg;
> > + struct intel_guc *guc = &gt->uc.guc;
> > + int ret, sv, i = 0;
> > + intel_wakeref_t wakeref;
> > + struct intel_engine_cs *engine;
> > + struct intel_context **ce;
> > + struct igt_spinner spin;
> > + struct i915_request *spin_rq = NULL, *rq, *last = NULL;
> > + int number_guc_id_stole = guc->number_guc_id_stole;
> stole -> stolen
Yep to all of the above.
>
> > +
> > + ce = kzalloc(sizeof(*ce) * GUC_MAX_LRC_DESCRIPTORS, GFP_KERNEL);
> > + if (!ce) {
> > + pr_err("Context array allocation failed\n");
> > + return -ENOMEM;
> > + }
> > +
> > + wakeref = intel_runtime_pm_get(gt->uncore->rpm);
> > + engine = intel_selftest_find_any_engine(gt);
> > + sv = guc->submission_state.num_guc_ids;
> > + guc->submission_state.num_guc_ids = 4096;
> > +
> > + /* Create spinner to block requests in below loop */
> > + ce[i++] = intel_context_create(engine);
> > + if (IS_ERR(ce[i - 1])) {
> > + ce[i - 1] = NULL;
> > + ret = PTR_ERR(ce[i - 1]);
> Would be less peculiar looking to do the i++ after the if statement.
>
I guess?
> > + pr_err("Failed to create context: %d\n", ret);
> > + goto err_wakeref;
> > + }
> > + ret = igt_spinner_init(&spin, engine->gt);
> > + if (ret) {
> > + pr_err("Failed to create spinner: %d\n", ret);
> > + goto err_contexts;
> > + }
> > + spin_rq = igt_spinner_create_request(&spin, ce[i - 1], MI_ARB_CHECK);
> > + if (IS_ERR(spin_rq)) {
> > + ret = PTR_ERR(spin_rq);
> > + pr_err("Failed to create spinner request: %d\n", ret);
> > + goto err_contexts;
> > + }
> > + ret = request_add_spin(spin_rq, &spin);
> > + if (ret) {
> > + pr_err("Failed to add Spinner request: %d\n", ret);
> > + goto err_spin_rq;
> > + }
> > +
> > + /* Use all guc_ids */
> > + while (ret != -EAGAIN) {
> > + ce[i++] = intel_context_create(engine);
> > + if (IS_ERR(ce[i - 1])) {
> > + ce[i - 1] = NULL;
> > + ret = PTR_ERR(ce[i - 1]);
> > + pr_err("Failed to create context: %d\n", ret);
> > + goto err_spin_rq;
> Won't this try to put the null context? Or rather will see the null context
> and immediately abort the clean up loop. Need to do the i++ after the if
> statement. Or after the nop_user_request call to get rid of all the ce[i -
> 1] things.
>
Sure. Can refactor this.
Matt
> John.
>
> > + }
> > +
> > + rq = nop_user_request(ce[i - 1], spin_rq);
> > + if (IS_ERR(rq)) {
> > + ret = PTR_ERR(rq);
> > + rq = NULL;
> > + if (ret != -EAGAIN) {
> > + pr_err("Failed to create request, %d: %d\n", i,
> > + ret);
> > + goto err_spin_rq;
> > + }
> > + } else {
> > + if (last)
> > + i915_request_put(last);
> > + last = rq;
> > + }
> > + }
> > +
> > + /* Release blocked requests */
> > + igt_spinner_end(&spin);
> > + ret = intel_selftest_wait_for_rq(spin_rq);
> > + if (ret) {
> > + pr_err("Spin request failed to complete: %d\n", ret);
> > + i915_request_put(last);
> > + goto err_spin_rq;
> > + }
> > + i915_request_put(spin_rq);
> > + igt_spinner_fini(&spin);
> > + spin_rq = NULL;
> > +
> > + /* Wait for last request */
> > + ret = i915_request_wait(last, 0, HZ * 30);
> > + i915_request_put(last);
> > + if (ret < 0) {
> > + pr_err("Last request failed to complete: %d\n", ret);
> > + goto err_spin_rq;
> > + }
> > +
> > + /* Try to steal guc_id */
> > + rq = nop_user_request(ce[i - 1], NULL);
> > + if (IS_ERR(rq)) {
> > + ret = PTR_ERR(rq);
> > + pr_err("Failed to steal guc_id, %d: %d\n", i, ret);
> > + goto err_spin_rq;
> > + }
> > +
> > + /* Wait for last request */
> > + ret = i915_request_wait(rq, 0, HZ);
> > + i915_request_put(rq);
> > + if (ret < 0) {
> > + pr_err("Last request failed to complete: %d\n", ret);
> > + goto err_spin_rq;
> > + }
> > +
> > + /* Wait for idle */
> > + ret = intel_gt_wait_for_idle(gt, HZ * 30);
> > + if (ret < 0) {
> > + pr_err("GT failed to idle: %d\n", ret);
> > + goto err_spin_rq;
> > + }
> > +
> > + /* Verify a guc_id got stole */
> > + if (guc->number_guc_id_stole == number_guc_id_stole) {
> > + pr_err("No guc_ids stolen");
> > + ret = -EINVAL;
> > + } else {
> > + ret = 0;
> > + }
> > +
> > +err_spin_rq:
> > + if (spin_rq) {
> > + igt_spinner_end(&spin);
> > + intel_selftest_wait_for_rq(spin_rq);
> > + i915_request_put(spin_rq);
> > + igt_spinner_fini(&spin);
> > + intel_gt_wait_for_idle(gt, HZ * 30);
> > + }
> > +err_contexts:
> > + while (i && ce[--i])
> > + intel_context_put(ce[i]);
> > +err_wakeref:
> > + intel_runtime_pm_put(gt->uncore->rpm, wakeref);
> > + kfree(ce);
> > + guc->submission_state.num_guc_ids = sv;
> > +
> > + return ret;
> > +}
> > +
> > int intel_guc_live_selftests(struct drm_i915_private *i915)
> > {
> > static const struct i915_subtest tests[] = {
> > SUBTEST(intel_guc_scrub_ctbs),
> > + SUBTEST(intel_guc_steal_guc_ids),
> > };
> > struct intel_gt *gt = &i915->gt;
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [Intel-gfx] [PATCH 7/7] drm/i915/guc: Selftest for stealing of guc ids
2021-12-11 1:33 ` John Harrison
2021-12-11 3:31 ` Matthew Brost
@ 2021-12-11 3:32 ` Matthew Brost
1 sibling, 0 replies; 27+ messages in thread
From: Matthew Brost @ 2021-12-11 3:32 UTC (permalink / raw)
To: John Harrison; +Cc: intel-gfx, dri-devel
On Fri, Dec 10, 2021 at 05:33:02PM -0800, John Harrison wrote:
> On 12/10/2021 16:56, Matthew Brost wrote:
> > Testing the stealing of guc ids is hard from user spaec as we have 64k
> spaec -> space
>
> > guc_ids. Add a selftest, which artificially reduces the number of guc
> > ids, and forces a steal. Details of test has comment in code so will not
> has -> are
>
> But would a copy&paste really be so hard? It is useful to be able to read
> what a patch does from the commit log and not have to delve inside every
> time.
>
>
> > repeat here.
> >
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > ---
> > drivers/gpu/drm/i915/gt/uc/intel_guc.h | 12 ++
> > .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 15 +-
> > drivers/gpu/drm/i915/gt/uc/selftest_guc.c | 171 ++++++++++++++++++
> > 3 files changed, 193 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
> > index 1cb46098030d..307380a2e2ff 100644
> > --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
> > +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
> > @@ -94,6 +94,11 @@ struct intel_guc {
> > * @guc_ids: used to allocate new guc_ids, single-lrc
> > */
> > struct ida guc_ids;
> > + /**
> > + * @num_guc_ids: Number of guc_ids, selftest feature to be able
> > + * to reduce this number of test.
> of test -> while testing
>
> Should have a CONFIG_SELFTEST around it? And define a wrapper that is
> GUC_MAX_LRC_DESCRIPTORS or num_guc_ids as appropriate.
>
Missed this. Basically decided against a SELFTEST wrapper because SRIOV
needs this anyway. Can hide this if you insist.
Matt
>
> > + */
> > + int num_guc_ids;
> > /**
> > * @guc_ids_bitmap: used to allocate new guc_ids, multi-lrc
> > */
> > @@ -202,6 +207,13 @@ struct intel_guc {
> > */
> > struct delayed_work work;
> > } timestamp;
> > +
> > +#ifdef CONFIG_DRM_I915_SELFTEST
> > + /**
> > + * @number_guc_id_stole: The number of guc_ids that have been stole
> > + */
> > + int number_guc_id_stole;
> stole -> stolen (in all three cases)
>
> > +#endif
> > };
> > static inline struct intel_guc *log_to_guc(struct intel_guc_log *log)
> > diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > index 96fcf869e3ff..57019b190bfb 100644
> > --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > @@ -145,7 +145,7 @@ guc_create_parallel(struct intel_engine_cs **engines,
> > * use should be low and 1/16 should be sufficient. Minimum of 32 guc_ids for
> > * multi-lrc.
> > */
> > -#define NUMBER_MULTI_LRC_GUC_ID (GUC_MAX_LRC_DESCRIPTORS / 16)
> > +#define NUMBER_MULTI_LRC_GUC_ID(guc) (guc->submission_state.num_guc_ids / 16)
> And keep the original definition for the non CONFIG_SELFTEST case?
>
> > /*
> > * Below is a set of functions which control the GuC scheduling state which
> > @@ -1775,7 +1775,7 @@ int intel_guc_submission_init(struct intel_guc *guc)
> > destroyed_worker_func);
> > guc->submission_state.guc_ids_bitmap =
> > - bitmap_zalloc(NUMBER_MULTI_LRC_GUC_ID, GFP_KERNEL);
> > + bitmap_zalloc(NUMBER_MULTI_LRC_GUC_ID(guc), GFP_KERNEL);
> > if (!guc->submission_state.guc_ids_bitmap)
> > return -ENOMEM;
> > @@ -1869,13 +1869,13 @@ static int new_guc_id(struct intel_guc *guc, struct intel_context *ce)
> > if (intel_context_is_parent(ce))
> > ret = bitmap_find_free_region(guc->submission_state.guc_ids_bitmap,
> > - NUMBER_MULTI_LRC_GUC_ID,
> > + NUMBER_MULTI_LRC_GUC_ID(guc),
> > order_base_2(ce->parallel.number_children
> > + 1));
> > else
> > ret = ida_simple_get(&guc->submission_state.guc_ids,
> > - NUMBER_MULTI_LRC_GUC_ID,
> > - GUC_MAX_LRC_DESCRIPTORS,
> > + NUMBER_MULTI_LRC_GUC_ID(guc),
> > + guc->submission_state.num_guc_ids,
> > GFP_KERNEL | __GFP_RETRY_MAYFAIL |
> > __GFP_NOWARN);
> > if (unlikely(ret < 0))
> > @@ -1941,6 +1941,10 @@ static int steal_guc_id(struct intel_guc *guc, struct intel_context *ce)
> > set_context_guc_id_invalid(cn);
> > +#ifdef CONFIG_DRM_I915_SELFTEST
> > + guc->number_guc_id_stole++;
> > +#endif
> > +
> > return 0;
> > } else {
> > return -EAGAIN;
> > @@ -3779,6 +3783,7 @@ static bool __guc_submission_selected(struct intel_guc *guc)
> > void intel_guc_submission_init_early(struct intel_guc *guc)
> > {
> > + guc->submission_state.num_guc_ids = GUC_MAX_LRC_DESCRIPTORS;
> > guc->submission_supported = __guc_submission_supported(guc);
> > guc->submission_selected = __guc_submission_selected(guc);
> > }
> > diff --git a/drivers/gpu/drm/i915/gt/uc/selftest_guc.c b/drivers/gpu/drm/i915/gt/uc/selftest_guc.c
> > index fb0e4a7bd8ca..9ab355e64b3f 100644
> > --- a/drivers/gpu/drm/i915/gt/uc/selftest_guc.c
> > +++ b/drivers/gpu/drm/i915/gt/uc/selftest_guc.c
> > @@ -3,8 +3,21 @@
> > > * Copyright © 2021 Intel Corporation
> > */
> > +#include "selftests/igt_spinner.h"
> > #include "selftests/intel_scheduler_helpers.h"
> > +static int request_add_spin(struct i915_request *rq, struct igt_spinner *spin)
> > +{
> > + int err = 0;
> > +
> > + i915_request_get(rq);
> > + i915_request_add(rq);
> > + if (spin && !igt_wait_for_spinner(spin, rq))
> > + err = -ETIMEDOUT;
> > +
> > + return err;
> > +}
> > +
> > static struct i915_request *nop_user_request(struct intel_context *ce,
> > struct i915_request *from)
> > {
> > @@ -110,10 +123,168 @@ static int intel_guc_scrub_ctbs(void *arg)
> > return ret;
> > }
> > +/*
> > + * intel_guc_steal_guc_ids - Test to exhaust all guc_ids and then steal one
> > + *
> > + * This test creates a spinner to which is used as to block all subsequent
> to which -> which
> as to block -> to block
>
> > + * submissions until it completes. Next, a loop creates a context and a NOP
> > + * request each iteration until the guc_ids are exhausted (request creation
> > + * returns -EAGAIN). The spinner is completed unblocking all requests created in
> spinner is ended,
>
> > + * the loop. At this point all guc_ids are exhausted but are available to steal.
> > + * Try to create another request which should successfully steal a guc_id. Wait
> > + * on last request to complete, idle GPU, verify guc_id was stole via counter,
> stole -> stolen
>
> > + * and exit test. Test also artificially reduces the number of guc_ids so the
> > + * test runs in a timely manner.
> > + */
> > +static int intel_guc_steal_guc_ids(void *arg)
> > +{
> > + struct intel_gt *gt = arg;
> > > + struct intel_guc *guc = &gt->uc.guc;
> > + int ret, sv, i = 0;
> > + intel_wakeref_t wakeref;
> > + struct intel_engine_cs *engine;
> > + struct intel_context **ce;
> > + struct igt_spinner spin;
> > + struct i915_request *spin_rq = NULL, *rq, *last = NULL;
> > + int number_guc_id_stole = guc->number_guc_id_stole;
> stole -> stolen
>
> > +
> > + ce = kzalloc(sizeof(*ce) * GUC_MAX_LRC_DESCRIPTORS, GFP_KERNEL);
> > + if (!ce) {
> > + pr_err("Context array allocation failed\n");
> > + return -ENOMEM;
> > + }
> > +
> > + wakeref = intel_runtime_pm_get(gt->uncore->rpm);
> > + engine = intel_selftest_find_any_engine(gt);
> > + sv = guc->submission_state.num_guc_ids;
> > + guc->submission_state.num_guc_ids = 4096;
> > +
> > + /* Create spinner to block requests in below loop */
> > + ce[i++] = intel_context_create(engine);
> > + if (IS_ERR(ce[i - 1])) {
> > + ce[i - 1] = NULL;
> > + ret = PTR_ERR(ce[i - 1]);
> Would be less peculiar looking to do the i++ after the if statement.
>
> > + pr_err("Failed to create context: %d\n", ret);
> > + goto err_wakeref;
> > + }
> > + ret = igt_spinner_init(&spin, engine->gt);
> > + if (ret) {
> > + pr_err("Failed to create spinner: %d\n", ret);
> > + goto err_contexts;
> > + }
> > + spin_rq = igt_spinner_create_request(&spin, ce[i - 1], MI_ARB_CHECK);
> > + if (IS_ERR(spin_rq)) {
> > + ret = PTR_ERR(spin_rq);
> > + pr_err("Failed to create spinner request: %d\n", ret);
> > + goto err_contexts;
> > + }
> > + ret = request_add_spin(spin_rq, &spin);
> > + if (ret) {
> > + pr_err("Failed to add Spinner request: %d\n", ret);
> > + goto err_spin_rq;
> > + }
> > +
> > + /* Use all guc_ids */
> > + while (ret != -EAGAIN) {
> > + ce[i++] = intel_context_create(engine);
> > + if (IS_ERR(ce[i - 1])) {
> > + ce[i - 1] = NULL;
> > + ret = PTR_ERR(ce[i - 1]);
> > + pr_err("Failed to create context: %d\n", ret);
> > + goto err_spin_rq;
> Won't this try to put the null context? Or rather will see the null context
> and immediately abort the clean up loop. Need to do the i++ after the if
> statement. Or after the nop_user_request call to get rid of all the ce[i -
> 1] things.
>
> John.
>
> > + }
> > +
> > + rq = nop_user_request(ce[i - 1], spin_rq);
> > + if (IS_ERR(rq)) {
> > + ret = PTR_ERR(rq);
> > + rq = NULL;
> > + if (ret != -EAGAIN) {
> > + pr_err("Failed to create request, %d: %d\n", i,
> > + ret);
> > + goto err_spin_rq;
> > + }
> > + } else {
> > + if (last)
> > + i915_request_put(last);
> > + last = rq;
> > + }
> > + }
> > +
> > + /* Release blocked requests */
> > + igt_spinner_end(&spin);
> > + ret = intel_selftest_wait_for_rq(spin_rq);
> > + if (ret) {
> > + pr_err("Spin request failed to complete: %d\n", ret);
> > + i915_request_put(last);
> > + goto err_spin_rq;
> > + }
> > + i915_request_put(spin_rq);
> > + igt_spinner_fini(&spin);
> > + spin_rq = NULL;
> > +
> > + /* Wait for last request */
> > + ret = i915_request_wait(last, 0, HZ * 30);
> > + i915_request_put(last);
> > + if (ret < 0) {
> > + pr_err("Last request failed to complete: %d\n", ret);
> > + goto err_spin_rq;
> > + }
> > +
> > + /* Try to steal guc_id */
> > + rq = nop_user_request(ce[i - 1], NULL);
> > + if (IS_ERR(rq)) {
> > + ret = PTR_ERR(rq);
> > + pr_err("Failed to steal guc_id, %d: %d\n", i, ret);
> > + goto err_spin_rq;
> > + }
> > +
> > + /* Wait for last request */
> > + ret = i915_request_wait(rq, 0, HZ);
> > + i915_request_put(rq);
> > + if (ret < 0) {
> > + pr_err("Last request failed to complete: %d\n", ret);
> > + goto err_spin_rq;
> > + }
> > +
> > + /* Wait for idle */
> > + ret = intel_gt_wait_for_idle(gt, HZ * 30);
> > + if (ret < 0) {
> > + pr_err("GT failed to idle: %d\n", ret);
> > + goto err_spin_rq;
> > + }
> > +
> > + /* Verify a guc_id got stole */
> > + if (guc->number_guc_id_stole == number_guc_id_stole) {
> > + pr_err("No guc_ids stolen");
> > + ret = -EINVAL;
> > + } else {
> > + ret = 0;
> > + }
> > +
> > +err_spin_rq:
> > + if (spin_rq) {
> > + igt_spinner_end(&spin);
> > + intel_selftest_wait_for_rq(spin_rq);
> > + i915_request_put(spin_rq);
> > + igt_spinner_fini(&spin);
> > + intel_gt_wait_for_idle(gt, HZ * 30);
> > + }
> > +err_contexts:
> > + while (i && ce[--i])
> > + intel_context_put(ce[i]);
> > +err_wakeref:
> > + intel_runtime_pm_put(gt->uncore->rpm, wakeref);
> > + kfree(ce);
> > + guc->submission_state.num_guc_ids = sv;
> > +
> > + return ret;
> > +}
> > +
> > int intel_guc_live_selftests(struct drm_i915_private *i915)
> > {
> > static const struct i915_subtest tests[] = {
> > SUBTEST(intel_guc_scrub_ctbs),
> > + SUBTEST(intel_guc_steal_guc_ids),
> > };
> > struct intel_gt *gt = &i915->gt;
>
* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Fix stealing guc_ids + test
2021-12-11 0:56 [Intel-gfx] [PATCH 0/7] Fix stealing guc_ids + test Matthew Brost
` (6 preceding siblings ...)
2021-12-11 0:56 ` [Intel-gfx] [PATCH 7/7] drm/i915/guc: Selftest for stealing of guc ids Matthew Brost
@ 2021-12-11 2:28 ` Patchwork
2021-12-11 2:30 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
` (2 subsequent siblings)
10 siblings, 0 replies; 27+ messages in thread
From: Patchwork @ 2021-12-11 2:28 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-gfx
== Series Details ==
Series: Fix stealing guc_ids + test
URL : https://patchwork.freedesktop.org/series/97896/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
52305c422f77 drm/i915/guc: Use correct context lock when callig clr_context_registered
eb4fbbb3657c drm/i915/guc: Only assign guc_id.id when stealing guc_id
9018d8758c77 drm/i915/guc: Remove racey GEM_BUG_ON
e111f83640ff drm/i915/guc: Don't hog IRQs when destroying contexts
1bec569ba9a3 drm/i915/guc: Add extra debug on CT deadlock
30672810bf14 drm/i915/guc: Kick G2H tasklet if no credits
df10803955c6 drm/i915/guc: Selftest for stealing of guc ids
-:52: CHECK:MACRO_ARG_PRECEDENCE: Macro argument 'guc' may be better as '(guc)' to avoid precedence issues
#52: FILE: drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c:148:
+#define NUMBER_MULTI_LRC_GUC_ID(guc) (guc->submission_state.num_guc_ids / 16)
-:158: WARNING:OOM_MESSAGE: Possible unnecessary 'out of memory' message
#158: FILE: drivers/gpu/drm/i915/gt/uc/selftest_guc.c:153:
+ if (!ce) {
+ pr_err("Context array allocation failed\n");
total: 0 errors, 1 warnings, 1 checks, 262 lines checked
* [Intel-gfx] ✗ Fi.CI.SPARSE: warning for Fix stealing guc_ids + test
2021-12-11 0:56 [Intel-gfx] [PATCH 0/7] Fix stealing guc_ids + test Matthew Brost
` (7 preceding siblings ...)
2021-12-11 2:28 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Fix stealing guc_ids + test Patchwork
@ 2021-12-11 2:30 ` Patchwork
2021-12-11 2:58 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2021-12-12 1:32 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
10 siblings, 0 replies; 27+ messages in thread
From: Patchwork @ 2021-12-11 2:30 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-gfx
== Series Details ==
Series: Fix stealing guc_ids + test
URL : https://patchwork.freedesktop.org/series/97896/
State : warning
== Summary ==
$ dim sparse --fast origin/drm-tip
Sparse version: v0.6.2
Fast mode used, each commit won't be checked separately.
* [Intel-gfx] ✓ Fi.CI.BAT: success for Fix stealing guc_ids + test
2021-12-11 0:56 [Intel-gfx] [PATCH 0/7] Fix stealing guc_ids + test Matthew Brost
` (8 preceding siblings ...)
2021-12-11 2:30 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
@ 2021-12-11 2:58 ` Patchwork
2021-12-12 1:32 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
10 siblings, 0 replies; 27+ messages in thread
From: Patchwork @ 2021-12-11 2:58 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-gfx
== Series Details ==
Series: Fix stealing guc_ids + test
URL : https://patchwork.freedesktop.org/series/97896/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_10989 -> Patchwork_21831
====================================================
Summary
-------
**SUCCESS**
No regressions found.
External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/index.html
Participating hosts (39 -> 34)
------------------------------
Additional (1): fi-tgl-u2
Missing (6): fi-ilk-m540 fi-hsw-4200u fi-bsw-cyan fi-ctg-p8600 fi-pnv-d510 fi-bdw-samus
Known issues
------------
Here are the changes found in Patchwork_21831 that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@core_hotunplug@unbind-rebind:
- fi-tgl-u2: NOTRUN -> [INCOMPLETE][1] ([i915#4006])
[1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/fi-tgl-u2/igt@core_hotunplug@unbind-rebind.html
* igt@gem_huc_copy@huc-copy:
- fi-skl-6600u: NOTRUN -> [SKIP][2] ([fdo#109271] / [i915#2190])
[2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/fi-skl-6600u/igt@gem_huc_copy@huc-copy.html
- fi-tgl-u2: NOTRUN -> [SKIP][3] ([i915#2190])
[3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/fi-tgl-u2/igt@gem_huc_copy@huc-copy.html
* igt@gem_lmem_swapping@verify-random:
- fi-skl-6600u: NOTRUN -> [SKIP][4] ([fdo#109271] / [i915#4613]) +3 similar issues
[4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/fi-skl-6600u/igt@gem_lmem_swapping@verify-random.html
- fi-tgl-u2: NOTRUN -> [SKIP][5] ([i915#4613]) +3 similar issues
[5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/fi-tgl-u2/igt@gem_lmem_swapping@verify-random.html
* igt@i915_selftest@live:
- fi-skl-6600u: NOTRUN -> [INCOMPLETE][6] ([i915#198])
[6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/fi-skl-6600u/igt@i915_selftest@live.html
* igt@kms_chamelium@dp-hpd-fast:
- fi-tgl-u2: NOTRUN -> [SKIP][7] ([fdo#109284] / [fdo#111827]) +8 similar issues
[7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/fi-tgl-u2/igt@kms_chamelium@dp-hpd-fast.html
* igt@kms_chamelium@vga-edid-read:
- fi-skl-6600u: NOTRUN -> [SKIP][8] ([fdo#109271] / [fdo#111827]) +8 similar issues
[8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/fi-skl-6600u/igt@kms_chamelium@vga-edid-read.html
* igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic:
- fi-tgl-u2: NOTRUN -> [SKIP][9] ([i915#4103]) +1 similar issue
[9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/fi-tgl-u2/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic.html
* igt@kms_cursor_legacy@basic-busy-flip-before-cursor-legacy:
- fi-skl-6600u: NOTRUN -> [SKIP][10] ([fdo#109271]) +3 similar issues
[10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/fi-skl-6600u/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-legacy.html
* igt@kms_flip@basic-flip-vs-wf_vblank@c-hdmi-a2:
- fi-bsw-n3050: [PASS][11] -> [FAIL][12] ([i915#2122])
[11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/fi-bsw-n3050/igt@kms_flip@basic-flip-vs-wf_vblank@c-hdmi-a2.html
[12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/fi-bsw-n3050/igt@kms_flip@basic-flip-vs-wf_vblank@c-hdmi-a2.html
* igt@kms_force_connector_basic@force-load-detect:
- fi-tgl-u2: NOTRUN -> [SKIP][13] ([fdo#109285])
[13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/fi-tgl-u2/igt@kms_force_connector_basic@force-load-detect.html
* igt@kms_frontbuffer_tracking@basic:
- fi-cml-u2: [PASS][14] -> [DMESG-WARN][15] ([i915#4269])
[14]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/fi-cml-u2/igt@kms_frontbuffer_tracking@basic.html
[15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/fi-cml-u2/igt@kms_frontbuffer_tracking@basic.html
* igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d:
- fi-skl-6600u: NOTRUN -> [SKIP][16] ([fdo#109271] / [i915#533])
[16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/fi-skl-6600u/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d.html
* igt@prime_vgem@basic-userptr:
- fi-tgl-u2: NOTRUN -> [SKIP][17] ([i915#3301])
[17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/fi-tgl-u2/igt@prime_vgem@basic-userptr.html
* igt@runner@aborted:
- fi-skl-6600u: NOTRUN -> [FAIL][18] ([i915#1436] / [i915#2722] / [i915#4312])
[18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/fi-skl-6600u/igt@runner@aborted.html
- fi-tgl-u2: NOTRUN -> [FAIL][19] ([i915#2722] / [i915#4312])
[19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/fi-tgl-u2/igt@runner@aborted.html
#### Possible fixes ####
* igt@gem_exec_suspend@basic-s3:
- fi-skl-6600u: [INCOMPLETE][20] ([i915#4547]) -> [PASS][21]
[20]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/fi-skl-6600u/igt@gem_exec_suspend@basic-s3.html
[21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/fi-skl-6600u/igt@gem_exec_suspend@basic-s3.html
[fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
[fdo#109284]: https://bugs.freedesktop.org/show_bug.cgi?id=109284
[fdo#109285]: https://bugs.freedesktop.org/show_bug.cgi?id=109285
[fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
[i915#1436]: https://gitlab.freedesktop.org/drm/intel/issues/1436
[i915#198]: https://gitlab.freedesktop.org/drm/intel/issues/198
[i915#2122]: https://gitlab.freedesktop.org/drm/intel/issues/2122
[i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190
[i915#2722]: https://gitlab.freedesktop.org/drm/intel/issues/2722
[i915#3301]: https://gitlab.freedesktop.org/drm/intel/issues/3301
[i915#4006]: https://gitlab.freedesktop.org/drm/intel/issues/4006
[i915#4103]: https://gitlab.freedesktop.org/drm/intel/issues/4103
[i915#4269]: https://gitlab.freedesktop.org/drm/intel/issues/4269
[i915#4312]: https://gitlab.freedesktop.org/drm/intel/issues/4312
[i915#4547]: https://gitlab.freedesktop.org/drm/intel/issues/4547
[i915#4613]: https://gitlab.freedesktop.org/drm/intel/issues/4613
[i915#533]: https://gitlab.freedesktop.org/drm/intel/issues/533
Build changes
-------------
* Linux: CI_DRM_10989 -> Patchwork_21831
CI-20190529: 20190529
CI_DRM_10989: 3f422828221d9ceefcddef0be33561b1646a1cbe @ git://anongit.freedesktop.org/gfx-ci/linux
IGT_6305: 136258e86a093fdb50a7a341de1c09ac9a076fea @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
Patchwork_21831: df10803955c66e63cc2c79afc4c2ade2667745e1 @ git://anongit.freedesktop.org/gfx-ci/linux
== Linux commits ==
df10803955c6 drm/i915/guc: Selftest for stealing of guc ids
30672810bf14 drm/i915/guc: Kick G2H tasklet if no credits
1bec569ba9a3 drm/i915/guc: Add extra debug on CT deadlock
e111f83640ff drm/i915/guc: Don't hog IRQs when destroying contexts
9018d8758c77 drm/i915/guc: Remove racey GEM_BUG_ON
eb4fbbb3657c drm/i915/guc: Only assign guc_id.id when stealing guc_id
52305c422f77 drm/i915/guc: Use correct context lock when callig clr_context_registered
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/index.html
* [Intel-gfx] ✓ Fi.CI.IGT: success for Fix stealing guc_ids + test
2021-12-11 0:56 [Intel-gfx] [PATCH 0/7] Fix stealing guc_ids + test Matthew Brost
` (9 preceding siblings ...)
2021-12-11 2:58 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
@ 2021-12-12 1:32 ` Patchwork
10 siblings, 0 replies; 27+ messages in thread
From: Patchwork @ 2021-12-12 1:32 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-gfx
== Series Details ==
Series: Fix stealing guc_ids + test
URL : https://patchwork.freedesktop.org/series/97896/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_10989_full -> Patchwork_21831_full
====================================================
Summary
-------
**WARNING**
Minor unknown changes coming with Patchwork_21831_full need to be verified
manually.
If you think the reported changes have nothing to do with the changes
introduced in Patchwork_21831_full, please notify your bug team to allow them
to document this new failure mode, which will reduce false positives in CI.
Participating hosts (10 -> 10)
------------------------------
No changes in participating hosts
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in Patchwork_21831_full:
### IGT changes ###
#### Warnings ####
* igt@kms_ccs@pipe-a-bad-rotation-90-yf_tiled_ccs:
- shard-tglb: [SKIP][1] ([fdo#111615] / [i915#3689]) -> [INCOMPLETE][2]
[1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-tglb8/igt@kms_ccs@pipe-a-bad-rotation-90-yf_tiled_ccs.html
[2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-tglb8/igt@kms_ccs@pipe-a-bad-rotation-90-yf_tiled_ccs.html
Known issues
------------
Here are the changes found in Patchwork_21831_full that come from known issues:
### CI changes ###
#### Issues hit ####
* boot:
- shard-glk: ([PASS][3], [PASS][4], [PASS][5], [PASS][6], [PASS][7], [PASS][8], [PASS][9], [PASS][10], [PASS][11], [PASS][12], [PASS][13], [PASS][14], [PASS][15], [PASS][16], [PASS][17], [PASS][18], [PASS][19], [PASS][20], [PASS][21], [PASS][22], [PASS][23], [PASS][24], [PASS][25], [PASS][26], [PASS][27]) -> ([PASS][28], [FAIL][29], [PASS][30], [PASS][31], [PASS][32], [PASS][33], [PASS][34], [PASS][35], [PASS][36], [PASS][37], [PASS][38], [PASS][39], [PASS][40], [PASS][41], [PASS][42], [PASS][43], [PASS][44], [PASS][45], [PASS][46], [PASS][47], [PASS][48], [PASS][49], [PASS][50], [PASS][51], [PASS][52]) ([i915#4392])
[3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk7/boot.html
[4]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk7/boot.html
[5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk1/boot.html
[6]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk1/boot.html
[7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk1/boot.html
[8]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk1/boot.html
[9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk2/boot.html
[10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk2/boot.html
[11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk3/boot.html
[12]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk3/boot.html
[13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk9/boot.html
[14]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk9/boot.html
[15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk8/boot.html
[16]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk3/boot.html
[17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk4/boot.html
[18]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk4/boot.html
[19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk8/boot.html
[20]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk4/boot.html
[21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk5/boot.html
[22]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk5/boot.html
[23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk6/boot.html
[24]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk6/boot.html
[25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk8/boot.html
[26]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk6/boot.html
[27]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk7/boot.html
[28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk1/boot.html
[29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk1/boot.html
[30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk1/boot.html
[31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk1/boot.html
[32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk2/boot.html
[33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk2/boot.html
[34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk2/boot.html
[35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk3/boot.html
[36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk3/boot.html
[37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk3/boot.html
[38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk4/boot.html
[39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk4/boot.html
[40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk4/boot.html
[41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk5/boot.html
[42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk5/boot.html
[43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk6/boot.html
[44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk6/boot.html
[45]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk7/boot.html
[46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk7/boot.html
[47]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk7/boot.html
[48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk8/boot.html
[49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk8/boot.html
[50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk9/boot.html
[51]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk9/boot.html
[52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk9/boot.html
### IGT changes ###
#### Issues hit ####
* igt@gem_ctx_isolation@preservation-s3@vcs0:
- shard-kbl: [PASS][53] -> [DMESG-WARN][54] ([i915#180]) +3 similar issues
[53]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-kbl3/igt@gem_ctx_isolation@preservation-s3@vcs0.html
[54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-kbl4/igt@gem_ctx_isolation@preservation-s3@vcs0.html
* igt@gem_eio@unwedge-stress:
- shard-tglb: [PASS][55] -> [TIMEOUT][56] ([i915#3063] / [i915#3648])
[55]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-tglb2/igt@gem_eio@unwedge-stress.html
[56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-tglb3/igt@gem_eio@unwedge-stress.html
* igt@gem_exec_fair@basic-none@vcs0:
- shard-kbl: NOTRUN -> [FAIL][57] ([i915#2842])
[57]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-kbl3/igt@gem_exec_fair@basic-none@vcs0.html
* igt@gem_exec_fair@basic-none@vcs1:
- shard-iclb: NOTRUN -> [FAIL][58] ([i915#2842])
[58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-iclb4/igt@gem_exec_fair@basic-none@vcs1.html
* igt@gem_exec_fair@basic-pace-share@rcs0:
- shard-tglb: [PASS][59] -> [FAIL][60] ([i915#2842]) +2 similar issues
[59]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-tglb3/igt@gem_exec_fair@basic-pace-share@rcs0.html
[60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-tglb5/igt@gem_exec_fair@basic-pace-share@rcs0.html
- shard-glk: [PASS][61] -> [FAIL][62] ([i915#2842])
[61]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk7/igt@gem_exec_fair@basic-pace-share@rcs0.html
[62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk4/igt@gem_exec_fair@basic-pace-share@rcs0.html
* igt@gem_lmem_swapping@parallel-random:
- shard-apl: NOTRUN -> [SKIP][63] ([fdo#109271] / [i915#4613]) +1 similar issue
[63]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-apl8/igt@gem_lmem_swapping@parallel-random.html
* igt@gem_lmem_swapping@random-engines:
- shard-skl: NOTRUN -> [SKIP][64] ([fdo#109271] / [i915#4613])
[64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-skl10/igt@gem_lmem_swapping@random-engines.html
* igt@gem_media_vme:
- shard-skl: NOTRUN -> [SKIP][65] ([fdo#109271]) +65 similar issues
[65]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-skl4/igt@gem_media_vme.html
* igt@gem_userptr_blits@vma-merge:
- shard-kbl: NOTRUN -> [FAIL][66] ([i915#3318])
[66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-kbl3/igt@gem_userptr_blits@vma-merge.html
* igt@i915_pm_lpsp@kms-lpsp@kms-lpsp-dp:
- shard-apl: NOTRUN -> [SKIP][67] ([fdo#109271] / [i915#1937])
[67]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-apl8/igt@i915_pm_lpsp@kms-lpsp@kms-lpsp-dp.html
* igt@kms_big_fb@linear-64bpp-rotate-180:
- shard-glk: [PASS][68] -> [FAIL][69] ([i915#1888] / [i915#3653])
[68]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk8/igt@kms_big_fb@linear-64bpp-rotate-180.html
[69]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk2/igt@kms_big_fb@linear-64bpp-rotate-180.html
* igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-0-hflip:
- shard-kbl: NOTRUN -> [SKIP][70] ([fdo#109271] / [i915#3777])
[70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-kbl6/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-0-hflip.html
* igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-hflip:
- shard-apl: NOTRUN -> [SKIP][71] ([fdo#109271] / [i915#3777]) +1 similar issue
[71]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-apl8/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-hflip.html
* igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip:
- shard-tglb: NOTRUN -> [SKIP][72] ([fdo#111615])
[72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-tglb8/igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip.html
* igt@kms_ccs@pipe-a-ccs-on-another-bo-y_tiled_gen12_mc_ccs:
- shard-apl: NOTRUN -> [SKIP][73] ([fdo#109271] / [i915#3886]) +5 similar issues
[73]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-apl8/igt@kms_ccs@pipe-a-ccs-on-another-bo-y_tiled_gen12_mc_ccs.html
* igt@kms_ccs@pipe-b-crc-primary-basic-y_tiled_gen12_mc_ccs:
- shard-kbl: NOTRUN -> [SKIP][74] ([fdo#109271] / [i915#3886]) +8 similar issues
[74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-kbl1/igt@kms_ccs@pipe-b-crc-primary-basic-y_tiled_gen12_mc_ccs.html
* igt@kms_ccs@pipe-c-crc-primary-rotation-180-y_tiled_gen12_mc_ccs:
- shard-skl: NOTRUN -> [SKIP][75] ([fdo#109271] / [i915#3886]) +2 similar issues
[75]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-skl10/igt@kms_ccs@pipe-c-crc-primary-rotation-180-y_tiled_gen12_mc_ccs.html
* igt@kms_color_chamelium@pipe-a-ctm-blue-to-red:
- shard-kbl: NOTRUN -> [SKIP][76] ([fdo#109271] / [fdo#111827]) +13 similar issues
[76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-kbl6/igt@kms_color_chamelium@pipe-a-ctm-blue-to-red.html
* igt@kms_color_chamelium@pipe-a-ctm-limited-range:
- shard-apl: NOTRUN -> [SKIP][77] ([fdo#109271] / [fdo#111827]) +9 similar issues
[77]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-apl6/igt@kms_color_chamelium@pipe-a-ctm-limited-range.html
* igt@kms_color_chamelium@pipe-c-ctm-negative:
- shard-skl: NOTRUN -> [SKIP][78] ([fdo#109271] / [fdo#111827]) +4 similar issues
[78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-skl4/igt@kms_color_chamelium@pipe-c-ctm-negative.html
* igt@kms_content_protection@atomic-dpms:
- shard-apl: NOTRUN -> [TIMEOUT][79] ([i915#1319])
[79]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-apl8/igt@kms_content_protection@atomic-dpms.html
* igt@kms_content_protection@uevent:
- shard-apl: NOTRUN -> [FAIL][80] ([i915#2105])
[80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-apl6/igt@kms_content_protection@uevent.html
* igt@kms_cursor_crc@pipe-a-cursor-suspend:
- shard-kbl: NOTRUN -> [DMESG-WARN][81] ([i915#180])
[81]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-kbl7/igt@kms_cursor_crc@pipe-a-cursor-suspend.html
- shard-skl: [PASS][82] -> [INCOMPLETE][83] ([i915#2828] / [i915#300])
[82]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-skl1/igt@kms_cursor_crc@pipe-a-cursor-suspend.html
[83]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-skl6/igt@kms_cursor_crc@pipe-a-cursor-suspend.html
* igt@kms_cursor_crc@pipe-c-cursor-suspend:
- shard-skl: [PASS][84] -> [INCOMPLETE][85] ([i915#300])
[84]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-skl6/igt@kms_cursor_crc@pipe-c-cursor-suspend.html
[85]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-skl8/igt@kms_cursor_crc@pipe-c-cursor-suspend.html
* igt@kms_cursor_legacy@flip-vs-cursor-toggle:
- shard-iclb: [PASS][86] -> [FAIL][87] ([i915#2346])
[86]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-iclb2/igt@kms_cursor_legacy@flip-vs-cursor-toggle.html
[87]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-iclb7/igt@kms_cursor_legacy@flip-vs-cursor-toggle.html
* igt@kms_flip@2x-flip-vs-panning-vs-hang@ab-hdmi-a1-hdmi-a2:
- shard-glk: [PASS][88] -> [DMESG-WARN][89] ([i915#118] / [i915#1888])
[88]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk8/igt@kms_flip@2x-flip-vs-panning-vs-hang@ab-hdmi-a1-hdmi-a2.html
[89]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk2/igt@kms_flip@2x-flip-vs-panning-vs-hang@ab-hdmi-a1-hdmi-a2.html
* igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1:
- shard-skl: NOTRUN -> [FAIL][90] ([i915#79]) +2 similar issues
[90]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-skl10/igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1.html
* igt@kms_flip@flip-vs-suspend-interruptible@b-dp1:
- shard-kbl: NOTRUN -> [INCOMPLETE][91] ([i915#3614])
[91]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-kbl3/igt@kms_flip@flip-vs-suspend-interruptible@b-dp1.html
* igt@kms_flip@flip-vs-suspend-interruptible@c-dp1:
- shard-apl: [PASS][92] -> [DMESG-WARN][93] ([i915#180]) +3 similar issues
[92]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-apl1/igt@kms_flip@flip-vs-suspend-interruptible@c-dp1.html
[93]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-apl8/igt@kms_flip@flip-vs-suspend-interruptible@c-dp1.html
* igt@kms_flip@plain-flip-ts-check-interruptible@b-edp1:
- shard-skl: [PASS][94] -> [FAIL][95] ([i915#2122]) +2 similar issues
[94]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-skl9/igt@kms_flip@plain-flip-ts-check-interruptible@b-edp1.html
[95]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-skl9/igt@kms_flip@plain-flip-ts-check-interruptible@b-edp1.html
* igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs:
- shard-skl: NOTRUN -> [INCOMPLETE][96] ([i915#3699])
[96]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-skl10/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs.html
- shard-iclb: [PASS][97] -> [SKIP][98] ([i915#3701])
[97]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-iclb1/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs.html
[98]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-iclb2/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-blt:
- shard-kbl: NOTRUN -> [SKIP][99] ([fdo#109271]) +163 similar issues
[99]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-kbl3/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-blt.html
* igt@kms_pipe_crc_basic@read-crc-pipe-d-frame-sequence:
- shard-skl: NOTRUN -> [SKIP][100] ([fdo#109271] / [i915#533])
[100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-skl4/igt@kms_pipe_crc_basic@read-crc-pipe-d-frame-sequence.html
* igt@kms_pipe_crc_basic@suspend-read-crc-pipe-d:
- shard-kbl: NOTRUN -> [SKIP][101] ([fdo#109271] / [i915#533]) +1 similar issue
[101]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-kbl3/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-d.html
* igt@kms_plane_alpha_blend@pipe-a-alpha-7efc:
- shard-skl: NOTRUN -> [FAIL][102] ([fdo#108145] / [i915#265])
[102]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-skl10/igt@kms_plane_alpha_blend@pipe-a-alpha-7efc.html
* igt@kms_plane_alpha_blend@pipe-a-alpha-basic:
- shard-kbl: NOTRUN -> [FAIL][103] ([fdo#108145] / [i915#265])
[103]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-kbl1/igt@kms_plane_alpha_blend@pipe-a-alpha-basic.html
* igt@kms_plane_alpha_blend@pipe-a-alpha-opaque-fb:
- shard-apl: NOTRUN -> [FAIL][104] ([fdo#108145] / [i915#265]) +2 similar issues
[104]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-apl8/igt@kms_plane_alpha_blend@pipe-a-alpha-opaque-fb.html
* igt@kms_plane_alpha_blend@pipe-a-alpha-transparent-fb:
- shard-kbl: NOTRUN -> [FAIL][105] ([i915#265])
[105]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-kbl1/igt@kms_plane_alpha_blend@pipe-a-alpha-transparent-fb.html
* igt@kms_plane_alpha_blend@pipe-b-alpha-transparent-fb:
- shard-apl: NOTRUN -> [FAIL][106] ([i915#265])
[106]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-apl6/igt@kms_plane_alpha_blend@pipe-b-alpha-transparent-fb.html
* igt@kms_plane_alpha_blend@pipe-c-alpha-transparent-fb:
- shard-skl: NOTRUN -> [FAIL][107] ([i915#265])
[107]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-skl10/igt@kms_plane_alpha_blend@pipe-c-alpha-transparent-fb.html
* igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min:
- shard-skl: [PASS][108] -> [FAIL][109] ([fdo#108145] / [i915#265]) +1 similar issue
[108]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-skl8/igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min.html
[109]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-skl8/igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min.html
* igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-5:
- shard-apl: NOTRUN -> [SKIP][110] ([fdo#109271] / [i915#658]) +3 similar issues
[110]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-apl8/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-5.html
* igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-2:
- shard-kbl: NOTRUN -> [SKIP][111] ([fdo#109271] / [i915#658]) +2 similar issues
[111]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-kbl1/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-2.html
* igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-3:
- shard-skl: NOTRUN -> [SKIP][112] ([fdo#109271] / [i915#658])
[112]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-skl10/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-3.html
* igt@kms_setmode@basic:
- shard-glk: [PASS][113] -> [FAIL][114] ([i915#31])
[113]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk7/igt@kms_setmode@basic.html
[114]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk4/igt@kms_setmode@basic.html
* igt@kms_vblank@pipe-a-ts-continuation-suspend:
- shard-kbl: [PASS][115] -> [DMESG-WARN][116] ([i915#180] / [i915#295])
[115]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-kbl2/igt@kms_vblank@pipe-a-ts-continuation-suspend.html
[116]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-kbl7/igt@kms_vblank@pipe-a-ts-continuation-suspend.html
* igt@kms_writeback@writeback-check-output:
- shard-apl: NOTRUN -> [SKIP][117] ([fdo#109271] / [i915#2437])
[117]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-apl6/igt@kms_writeback@writeback-check-output.html
* igt@prime_nv_api@i915_nv_reimport_twice_check_flink_name:
- shard-apl: NOTRUN -> [SKIP][118] ([fdo#109271]) +108 similar issues
[118]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-apl8/igt@prime_nv_api@i915_nv_reimport_twice_check_flink_name.html
* igt@sysfs_clients@create:
- shard-apl: NOTRUN -> [SKIP][119] ([fdo#109271] / [i915#2994]) +1 similar issue
[119]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-apl6/igt@sysfs_clients@create.html
* igt@sysfs_clients@fair-0:
- shard-kbl: NOTRUN -> [SKIP][120] ([fdo#109271] / [i915#2994])
[120]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-kbl3/igt@sysfs_clients@fair-0.html
* igt@sysfs_clients@sema-25:
- shard-skl: NOTRUN -> [SKIP][121] ([fdo#109271] / [i915#2994])
[121]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-skl10/igt@sysfs_clients@sema-25.html
#### Possible fixes ####
* igt@drm_read@empty-nonblock:
- {shard-rkl}: ([SKIP][122], [SKIP][123]) ([i915#1845]) -> [PASS][124] +1 similar issue
[122]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-rkl-1/igt@drm_read@empty-nonblock.html
[123]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-rkl-4/igt@drm_read@empty-nonblock.html
[124]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-rkl-6/igt@drm_read@empty-nonblock.html
* igt@gem_exec_fair@basic-pace@rcs0:
- shard-kbl: [FAIL][125] ([i915#2842]) -> [PASS][126]
[125]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-kbl3/igt@gem_exec_fair@basic-pace@rcs0.html
[126]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-kbl6/igt@gem_exec_fair@basic-pace@rcs0.html
* igt@gem_exec_fair@basic-pace@vcs1:
- shard-kbl: [SKIP][127] ([fdo#109271]) -> [PASS][128]
[127]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-kbl3/igt@gem_exec_fair@basic-pace@vcs1.html
[128]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-kbl6/igt@gem_exec_fair@basic-pace@vcs1.html
* igt@gem_exec_fair@basic-throttle@rcs0:
- shard-iclb: [FAIL][129] ([i915#2849]) -> [PASS][130]
[129]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-iclb5/igt@gem_exec_fair@basic-throttle@rcs0.html
[130]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-iclb6/igt@gem_exec_fair@basic-throttle@rcs0.html
* igt@gem_exec_whisper@basic-normal-all:
- shard-glk: [DMESG-WARN][131] ([i915#118]) -> [PASS][132]
[131]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-glk3/igt@gem_exec_whisper@basic-normal-all.html
[132]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-glk3/igt@gem_exec_whisper@basic-normal-all.html
* igt@gem_huc_copy@huc-copy:
- shard-tglb: [SKIP][133] ([i915#2190]) -> [PASS][134]
[133]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-tglb7/igt@gem_huc_copy@huc-copy.html
[134]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-tglb8/igt@gem_huc_copy@huc-copy.html
* igt@gen9_exec_parse@allowed-single:
- shard-skl: [DMESG-WARN][135] ([i915#1436] / [i915#716]) -> [PASS][136]
[135]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-skl8/igt@gen9_exec_parse@allowed-single.html
[136]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-skl4/igt@gen9_exec_parse@allowed-single.html
* igt@i915_pm_backlight@basic-brightness:
- {shard-rkl}: [SKIP][137] ([i915#3012]) -> [PASS][138]
[137]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-rkl-2/igt@i915_pm_backlight@basic-brightness.html
[138]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-rkl-6/igt@i915_pm_backlight@basic-brightness.html
* igt@i915_pm_dc@dc6-dpms:
- shard-iclb: [FAIL][139] ([i915#454]) -> [PASS][140]
[139]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-iclb3/igt@i915_pm_dc@dc6-dpms.html
[140]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-iclb7/igt@i915_pm_dc@dc6-dpms.html
* igt@kms_big_fb@x-tiled-32bpp-rotate-0:
- {shard-rkl}: ([SKIP][141], [PASS][142]) ([i915#1845]) -> [PASS][143]
[141]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-rkl-4/igt@kms_big_fb@x-tiled-32bpp-rotate-0.html
[142]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-rkl-6/igt@kms_big_fb@x-tiled-32bpp-rotate-0.html
[143]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-rkl-6/igt@kms_big_fb@x-tiled-32bpp-rotate-0.html
* igt@kms_ccs@pipe-b-crc-primary-rotation-180-y_tiled_gen12_rc_ccs_cc:
- {shard-rkl}: [SKIP][144] ([i915#1845]) -> [PASS][145] +13 similar issues
[144]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-rkl-2/igt@kms_ccs@pipe-b-crc-primary-rotation-180-y_tiled_gen12_rc_ccs_cc.html
[145]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-rkl-6/igt@kms_ccs@pipe-b-crc-primary-rotation-180-y_tiled_gen12_rc_ccs_cc.html
* igt@kms_color@pipe-c-ctm-0-25:
- shard-skl: [DMESG-WARN][146] ([i915#1982]) -> [PASS][147]
[146]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-skl4/igt@kms_color@pipe-c-ctm-0-25.html
[147]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-skl1/igt@kms_color@pipe-c-ctm-0-25.html
* igt@kms_cursor_crc@pipe-b-cursor-128x42-random:
- {shard-rkl}: [SKIP][148] ([fdo#112022] / [i915#4070]) -> [PASS][149] +4 similar issues
[148]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-rkl-2/igt@kms_cursor_crc@pipe-b-cursor-128x42-random.html
[149]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-rkl-6/igt@kms_cursor_crc@pipe-b-cursor-128x42-random.html
* igt@kms_cursor_crc@pipe-b-cursor-256x85-offscreen:
- {shard-rkl}: ([SKIP][150], [SKIP][151]) ([fdo#112022] / [i915#4070]) -> [PASS][152]
[150]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-rkl-1/igt@kms_cursor_crc@pipe-b-cursor-256x85-offscreen.html
[151]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-rkl-4/igt@kms_cursor_crc@pipe-b-cursor-256x85-offscreen.html
[152]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-rkl-6/igt@kms_cursor_crc@pipe-b-cursor-256x85-offscreen.html
* igt@kms_cursor_legacy@basic-flip-after-cursor-legacy:
- {shard-rkl}: [SKIP][153] ([fdo#111825] / [i915#4070]) -> [PASS][154] +2 similar issues
[153]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-rkl-1/igt@kms_cursor_legacy@basic-flip-after-cursor-legacy.html
[154]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-rkl-6/igt@kms_cursor_legacy@basic-flip-after-cursor-legacy.html
* igt@kms_cursor_legacy@flip-vs-cursor-varying-size:
- shard-iclb: [FAIL][155] ([i915#2346]) -> [PASS][156]
[155]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-iclb7/igt@kms_cursor_legacy@flip-vs-cursor-varying-size.html
[156]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-iclb1/igt@kms_cursor_legacy@flip-vs-cursor-varying-size.html
* igt@kms_cursor_legacy@long-nonblocking-modeset-vs-cursor-atomic:
- {shard-rkl}: ([SKIP][157], [SKIP][158]) ([fdo#111825] / [i915#4070]) -> [PASS][159]
[157]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-rkl-2/igt@kms_cursor_legacy@long-nonblocking-modeset-vs-cursor-atomic.html
[158]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-rkl-4/igt@kms_cursor_legacy@long-nonblocking-modeset-vs-cursor-atomic.html
[159]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-rkl-6/igt@kms_cursor_legacy@long-nonblocking-modeset-vs-cursor-atomic.html
* igt@kms_draw_crc@draw-method-xrgb2101010-mmap-gtt-untiled:
- {shard-rkl}: [SKIP][160] ([fdo#111314]) -> [PASS][161]
[160]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-rkl-1/igt@kms_draw_crc@draw-method-xrgb2101010-mmap-gtt-untiled.html
[161]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-rkl-6/igt@kms_draw_crc@draw-method-xrgb2101010-mmap-gtt-untiled.html
* igt@kms_draw_crc@draw-method-xrgb8888-blt-xtiled:
- {shard-rkl}: ([SKIP][162], [SKIP][163]) ([fdo#111314] / [i915#4098]) -> [PASS][164]
[162]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-rkl-4/igt@kms_draw_crc@draw-method-xrgb8888-blt-xtiled.html
[163]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-rkl-1/igt@kms_draw_crc@draw-method-xrgb8888-blt-xtiled.html
[164]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/shard-rkl-6/igt@kms_draw_crc@draw-method-xrgb8888-blt-xtiled.html
* igt@kms_fbcon_fbt@fbc-suspend:
- {shard-rkl}: [SKIP][165] ([i915#1849]) -> [PASS][166] +10 similar issues
[165]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10989/shard-rkl-2/igt@kms_fbcon_fbt@fbc-suspend.html
[166]
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21831/index.html