* [PATCH] drm/i915: Remove request->reset_counter
@ 2016-06-29 14:51 Chris Wilson
2016-06-29 15:10 ` Arun Siluvery
2016-06-29 15:45 ` ✗ Ro.CI.BAT: failure for " Patchwork
0 siblings, 2 replies; 4+ messages in thread
From: Chris Wilson @ 2016-06-29 14:51 UTC (permalink / raw)
To: intel-gfx; +Cc: Mika Kuoppala
Since commit 2ed53a94d8cb ("drm/i915: On GPU reset, set the HWS
breadcrumb to the last seqno") once a hang is completed, the seqno is
advanced past all current requests. With this we know that if we wake up
from waiting on a request after a hang has occurred and the reset has
completed, our request will be considered complete (i.e.
i915_gem_request_completed() returns true). Therefore we only need to
worry about the situation where a hang has occurred but the reset has
not yet run, in which case we may need to release our struct_mutex.
Since we no longer need to detect the completed reset using the global
gpu_error->reset_counter, we do not need to track the reset_counter
epoch inside the request.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Arun Siluvery <arun.siluvery@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
---
drivers/gpu/drm/i915/i915_drv.h | 1 -
drivers/gpu/drm/i915/i915_gem.c | 16 ++++++++--------
2 files changed, 8 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index def011811421..485ab1148181 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2368,7 +2368,6 @@ struct drm_i915_gem_request {
/** On Which ring this request was generated */
struct drm_i915_private *i915;
struct intel_engine_cs *engine;
- unsigned reset_counter;
/** GEM sequence number associated with the previous request,
* when the HWS breadcrumb is equal to this the GPU is processing
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 51191b879747..1d9878258103 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1506,12 +1506,13 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
/* We need to check whether any gpu reset happened in between
* the request being submitted and now. If a reset has occurred,
- * the request is effectively complete (we either are in the
- * process of or have discarded the rendering and completely
- * reset the GPU. The results of the request are lost and we
- * are free to continue on with the original operation.
+ * the seqno will have been advanced past ours and our request
+ * is complete. If we are in the process of handling a reset,
+ * the request is effectively complete as the rendering will
+ * be discarded, but we need to return in order to drop the
+ * struct_mutex.
*/
- if (req->reset_counter != i915_reset_counter(&dev_priv->gpu_error)) {
+ if (i915_reset_in_progress(&dev_priv->gpu_error)) {
ret = 0;
break;
}
@@ -1685,7 +1686,7 @@ i915_wait_request(struct drm_i915_gem_request *req)
return ret;
/* If the GPU hung, we want to keep the requests to find the guilty. */
- if (req->reset_counter == i915_reset_counter(&dev_priv->gpu_error))
+ if (!i915_reset_in_progress(&dev_priv->gpu_error))
__i915_gem_request_retire__upto(req);
return 0;
@@ -1746,7 +1747,7 @@ i915_gem_object_retire_request(struct drm_i915_gem_object *obj,
else if (obj->last_write_req == req)
i915_gem_object_retire__write(obj);
- if (req->reset_counter == i915_reset_counter(&req->i915->gpu_error))
+ if (!i915_reset_in_progress(&req->i915->gpu_error))
__i915_gem_request_retire__upto(req);
}
@@ -3021,7 +3022,6 @@ __i915_gem_request_alloc(struct intel_engine_cs *engine,
kref_init(&req->ref);
req->i915 = dev_priv;
req->engine = engine;
- req->reset_counter = reset_counter;
req->ctx = ctx;
i915_gem_context_reference(req->ctx);
--
2.8.1
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
* Re: [PATCH] drm/i915: Remove request->reset_counter
2016-06-29 14:51 [PATCH] drm/i915: Remove request->reset_counter Chris Wilson
@ 2016-06-29 15:10 ` Arun Siluvery
2016-06-29 15:45 ` ✗ Ro.CI.BAT: failure for " Patchwork
1 sibling, 0 replies; 4+ messages in thread
From: Arun Siluvery @ 2016-06-29 15:10 UTC (permalink / raw)
To: Chris Wilson, intel-gfx; +Cc: Mika Kuoppala
On 29/06/2016 15:51, Chris Wilson wrote:
> Since commit 2ed53a94d8cb ("drm/i915: On GPU reset, set the HWS
> breadcrumb to the last seqno") once a hang is completed, the seqno is
> advanced past all current requests. With this we know that if we wake up
> on waiting for a request, if a hang has occurred and reset completed,
> our request will be considered complete (i.e.
> i915_gem_request_completed() returns true). Therefore we only need to
> worry about the situation where a hang has occurred, but not yet reset,
> where we may need to release our struct_mutex. Since we don't need to
> detect the competed reset using the global gpu_error->reset_counter
s/competed/completed
> anymore, we do not need to track the reset_counter epoch inside the
> request.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Arun Siluvery <arun.siluvery@linux.intel.com>
> Cc: Mika Kuoppala <mika.kuoppala@intel.com>
> ---
> drivers/gpu/drm/i915/i915_drv.h | 1 -
> drivers/gpu/drm/i915/i915_gem.c | 16 ++++++++--------
> 2 files changed, 8 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index def011811421..485ab1148181 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -2368,7 +2368,6 @@ struct drm_i915_gem_request {
> /** On Which ring this request was generated */
> struct drm_i915_private *i915;
> struct intel_engine_cs *engine;
> - unsigned reset_counter;
>
> /** GEM sequence number associated with the previous request,
> * when the HWS breadcrumb is equal to this the GPU is processing
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 51191b879747..1d9878258103 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -1506,12 +1506,13 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
>
> /* We need to check whether any gpu reset happened in between
> * the request being submitted and now. If a reset has occurred,
> - * the request is effectively complete (we either are in the
> - * process of or have discarded the rendering and completely
> - * reset the GPU. The results of the request are lost and we
> - * are free to continue on with the original operation.
> + * the seqno will have been advance past ours and our request
> + * is complete. If we are in the process of handling a reset,
> + * the request is effectively complete as the rendering will
> + * be discarded, but we need to return in order to drop the
> + * struct_mutex.
> */
> - if (req->reset_counter != i915_reset_counter(&dev_priv->gpu_error)) {
> + if (i915_reset_in_progress(&dev_priv->gpu_error)) {
> ret = 0;
> break;
> }
> @@ -1685,7 +1686,7 @@ i915_wait_request(struct drm_i915_gem_request *req)
> return ret;
>
> /* If the GPU hung, we want to keep the requests to find the guilty. */
> - if (req->reset_counter == i915_reset_counter(&dev_priv->gpu_error))
> + if (!i915_reset_in_progress(&dev_priv->gpu_error))
> __i915_gem_request_retire__upto(req);
>
> return 0;
> @@ -1746,7 +1747,7 @@ i915_gem_object_retire_request(struct drm_i915_gem_object *obj,
> else if (obj->last_write_req == req)
> i915_gem_object_retire__write(obj);
>
> - if (req->reset_counter == i915_reset_counter(&req->i915->gpu_error))
> + if (!i915_reset_in_progress(&req->i915->gpu_error))
> __i915_gem_request_retire__upto(req);
> }
>
> @@ -3021,7 +3022,6 @@ __i915_gem_request_alloc(struct intel_engine_cs *engine,
> kref_init(&req->ref);
> req->i915 = dev_priv;
> req->engine = engine;
> - req->reset_counter = reset_counter;
> req->ctx = ctx;
> i915_gem_context_reference(req->ctx);
>
>
this looks good to me,
Reviewed-by: Arun Siluvery <arun.siluvery@linux.intel.com>
regards
Arun
* ✗ Ro.CI.BAT: failure for drm/i915: Remove request->reset_counter
2016-06-29 14:51 [PATCH] drm/i915: Remove request->reset_counter Chris Wilson
2016-06-29 15:10 ` Arun Siluvery
@ 2016-06-29 15:45 ` Patchwork
2016-06-29 16:09 ` Chris Wilson
1 sibling, 1 reply; 4+ messages in thread
From: Patchwork @ 2016-06-29 15:45 UTC (permalink / raw)
To: Chris Wilson; +Cc: intel-gfx
== Series Details ==
Series: drm/i915: Remove request->reset_counter
URL : https://patchwork.freedesktop.org/series/9278/
State : failure
== Summary ==
Series 9278v1 drm/i915: Remove request->reset_counter
http://patchwork.freedesktop.org/api/1.0/series/9278/revisions/1/mbox
Test gem_exec_flush:
Subgroup basic-batch-kernel-default-cmd:
pass -> FAIL (ro-byt-n2820)
Test gem_exec_suspend:
Subgroup basic-s3:
pass -> INCOMPLETE (fi-hsw-i7-4770k)
fi-hsw-i7-4770k total:103 pass:86 dwarn:0 dfail:0 fail:0 skip:16
fi-kbl-qkkr total:229 pass:161 dwarn:29 dfail:0 fail:0 skip:39
fi-skl-i5-6260u total:229 pass:202 dwarn:0 dfail:0 fail:2 skip:25
fi-snb-i7-2600 total:229 pass:174 dwarn:0 dfail:0 fail:2 skip:53
ro-bdw-i5-5250u total:229 pass:202 dwarn:1 dfail:1 fail:2 skip:23
ro-bdw-i7-5557U total:229 pass:202 dwarn:1 dfail:1 fail:2 skip:23
ro-bdw-i7-5600u total:229 pass:190 dwarn:0 dfail:1 fail:0 skip:38
ro-bsw-n3050 total:229 pass:177 dwarn:0 dfail:1 fail:2 skip:49
ro-byt-n2820 total:229 pass:178 dwarn:0 dfail:1 fail:5 skip:45
ro-hsw-i3-4010u total:229 pass:195 dwarn:0 dfail:1 fail:2 skip:31
ro-hsw-i7-4770r total:229 pass:195 dwarn:0 dfail:1 fail:2 skip:31
ro-ilk-i7-620lm total:229 pass:155 dwarn:0 dfail:1 fail:3 skip:70
ro-ilk1-i5-650 total:224 pass:155 dwarn:0 dfail:1 fail:3 skip:65
ro-ivb-i7-3770 total:229 pass:186 dwarn:0 dfail:1 fail:2 skip:40
ro-ivb2-i7-3770 total:229 pass:190 dwarn:0 dfail:1 fail:2 skip:36
ro-skl3-i5-6260u total:229 pass:206 dwarn:1 dfail:1 fail:2 skip:19
ro-snb-i7-2620M total:229 pass:179 dwarn:0 dfail:1 fail:1 skip:48
Results at /archive/results/CI_IGT_test/RO_Patchwork_1331/
63f6b6c drm-intel-nightly: 2016y-06m-29d-14h-53m-39s UTC integration manifest
4738005 drm/i915: Remove request->reset_counter
* Re: ✗ Ro.CI.BAT: failure for drm/i915: Remove request->reset_counter
2016-06-29 15:45 ` ✗ Ro.CI.BAT: failure for " Patchwork
@ 2016-06-29 16:09 ` Chris Wilson
0 siblings, 0 replies; 4+ messages in thread
From: Chris Wilson @ 2016-06-29 16:09 UTC (permalink / raw)
To: intel-gfx
On Wed, Jun 29, 2016 at 03:45:27PM -0000, Patchwork wrote:
> == Series Details ==
>
> Series: drm/i915: Remove request->reset_counter
> URL : https://patchwork.freedesktop.org/series/9278/
> State : failure
>
> == Summary ==
>
> Series 9278v1 drm/i915: Remove request->reset_counter
> http://patchwork.freedesktop.org/api/1.0/series/9278/revisions/1/mbox
>
> Test gem_exec_flush:
> Subgroup basic-batch-kernel-default-cmd:
> pass -> FAIL (ro-byt-n2820)
> Test gem_exec_suspend:
> Subgroup basic-s3:
> pass -> INCOMPLETE (fi-hsw-i7-4770k)
... is just failing for everyone today. If anyone could see what was
wrong with the machine, that would make for a mighty impressive bug fix.
-Chris
--
Chris Wilson, Intel Open Source Technology Centre
end of thread, other threads:[~2016-06-29 16:09 UTC | newest]
Thread overview: 4+ messages
2016-06-29 14:51 [PATCH] drm/i915: Remove request->reset_counter Chris Wilson
2016-06-29 15:10 ` Arun Siluvery
2016-06-29 15:45 ` ✗ Ro.CI.BAT: failure for " Patchwork
2016-06-29 16:09 ` Chris Wilson