* [RFC 1/5] drm/i915: Clean up gen8 irq handler
@ 2015-07-09 10:57 Nick Hoath
2015-07-09 10:57 ` [RFC 2/5] drm/i915: Unify execlist and legacy request life-cycles Nick Hoath
` (4 more replies)
0 siblings, 5 replies; 12+ messages in thread
From: Nick Hoath @ 2015-07-09 10:57 UTC (permalink / raw)
To: intel-gfx
Moved the common command streamer interrupt handling into a helper
function, and renamed the tmp variable to the more descriptive iir.
Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
Signed-off-by: Nick Hoath <nicholas.hoath@intel.com>
---
drivers/gpu/drm/i915/i915_irq.c | 68 +++++++++++++++++++++--------------------
1 file changed, 35 insertions(+), 33 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index a6fbe64..3ac30b8 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -1156,70 +1156,72 @@ static void snb_gt_irq_handler(struct drm_device *dev,
ivybridge_parity_error_irq_handler(dev, gt_iir);
}
+static void gen8_cs_irq_handler(struct intel_engine_cs *ring, u32 iir)
+{
+ if (iir & GT_CONTEXT_SWITCH_INTERRUPT)
+ intel_lrc_irq_handler(ring);
+ if (iir & GT_RENDER_USER_INTERRUPT)
+ notify_ring(ring);
+}
+
static irqreturn_t gen8_gt_irq_handler(struct drm_i915_private *dev_priv,
u32 master_ctl)
{
irqreturn_t ret = IRQ_NONE;
if (master_ctl & (GEN8_GT_RCS_IRQ | GEN8_GT_BCS_IRQ)) {
- u32 tmp = I915_READ_FW(GEN8_GT_IIR(0));
- if (tmp) {
- I915_WRITE_FW(GEN8_GT_IIR(0), tmp);
+ u32 iir = I915_READ_FW(GEN8_GT_IIR(0));
+
+ if (iir) {
+ I915_WRITE_FW(GEN8_GT_IIR(0), iir);
ret = IRQ_HANDLED;
- if (tmp & (GT_CONTEXT_SWITCH_INTERRUPT << GEN8_RCS_IRQ_SHIFT))
- intel_lrc_irq_handler(&dev_priv->ring[RCS]);
- if (tmp & (GT_RENDER_USER_INTERRUPT << GEN8_RCS_IRQ_SHIFT))
- notify_ring(&dev_priv->ring[RCS]);
+ gen8_cs_irq_handler(&dev_priv->ring[RCS],
+ iir >> GEN8_RCS_IRQ_SHIFT);
- if (tmp & (GT_CONTEXT_SWITCH_INTERRUPT << GEN8_BCS_IRQ_SHIFT))
- intel_lrc_irq_handler(&dev_priv->ring[BCS]);
- if (tmp & (GT_RENDER_USER_INTERRUPT << GEN8_BCS_IRQ_SHIFT))
- notify_ring(&dev_priv->ring[BCS]);
+ gen8_cs_irq_handler(&dev_priv->ring[BCS],
+ iir >> GEN8_BCS_IRQ_SHIFT);
} else
DRM_ERROR("The master control interrupt lied (GT0)!\n");
}
if (master_ctl & (GEN8_GT_VCS1_IRQ | GEN8_GT_VCS2_IRQ)) {
- u32 tmp = I915_READ_FW(GEN8_GT_IIR(1));
- if (tmp) {
- I915_WRITE_FW(GEN8_GT_IIR(1), tmp);
+ u32 iir = I915_READ_FW(GEN8_GT_IIR(1));
+
+ if (iir) {
+ I915_WRITE_FW(GEN8_GT_IIR(1), iir);
ret = IRQ_HANDLED;
- if (tmp & (GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VCS1_IRQ_SHIFT))
- intel_lrc_irq_handler(&dev_priv->ring[VCS]);
- if (tmp & (GT_RENDER_USER_INTERRUPT << GEN8_VCS1_IRQ_SHIFT))
- notify_ring(&dev_priv->ring[VCS]);
+ gen8_cs_irq_handler(&dev_priv->ring[VCS],
+ iir >> GEN8_VCS1_IRQ_SHIFT);
- if (tmp & (GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VCS2_IRQ_SHIFT))
- intel_lrc_irq_handler(&dev_priv->ring[VCS2]);
- if (tmp & (GT_RENDER_USER_INTERRUPT << GEN8_VCS2_IRQ_SHIFT))
- notify_ring(&dev_priv->ring[VCS2]);
+ gen8_cs_irq_handler(&dev_priv->ring[VCS2],
+ iir >> GEN8_VCS2_IRQ_SHIFT);
} else
DRM_ERROR("The master control interrupt lied (GT1)!\n");
}
if (master_ctl & GEN8_GT_VECS_IRQ) {
- u32 tmp = I915_READ_FW(GEN8_GT_IIR(3));
- if (tmp) {
- I915_WRITE_FW(GEN8_GT_IIR(3), tmp);
+ u32 iir = I915_READ_FW(GEN8_GT_IIR(3));
+
+ if (iir) {
+ I915_WRITE_FW(GEN8_GT_IIR(3), iir);
ret = IRQ_HANDLED;
- if (tmp & (GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VECS_IRQ_SHIFT))
- intel_lrc_irq_handler(&dev_priv->ring[VECS]);
- if (tmp & (GT_RENDER_USER_INTERRUPT << GEN8_VECS_IRQ_SHIFT))
- notify_ring(&dev_priv->ring[VECS]);
+ gen8_cs_irq_handler(&dev_priv->ring[VECS],
+ iir >> GEN8_VECS_IRQ_SHIFT);
} else
DRM_ERROR("The master control interrupt lied (GT3)!\n");
}
if (master_ctl & GEN8_GT_PM_IRQ) {
- u32 tmp = I915_READ_FW(GEN8_GT_IIR(2));
- if (tmp & dev_priv->pm_rps_events) {
+ u32 iir = I915_READ_FW(GEN8_GT_IIR(2));
+
+ if (iir & dev_priv->pm_rps_events) {
I915_WRITE_FW(GEN8_GT_IIR(2),
- tmp & dev_priv->pm_rps_events);
+ iir & dev_priv->pm_rps_events);
ret = IRQ_HANDLED;
- gen6_rps_irq_handler(dev_priv, tmp);
+ gen6_rps_irq_handler(dev_priv, iir);
} else
DRM_ERROR("The master control interrupt lied (PM)!\n");
}
--
2.1.1
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx
* [RFC 2/5] drm/i915: Unify execlist and legacy request life-cycles
2015-07-09 10:57 [RFC 1/5] drm/i915: Clean up gen8 irq handler Nick Hoath
@ 2015-07-09 10:57 ` Nick Hoath
2015-07-09 11:12 ` Chris Wilson
2015-07-09 16:54 ` Yu Dai
2015-07-09 10:57 ` [RFC 3/5] drm/i915: Simplify runtime_pm reference for execlists Nick Hoath
` (3 subsequent siblings)
4 siblings, 2 replies; 12+ messages in thread
From: Nick Hoath @ 2015-07-09 10:57 UTC (permalink / raw)
To: intel-gfx
There is a desire to simplify the i915 driver by reducing the number of
different code paths introduced by the LRC / execlists support. As the
execlists request is now part of the gem request it is possible and
desirable to unify the request life-cycles for execlist and legacy
requests.
Added a context complete flag to a request which gets set during the
context switch interrupt.
Added a function i915_gem_request_retireable(). A request is considered
retireable if its seqno has passed (i.e. the request has completed) and
either it was never submitted to the ELSP or its context has completed.
This ensures that the context save is carried out before the last request
for a context is considered retireable. retire_requests_ring() now uses
i915_gem_request_retireable() rather than i915_gem_request_completed()
when deciding which requests to retire. Requests that were not waiting
for a context switch interrupt (either because they were merged into a
following request or because they are legacy requests) are considered
retireable as soon as their seqno has passed.
Removed the extra request reference held for the execlist request.
Removed intel_execlists_retire_requests() and all references to
intel_engine_cs.execlist_retired_req_list.
Moved context unpinning into retire_requests_ring() for now. Further work
is pending for the context pinning - this patch should allow us to use the
active list to track context and ring buffer objects later.
Changed gen8_cs_irq_handler() so that notify_ring() is called on
context completion as well as on user interrupts, ensuring that waiters
are notified once a request is complete and its context save has
finished.
v2: Rebase over the read-read optimisation changes
Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
Signed-off-by: Nick Hoath <nicholas.hoath@intel.com>
---
drivers/gpu/drm/i915/i915_drv.h | 6 ++++
drivers/gpu/drm/i915/i915_gem.c | 49 +++++++++++++++++++++++----------
drivers/gpu/drm/i915/i915_irq.c | 6 ++--
drivers/gpu/drm/i915/intel_lrc.c | 44 ++++++-----------------------
drivers/gpu/drm/i915/intel_lrc.h | 2 +-
drivers/gpu/drm/i915/intel_ringbuffer.h | 1 -
6 files changed, 53 insertions(+), 55 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 1dbd957..ef02378 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2213,6 +2213,12 @@ struct drm_i915_gem_request {
/** Execlists no. of times this request has been sent to the ELSP */
int elsp_submitted;
+ /**
+ * Execlists: whether this request's context has completed after
+ * submission to the ELSP
+ */
+ bool ctx_complete;
+
};
int i915_gem_request_alloc(struct intel_engine_cs *ring,
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 49016e0..3681a33 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2368,6 +2368,12 @@ void i915_vma_move_to_active(struct i915_vma *vma,
list_move_tail(&vma->mm_list, &vma->vm->active_list);
}
+static bool i915_gem_request_retireable(struct drm_i915_gem_request *req)
+{
+ return (i915_gem_request_completed(req, true) &&
+ (!req->elsp_submitted || req->ctx_complete));
+}
+
static void
i915_gem_object_retire__write(struct drm_i915_gem_object *obj)
{
@@ -2853,10 +2859,28 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *ring)
struct drm_i915_gem_request,
list);
- if (!i915_gem_request_completed(request, true))
+ if (!i915_gem_request_retireable(request))
break;
i915_gem_request_retire(request);
+
+ if (i915.enable_execlists) {
+ struct intel_context *ctx = request->ctx;
+ struct drm_i915_private *dev_priv =
+ ring->dev->dev_private;
+ unsigned long flags;
+ struct drm_i915_gem_object *ctx_obj =
+ ctx->engine[ring->id].state;
+
+ spin_lock_irqsave(&ring->execlist_lock, flags);
+
+ if (ctx_obj && (ctx != ring->default_context))
+ intel_lr_context_unpin(ring, ctx);
+
+ intel_runtime_pm_put(dev_priv);
+ spin_unlock_irqrestore(&ring->execlist_lock, flags);
+ }
+
}
/* Move any buffers on the active list that are no longer referenced
@@ -2872,12 +2896,14 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *ring)
if (!list_empty(&obj->last_read_req[ring->id]->list))
break;
+ if (!i915_gem_request_retireable(obj->last_read_req[ring->id]))
+ break;
i915_gem_object_retire__read(obj, ring->id);
}
if (unlikely(ring->trace_irq_req &&
- i915_gem_request_completed(ring->trace_irq_req, true))) {
+ i915_gem_request_retireable(ring->trace_irq_req))) {
ring->irq_put(ring);
i915_gem_request_assign(&ring->trace_irq_req, NULL);
}
@@ -2896,15 +2922,6 @@ i915_gem_retire_requests(struct drm_device *dev)
for_each_ring(ring, dev_priv, i) {
i915_gem_retire_requests_ring(ring);
idle &= list_empty(&ring->request_list);
- if (i915.enable_execlists) {
- unsigned long flags;
-
- spin_lock_irqsave(&ring->execlist_lock, flags);
- idle &= list_empty(&ring->execlist_queue);
- spin_unlock_irqrestore(&ring->execlist_lock, flags);
-
- intel_execlists_retire_requests(ring);
- }
}
if (idle)
@@ -2980,12 +2997,14 @@ i915_gem_object_flush_active(struct drm_i915_gem_object *obj)
if (req == NULL)
continue;
- if (list_empty(&req->list))
- goto retire;
+ if (list_empty(&req->list)) {
+ if (i915_gem_request_retireable(req))
+ i915_gem_object_retire__read(obj, i);
+ continue;
+ }
- if (i915_gem_request_completed(req, true)) {
+ if (i915_gem_request_retireable(req)) {
__i915_gem_request_retire__upto(req);
-retire:
i915_gem_object_retire__read(obj, i);
}
}
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 3ac30b8..dfa2379 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -1158,9 +1158,11 @@ static void snb_gt_irq_handler(struct drm_device *dev,
static void gen8_cs_irq_handler(struct intel_engine_cs *ring, u32 iir)
{
+ bool need_notify = false;
+
if (iir & GT_CONTEXT_SWITCH_INTERRUPT)
- intel_lrc_irq_handler(ring);
- if (iir & GT_RENDER_USER_INTERRUPT)
+ need_notify = intel_lrc_irq_handler(ring);
+ if ((iir & GT_RENDER_USER_INTERRUPT) || need_notify)
notify_ring(ring);
}
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 9b928ab..8373900 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -411,9 +411,8 @@ static void execlists_context_unqueue(struct intel_engine_cs *ring)
/* Same ctx: ignore first request, as second request
* will update tail past first request's workload */
cursor->elsp_submitted = req0->elsp_submitted;
+ req0->elsp_submitted = 0;
list_del(&req0->execlist_link);
- list_add_tail(&req0->execlist_link,
- &ring->execlist_retired_req_list);
req0 = cursor;
} else {
req1 = cursor;
@@ -450,6 +449,7 @@ static void execlists_context_unqueue(struct intel_engine_cs *ring)
req0->elsp_submitted++;
if (req1)
req1->elsp_submitted++;
+
}
static bool execlists_check_remove_request(struct intel_engine_cs *ring,
@@ -469,11 +469,9 @@ static bool execlists_check_remove_request(struct intel_engine_cs *ring,
if (intel_execlists_ctx_id(ctx_obj) == request_id) {
WARN(head_req->elsp_submitted == 0,
"Never submitted head request\n");
-
if (--head_req->elsp_submitted <= 0) {
+ head_req->ctx_complete = 1;
list_del(&head_req->execlist_link);
- list_add_tail(&head_req->execlist_link,
- &ring->execlist_retired_req_list);
return true;
}
}
@@ -488,8 +486,9 @@ static bool execlists_check_remove_request(struct intel_engine_cs *ring,
*
* Check the unread Context Status Buffers and manage the submission of new
* contexts to the ELSP accordingly.
+ * @return whether a context completed
*/
-void intel_lrc_irq_handler(struct intel_engine_cs *ring)
+bool intel_lrc_irq_handler(struct intel_engine_cs *ring)
{
struct drm_i915_private *dev_priv = ring->dev->dev_private;
u32 status_pointer;
@@ -540,6 +539,8 @@ void intel_lrc_irq_handler(struct intel_engine_cs *ring)
I915_WRITE(RING_CONTEXT_STATUS_PTR(ring),
((u32)ring->next_context_status_buffer & 0x07) << 8);
+
+ return (submit_contexts != 0);
}
static int execlists_context_queue(struct drm_i915_gem_request *request)
@@ -551,7 +552,7 @@ static int execlists_context_queue(struct drm_i915_gem_request *request)
if (request->ctx != ring->default_context)
intel_lr_context_pin(ring, request->ctx);
- i915_gem_request_reference(request);
+ i915_gem_context_reference(request->ctx);
request->tail = request->ringbuf->tail;
@@ -572,8 +573,6 @@ static int execlists_context_queue(struct drm_i915_gem_request *request)
WARN(tail_req->elsp_submitted != 0,
"More than 2 already-submitted reqs queued\n");
list_del(&tail_req->execlist_link);
- list_add_tail(&tail_req->execlist_link,
- &ring->execlist_retired_req_list);
}
}
@@ -938,32 +937,6 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,
return 0;
}
-void intel_execlists_retire_requests(struct intel_engine_cs *ring)
-{
- struct drm_i915_gem_request *req, *tmp;
- struct list_head retired_list;
-
- WARN_ON(!mutex_is_locked(&ring->dev->struct_mutex));
- if (list_empty(&ring->execlist_retired_req_list))
- return;
-
- INIT_LIST_HEAD(&retired_list);
- spin_lock_irq(&ring->execlist_lock);
- list_replace_init(&ring->execlist_retired_req_list, &retired_list);
- spin_unlock_irq(&ring->execlist_lock);
-
- list_for_each_entry_safe(req, tmp, &retired_list, execlist_link) {
- struct intel_context *ctx = req->ctx;
- struct drm_i915_gem_object *ctx_obj =
- ctx->engine[ring->id].state;
-
- if (ctx_obj && (ctx != ring->default_context))
- intel_lr_context_unpin(ring, ctx);
- list_del(&req->execlist_link);
- i915_gem_request_unreference(req);
- }
-}
-
void intel_logical_ring_stop(struct intel_engine_cs *ring)
{
struct drm_i915_private *dev_priv = ring->dev->dev_private;
@@ -1706,7 +1679,6 @@ static int logical_ring_init(struct drm_device *dev, struct intel_engine_cs *rin
init_waitqueue_head(&ring->irq_queue);
INIT_LIST_HEAD(&ring->execlist_queue);
- INIT_LIST_HEAD(&ring->execlist_retired_req_list);
spin_lock_init(&ring->execlist_lock);
ret = i915_cmd_parser_init_ring(ring);
diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index f59940a..9d2e98a 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -82,7 +82,7 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,
struct list_head *vmas);
u32 intel_execlists_ctx_id(struct drm_i915_gem_object *ctx_obj);
-void intel_lrc_irq_handler(struct intel_engine_cs *ring);
+bool intel_lrc_irq_handler(struct intel_engine_cs *ring);
void intel_execlists_retire_requests(struct intel_engine_cs *ring);
#endif /* _INTEL_LRC_H_ */
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 304cac4..73b0bd8 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -263,7 +263,6 @@ struct intel_engine_cs {
/* Execlists */
spinlock_t execlist_lock;
struct list_head execlist_queue;
- struct list_head execlist_retired_req_list;
u8 next_context_status_buffer;
u32 irq_keep_mask; /* bitmask for interrupts that should not be masked */
int (*emit_request)(struct drm_i915_gem_request *request);
--
2.1.1
* [RFC 3/5] drm/i915: Simplify runtime_pm reference for execlists
2015-07-09 10:57 [RFC 1/5] drm/i915: Clean up gen8 irq handler Nick Hoath
2015-07-09 10:57 ` [RFC 2/5] drm/i915: Unify execlist and legacy request life-cycles Nick Hoath
@ 2015-07-09 10:57 ` Nick Hoath
2015-07-09 11:14 ` Chris Wilson
2015-07-09 10:57 ` [RFC 4/5] drm/i915: Reorder make_rpcs for later patch Nick Hoath
` (2 subsequent siblings)
4 siblings, 1 reply; 12+ messages in thread
From: Nick Hoath @ 2015-07-09 10:57 UTC (permalink / raw)
To: intel-gfx
No longer take a runtime_pm reference for each execlist request. Instead,
take a single reference when the execlist queue becomes non-empty and
release it when the queue empties again.
Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
Signed-off-by: Nick Hoath <nicholas.hoath@intel.com>
---
drivers/gpu/drm/i915/i915_gem.c | 10 +++++++---
drivers/gpu/drm/i915/intel_lrc.c | 15 +++++++++++++--
2 files changed, 20 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 3681a33..d9f5e4d 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2758,6 +2758,13 @@ static void i915_gem_reset_ring_cleanup(struct drm_i915_private *dev_priv,
}
/*
* If we get here and the list wasn't empty, a runtime_pm
* reference was taken for it, so release that reference now
+ */
+ if (!list_empty(&ring->execlist_queue))
+ intel_runtime_pm_put(dev_priv);
+
+ /*
* Clear the execlists queue up before freeing the requests, as those
* are the ones that keep the context and ringbuffer backing objects
* pinned in place.
@@ -2866,8 +2873,6 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *ring)
if (i915.enable_execlists) {
struct intel_context *ctx = request->ctx;
- struct drm_i915_private *dev_priv =
- ring->dev->dev_private;
unsigned long flags;
struct drm_i915_gem_object *ctx_obj =
ctx->engine[ring->id].state;
@@ -2877,7 +2882,6 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *ring)
if (ctx_obj && (ctx != ring->default_context))
intel_lr_context_unpin(ring, ctx);
- intel_runtime_pm_put(dev_priv);
spin_unlock_irqrestore(&ring->execlist_lock, flags);
}
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 8373900..adc4942 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -399,8 +399,16 @@ static void execlists_context_unqueue(struct intel_engine_cs *ring)
*/
WARN_ON(!intel_irqs_enabled(ring->dev->dev_private));
- if (list_empty(&ring->execlist_queue))
+ if (list_empty(&ring->execlist_queue)) {
+ /*
+ * We can only get to this function if a request was
+ * queued at some point, so if there are no longer any
+ * requests queued it is time to release the
+ * runtime_pm reference
+ */
+ intel_runtime_pm_put(ring->dev->dev_private);
return;
+ }
/* Try to read in pairs */
list_for_each_entry_safe(cursor, tmp, &ring->execlist_queue,
@@ -546,6 +554,7 @@ bool intel_lrc_irq_handler(struct intel_engine_cs *ring)
static int execlists_context_queue(struct drm_i915_gem_request *request)
{
struct intel_engine_cs *ring = request->ring;
+ struct drm_i915_private *dev_priv = ring->dev->dev_private;
struct drm_i915_gem_request *cursor;
int num_elements = 0;
@@ -577,8 +586,10 @@ static int execlists_context_queue(struct drm_i915_gem_request *request)
}
list_add_tail(&request->execlist_link, &ring->execlist_queue);
- if (num_elements == 0)
+ if (num_elements == 0) {
+ intel_runtime_pm_get(dev_priv);
execlists_context_unqueue(ring);
+ }
spin_unlock_irq(&ring->execlist_lock);
--
2.1.1
* [RFC 4/5] drm/i915: Reorder make_rpcs for later patch
2015-07-09 10:57 [RFC 1/5] drm/i915: Clean up gen8 irq handler Nick Hoath
2015-07-09 10:57 ` [RFC 2/5] drm/i915: Unify execlist and legacy request life-cycles Nick Hoath
2015-07-09 10:57 ` [RFC 3/5] drm/i915: Simplify runtime_pm reference for execlists Nick Hoath
@ 2015-07-09 10:57 ` Nick Hoath
2015-07-09 10:57 ` [RFC 5/5] drm/i915: Clean up lrc context init Nick Hoath
2015-07-09 11:10 ` [RFC 1/5] drm/i915: Clean up gen8 irq handler Chris Wilson
4 siblings, 0 replies; 12+ messages in thread
From: Nick Hoath @ 2015-07-09 10:57 UTC (permalink / raw)
To: intel-gfx
Issue: VIZ-4798
Signed-off-by: Nick Hoath <nicholas.hoath@intel.com>
---
drivers/gpu/drm/i915/intel_lrc.c | 86 ++++++++++++++++++++--------------------
1 file changed, 43 insertions(+), 43 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index adc4942..770a6f6 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1302,6 +1302,49 @@ out:
return ret;
}
+static u32
+make_rpcs(struct drm_device *dev)
+{
+ u32 rpcs = 0;
+
+ /*
+ * No explicit RPCS request is needed to ensure full
+ * slice/subslice/EU enablement prior to Gen9.
+ */
+ if (INTEL_INFO(dev)->gen < 9)
+ return 0;
+
+ /*
+ * Starting in Gen9, render power gating can leave
+ * slice/subslice/EU in a partially enabled state. We
+ * must make an explicit request through RPCS for full
+ * enablement.
+ */
+ if (INTEL_INFO(dev)->has_slice_pg) {
+ rpcs |= GEN8_RPCS_S_CNT_ENABLE;
+ rpcs |= INTEL_INFO(dev)->slice_total <<
+ GEN8_RPCS_S_CNT_SHIFT;
+ rpcs |= GEN8_RPCS_ENABLE;
+ }
+
+ if (INTEL_INFO(dev)->has_subslice_pg) {
+ rpcs |= GEN8_RPCS_SS_CNT_ENABLE;
+ rpcs |= INTEL_INFO(dev)->subslice_per_slice <<
+ GEN8_RPCS_SS_CNT_SHIFT;
+ rpcs |= GEN8_RPCS_ENABLE;
+ }
+
+ if (INTEL_INFO(dev)->has_eu_pg) {
+ rpcs |= INTEL_INFO(dev)->eu_per_subslice <<
+ GEN8_RPCS_EU_MIN_SHIFT;
+ rpcs |= INTEL_INFO(dev)->eu_per_subslice <<
+ GEN8_RPCS_EU_MAX_SHIFT;
+ rpcs |= GEN8_RPCS_ENABLE;
+ }
+
+ return rpcs;
+}
+
static int gen8_init_common_ring(struct intel_engine_cs *ring)
{
struct drm_device *dev = ring->dev;
@@ -1919,49 +1962,6 @@ cleanup_render_ring:
return ret;
}
-static u32
-make_rpcs(struct drm_device *dev)
-{
- u32 rpcs = 0;
-
- /*
- * No explicit RPCS request is needed to ensure full
- * slice/subslice/EU enablement prior to Gen9.
- */
- if (INTEL_INFO(dev)->gen < 9)
- return 0;
-
- /*
- * Starting in Gen9, render power gating can leave
- * slice/subslice/EU in a partially enabled state. We
- * must make an explicit request through RPCS for full
- * enablement.
- */
- if (INTEL_INFO(dev)->has_slice_pg) {
- rpcs |= GEN8_RPCS_S_CNT_ENABLE;
- rpcs |= INTEL_INFO(dev)->slice_total <<
- GEN8_RPCS_S_CNT_SHIFT;
- rpcs |= GEN8_RPCS_ENABLE;
- }
-
- if (INTEL_INFO(dev)->has_subslice_pg) {
- rpcs |= GEN8_RPCS_SS_CNT_ENABLE;
- rpcs |= INTEL_INFO(dev)->subslice_per_slice <<
- GEN8_RPCS_SS_CNT_SHIFT;
- rpcs |= GEN8_RPCS_ENABLE;
- }
-
- if (INTEL_INFO(dev)->has_eu_pg) {
- rpcs |= INTEL_INFO(dev)->eu_per_subslice <<
- GEN8_RPCS_EU_MIN_SHIFT;
- rpcs |= INTEL_INFO(dev)->eu_per_subslice <<
- GEN8_RPCS_EU_MAX_SHIFT;
- rpcs |= GEN8_RPCS_ENABLE;
- }
-
- return rpcs;
-}
-
static int
populate_lr_context(struct intel_context *ctx, struct drm_i915_gem_object *ctx_obj,
struct intel_engine_cs *ring, struct intel_ringbuffer *ringbuf)
--
2.1.1
* [RFC 5/5] drm/i915: Clean up lrc context init
2015-07-09 10:57 [RFC 1/5] drm/i915: Clean up gen8 irq handler Nick Hoath
` (2 preceding siblings ...)
2015-07-09 10:57 ` [RFC 4/5] drm/i915: Reorder make_rpcs for later patch Nick Hoath
@ 2015-07-09 10:57 ` Nick Hoath
2015-07-09 11:17 ` Chris Wilson
2015-07-09 11:10 ` [RFC 1/5] drm/i915: Clean up gen8 irq handler Chris Wilson
4 siblings, 1 reply; 12+ messages in thread
From: Nick Hoath @ 2015-07-09 10:57 UTC (permalink / raw)
To: intel-gfx
Clean up lrc context init by:
- Move context initialisation into i915_gem_init_hw
- Move one-off initialisation for the render ring to
i915_gem_validate_context
- Move default context initialisation to logical_ring_init
Rename intel_lr_context_deferred_create to
intel_lr_context_deferred_alloc, to reflect reduced functionality.
Issue: VIZ-4798
Signed-off-by: Nick Hoath <nicholas.hoath@intel.com>
---
drivers/gpu/drm/i915/i915_drv.h | 1 -
drivers/gpu/drm/i915/i915_gem.c | 41 ++++++++----
drivers/gpu/drm/i915/i915_gem_context.c | 21 ------
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 36 +++++++++-
drivers/gpu/drm/i915/intel_lrc.c | 101 ++++++++++-------------------
drivers/gpu/drm/i915/intel_lrc.h | 6 +-
drivers/gpu/drm/i915/intel_ringbuffer.c | 1 +
drivers/gpu/drm/i915/intel_ringbuffer.h | 1 +
8 files changed, 105 insertions(+), 103 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index ef02378..3142a3a 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -3063,7 +3063,6 @@ int __must_check i915_gem_context_init(struct drm_device *dev);
void i915_gem_context_fini(struct drm_device *dev);
void i915_gem_context_reset(struct drm_device *dev);
int i915_gem_context_open(struct drm_device *dev, struct drm_file *file);
-int i915_gem_context_enable(struct drm_i915_gem_request *req);
void i915_gem_context_close(struct drm_device *dev, struct drm_file *file);
int i915_switch_context(struct drm_i915_gem_request *req);
struct intel_context *
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index d9f5e4d..914f660 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -5048,6 +5048,7 @@ i915_gem_init_hw(struct drm_device *dev)
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_engine_cs *ring;
int ret, i, j;
+ struct drm_i915_gem_request *req;
if (INTEL_INFO(dev)->gen < 6 && !intel_enable_gtt())
return -EIO;
@@ -5100,15 +5101,17 @@ i915_gem_init_hw(struct drm_device *dev)
}
/* Now it is safe to go back round and do everything else: */
- for_each_ring(ring, dev_priv, i) {
- struct drm_i915_gem_request *req;
+ ret = i915_gem_set_seqno(dev, ((u32)~0 - 0x1000));
+ if (ret)
+ goto out;
+
+ for_each_ring(ring, dev_priv, i) {
WARN_ON(!ring->default_context);
ret = i915_gem_request_alloc(ring, ring->default_context, &req);
if (ret) {
- i915_gem_cleanup_ringbuffer(dev);
- goto out;
+ goto clean_ringbuf_out;
}
if (ring->id == RCS) {
@@ -5119,22 +5122,34 @@ i915_gem_init_hw(struct drm_device *dev)
ret = i915_ppgtt_init_ring(req);
if (ret && ret != -EIO) {
DRM_ERROR("PPGTT enable ring #%d failed %d\n", i, ret);
- i915_gem_request_cancel(req);
- i915_gem_cleanup_ringbuffer(dev);
- goto out;
+ goto clean_req_out;
}
- ret = i915_gem_context_enable(req);
- if (ret && ret != -EIO) {
- DRM_ERROR("Context enable ring #%d failed %d\n", i, ret);
- i915_gem_request_cancel(req);
- i915_gem_cleanup_ringbuffer(dev);
- goto out;
+ if (ring->switch_context) {
+ ret = ring->switch_context(req);
+ if (ret && ret != -EIO) {
+ DRM_ERROR("ring switch context: %d\n",
+ ret);
+ goto clean_req_out;
+ }
+ } else if (ring->init_context) {
+ ret = ring->init_context(req);
+ if (ret && ret != -EIO) {
+ DRM_ERROR("ring init context: %d\n",
+ ret);
+ goto clean_req_out;
+ }
}
i915_add_request_no_flush(req);
}
+ return ret;
+
+clean_req_out:
+ i915_gem_request_cancel(req);
+clean_ringbuf_out:
+ i915_gem_cleanup_ringbuffer(dev);
out:
intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
return ret;
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index a7e58a8..87a7fbc 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -409,27 +409,6 @@ void i915_gem_context_fini(struct drm_device *dev)
i915_gem_context_unreference(dctx);
}
-int i915_gem_context_enable(struct drm_i915_gem_request *req)
-{
- struct intel_engine_cs *ring = req->ring;
- int ret;
-
- if (i915.enable_execlists) {
- if (ring->init_context == NULL)
- return 0;
-
- ret = ring->init_context(req);
- } else
- ret = i915_switch_context(req);
-
- if (ret) {
- DRM_ERROR("ring init context: %d\n", ret);
- return ret;
- }
-
- return 0;
-}
-
static int context_idr_cleanup(int id, void *p, void *data)
{
struct intel_context *ctx = p;
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 600db74..5455e35 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -993,6 +993,7 @@ i915_gem_validate_context(struct drm_device *dev, struct drm_file *file,
{
struct intel_context *ctx = NULL;
struct i915_ctx_hang_stats *hs;
+ int ret;
if (ring->id != RCS && ctx_id != DEFAULT_CONTEXT_HANDLE)
return ERR_PTR(-EINVAL);
@@ -1008,14 +1009,47 @@ i915_gem_validate_context(struct drm_device *dev, struct drm_file *file,
}
if (i915.enable_execlists && !ctx->engine[ring->id].state) {
- int ret = intel_lr_context_deferred_create(ctx, ring);
+ ret = intel_lr_context_deferred_alloc(ctx, ring);
if (ret) {
DRM_DEBUG("Could not create LRC %u: %d\n", ctx_id, ret);
return ERR_PTR(ret);
}
+
+ if (ring->id == RCS && !ctx->rcs_initialized) {
+ if (ring->init_context) {
+ struct drm_i915_gem_request *req;
+
+ ret = i915_gem_request_alloc(ring,
+ ring->default_context, &req);
+ if (ret) {
+ DRM_ERROR("ring create req: %d\n",
+ ret);
+ i915_gem_request_cancel(req);
+ goto validate_error;
+ }
+
+ ret = ring->init_context(req);
+ if (ret) {
+ DRM_ERROR("ring init context: %d\n",
+ ret);
+ i915_gem_request_cancel(req);
+ goto validate_error;
+ }
+ i915_add_request_no_flush(req);
+ }
+
+ ctx->rcs_initialized = true;
+ }
}
return ctx;
+
+validate_error:
+ intel_destroy_ringbuffer_obj(ctx->engine[ring->id].ringbuf);
+ drm_gem_object_unreference(&ctx->engine[ring->id].state->base);
+ ctx->engine[ring->id].ringbuf = NULL;
+ ctx->engine[ring->id].state = NULL;
+ return ERR_PTR(ret);
}
void
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 770a6f6..19ad961 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1350,6 +1350,9 @@ static int gen8_init_common_ring(struct intel_engine_cs *ring)
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
+ lrc_setup_hardware_status_page(ring,
+ ring->default_context->engine[ring->id].state);
+
I915_WRITE_IMR(ring, ~(ring->irq_enable_mask | ring->irq_keep_mask));
I915_WRITE(RING_HWSTAM(ring->mmio_base), 0xffffffff);
@@ -1739,8 +1742,33 @@ static int logical_ring_init(struct drm_device *dev, struct intel_engine_cs *rin
if (ret)
return ret;
- ret = intel_lr_context_deferred_create(ring->default_context, ring);
+ ret = intel_lr_context_deferred_alloc(ring->default_context, ring);
+ if (ret)
+ return ret;
+
+ ret = i915_gem_obj_ggtt_pin(
+ ring->default_context->engine[ring->id].state,
+ GEN8_LR_CONTEXT_ALIGN, 0);
+ if (ret) {
+ DRM_DEBUG_DRIVER("Pin LRC backing obj failed: %d\n",
+ ret);
+ return ret;
+ }
+
+ ret = intel_pin_and_map_ringbuffer_obj(dev,
+ ring->default_context->engine[ring->id].ringbuf);
+ if (ret) {
+ DRM_ERROR(
+ "Failed to pin and map ringbuffer %s: %d\n",
+ ring->name, ret);
+ goto error_unpin_ggtt;
+ }
+
+ return ret;
+error_unpin_ggtt:
+ i915_gem_object_ggtt_unpin(
+ ring->default_context->engine[ring->id].state);
return ret;
}
@@ -1942,14 +1970,8 @@ int intel_logical_rings_init(struct drm_device *dev)
goto cleanup_vebox_ring;
}
- ret = i915_gem_set_seqno(dev, ((u32)~0 - 0x1000));
- if (ret)
- goto cleanup_bsd2_ring;
-
return 0;
-cleanup_bsd2_ring:
- intel_logical_ring_cleanup(&dev_priv->ring[VCS2]);
cleanup_vebox_ring:
intel_logical_ring_cleanup(&dev_priv->ring[VECS]);
cleanup_blt_ring:
@@ -2146,7 +2168,7 @@ static uint32_t get_lr_context_size(struct intel_engine_cs *ring)
return ret;
}
-static void lrc_setup_hardware_status_page(struct intel_engine_cs *ring,
+void lrc_setup_hardware_status_page(struct intel_engine_cs *ring,
struct drm_i915_gem_object *default_ctx_obj)
{
struct drm_i915_private *dev_priv = ring->dev->dev_private;
@@ -2164,7 +2186,7 @@ static void lrc_setup_hardware_status_page(struct intel_engine_cs *ring,
}
/**
- * intel_lr_context_deferred_create() - create the LRC specific bits of a context
+ * intel_lr_context_deferred_alloc() - create the LRC specific bits of a context
* @ctx: LR context to create.
* @ring: engine to be used with the context.
*
@@ -2176,10 +2198,10 @@ static void lrc_setup_hardware_status_page(struct intel_engine_cs *ring,
*
* Return: non-zero on error.
*/
-int intel_lr_context_deferred_create(struct intel_context *ctx,
+
+int intel_lr_context_deferred_alloc(struct intel_context *ctx,
struct intel_engine_cs *ring)
{
- const bool is_global_default_ctx = (ctx == ring->default_context);
struct drm_device *dev = ring->dev;
struct drm_i915_gem_object *ctx_obj;
uint32_t context_size;
@@ -2197,22 +2219,12 @@ int intel_lr_context_deferred_create(struct intel_context *ctx,
return -ENOMEM;
}
- if (is_global_default_ctx) {
- ret = i915_gem_obj_ggtt_pin(ctx_obj, GEN8_LR_CONTEXT_ALIGN, 0);
- if (ret) {
- DRM_DEBUG_DRIVER("Pin LRC backing obj failed: %d\n",
- ret);
- drm_gem_object_unreference(&ctx_obj->base);
- return ret;
- }
- }
-
ringbuf = kzalloc(sizeof(*ringbuf), GFP_KERNEL);
if (!ringbuf) {
DRM_DEBUG_DRIVER("Failed to allocate ringbuffer %s\n",
ring->name);
ret = -ENOMEM;
- goto error_unpin_ctx;
+ goto error_deref_obj;
}
ringbuf->ring = ring;
@@ -2232,65 +2244,24 @@ int intel_lr_context_deferred_create(struct intel_context *ctx,
ring->name, ret);
goto error_free_rbuf;
}
-
- if (is_global_default_ctx) {
- ret = intel_pin_and_map_ringbuffer_obj(dev, ringbuf);
- if (ret) {
- DRM_ERROR(
- "Failed to pin and map ringbuffer %s: %d\n",
- ring->name, ret);
- goto error_destroy_rbuf;
- }
- }
-
}
ret = populate_lr_context(ctx, ctx_obj, ring, ringbuf);
if (ret) {
DRM_DEBUG_DRIVER("Failed to populate LRC: %d\n", ret);
- goto error;
+ goto error_destroy_rbuf;
}
ctx->engine[ring->id].ringbuf = ringbuf;
ctx->engine[ring->id].state = ctx_obj;
- if (ctx == ring->default_context)
- lrc_setup_hardware_status_page(ring, ctx_obj);
- else if (ring->id == RCS && !ctx->rcs_initialized) {
- if (ring->init_context) {
- struct drm_i915_gem_request *req;
-
- ret = i915_gem_request_alloc(ring, ctx, &req);
- if (ret)
- return ret;
-
- ret = ring->init_context(req);
- if (ret) {
- DRM_ERROR("ring init context: %d\n", ret);
- i915_gem_request_cancel(req);
- ctx->engine[ring->id].ringbuf = NULL;
- ctx->engine[ring->id].state = NULL;
- goto error;
- }
-
- i915_add_request_no_flush(req);
- }
-
- ctx->rcs_initialized = true;
- }
-
return 0;
-error:
- if (is_global_default_ctx)
- intel_unpin_ringbuffer_obj(ringbuf);
error_destroy_rbuf:
intel_destroy_ringbuffer_obj(ringbuf);
error_free_rbuf:
kfree(ringbuf);
-error_unpin_ctx:
- if (is_global_default_ctx)
- i915_gem_object_ggtt_unpin(ctx_obj);
+error_deref_obj:
drm_gem_object_unreference(&ctx_obj->base);
return ret;
}
diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index 9d2e98a..2cec0ee 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -67,8 +67,8 @@ static inline void intel_logical_ring_emit(struct intel_ringbuffer *ringbuf,
/* Logical Ring Contexts */
void intel_lr_context_free(struct intel_context *ctx);
-int intel_lr_context_deferred_create(struct intel_context *ctx,
- struct intel_engine_cs *ring);
+int intel_lr_context_deferred_alloc(struct intel_context *ctx,
+ struct intel_engine_cs *ring);
void intel_lr_context_unpin(struct intel_engine_cs *ring,
struct intel_context *ctx);
void intel_lr_context_reset(struct drm_device *dev,
@@ -84,5 +84,7 @@ u32 intel_execlists_ctx_id(struct drm_i915_gem_object *ctx_obj);
bool intel_lrc_irq_handler(struct intel_engine_cs *ring);
void intel_execlists_retire_requests(struct intel_engine_cs *ring);
+void lrc_setup_hardware_status_page(struct intel_engine_cs *ring,
+ struct drm_i915_gem_object *default_ctx_obj);
#endif /* _INTEL_LRC_H_ */
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index e39c891..b5e5d45 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -2664,6 +2664,7 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
ring->dispatch_execbuffer = i830_dispatch_execbuffer;
else
ring->dispatch_execbuffer = i915_dispatch_execbuffer;
+ ring->switch_context = i915_switch_context;
ring->init_hw = init_render_ring;
ring->cleanup = render_ring_cleanup;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 73b0bd8..ba3049b 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -177,6 +177,7 @@ struct intel_engine_cs {
int (*init_hw)(struct intel_engine_cs *ring);
int (*init_context)(struct drm_i915_gem_request *req);
+ int (*switch_context)(struct drm_i915_gem_request *req);
void (*write_tail)(struct intel_engine_cs *ring,
u32 value);
--
2.1.1
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx
* Re: [RFC 1/5] drm/i915: Clean up gen8 irq handler
2015-07-09 10:57 [RFC 1/5] drm/i915: Clean up gen8 irq handler Nick Hoath
` (3 preceding siblings ...)
2015-07-09 10:57 ` [RFC 5/5] drm/i915: Clean up lrc context init Nick Hoath
@ 2015-07-09 11:10 ` Chris Wilson
4 siblings, 0 replies; 12+ messages in thread
From: Chris Wilson @ 2015-07-09 11:10 UTC (permalink / raw)
To: Nick Hoath; +Cc: intel-gfx
On Thu, Jul 09, 2015 at 11:57:40AM +0100, Nick Hoath wrote:
> Moved common code handling command streamer interrupts into a function.
> Renamed tmp variable to the more descriptive iir.
Does the compiler eliminate those shifts you added?
-Chris
--
Chris Wilson, Intel Open Source Technology Centre
* Re: [RFC 2/5] drm/i915: Unify execlist and legacy request life-cycles
2015-07-09 10:57 ` [RFC 2/5] drm/i915: Unify execlist and legacy request life-cycles Nick Hoath
@ 2015-07-09 11:12 ` Chris Wilson
2015-07-29 15:20 ` Nick Hoath
2015-07-09 16:54 ` Yu Dai
1 sibling, 1 reply; 12+ messages in thread
From: Chris Wilson @ 2015-07-09 11:12 UTC (permalink / raw)
To: Nick Hoath; +Cc: intel-gfx
On Thu, Jul 09, 2015 at 11:57:41AM +0100, Nick Hoath wrote:
> There is a desire to simplify the i915 driver by reducing the number of
> different code paths introduced by the LRC / execlists support. As the
> execlists request is now part of the gem request it is possible and
> desirable to unify the request life-cycles for execlist and legacy
> requests.
>
> Added a context complete flag to a request which gets set during the
> context switch interrupt.
>
> Added a function i915_gem_request_retireable(). A request is considered
> retireable if its seqno passed (i.e. the request has completed) and either
> it was never submitted to the ELSP or its context completed. This ensures
> that context save is carried out before the last request for a context is
> considered retireable. retire_requests_ring() now uses
> i915_gem_request_retireable() rather than request_complete() when deciding
> which requests to retire. Requests that were not waiting for a context
> switch interrupt (either as a result of being merged into a following
> request or by being a legacy request) will be considered retireable as
> soon as their seqno has passed.
Nak. Just keep the design as requests only retire when seqno passes.
> Removed the extra request reference held for the execlist request.
>
> Removed intel_execlists_retire_requests() and all references to
> intel_engine_cs.execlist_retired_req_list.
>
> Moved context unpinning into retire_requests_ring() for now. Further work
> is pending for the context pinning - this patch should allow us to use the
> active list to track context and ring buffer objects later.
>
> Changed gen8_cs_irq_handler() so that notify_ring() is called when
> contexts complete as well as when a user interrupt occurs so that
> notification happens when a request is complete and context save has
> finished.
>
> v2: Rebase over the read-read optimisation changes
Any reason why you didn't review my patches to do this much more neatly?
-Chris
--
Chris Wilson, Intel Open Source Technology Centre
* Re: [RFC 3/5] drm/i915: Simplify runtime_pm reference for execlists
2015-07-09 10:57 ` [RFC 3/5] drm/i915: Simplify runtime_pm reference for execlists Nick Hoath
@ 2015-07-09 11:14 ` Chris Wilson
2015-07-29 15:22 ` Nick Hoath
0 siblings, 1 reply; 12+ messages in thread
From: Chris Wilson @ 2015-07-09 11:14 UTC (permalink / raw)
To: Nick Hoath; +Cc: intel-gfx
On Thu, Jul 09, 2015 at 11:57:42AM +0100, Nick Hoath wrote:
> No longer take a runtime_pm reference for each execlist request. Only
> take a single reference when the execlist queue becomes nonempty and
> release it when it becomes empty.
Nak. We already hold the runtime_pm for GPU activity.
-Chris
--
Chris Wilson, Intel Open Source Technology Centre
* Re: [RFC 5/5] drm/i915: Clean up lrc context init
2015-07-09 10:57 ` [RFC 5/5] drm/i915: Clean up lrc context init Nick Hoath
@ 2015-07-09 11:17 ` Chris Wilson
0 siblings, 0 replies; 12+ messages in thread
From: Chris Wilson @ 2015-07-09 11:17 UTC (permalink / raw)
To: Nick Hoath; +Cc: intel-gfx
On Thu, Jul 09, 2015 at 11:57:44AM +0100, Nick Hoath wrote:
> Clean up lrc context init by:
> - Move context initialisation in to i915_gem_init_hw
> - Move one off initialisation for render ring to
> i915_gem_validate_context
> - Move default context initialisation to logical_ring_init
>
> Rename intel_lr_context_deferred_create to
> intel_lr_context_deferred_alloc, to reflect reduced functionality.
Not far enough. You can tidy the deferred context creation to request
allocation now, e.g.:
http://cgit.freedesktop.org/~ickle/linux-2.6/commit/?h=nightly&id=d1a7b2bd3a65b3fb605f0702a7550e8ae281cfd8
-Chris
--
Chris Wilson, Intel Open Source Technology Centre
* Re: [RFC 2/5] drm/i915: Unify execlist and legacy request life-cycles
2015-07-09 10:57 ` [RFC 2/5] drm/i915: Unify execlist and legacy request life-cycles Nick Hoath
2015-07-09 11:12 ` Chris Wilson
@ 2015-07-09 16:54 ` Yu Dai
1 sibling, 0 replies; 12+ messages in thread
From: Yu Dai @ 2015-07-09 16:54 UTC (permalink / raw)
To: Nick Hoath, intel-gfx
On 07/09/2015 03:57 AM, Nick Hoath wrote:
> There is a desire to simplify the i915 driver by reducing the number of
> different code paths introduced by the LRC / execlists support. As the
> execlists request is now part of the gem request it is possible and
> desirable to unify the request life-cycles for execlist and legacy
> requests.
>
> Added a context complete flag to a request which gets set during the
> context switch interrupt.
>
> Added a function i915_gem_request_retireable(). A request is considered
> retireable if its seqno passed (i.e. the request has completed) and either
> it was never submitted to the ELSP or its context completed. This ensures
> that context save is carried out before the last request for a context is
> considered retireable. retire_requests_ring() now uses
> i915_gem_request_retireable() rather than request_complete() when deciding
> which requests to retire. Requests that were not waiting for a context
> switch interrupt (either as a result of being merged into a following
> request or by being a legacy request) will be considered retireable as
> soon as their seqno has passed.
>
> Removed the extra request reference held for the execlist request.
>
> Removed intel_execlists_retire_requests() and all references to
> intel_engine_cs.execlist_retired_req_list.
>
> Moved context unpinning into retire_requests_ring() for now. Further work
> is pending for the context pinning - this patch should allow us to use the
> active list to track context and ring buffer objects later.
Just a heads-up on a potential performance drop on certain workloads. Since
retire_requests_ring() is called before each submission, CPU-bound
workloads will see contexts pinned and unpinned very often, and the
ioremap_wc()/iounmap() of the ring buffer consumes more CPU time. I found
this issue during the GuC implementation, because the GuC does not use the
execlist request queue but the legacy one. On SKL there is roughly a 3~5%
performance drop in workloads such as SynMark2 oglbatch5/6/7.
Thanks,
Alex
> Changed gen8_cs_irq_handler() so that notify_ring() is called when
> contexts complete as well as when a user interrupt occurs so that
> notification happens when a request is complete and context save has
> finished.
>
> v2: Rebase over the read-read optimisation changes
>
> Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
> Signed-off-by: Nick Hoath <nicholas.hoath@intel.com>
> ---
> drivers/gpu/drm/i915/i915_drv.h | 6 ++++
> drivers/gpu/drm/i915/i915_gem.c | 49 +++++++++++++++++++++++----------
> drivers/gpu/drm/i915/i915_irq.c | 6 ++--
> drivers/gpu/drm/i915/intel_lrc.c | 44 ++++++-----------------------
> drivers/gpu/drm/i915/intel_lrc.h | 2 +-
> drivers/gpu/drm/i915/intel_ringbuffer.h | 1 -
> 6 files changed, 53 insertions(+), 55 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 1dbd957..ef02378 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -2213,6 +2213,12 @@ struct drm_i915_gem_request {
> /** Execlists no. of times this request has been sent to the ELSP */
> int elsp_submitted;
>
> + /**
> + * Execlists: whether this requests's context has completed after
> + * submission to the ELSP
> + */
> + bool ctx_complete;
> +
> };
>
> int i915_gem_request_alloc(struct intel_engine_cs *ring,
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 49016e0..3681a33 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -2368,6 +2368,12 @@ void i915_vma_move_to_active(struct i915_vma *vma,
> list_move_tail(&vma->mm_list, &vma->vm->active_list);
> }
>
> +static bool i915_gem_request_retireable(struct drm_i915_gem_request *req)
> +{
> + return (i915_gem_request_completed(req, true) &&
> + (!req->elsp_submitted || req->ctx_complete));
> +}
> +
> static void
> i915_gem_object_retire__write(struct drm_i915_gem_object *obj)
> {
> @@ -2853,10 +2859,28 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *ring)
> struct drm_i915_gem_request,
> list);
>
> - if (!i915_gem_request_completed(request, true))
> + if (!i915_gem_request_retireable(request))
> break;
>
> i915_gem_request_retire(request);
> +
> + if (i915.enable_execlists) {
> + struct intel_context *ctx = request->ctx;
> + struct drm_i915_private *dev_priv =
> + ring->dev->dev_private;
> + unsigned long flags;
> + struct drm_i915_gem_object *ctx_obj =
> + ctx->engine[ring->id].state;
> +
> + spin_lock_irqsave(&ring->execlist_lock, flags);
> +
> + if (ctx_obj && (ctx != ring->default_context))
> + intel_lr_context_unpin(ring, ctx);
> +
> + intel_runtime_pm_put(dev_priv);
> + spin_unlock_irqrestore(&ring->execlist_lock, flags);
> + }
> +
> }
>
> /* Move any buffers on the active list that are no longer referenced
> @@ -2872,12 +2896,14 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *ring)
>
> if (!list_empty(&obj->last_read_req[ring->id]->list))
> break;
> + if (!i915_gem_request_retireable(obj->last_read_req[ring->id]))
> + break;
>
> i915_gem_object_retire__read(obj, ring->id);
> }
>
> if (unlikely(ring->trace_irq_req &&
> - i915_gem_request_completed(ring->trace_irq_req, true))) {
> + i915_gem_request_retireable(ring->trace_irq_req))) {
> ring->irq_put(ring);
> i915_gem_request_assign(&ring->trace_irq_req, NULL);
> }
> @@ -2896,15 +2922,6 @@ i915_gem_retire_requests(struct drm_device *dev)
> for_each_ring(ring, dev_priv, i) {
> i915_gem_retire_requests_ring(ring);
> idle &= list_empty(&ring->request_list);
> - if (i915.enable_execlists) {
> - unsigned long flags;
> -
> - spin_lock_irqsave(&ring->execlist_lock, flags);
> - idle &= list_empty(&ring->execlist_queue);
> - spin_unlock_irqrestore(&ring->execlist_lock, flags);
> -
> - intel_execlists_retire_requests(ring);
> - }
> }
>
> if (idle)
> @@ -2980,12 +2997,14 @@ i915_gem_object_flush_active(struct drm_i915_gem_object *obj)
> if (req == NULL)
> continue;
>
> - if (list_empty(&req->list))
> - goto retire;
> + if (list_empty(&req->list)) {
> + if (i915_gem_request_retireable(req))
> + i915_gem_object_retire__read(obj, i);
> + continue;
> + }
>
> - if (i915_gem_request_completed(req, true)) {
> + if (i915_gem_request_retireable(req)) {
> __i915_gem_request_retire__upto(req);
> -retire:
> i915_gem_object_retire__read(obj, i);
> }
> }
> diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> index 3ac30b8..dfa2379 100644
> --- a/drivers/gpu/drm/i915/i915_irq.c
> +++ b/drivers/gpu/drm/i915/i915_irq.c
> @@ -1158,9 +1158,11 @@ static void snb_gt_irq_handler(struct drm_device *dev,
>
> static void gen8_cs_irq_handler(struct intel_engine_cs *ring, u32 iir)
> {
> + bool need_notify = false;
> +
> if (iir & GT_CONTEXT_SWITCH_INTERRUPT)
> - intel_lrc_irq_handler(ring);
> - if (iir & GT_RENDER_USER_INTERRUPT)
> + need_notify = intel_lrc_irq_handler(ring);
> + if ((iir & GT_RENDER_USER_INTERRUPT) || need_notify)
> notify_ring(ring);
> }
>
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index 9b928ab..8373900 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -411,9 +411,8 @@ static void execlists_context_unqueue(struct intel_engine_cs *ring)
> /* Same ctx: ignore first request, as second request
> * will update tail past first request's workload */
> cursor->elsp_submitted = req0->elsp_submitted;
> + req0->elsp_submitted = 0;
> list_del(&req0->execlist_link);
> - list_add_tail(&req0->execlist_link,
> - &ring->execlist_retired_req_list);
> req0 = cursor;
> } else {
> req1 = cursor;
> @@ -450,6 +449,7 @@ static void execlists_context_unqueue(struct intel_engine_cs *ring)
> req0->elsp_submitted++;
> if (req1)
> req1->elsp_submitted++;
> +
> }
>
> static bool execlists_check_remove_request(struct intel_engine_cs *ring,
> @@ -469,11 +469,9 @@ static bool execlists_check_remove_request(struct intel_engine_cs *ring,
> if (intel_execlists_ctx_id(ctx_obj) == request_id) {
> WARN(head_req->elsp_submitted == 0,
> "Never submitted head request\n");
> -
> if (--head_req->elsp_submitted <= 0) {
> + head_req->ctx_complete = 1;
> list_del(&head_req->execlist_link);
> - list_add_tail(&head_req->execlist_link,
> - &ring->execlist_retired_req_list);
> return true;
> }
> }
> @@ -488,8 +486,9 @@ static bool execlists_check_remove_request(struct intel_engine_cs *ring,
> *
> * Check the unread Context Status Buffers and manage the submission of new
> * contexts to the ELSP accordingly.
> + * @return whether a context completed
> */
> -void intel_lrc_irq_handler(struct intel_engine_cs *ring)
> +bool intel_lrc_irq_handler(struct intel_engine_cs *ring)
> {
> struct drm_i915_private *dev_priv = ring->dev->dev_private;
> u32 status_pointer;
> @@ -540,6 +539,8 @@ void intel_lrc_irq_handler(struct intel_engine_cs *ring)
>
> I915_WRITE(RING_CONTEXT_STATUS_PTR(ring),
> ((u32)ring->next_context_status_buffer & 0x07) << 8);
> +
> + return (submit_contexts != 0);
> }
>
> static int execlists_context_queue(struct drm_i915_gem_request *request)
> @@ -551,7 +552,7 @@ static int execlists_context_queue(struct drm_i915_gem_request *request)
> if (request->ctx != ring->default_context)
> intel_lr_context_pin(ring, request->ctx);
>
> - i915_gem_request_reference(request);
> + i915_gem_context_reference(request->ctx);
>
> request->tail = request->ringbuf->tail;
>
> @@ -572,8 +573,6 @@ static int execlists_context_queue(struct drm_i915_gem_request *request)
> WARN(tail_req->elsp_submitted != 0,
> "More than 2 already-submitted reqs queued\n");
> list_del(&tail_req->execlist_link);
> - list_add_tail(&tail_req->execlist_link,
> - &ring->execlist_retired_req_list);
> }
> }
>
> @@ -938,32 +937,6 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,
> return 0;
> }
>
> -void intel_execlists_retire_requests(struct intel_engine_cs *ring)
> -{
> - struct drm_i915_gem_request *req, *tmp;
> - struct list_head retired_list;
> -
> - WARN_ON(!mutex_is_locked(&ring->dev->struct_mutex));
> - if (list_empty(&ring->execlist_retired_req_list))
> - return;
> -
> - INIT_LIST_HEAD(&retired_list);
> - spin_lock_irq(&ring->execlist_lock);
> - list_replace_init(&ring->execlist_retired_req_list, &retired_list);
> - spin_unlock_irq(&ring->execlist_lock);
> -
> - list_for_each_entry_safe(req, tmp, &retired_list, execlist_link) {
> - struct intel_context *ctx = req->ctx;
> - struct drm_i915_gem_object *ctx_obj =
> - ctx->engine[ring->id].state;
> -
> - if (ctx_obj && (ctx != ring->default_context))
> - intel_lr_context_unpin(ring, ctx);
> - list_del(&req->execlist_link);
> - i915_gem_request_unreference(req);
> - }
> -}
> -
> void intel_logical_ring_stop(struct intel_engine_cs *ring)
> {
> struct drm_i915_private *dev_priv = ring->dev->dev_private;
> @@ -1706,7 +1679,6 @@ static int logical_ring_init(struct drm_device *dev, struct intel_engine_cs *rin
> init_waitqueue_head(&ring->irq_queue);
>
> INIT_LIST_HEAD(&ring->execlist_queue);
> - INIT_LIST_HEAD(&ring->execlist_retired_req_list);
> spin_lock_init(&ring->execlist_lock);
>
> ret = i915_cmd_parser_init_ring(ring);
> diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
> index f59940a..9d2e98a 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.h
> +++ b/drivers/gpu/drm/i915/intel_lrc.h
> @@ -82,7 +82,7 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,
> struct list_head *vmas);
> u32 intel_execlists_ctx_id(struct drm_i915_gem_object *ctx_obj);
>
> -void intel_lrc_irq_handler(struct intel_engine_cs *ring);
> +bool intel_lrc_irq_handler(struct intel_engine_cs *ring);
> void intel_execlists_retire_requests(struct intel_engine_cs *ring);
>
> #endif /* _INTEL_LRC_H_ */
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
> index 304cac4..73b0bd8 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.h
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
> @@ -263,7 +263,6 @@ struct intel_engine_cs {
> /* Execlists */
> spinlock_t execlist_lock;
> struct list_head execlist_queue;
> - struct list_head execlist_retired_req_list;
> u8 next_context_status_buffer;
> u32 irq_keep_mask; /* bitmask for interrupts that should not be masked */
> int (*emit_request)(struct drm_i915_gem_request *request);
* Re: [RFC 2/5] drm/i915: Unify execlist and legacy request life-cycles
2015-07-09 11:12 ` Chris Wilson
@ 2015-07-29 15:20 ` Nick Hoath
0 siblings, 0 replies; 12+ messages in thread
From: Nick Hoath @ 2015-07-29 15:20 UTC (permalink / raw)
To: Chris Wilson; +Cc: intel-gfx@lists.freedesktop.org
On 09/07/2015 12:12, Chris Wilson wrote:
> On Thu, Jul 09, 2015 at 11:57:41AM +0100, Nick Hoath wrote:
>> There is a desire to simplify the i915 driver by reducing the number of
>> different code paths introduced by the LRC / execlists support. As the
>> execlists request is now part of the gem request it is possible and
>> desirable to unify the request life-cycles for execlist and legacy
>> requests.
>>
>> Added a context complete flag to a request which gets set during the
>> context switch interrupt.
>>
>> Added a function i915_gem_request_retireable(). A request is considered
>> retireable if its seqno passed (i.e. the request has completed) and either
>> it was never submitted to the ELSP or its context completed. This ensures
>> that context save is carried out before the last request for a context is
>> considered retireable. retire_requests_ring() now uses
>> i915_gem_request_retireable() rather than request_complete() when deciding
>> which requests to retire. Requests that were not waiting for a context
>> switch interrupt (either as a result of being merged into a following
>> request or by being a legacy request) will be considered retireable as
>> soon as their seqno has passed.
>
> Nak. Just keep the design as requests only retire when seqno passes.
>
>> Removed the extra request reference held for the execlist request.
>>
>> Removed intel_execlists_retire_requests() and all references to
>> intel_engine_cs.execlist_retired_req_list.
>>
>> Moved context unpinning into retire_requests_ring() for now. Further work
>> is pending for the context pinning - this patch should allow us to use the
>> active list to track context and ring buffer objects later.
>>
>> Changed gen8_cs_irq_handler() so that notify_ring() is called when
>> contexts complete as well as when a user interrupt occurs so that
>> notification happens when a request is complete and context save has
>> finished.
>>
>> v2: Rebase over the read-read optimisation changes
>
> Any reason why you didn't review my patches to do this much more neatly?
Do you have a link for the relevant patches?
> -Chris
>
* Re: [RFC 3/5] drm/i915: Simplify runtime_pm reference for execlists
2015-07-09 11:14 ` Chris Wilson
@ 2015-07-29 15:22 ` Nick Hoath
0 siblings, 0 replies; 12+ messages in thread
From: Nick Hoath @ 2015-07-29 15:22 UTC (permalink / raw)
To: Chris Wilson; +Cc: intel-gfx@lists.freedesktop.org
On 09/07/2015 12:14, Chris Wilson wrote:
> On Thu, Jul 09, 2015 at 11:57:42AM +0100, Nick Hoath wrote:
>> No longer take a runtime_pm reference for each execlist request. Only
>> take a single reference when the execlist queue becomes nonempty and
>> release it when it becomes empty.
>
> Nak. We already hold the runtime_pm for GPU activity.
So we should eliminate the runtime_pm reference for execlists?
> -Chris
>
end of thread, other threads:[~2015-07-29 15:22 UTC | newest]
Thread overview: 12+ messages
2015-07-09 10:57 [RFC 1/5] drm/i915: Clean up gen8 irq handler Nick Hoath
2015-07-09 10:57 ` [RFC 2/5] drm/i915: Unify execlist and legacy request life-cycles Nick Hoath
2015-07-09 11:12 ` Chris Wilson
2015-07-29 15:20 ` Nick Hoath
2015-07-09 16:54 ` Yu Dai
2015-07-09 10:57 ` [RFC 3/5] drm/i915: Simplify runtime_pm reference for execlists Nick Hoath
2015-07-09 11:14 ` Chris Wilson
2015-07-29 15:22 ` Nick Hoath
2015-07-09 10:57 ` [RFC 4/5] drm/i915: Reorder make_rpcs for later patch Nick Hoath
2015-07-09 10:57 ` [RFC 5/5] drm/i915: Clean up lrc context init Nick Hoath
2015-07-09 11:17 ` Chris Wilson
2015-07-09 11:10 ` [RFC 1/5] drm/i915: Clean up gen8 irq handler Chris Wilson