public inbox for intel-gfx@lists.freedesktop.org
From: John Harrison <John.C.Harrison@Intel.com>
To: Tomas Elf <tomas.elf@intel.com>, Intel-GFX@Lists.FreeDesktop.Org
Subject: Re: [PATCH 25/55] drm/i915: Update i915_gem_object_sync() to take a request structure
Date: Thu, 04 Jun 2015 13:57:16 +0100	[thread overview]
Message-ID: <55704B2C.8040905@Intel.com> (raw)
In-Reply-To: <556DF56A.3040408@intel.com>

On 02/06/2015 19:26, Tomas Elf wrote:
> On 29/05/2015 17:43, John.C.Harrison@Intel.com wrote:
>> From: John Harrison <John.C.Harrison@Intel.com>
>>
>> The plan is to pass requests around as the basic submission tracking
>> structure rather than rings and contexts. This patch updates the
>> i915_gem_object_sync() code path.
>>
>> v2: Much more complex patch to share a single request between the sync and
>> the page flip. The _sync() function now supports lazy allocation of the
>> request structure. That is, if one is passed in then that will be used. If
>> one is not, then a request will be allocated and passed back out. Note that
>> the _sync() code does not necessarily require a request. Thus one will only
>> be created in certain situations. The reason the lazy allocation must be
>> done within the _sync() code itself is that the decision of whether one is
>> needed is not really something the code above can second guess (except in
>> the case where one is definitely not required because no ring is passed in).
>>
>> The call chains above _sync() now support passing a request through, with
>> most callers passing in NULL and assuming that no request will be required
>> (because they also pass in NULL for the ring and therefore cannot be
>> generating any ring code).
>>
>> The exeception is intel_crtc_page_flip() which now supports having a request
>
> 1. "The exeception" -> "The exception"
>
>> returned from _sync(). If one is, then that request is shared by the page
>> flip (if the page flip is of a type to need a request). If _sync() does not
>> generate a request but the page flip does need one, then the page flip path
>> will create its own request.
>>
>> v3: Updated comment description to be clearer about 'to_req' parameter
>> (Tomas Elf review request). Rebased onto newer tree that significantly
>> changed the synchronisation code.
>>
>> For: VIZ-5115
>> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
>> ---
>>   drivers/gpu/drm/i915/i915_drv.h            |    4 ++-
>>   drivers/gpu/drm/i915/i915_gem.c            |   48 +++++++++++++++++++++-------
>>   drivers/gpu/drm/i915/i915_gem_execbuffer.c |    2 +-
>>   drivers/gpu/drm/i915/intel_display.c       |   17 +++++++---
>>   drivers/gpu/drm/i915/intel_drv.h           |    3 +-
>>   drivers/gpu/drm/i915/intel_fbdev.c         |    2 +-
>>   drivers/gpu/drm/i915/intel_lrc.c           |    2 +-
>>   drivers/gpu/drm/i915/intel_overlay.c       |    2 +-
>>   8 files changed, 58 insertions(+), 22 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
>> index 64a10fa..f69e9cb 100644
>> --- a/drivers/gpu/drm/i915/i915_drv.h
>> +++ b/drivers/gpu/drm/i915/i915_drv.h
>> @@ -2778,7 +2778,8 @@ static inline void i915_gem_object_unpin_pages(struct drm_i915_gem_object *obj)
>>
>>   int __must_check i915_mutex_lock_interruptible(struct drm_device *dev);
>>   int i915_gem_object_sync(struct drm_i915_gem_object *obj,
>> -             struct intel_engine_cs *to);
>> +             struct intel_engine_cs *to,
>> +             struct drm_i915_gem_request **to_req);
>>   void i915_vma_move_to_active(struct i915_vma *vma,
>>                    struct intel_engine_cs *ring);
>>   int i915_gem_dumb_create(struct drm_file *file_priv,
>> @@ -2889,6 +2890,7 @@ int __must_check
>>   i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
>>                        u32 alignment,
>>                        struct intel_engine_cs *pipelined,
>> +                     struct drm_i915_gem_request **pipelined_request,
>>                        const struct i915_ggtt_view *view);
>>   void i915_gem_object_unpin_from_display_plane(struct drm_i915_gem_object *obj,
>>                             const struct i915_ggtt_view *view);
>> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
>> index b7d66aa..db90043 100644
>> --- a/drivers/gpu/drm/i915/i915_gem.c
>> +++ b/drivers/gpu/drm/i915/i915_gem.c
>> @@ -3098,25 +3098,26 @@ out:
>>   static int
>>   __i915_gem_object_sync(struct drm_i915_gem_object *obj,
>>                  struct intel_engine_cs *to,
>> -               struct drm_i915_gem_request *req)
>> +               struct drm_i915_gem_request *from_req,
>> +               struct drm_i915_gem_request **to_req)
>>   {
>>       struct intel_engine_cs *from;
>>       int ret;
>>
>> -    from = i915_gem_request_get_ring(req);
>> +    from = i915_gem_request_get_ring(from_req);
>>       if (to == from)
>>           return 0;
>>
>> -    if (i915_gem_request_completed(req, true))
>> +    if (i915_gem_request_completed(from_req, true))
>>           return 0;
>>
>> -    ret = i915_gem_check_olr(req);
>> +    ret = i915_gem_check_olr(from_req);
>>       if (ret)
>>           return ret;
>>
>>       if (!i915_semaphore_is_enabled(obj->base.dev)) {
>>           struct drm_i915_private *i915 = to_i915(obj->base.dev);
>> -        ret = __i915_wait_request(req,
>> +        ret = __i915_wait_request(from_req,
>>                       atomic_read(&i915->gpu_error.reset_counter),
>>                         i915->mm.interruptible,
>>                         NULL,
>> @@ -3124,15 +3125,25 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
>>           if (ret)
>>               return ret;
>>
>> -        i915_gem_object_retire_request(obj, req);
>> +        i915_gem_object_retire_request(obj, from_req);
>>       } else {
>>           int idx = intel_ring_sync_index(from, to);
>> -        u32 seqno = i915_gem_request_get_seqno(req);
>> +        u32 seqno = i915_gem_request_get_seqno(from_req);
>>
>> +        WARN_ON(!to_req);
>> +
>> +        /* Optimization: Avoid semaphore sync when we are sure we already
>> +         * waited for an object with higher seqno */
>
> 2. How about using the standard multi-line comment format?
>

Not my comment. It looks like Chris removed it in his re-write of the 
sync code and I accidentally put it back in when resolving the merge 
conflicts. I'll drop it again.

> /* (empty line)
>  * (first line)
>  * (second line)
>  */
>
>>           if (seqno <= from->semaphore.sync_seqno[idx])
>>               return 0;
>>
>> -        trace_i915_gem_ring_sync_to(from, to, req);
>> +        if (*to_req == NULL) {
>> +            ret = i915_gem_request_alloc(to, to->default_context, to_req);
>> +            if (ret)
>> +                return ret;
>> +        }
>> +
>> +        trace_i915_gem_ring_sync_to(from, to, from_req);
>>           ret = to->semaphore.sync_to(to, from, seqno);
>>           if (ret)
>>               return ret;
>> @@ -3153,6 +3164,9 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
>>    *
>>    * @obj: object which may be in use on another ring.
>>    * @to: ring we wish to use the object on. May be NULL.
>> + * @to_req: request we wish to use the object for. See below.
>> + *          This will be allocated and returned if a request is
>> + *          required but not passed in.
>>    *
>>    * This code is meant to abstract object synchronization with the GPU.
>>    * Calling with NULL implies synchronizing the object with the CPU
>> @@ -3168,11 +3182,22 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
>>    * - If we are a write request (pending_write_domain is set), the new
>>    *   request must wait for outstanding read requests to complete.
>>    *
>> + * For CPU synchronisation (NULL to) no request is required. For syncing with
>> + * rings to_req must be non-NULL. However, a request does not have to be
>> + * pre-allocated. If *to_req is null and sync commands will be emitted then a
>> + * request will be allocated automatically and returned through *to_req. Note
>> + * that it is not guaranteed that commands will be emitted (because the
>> + * might already be idle). Hence there is no need to create a request that
>> + * might never have any work submitted. Note further that if a request is
>> + * returned in *to_req, it is the responsibility of the caller to submit
>> + * that request (after potentially adding more work to it).
>> + *
>
> 3. "(because the might already be idle)" : The what? The engine?
> 4. "NULL" and "null" mixed. Please be consistent.
>
> Overall, the explanation is better than in the last patch version
>
> With those minor changes:
>
> Reviewed-by: Tomas Elf <tomas.elf@intel.com>
>
> Thanks,
> Tomas
>
>>    * Returns 0 if successful, else propagates up the lower layer error.
>>    */
>>   int
>>   i915_gem_object_sync(struct drm_i915_gem_object *obj,
>> -             struct intel_engine_cs *to)
>> +             struct intel_engine_cs *to,
>> +             struct drm_i915_gem_request **to_req)
>>   {
>>       const bool readonly = obj->base.pending_write_domain == 0;
>>       struct drm_i915_gem_request *req[I915_NUM_RINGS];
>> @@ -3194,7 +3219,7 @@ i915_gem_object_sync(struct drm_i915_gem_object *obj,
>>                   req[n++] = obj->last_read_req[i];
>>       }
>>       for (i = 0; i < n; i++) {
>> -        ret = __i915_gem_object_sync(obj, to, req[i]);
>> +        ret = __i915_gem_object_sync(obj, to, req[i], to_req);
>>           if (ret)
>>               return ret;
>>       }
>> @@ -4144,12 +4169,13 @@ int
>>   i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
>>                        u32 alignment,
>>                        struct intel_engine_cs *pipelined,
>> +                     struct drm_i915_gem_request **pipelined_request,
>>                        const struct i915_ggtt_view *view)
>>   {
>>       u32 old_read_domains, old_write_domain;
>>       int ret;
>>
>> -    ret = i915_gem_object_sync(obj, pipelined);
>> +    ret = i915_gem_object_sync(obj, pipelined, pipelined_request);
>>       if (ret)
>>           return ret;
>>
>> diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
>> index 50b1ced..bea92ad 100644
>> --- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
>> +++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
>> @@ -899,7 +899,7 @@ i915_gem_execbuffer_move_to_gpu(struct drm_i915_gem_request *req,
>>           struct drm_i915_gem_object *obj = vma->obj;
>>
>>           if (obj->active & other_rings) {
>> -            ret = i915_gem_object_sync(obj, req->ring);
>> +            ret = i915_gem_object_sync(obj, req->ring, &req);
>>               if (ret)
>>                   return ret;
>>           }
>> diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
>> index 657a333..6528ada 100644
>> --- a/drivers/gpu/drm/i915/intel_display.c
>> +++ b/drivers/gpu/drm/i915/intel_display.c
>> @@ -2338,7 +2338,8 @@ int
>>   intel_pin_and_fence_fb_obj(struct drm_plane *plane,
>>                  struct drm_framebuffer *fb,
>>                  const struct drm_plane_state *plane_state,
>> -               struct intel_engine_cs *pipelined)
>> +               struct intel_engine_cs *pipelined,
>> +               struct drm_i915_gem_request **pipelined_request)
>>   {
>>       struct drm_device *dev = fb->dev;
>>       struct drm_i915_private *dev_priv = dev->dev_private;
>> @@ -2403,7 +2404,7 @@ intel_pin_and_fence_fb_obj(struct drm_plane *plane,
>>
>>       dev_priv->mm.interruptible = false;
>>       ret = i915_gem_object_pin_to_display_plane(obj, alignment, pipelined,
>> -                           &view);
>> +                           pipelined_request, &view);
>>       if (ret)
>>           goto err_interruptible;
>>
>> @@ -11119,6 +11120,7 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
>>       struct intel_unpin_work *work;
>>       struct intel_engine_cs *ring;
>>       bool mmio_flip;
>> +    struct drm_i915_gem_request *request = NULL;
>>       int ret;
>>
>>       /*
>> @@ -11225,7 +11227,7 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
>>        */
>>       ret = intel_pin_and_fence_fb_obj(crtc->primary, fb,
>>                        crtc->primary->state,
>> -                     mmio_flip ? i915_gem_request_get_ring(obj->last_write_req) : ring);
>> +                     mmio_flip ? i915_gem_request_get_ring(obj->last_write_req) : ring, &request);
>>       if (ret)
>>           goto cleanup_pending;
>>
>> @@ -11256,6 +11258,9 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
>>                       intel_ring_get_request(ring));
>>       }
>>
>> +    if (request)
>> +        i915_add_request_no_flush(request->ring);
>> +
>>       work->flip_queued_vblank = drm_crtc_vblank_count(crtc);
>>       work->enable_stall_check = true;
>>
>> @@ -11273,6 +11278,8 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
>>   cleanup_unpin:
>>       intel_unpin_fb_obj(fb, crtc->primary->state);
>>   cleanup_pending:
>> +    if (request)
>> +        i915_gem_request_cancel(request);
>>       atomic_dec(&intel_crtc->unpin_work_count);
>>       mutex_unlock(&dev->struct_mutex);
>>   cleanup:
>> @@ -13171,7 +13178,7 @@ intel_prepare_plane_fb(struct drm_plane *plane,
>>           if (ret)
>>               DRM_DEBUG_KMS("failed to attach phys object\n");
>>       } else {
>> -        ret = intel_pin_and_fence_fb_obj(plane, fb, new_state, NULL);
>> +        ret = intel_pin_and_fence_fb_obj(plane, fb, new_state, NULL, NULL);
>>       }
>>
>>       if (ret == 0)
>> @@ -15218,7 +15225,7 @@ void intel_modeset_gem_init(struct drm_device *dev)
>>           ret = intel_pin_and_fence_fb_obj(c->primary,
>>                            c->primary->fb,
>>                            c->primary->state,
>> -                         NULL);
>> +                         NULL, NULL);
>>           mutex_unlock(&dev->struct_mutex);
>>           if (ret) {
>>               DRM_ERROR("failed to pin boot fb on pipe %d\n",
>> diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
>> index 02d8317..73650ae 100644
>> --- a/drivers/gpu/drm/i915/intel_drv.h
>> +++ b/drivers/gpu/drm/i915/intel_drv.h
>> @@ -1034,7 +1034,8 @@ void intel_release_load_detect_pipe(struct drm_connector *connector,
>>   int intel_pin_and_fence_fb_obj(struct drm_plane *plane,
>>                      struct drm_framebuffer *fb,
>>                      const struct drm_plane_state *plane_state,
>> -                   struct intel_engine_cs *pipelined);
>> +                   struct intel_engine_cs *pipelined,
>> +                   struct drm_i915_gem_request **pipelined_request);
>>   struct drm_framebuffer *
>>   __intel_framebuffer_create(struct drm_device *dev,
>>                  struct drm_mode_fb_cmd2 *mode_cmd,
>> diff --git a/drivers/gpu/drm/i915/intel_fbdev.c b/drivers/gpu/drm/i915/intel_fbdev.c
>> index 4e7e7da..dd9f3b2 100644
>> --- a/drivers/gpu/drm/i915/intel_fbdev.c
>> +++ b/drivers/gpu/drm/i915/intel_fbdev.c
>> @@ -151,7 +151,7 @@ static int intelfb_alloc(struct drm_fb_helper *helper,
>>       }
>>
>>       /* Flush everything out, we'll be doing GTT only from now on */
>> -    ret = intel_pin_and_fence_fb_obj(NULL, fb, NULL, NULL);
>> +    ret = intel_pin_and_fence_fb_obj(NULL, fb, NULL, NULL, NULL);
>>       if (ret) {
>>           DRM_ERROR("failed to pin obj: %d\n", ret);
>>           goto out_fb;
>> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
>> index 6d005b1..f8e8fdb 100644
>> --- a/drivers/gpu/drm/i915/intel_lrc.c
>> +++ b/drivers/gpu/drm/i915/intel_lrc.c
>> @@ -638,7 +638,7 @@ static int execlists_move_to_gpu(struct drm_i915_gem_request *req,
>>           struct drm_i915_gem_object *obj = vma->obj;
>>
>>           if (obj->active & other_rings) {
>> -            ret = i915_gem_object_sync(obj, req->ring);
>> +            ret = i915_gem_object_sync(obj, req->ring, &req);
>>               if (ret)
>>                   return ret;
>>           }
>> diff --git a/drivers/gpu/drm/i915/intel_overlay.c b/drivers/gpu/drm/i915/intel_overlay.c
>> index e7534b9..0f8187a 100644
>> --- a/drivers/gpu/drm/i915/intel_overlay.c
>> +++ b/drivers/gpu/drm/i915/intel_overlay.c
>> @@ -724,7 +724,7 @@ static int intel_overlay_do_put_image(struct intel_overlay *overlay,
>>       if (ret != 0)
>>           return ret;
>>
>> -    ret = i915_gem_object_pin_to_display_plane(new_bo, 0, NULL,
>> +    ret = i915_gem_object_pin_to_display_plane(new_bo, 0, NULL, NULL,
>>                              &i915_ggtt_view_normal);
>>       if (ret != 0)
>>           return ret;
>>
>

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx
