public inbox for intel-gfx@lists.freedesktop.org
From: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>, intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH 17/55] drm/i915: Simplify request_alloc by returning the allocated request
Date: Tue, 26 Jul 2016 08:09:41 +0300	[thread overview]
Message-ID: <1469509781.4681.17.camel@linux.intel.com> (raw)
In-Reply-To: <1469467954-3920-18-git-send-email-chris@chris-wilson.co.uk>

On ma, 2016-07-25 at 18:31 +0100, Chris Wilson wrote:
> It is simpler, and leads to more readable code throughout the callstack, if
> the allocation returns the allocated struct through the return value.
> 
> The importance of this is that it no longer looks like we accidentally
> allocate requests as a side-effect of calling certain functions.
> 

I already added this in the previous series; CC'ing Dave again.

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Link: http://patchwork.freedesktop.org/patch/msgid/1469432687-22756-19-git-send-email-chris@chris-wilson.co.uk
> ---
>  drivers/gpu/drm/i915/i915_drv.h            |  3 +-
>  drivers/gpu/drm/i915/i915_gem.c            | 75 ++++++++----------------------
>  drivers/gpu/drm/i915/i915_gem_execbuffer.c | 12 ++---
>  drivers/gpu/drm/i915/i915_gem_request.c    | 58 ++++++++---------------
>  drivers/gpu/drm/i915/i915_trace.h          | 13 +++---
>  drivers/gpu/drm/i915/intel_display.c       | 36 ++++++--------
>  drivers/gpu/drm/i915/intel_lrc.c           |  2 +-
>  drivers/gpu/drm/i915/intel_overlay.c       | 20 ++++----
>  8 files changed, 79 insertions(+), 140 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index b7e298b4253e..2259983d2ec6 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -3171,8 +3171,7 @@ static inline void i915_gem_object_unpin_map(struct drm_i915_gem_object *obj)
>  
>  int __must_check i915_mutex_lock_interruptible(struct drm_device *dev);
>  int i915_gem_object_sync(struct drm_i915_gem_object *obj,
> -			 struct intel_engine_cs *to,
> -			 struct drm_i915_gem_request **to_req);
> +			 struct drm_i915_gem_request *to);
>  void i915_vma_move_to_active(struct i915_vma *vma,
>  			     struct drm_i915_gem_request *req);
>  int i915_gem_dumb_create(struct drm_file *file_priv,
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 59890f523c5f..b6c4ff63725f 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -2845,51 +2845,35 @@ out:
>  
>  static int
>  __i915_gem_object_sync(struct drm_i915_gem_object *obj,
> -		       struct intel_engine_cs *to,
> -		       struct drm_i915_gem_request *from_req,
> -		       struct drm_i915_gem_request **to_req)
> +		       struct drm_i915_gem_request *to,
> +		       struct drm_i915_gem_request *from)
>  {
> -	struct intel_engine_cs *from;
>  	int ret;
>  
> -	from = i915_gem_request_get_engine(from_req);
> -	if (to == from)
> +	if (to->engine == from->engine)
>  		return 0;
>  
> -	if (i915_gem_request_completed(from_req))
> +	if (i915_gem_request_completed(from))
>  		return 0;
>  
>  	if (!i915.semaphores) {
> -		struct drm_i915_private *i915 = to_i915(obj->base.dev);
> -		ret = __i915_wait_request(from_req,
> -					  i915->mm.interruptible,
> +		ret = __i915_wait_request(from,
> +					  from->i915->mm.interruptible,
>  					  NULL,
>  					  NO_WAITBOOST);
>  		if (ret)
>  			return ret;
>  
> -		i915_gem_object_retire_request(obj, from_req);
> +		i915_gem_object_retire_request(obj, from);
>  	} else {
> -		int idx = intel_engine_sync_index(from, to);
> -		u32 seqno = i915_gem_request_get_seqno(from_req);
> +		int idx = intel_engine_sync_index(from->engine, to->engine);
> +		u32 seqno = i915_gem_request_get_seqno(from);
>  
> -		WARN_ON(!to_req);
> -
> -		if (seqno <= from->semaphore.sync_seqno[idx])
> +		if (seqno <= from->engine->semaphore.sync_seqno[idx])
>  			return 0;
>  
> -		if (*to_req == NULL) {
> -			struct drm_i915_gem_request *req;
> -
> -			req = i915_gem_request_alloc(to, NULL);
> -			if (IS_ERR(req))
> -				return PTR_ERR(req);
> -
> -			*to_req = req;
> -		}
> -
> -		trace_i915_gem_ring_sync_to(*to_req, from, from_req);
> -		ret = to->semaphore.sync_to(*to_req, from, seqno);
> +		trace_i915_gem_ring_sync_to(to, from);
> +		ret = to->engine->semaphore.sync_to(to, from->engine, seqno);
>  		if (ret)
>  			return ret;
>  
> @@ -2897,8 +2881,8 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
>  		 * might have just caused seqno wrap under
>  		 * the radar.
>  		 */
> -		from->semaphore.sync_seqno[idx] =
> -			i915_gem_request_get_seqno(obj->last_read_req[from->id]);
> +		from->engine->semaphore.sync_seqno[idx] =
> +			i915_gem_request_get_seqno(obj->last_read_req[from->engine->id]);
>  	}
>  
>  	return 0;
> @@ -2908,17 +2892,12 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
>   * i915_gem_object_sync - sync an object to a ring.
>   *
>   * @obj: object which may be in use on another ring.
> - * @to: ring we wish to use the object on. May be NULL.
> - * @to_req: request we wish to use the object for. See below.
> - *          This will be allocated and returned if a request is
> - *          required but not passed in.
> + * @to: request we wish to use the object with
>   *
>   * This code is meant to abstract object synchronization with the GPU.
> - * Calling with NULL implies synchronizing the object with the CPU
> - * rather than a particular GPU ring. Conceptually we serialise writes
> - * between engines inside the GPU. We only allow one engine to write
> - * into a buffer at any time, but multiple readers. To ensure each has
> - * a coherent view of memory, we must:
> + * Conceptually we serialise writes between engines inside the GPU.
> + * We only allow one engine to write into a buffer at any time, but
> + * multiple readers. To ensure each has a coherent view of memory, we must:
>   *
>   * - If there is an outstanding write request to the object, the new
>   *   request must wait for it to complete (either CPU or in hw, requests
> @@ -2927,22 +2906,11 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
>   * - If we are a write request (pending_write_domain is set), the new
>   *   request must wait for outstanding read requests to complete.
>   *
> - * For CPU synchronisation (NULL to) no request is required. For syncing with
> - * rings to_req must be non-NULL. However, a request does not have to be
> - * pre-allocated. If *to_req is NULL and sync commands will be emitted then a
> - * request will be allocated automatically and returned through *to_req. Note
> - * that it is not guaranteed that commands will be emitted (because the system
> - * might already be idle). Hence there is no need to create a request that
> - * might never have any work submitted. Note further that if a request is
> - * returned in *to_req, it is the responsibility of the caller to submit
> - * that request (after potentially adding more work to it).
> - *
>   * Returns 0 if successful, else propagates up the lower layer error.
>   */
>  int
>  i915_gem_object_sync(struct drm_i915_gem_object *obj,
> -		     struct intel_engine_cs *to,
> -		     struct drm_i915_gem_request **to_req)
> +		     struct drm_i915_gem_request *to)
>  {
>  	const bool readonly = obj->base.pending_write_domain == 0;
>  	struct drm_i915_gem_request *req[I915_NUM_ENGINES];
> @@ -2951,9 +2919,6 @@ i915_gem_object_sync(struct drm_i915_gem_object *obj,
>  	if (!obj->active)
>  		return 0;
>  
> -	if (to == NULL)
> -		return i915_gem_object_wait_rendering(obj, readonly);
> -
>  	n = 0;
>  	if (readonly) {
>  		if (obj->last_write_req)
> @@ -2964,7 +2929,7 @@ i915_gem_object_sync(struct drm_i915_gem_object *obj,
>  				req[n++] = obj->last_read_req[i];
>  	}
>  	for (i = 0; i < n; i++) {
> -		ret = __i915_gem_object_sync(obj, to, req[i], to_req);
> +		ret = __i915_gem_object_sync(obj, to, req[i]);
>  		if (ret)
>  			return ret;
>  	}
> diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> index 35c4c595e5ba..eacd78fc93c4 100644
> --- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> +++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> @@ -981,7 +981,7 @@ i915_gem_execbuffer_move_to_gpu(struct drm_i915_gem_request *req,
>  		struct drm_i915_gem_object *obj = vma->obj;
>  
>  		if (obj->active & other_rings) {
> -			ret = i915_gem_object_sync(obj, req->engine, &req);
> +			ret = i915_gem_object_sync(obj, req);
>  			if (ret)
>  				return ret;
>  		}
> @@ -1427,7 +1427,6 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
>  {
>  	struct drm_i915_private *dev_priv = to_i915(dev);
>  	struct i915_ggtt *ggtt = &dev_priv->ggtt;
> -	struct drm_i915_gem_request *req = NULL;
>  	struct eb_vmas *eb;
>  	struct drm_i915_gem_object *batch_obj;
>  	struct drm_i915_gem_exec_object2 shadow_exec_entry;
> @@ -1615,13 +1614,13 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
>  		params->batch_obj_vm_offset = i915_gem_obj_offset(batch_obj, vm);
>  
>  	/* Allocate a request for this batch buffer nice and early. */
> -	req = i915_gem_request_alloc(engine, ctx);
> -	if (IS_ERR(req)) {
> -		ret = PTR_ERR(req);
> +	params->request = i915_gem_request_alloc(engine, ctx);
> +	if (IS_ERR(params->request)) {
> +		ret = PTR_ERR(params->request);
>  		goto err_batch_unpin;
>  	}
>  
> -	ret = i915_gem_request_add_to_client(req, file);
> +	ret = i915_gem_request_add_to_client(params->request, file);
>  	if (ret)
>  		goto err_request;
>  
> @@ -1637,7 +1636,6 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
>  	params->dispatch_flags          = dispatch_flags;
>  	params->batch_obj               = batch_obj;
>  	params->ctx                     = ctx;
> -	params->request                 = req;
>  
>  	ret = dev_priv->gt.execbuf_submit(params, args, &eb->vmas);
>  err_request:
> diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
> index 7e3206051ced..47e46c9da4e7 100644
> --- a/drivers/gpu/drm/i915/i915_gem_request.c
> +++ b/drivers/gpu/drm/i915/i915_gem_request.c
> @@ -292,10 +292,21 @@ static int i915_gem_get_seqno(struct drm_i915_private *dev_priv, u32 *seqno)
>  	return 0;
>  }
>  
> -static inline int
> -__i915_gem_request_alloc(struct intel_engine_cs *engine,
> -			 struct i915_gem_context *ctx,
> -			 struct drm_i915_gem_request **req_out)
> +/**
> + * i915_gem_request_alloc - allocate a request structure
> + *
> + * @engine: engine that we wish to issue the request on.
> + * @ctx: context that the request will be associated with.
> + *       This can be NULL if the request is not directly related to
> + *       any specific user context, in which case this function will
> + *       choose an appropriate context to use.
> + *
> + * Returns a pointer to the allocated request if successful,
> + * or an error code if not.
> + */
> +struct drm_i915_gem_request *
> +i915_gem_request_alloc(struct intel_engine_cs *engine,
> +		       struct i915_gem_context *ctx)
>  {
>  	struct drm_i915_private *dev_priv = engine->i915;
>  	unsigned int reset_counter = i915_reset_counter(&dev_priv->gpu_error);
> @@ -303,18 +314,13 @@ __i915_gem_request_alloc(struct intel_engine_cs *engine,
>  	u32 seqno;
>  	int ret;
>  
> -	if (!req_out)
> -		return -EINVAL;
> -
> -	*req_out = NULL;
> -
>  	/* ABI: Before userspace accesses the GPU (e.g. execbuffer), report
>  	 * EIO if the GPU is already wedged, or EAGAIN to drop the struct_mutex
>  	 * and restart.
>  	 */
>  	ret = i915_gem_check_wedge(reset_counter, dev_priv->mm.interruptible);
>  	if (ret)
> -		return ret;
> +		return ERR_PTR(ret);
>  
>  	/* Move the oldest request to the slab-cache (if not in use!) */
>  	req = list_first_entry_or_null(&engine->request_list,
> @@ -324,7 +330,7 @@ __i915_gem_request_alloc(struct intel_engine_cs *engine,
>  
>  	req = kmem_cache_zalloc(dev_priv->requests, GFP_KERNEL);
>  	if (!req)
> -		return -ENOMEM;
> +		return ERR_PTR(-ENOMEM);
>  
>  	ret = i915_gem_get_seqno(dev_priv, &seqno);
>  	if (ret)
> @@ -357,39 +363,13 @@ __i915_gem_request_alloc(struct intel_engine_cs *engine,
>  	if (ret)
>  		goto err_ctx;
>  
> -	*req_out = req;
> -	return 0;
> +	return req;
>  
>  err_ctx:
>  	i915_gem_context_put(ctx);
>  err:
>  	kmem_cache_free(dev_priv->requests, req);
> -	return ret;
> -}
> -
> -/**
> - * i915_gem_request_alloc - allocate a request structure
> - *
> - * @engine: engine that we wish to issue the request on.
> - * @ctx: context that the request will be associated with.
> - *       This can be NULL if the request is not directly related to
> - *       any specific user context, in which case this function will
> - *       choose an appropriate context to use.
> - *
> - * Returns a pointer to the allocated request if successful,
> - * or an error code if not.
> - */
> -struct drm_i915_gem_request *
> -i915_gem_request_alloc(struct intel_engine_cs *engine,
> -		       struct i915_gem_context *ctx)
> -{
> -	struct drm_i915_gem_request *req;
> -	int err;
> -
> -	if (!ctx)
> -		ctx = engine->i915->kernel_context;
> -	err = __i915_gem_request_alloc(engine, ctx, &req);
> -	return err ? ERR_PTR(err) : req;
> +	return ERR_PTR(ret);
>  }
>  
>  static void i915_gem_mark_busy(const struct intel_engine_cs *engine)
> diff --git a/drivers/gpu/drm/i915/i915_trace.h b/drivers/gpu/drm/i915/i915_trace.h
> index 007112d1e049..9e43c0aa6e3b 100644
> --- a/drivers/gpu/drm/i915/i915_trace.h
> +++ b/drivers/gpu/drm/i915/i915_trace.h
> @@ -449,10 +449,9 @@ TRACE_EVENT(i915_gem_evict_vm,
>  );
>  
>  TRACE_EVENT(i915_gem_ring_sync_to,
> -	    TP_PROTO(struct drm_i915_gem_request *to_req,
> -		     struct intel_engine_cs *from,
> -		     struct drm_i915_gem_request *req),
> -	    TP_ARGS(to_req, from, req),
> +	    TP_PROTO(struct drm_i915_gem_request *to,
> +		     struct drm_i915_gem_request *from),
> +	    TP_ARGS(to, from),
>  
>  	    TP_STRUCT__entry(
>  			     __field(u32, dev)
> @@ -463,9 +462,9 @@ TRACE_EVENT(i915_gem_ring_sync_to,
>  
>  	    TP_fast_assign(
>  			   __entry->dev = from->i915->drm.primary->index;
> -			   __entry->sync_from = from->id;
> -			   __entry->sync_to = to_req->engine->id;
> -			   __entry->seqno = req->fence.seqno;
> +			   __entry->sync_from = from->engine->id;
> +			   __entry->sync_to = to->engine->id;
> +			   __entry->seqno = from->fence.seqno;
>  			   ),
>  
>  	    TP_printk("dev=%u, sync-from=%u, sync-to=%u, seqno=%u",
> diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
> index bff172c45ff7..5d4420b67642 100644
> --- a/drivers/gpu/drm/i915/intel_display.c
> +++ b/drivers/gpu/drm/i915/intel_display.c
> @@ -11583,7 +11583,7 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
>  	struct intel_flip_work *work;
>  	struct intel_engine_cs *engine;
>  	bool mmio_flip;
> -	struct drm_i915_gem_request *request = NULL;
> +	struct drm_i915_gem_request *request;
>  	int ret;
>  
>  	/*
> @@ -11690,22 +11690,6 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
>  
>  	mmio_flip = use_mmio_flip(engine, obj);
>  
> -	/* When using CS flips, we want to emit semaphores between rings.
> -	 * However, when using mmio flips we will create a task to do the
> -	 * synchronisation, so all we want here is to pin the framebuffer
> -	 * into the display plane and skip any waits.
> -	 */
> -	if (!mmio_flip) {
> -		ret = i915_gem_object_sync(obj, engine, &request);
> -		if (!ret && !request) {
> -			request = i915_gem_request_alloc(engine, NULL);
> -			ret = PTR_ERR_OR_ZERO(request);
> -		}
> -
> -		if (ret)
> -			goto cleanup_pending;
> -	}
> -
>  	ret = intel_pin_and_fence_fb_obj(fb, primary->state->rotation);
>  	if (ret)
>  		goto cleanup_pending;
> @@ -11723,14 +11707,24 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
>  
>  		schedule_work(&work->mmio_work);
>  	} else {
> -		i915_gem_request_assign(&work->flip_queued_req, request);
> +		request = i915_gem_request_alloc(engine, engine->last_context);
> +		if (IS_ERR(request)) {
> +			ret = PTR_ERR(request);
> +			goto cleanup_unpin;
> +		}
> +
> +		ret = i915_gem_object_sync(obj, request);
> +		if (ret)
> +			goto cleanup_request;
> +
>  		ret = dev_priv->display.queue_flip(dev, crtc, fb, obj, request,
>  						   page_flip_flags);
>  		if (ret)
> -			goto cleanup_unpin;
> +			goto cleanup_request;
>  
>  		intel_mark_page_flip_active(intel_crtc, work);
>  
> +		work->flip_queued_req = i915_gem_request_get(request);
>  		i915_add_request_no_flush(request);
>  	}
>  
> @@ -11745,11 +11739,11 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
>  
>  	return 0;
>  
> +cleanup_request:
> +	i915_add_request_no_flush(request);
>  cleanup_unpin:
>  	intel_unpin_fb_obj(fb, crtc->primary->state->rotation);
>  cleanup_pending:
> -	if (!IS_ERR_OR_NULL(request))
> -		i915_add_request_no_flush(request);
>  	atomic_dec(&intel_crtc->unpin_work_count);
>  	mutex_unlock(&dev->struct_mutex);
>  cleanup:
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index e8d971e81491..1e57f48250ce 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -655,7 +655,7 @@ static int execlists_move_to_gpu(struct drm_i915_gem_request *req,
>  		struct drm_i915_gem_object *obj = vma->obj;
>  
>  		if (obj->active & other_rings) {
> -			ret = i915_gem_object_sync(obj, req->engine, &req);
> +			ret = i915_gem_object_sync(obj, req);
>  			if (ret)
>  				return ret;
>  		}
> diff --git a/drivers/gpu/drm/i915/intel_overlay.c b/drivers/gpu/drm/i915/intel_overlay.c
> index 8f1d4d9ef345..651efe4e468e 100644
> --- a/drivers/gpu/drm/i915/intel_overlay.c
> +++ b/drivers/gpu/drm/i915/intel_overlay.c
> @@ -229,11 +229,18 @@ static int intel_overlay_do_wait_request(struct intel_overlay *overlay,
>  	return 0;
>  }
>  
> +static struct drm_i915_gem_request *alloc_request(struct intel_overlay *overlay)
> +{
> +	struct drm_i915_private *dev_priv = overlay->i915;
> +	struct intel_engine_cs *engine = &dev_priv->engine[RCS];
> +
> +	return i915_gem_request_alloc(engine, dev_priv->kernel_context);
> +}
> +
>  /* overlay needs to be disable in OCMD reg */
>  static int intel_overlay_on(struct intel_overlay *overlay)
>  {
>  	struct drm_i915_private *dev_priv = overlay->i915;
> -	struct intel_engine_cs *engine = &dev_priv->engine[RCS];
>  	struct drm_i915_gem_request *req;
>  	struct intel_ring *ring;
>  	int ret;
> @@ -241,7 +248,7 @@ static int intel_overlay_on(struct intel_overlay *overlay)
>  	WARN_ON(overlay->active);
>  	WARN_ON(IS_I830(dev_priv) && !(dev_priv->quirks & QUIRK_PIPEA_FORCE));
>  
> -	req = i915_gem_request_alloc(engine, NULL);
> +	req = alloc_request(overlay);
>  	if (IS_ERR(req))
>  		return PTR_ERR(req);
>  
> @@ -268,7 +275,6 @@ static int intel_overlay_continue(struct intel_overlay *overlay,
>  				  bool load_polyphase_filter)
>  {
>  	struct drm_i915_private *dev_priv = overlay->i915;
> -	struct intel_engine_cs *engine = &dev_priv->engine[RCS];
>  	struct drm_i915_gem_request *req;
>  	struct intel_ring *ring;
>  	u32 flip_addr = overlay->flip_addr;
> @@ -285,7 +291,7 @@ static int intel_overlay_continue(struct intel_overlay *overlay,
>  	if (tmp & (1 << 17))
>  		DRM_DEBUG("overlay underrun, DOVSTA: %x\n", tmp);
>  
> -	req = i915_gem_request_alloc(engine, NULL);
> +	req = alloc_request(overlay);
>  	if (IS_ERR(req))
>  		return PTR_ERR(req);
>  
> @@ -338,7 +344,6 @@ static void intel_overlay_off_tail(struct intel_overlay *overlay)
>  static int intel_overlay_off(struct intel_overlay *overlay)
>  {
>  	struct drm_i915_private *dev_priv = overlay->i915;
> -	struct intel_engine_cs *engine = &dev_priv->engine[RCS];
>  	struct drm_i915_gem_request *req;
>  	struct intel_ring *ring;
>  	u32 flip_addr = overlay->flip_addr;
> @@ -352,7 +357,7 @@ static int intel_overlay_off(struct intel_overlay *overlay)
>  	 * of the hw. Do it in both cases */
>  	flip_addr |= OFC_UPDATE;
>  
> -	req = i915_gem_request_alloc(engine, NULL);
> +	req = alloc_request(overlay);
>  	if (IS_ERR(req))
>  		return PTR_ERR(req);
>  
> @@ -412,7 +417,6 @@ static int intel_overlay_recover_from_interrupt(struct intel_overlay *overlay)
>  static int intel_overlay_release_old_vid(struct intel_overlay *overlay)
>  {
>  	struct drm_i915_private *dev_priv = overlay->i915;
> -	struct intel_engine_cs *engine = &dev_priv->engine[RCS];
>  	int ret;
>  
>  	lockdep_assert_held(&dev_priv->drm.struct_mutex);
> @@ -428,7 +432,7 @@ static int intel_overlay_release_old_vid(struct intel_overlay *overlay)
>  		struct drm_i915_gem_request *req;
>  		struct intel_ring *ring;
>  
> -		req = i915_gem_request_alloc(engine, NULL);
> +		req = alloc_request(overlay);
>  		if (IS_ERR(req))
>  			return PTR_ERR(req);
>  
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Thread overview: 124+ messages
2016-07-25 17:31 Fix the vma leak Chris Wilson
2016-07-25 17:31 ` [PATCH 01/55] drm/i915: Reduce breadcrumb lock coverage for intel_engine_enable_signaling() Chris Wilson
2016-07-26  5:07   ` Joonas Lahtinen
2016-07-25 17:31 ` [PATCH 02/55] drm/i915: Prefer list_first_entry_or_null Chris Wilson
2016-07-25 17:31 ` [PATCH 03/55] drm/i915: Only clear the client pointer when tearing down the file Chris Wilson
2016-07-25 17:31 ` [PATCH 04/55] drm/i915: Only drop the batch-pool's object reference Chris Wilson
2016-07-25 17:31 ` [PATCH 05/55] drm/i915/cmdparser: Remove stray intel_engine_cs *ring Chris Wilson
2016-07-25 17:31 ` [PATCH 06/55] drm/i915: Use engine to refer to the user's BSD intel_engine_cs Chris Wilson
2016-07-25 17:31 ` [PATCH 07/55] drm/i915: Avoid using intel_engine_cs *ring for GPU error capture Chris Wilson
2016-07-26  4:59   ` Joonas Lahtinen
2016-07-26  8:19     ` Chris Wilson
2016-07-27 11:08       ` Joonas Lahtinen
2016-07-26 10:21     ` [PATCH v2] " Chris Wilson
2016-07-26 12:35       ` Joonas Lahtinen
2016-07-25 17:31 ` [PATCH 08/55] drm/i915: Remove stray intel_engine_cs ring identifiers from i915_gem.c Chris Wilson
2016-07-26  5:02   ` Joonas Lahtinen
2016-07-26  8:12     ` Chris Wilson
2016-07-27  6:12       ` Joonas Lahtinen
2016-07-25 17:31 ` [PATCH 09/55] drm/i915: Update a couple of hangcheck comments to talk about engines Chris Wilson
2016-07-25 17:31 ` [PATCH 10/55] drm/i915: Unify intel_logical_ring_emit and intel_ring_emit Chris Wilson
2016-07-25 17:31 ` [PATCH 11/55] drm/i915: Rename request->ringbuf to request->ring Chris Wilson
2016-07-25 17:31 ` [PATCH 12/55] drm/i915: Rename intel_context[engine].ringbuf Chris Wilson
2016-07-25 17:31 ` [PATCH 13/55] drm/i915: Rename struct intel_ringbuffer to struct intel_ring Chris Wilson
2016-07-25 17:31 ` [PATCH 14/55] drm/i915: Rename residual ringbuf parameters Chris Wilson
2016-07-25 17:31 ` [PATCH 15/55] drm/i915: Rename intel_pin_and_map_ring() Chris Wilson
2016-07-25 17:31 ` [PATCH 16/55] drm/i915: Remove obsolete engine->gpu_caches_dirty Chris Wilson
2016-07-26  5:06   ` Joonas Lahtinen
2016-07-25 17:31 ` [PATCH 17/55] drm/i915: Simplify request_alloc by returning the allocated request Chris Wilson
2016-07-26  5:09   ` Joonas Lahtinen [this message]
2016-07-25 17:31 ` [PATCH 18/55] drm/i915: Unify legacy/execlists emission of MI_BATCHBUFFER_START Chris Wilson
2016-07-25 17:31 ` [PATCH 19/55] drm/i915: Remove intel_ring_get_tail() Chris Wilson
2016-07-25 17:31 ` [PATCH 20/55] drm/i915: Convert engine->write_tail to operate on a request Chris Wilson
2016-07-25 17:32 ` [PATCH 21/55] drm/i915: Unify request submission Chris Wilson
2016-07-25 17:32 ` [PATCH 22/55] drm/i915/lrc: Update function names to match request flow Chris Wilson
2016-07-25 17:32 ` [PATCH 23/55] drm/i915: Stop passing caller's num_dwords to engine->semaphore.signal() Chris Wilson
2016-07-25 17:32 ` [PATCH 24/55] drm/i915: Reuse legacy breadcrumbs + tail emission Chris Wilson
2016-07-25 17:32 ` [PATCH 25/55] drm/i915/ringbuffer: Specialise SNB+ request emission for semaphores Chris Wilson
2016-07-25 17:32 ` [PATCH 26/55] drm/i915: Remove duplicate golden render state init from execlists Chris Wilson
2016-07-25 17:32 ` [PATCH 27/55] drm/i915: Refactor golden render state emission to unconfuse gcc Chris Wilson
2016-07-25 17:32 ` [PATCH 28/55] drm/i915: Unify legacy/execlists submit_execbuf callbacks Chris Wilson
2016-07-25 17:32 ` [PATCH 29/55] drm/i915: Simplify calling engine->sync_to Chris Wilson
2016-07-25 17:32 ` [PATCH 30/55] drm/i915: Rename engine->semaphore.sync_to, engine->sempahore.signal locals Chris Wilson
2016-07-25 17:32 ` [PATCH 31/55] drm/i915: Amalgamate GGTT/ppGTT vma debug list walkers Chris Wilson
2016-07-26  5:15   ` Joonas Lahtinen
2016-07-25 17:32 ` [PATCH 32/55] drm/i915: Split early global GTT initialisation Chris Wilson
2016-07-26  7:08   ` Joonas Lahtinen
2016-07-26  7:42     ` Chris Wilson
2016-07-27 10:20       ` Joonas Lahtinen
2016-07-27 10:34         ` Chris Wilson
2016-07-27 11:09           ` Joonas Lahtinen
2016-07-25 17:32 ` [PATCH 33/55] drm/i915: Store owning file on the i915_address_space Chris Wilson
2016-07-26  7:15   ` Joonas Lahtinen
2016-07-25 17:32 ` [PATCH 34/55] drm/i915: Count how many VMA are bound for an object Chris Wilson
2016-07-26  7:44   ` Joonas Lahtinen
2016-07-26  8:02     ` Chris Wilson
2016-07-25 17:32 ` [PATCH 35/55] drm/i915: Be more careful when unbinding vma Chris Wilson
2016-07-26  7:59   ` Joonas Lahtinen
2016-07-26  8:08     ` Chris Wilson
2016-07-25 17:32 ` [PATCH 36/55] drm/i915: Kill drop_pages() Chris Wilson
2016-07-25 17:32 ` [PATCH 37/55] drm/i915: Introduce i915_gem_active for request tracking Chris Wilson
2016-07-26  8:23   ` Joonas Lahtinen
2016-07-26  8:28     ` Chris Wilson
2016-07-28  7:21       ` Joonas Lahtinen
2016-07-25 17:32 ` [PATCH 38/55] drm/i915: Prepare i915_gem_active for annotations Chris Wilson
2016-07-26  8:50   ` Joonas Lahtinen
2016-07-26  9:03     ` Chris Wilson
2016-07-25 17:32 ` [PATCH 39/55] drm/i915: Mark up i915_gem_active for locking annotation Chris Wilson
2016-07-26  8:54   ` Joonas Lahtinen
2016-07-26  9:06     ` Chris Wilson
2016-07-28  7:26       ` Joonas Lahtinen
2016-07-25 17:32 ` [PATCH 40/55] drm/i915: Refactor blocking waits Chris Wilson
2016-07-27  6:04   ` Joonas Lahtinen
2016-07-27  7:04     ` Chris Wilson
2016-07-27 10:40       ` Joonas Lahtinen
2016-07-27 10:48         ` Chris Wilson
2016-07-27  7:07     ` Chris Wilson
2016-07-27 10:42       ` Joonas Lahtinen
2016-07-27 17:34         ` Chris Wilson
2016-07-28  6:40           ` Joonas Lahtinen
2016-07-25 17:32 ` [PATCH 41/55] drm/i915: Rename request->list to link for consistency Chris Wilson
2016-07-26  9:26   ` Joonas Lahtinen
2016-07-25 17:32 ` [PATCH 42/55] drm/i915: Remove obsolete i915_gem_object_flush_active() Chris Wilson
2016-07-26  9:31   ` Joonas Lahtinen
2016-07-26  9:47     ` Chris Wilson
2016-07-25 17:32 ` [PATCH 43/55] drm/i915: Refactor activity tracking for requests Chris Wilson
2016-07-27  7:40   ` Joonas Lahtinen
2016-07-27  7:57     ` Chris Wilson
2016-07-27 10:55       ` Joonas Lahtinen
2016-07-25 17:32 ` [PATCH 44/55] drm/i915: Track requests inside each intel_ring Chris Wilson
2016-07-26 10:10   ` Joonas Lahtinen
2016-07-26 10:15     ` Chris Wilson
2016-07-25 17:32 ` [PATCH 45/55] drm/i915: Convert intel_overlay to request tracking Chris Wilson
2016-07-27  8:12   ` Joonas Lahtinen
2016-07-27  8:22     ` Chris Wilson
2016-07-27  8:34       ` Chris Wilson
2016-07-27 10:59       ` Joonas Lahtinen
2016-07-25 17:32 ` [PATCH 46/55] drm/i915: Move the special case wait-request handling to its one caller Chris Wilson
2016-07-26 12:39   ` Joonas Lahtinen
2016-07-25 17:32 ` [PATCH 47/55] drm/i915: Disable waitboosting for a saturated engine Chris Wilson
2016-07-26 12:40   ` Joonas Lahtinen
2016-07-26 13:11     ` Chris Wilson
2016-07-25 17:32 ` [PATCH 48/55] drm/i915: s/__i915_wait_request/i915_wait_request/ Chris Wilson
2016-07-26 12:42   ` Joonas Lahtinen
2016-07-25 17:32 ` [PATCH 49/55] drm/i915: Double check activity before relocations Chris Wilson
2016-07-26 12:45   ` Joonas Lahtinen
2016-07-25 17:32 ` [PATCH 50/55] drm/i915: Move request list retirement to i915_gem_request.c Chris Wilson
2016-07-26 12:48   ` Joonas Lahtinen
2016-07-26 13:39     ` Chris Wilson
2016-07-25 17:32 ` [PATCH 51/55] drm/i915: i915_vma_move_to_active prep patch Chris Wilson
2016-07-26 12:53   ` Joonas Lahtinen
2016-07-25 17:32 ` [PATCH 52/55] drm/i915: Track active vma requests Chris Wilson
2016-07-27  9:47   ` Joonas Lahtinen
2016-07-27 10:15     ` Chris Wilson
2016-07-25 17:32 ` [PATCH 53/55] drm/i915: Release vma when the handle is closed Chris Wilson
2016-07-27 10:00   ` Joonas Lahtinen
2016-07-27 10:13     ` Chris Wilson
2016-07-28  7:16       ` Joonas Lahtinen
2016-07-25 17:32 ` [PATCH 54/55] drm/i915: Mark the context and address space as closed Chris Wilson
2016-07-27 10:13   ` Joonas Lahtinen
2016-07-27 10:27     ` Chris Wilson
2016-07-25 17:32 ` [PATCH 55/55] Revert "drm/i915: Clean up associated VMAs on context destruction" Chris Wilson
2016-07-27 10:18   ` Joonas Lahtinen
2016-07-26  5:18 ` ✗ Ro.CI.BAT: warning for series starting with [01/55] drm/i915: Reduce breadcrumb lock coverage for intel_engine_enable_signaling() Patchwork
2016-07-26 10:48 ` ✗ Ro.CI.BAT: failure for series starting with [01/55] drm/i915: Reduce breadcrumb lock coverage for intel_engine_enable_signaling() (rev2) Patchwork
