public inbox for intel-gfx@lists.freedesktop.org
From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>, intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH 07/11] drm/i915: i915_vma_move_to_active prep patch
Date: Thu, 17 Dec 2015 12:04:26 +0000	[thread overview]
Message-ID: <5672A4CA.5010900@linux.intel.com> (raw)
In-Reply-To: <1450093012-14955-7-git-send-email-chris@chris-wilson.co.uk>


On 14/12/15 11:36, Chris Wilson wrote:
> This patch is broken out of the next just to remove the code motion from
> that patch and make it more readable. What we do here is move
> i915_vma_move_to_active() to i915_gem_execbuffer.c and put the three
> stages (read, write, fenced) together so that future modifications to
> active handling are all located in the same spot. The importance of this
> is that we can more simply control the order in which the requests are
> placed in the retirement list (i.e. control the order in which we
> retire and so control the lifetimes to avoid having to hold onto
> references).
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>   drivers/gpu/drm/i915/i915_drv.h              |  3 +-
>   drivers/gpu/drm/i915/i915_gem.c              | 15 -------
>   drivers/gpu/drm/i915/i915_gem_context.c      |  7 ++--
>   drivers/gpu/drm/i915/i915_gem_execbuffer.c   | 63 ++++++++++++++++++----------
>   drivers/gpu/drm/i915/i915_gem_render_state.c |  2 +-
>   5 files changed, 49 insertions(+), 41 deletions(-)

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko
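
The lifetime control mentioned in the commit message rests on the
one-reference-per-active-object scheme that the moved helper preserves: a
reference is taken only when the object first becomes active on any engine,
and dropped only when the last engine retires it. A minimal standalone sketch
of that scheme (a simplified model for illustration, not the actual driver
code; the struct and function names here are hypothetical):

```c
#include <assert.h>

/* Hypothetical, simplified model of the "active" bookkeeping the patch
 * consolidates: an object holds one extra reference while any engine has
 * it on an active list, tracked as a bitmask of engine ids. */

struct fake_obj {
	unsigned active;	/* bitmask of engines with outstanding work */
	int refcount;		/* stand-in for the GEM reference count */
	int dirty;
};

/* Mirrors the shape of i915_vma_move_to_active() after the patch:
 * take a reference only on the first transition to active. */
void move_to_active(struct fake_obj *obj, unsigned engine_id)
{
	obj->dirty = 1;	/* be paranoid, as the real code does */
	if (obj->active == 0)
		obj->refcount++;	/* drm_gem_object_reference() */
	obj->active |= 1u << engine_id;
}

/* Retire from one engine; drop the reference only when the last
 * engine's bit clears, mirroring the retirement side. */
void retire(struct fake_obj *obj, unsigned engine_id)
{
	obj->active &= ~(1u << engine_id);
	if (obj->active == 0)
		obj->refcount--;	/* drm_gem_object_unreference() */
}
```

The point is that the reference changes only on the zero/nonzero transitions
of the engine bitmask, so the order in which requests retire determines when
the final unreference happens.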

>
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index b32a00f60e98..eb775eb1c693 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -2775,7 +2775,8 @@ int __must_check i915_mutex_lock_interruptible(struct drm_device *dev);
>   int i915_gem_object_sync(struct drm_i915_gem_object *obj,
>   			 struct drm_i915_gem_request *to);
>   void i915_vma_move_to_active(struct i915_vma *vma,
> -			     struct drm_i915_gem_request *req);
> +			     struct drm_i915_gem_request *req,
> +			     unsigned flags);
>   int i915_gem_dumb_create(struct drm_file *file_priv,
>   			 struct drm_device *dev,
>   			 struct drm_mode_create_dumb *args);
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 144e92df8137..8a824c5d5348 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -2016,21 +2016,6 @@ void *i915_gem_object_pin_vmap(struct drm_i915_gem_object *obj)
>   	return obj->vmapping;
>   }
>
> -void i915_vma_move_to_active(struct i915_vma *vma,
> -			     struct drm_i915_gem_request *req)
> -{
> -	struct drm_i915_gem_object *obj = vma->obj;
> -	struct intel_engine_cs *engine = req->engine;
> -
> -	/* Add a reference if we're newly entering the active list. */
> -	if (obj->active == 0)
> -		drm_gem_object_reference(&obj->base);
> -	obj->active |= intel_engine_flag(engine);
> -
> -	i915_gem_request_mark_active(req, &obj->last_read[engine->id]);
> -	list_move_tail(&vma->vm_link, &vma->vm->active_list);
> -}
> -
>   static void
>   i915_gem_object_retire__fence(struct drm_i915_gem_request_active *active,
>   			      struct drm_i915_gem_request *req)
> diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
> index dcb4603a7f03..c4a8a64cd1b2 100644
> --- a/drivers/gpu/drm/i915/i915_gem_context.c
> +++ b/drivers/gpu/drm/i915/i915_gem_context.c
> @@ -766,8 +766,8 @@ static int do_switch(struct drm_i915_gem_request *req)
>   	 * MI_SET_CONTEXT instead of when the next seqno has completed.
>   	 */
>   	if (from != NULL) {
> -		from->legacy_hw_ctx.rcs_state->base.read_domains = I915_GEM_DOMAIN_INSTRUCTION;
> -		i915_vma_move_to_active(i915_gem_obj_to_ggtt(from->legacy_hw_ctx.rcs_state), req);
> +		struct drm_i915_gem_object *obj = from->legacy_hw_ctx.rcs_state;
> +
>   		/* As long as MI_SET_CONTEXT is serializing, ie. it flushes the
>   		 * whole damn pipeline, we don't need to explicitly mark the
>   		 * object dirty. The only exception is that the context must be
> @@ -775,7 +775,8 @@ static int do_switch(struct drm_i915_gem_request *req)
>   		 * able to defer doing this until we know the object would be
>   		 * swapped, but there is no way to do that yet.
>   		 */
> -		from->legacy_hw_ctx.rcs_state->dirty = 1;
> +		obj->base.read_domains = I915_GEM_DOMAIN_INSTRUCTION;
> +		i915_vma_move_to_active(i915_gem_obj_to_ggtt(obj), req, 0);
>
>   		/* obj is kept alive until the next request by its active ref */
>   		i915_gem_object_ggtt_unpin(from->legacy_hw_ctx.rcs_state);
> diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> index 6788f71ad989..6de8681bb64c 100644
> --- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> +++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> @@ -1064,6 +1064,44 @@ i915_gem_validate_context(struct drm_device *dev, struct drm_file *file,
>   	return ctx;
>   }
>
> +void i915_vma_move_to_active(struct i915_vma *vma,
> +			     struct drm_i915_gem_request *req,
> +			     unsigned flags)
> +{
> +	struct drm_i915_gem_object *obj = vma->obj;
> +	const unsigned engine = req->engine->id;
> +
> +	RQ_BUG_ON(!drm_mm_node_allocated(&vma->node));
> +
> +	obj->dirty = 1; /* be paranoid  */
> +
> +	/* Add a reference if we're newly entering the active list. */
> +	if (obj->active == 0)
> +		drm_gem_object_reference(&obj->base);
> +	obj->active |= 1 << engine;
> +	i915_gem_request_mark_active(req, &obj->last_read[engine]);
> +
> +	if (flags & EXEC_OBJECT_WRITE) {
> +		i915_gem_request_mark_active(req, &obj->last_write);
> +
> +		intel_fb_obj_invalidate(obj, ORIGIN_CS);
> +
> +		/* update for the implicit flush after a batch */
> +		obj->base.write_domain &= ~I915_GEM_GPU_DOMAINS;
> +	}
> +
> +	if (flags & EXEC_OBJECT_NEEDS_FENCE) {
> +		i915_gem_request_mark_active(req, &obj->last_fence);
> +		if (flags & __EXEC_OBJECT_HAS_FENCE) {
> +			struct drm_i915_private *dev_priv = req->i915;
> +			list_move_tail(&dev_priv->fence_regs[obj->fence_reg].lru_list,
> +				       &dev_priv->mm.fence_list);
> +		}
> +	}
> +
> +	list_move_tail(&vma->vm_link, &vma->vm->active_list);
> +}
> +
>   static void
>   i915_gem_execbuffer_move_to_active(struct list_head *vmas,
>   				   struct drm_i915_gem_request *req)
> @@ -1071,35 +1109,18 @@ i915_gem_execbuffer_move_to_active(struct list_head *vmas,
>   	struct i915_vma *vma;
>
>   	list_for_each_entry(vma, vmas, exec_list) {
> -		struct drm_i915_gem_exec_object2 *entry = vma->exec_entry;
>   		struct drm_i915_gem_object *obj = vma->obj;
>   		u32 old_read = obj->base.read_domains;
>   		u32 old_write = obj->base.write_domain;
>
> -		obj->dirty = 1; /* be paranoid  */
>   		obj->base.write_domain = obj->base.pending_write_domain;
> -		if (obj->base.write_domain == 0)
> +		if (obj->base.write_domain)
> +			vma->exec_entry->flags |= EXEC_OBJECT_WRITE;
> +		else
>   			obj->base.pending_read_domains |= obj->base.read_domains;
>   		obj->base.read_domains = obj->base.pending_read_domains;
>
> -		i915_vma_move_to_active(vma, req);
> -		if (obj->base.write_domain) {
> -			i915_gem_request_mark_active(req, &obj->last_write);
> -
> -			intel_fb_obj_invalidate(obj, ORIGIN_CS);
> -
> -			/* update for the implicit flush after a batch */
> -			obj->base.write_domain &= ~I915_GEM_GPU_DOMAINS;
> -		}
> -		if (entry->flags & EXEC_OBJECT_NEEDS_FENCE) {
> -			i915_gem_request_mark_active(req, &obj->last_fence);
> -			if (entry->flags & __EXEC_OBJECT_HAS_FENCE) {
> -				struct drm_i915_private *dev_priv = req->i915;
> -				list_move_tail(&dev_priv->fence_regs[obj->fence_reg].lru_list,
> -					       &dev_priv->mm.fence_list);
> -			}
> -		}
> -
> +		i915_vma_move_to_active(vma, req, vma->exec_entry->flags);
>   		trace_i915_gem_object_change_domain(obj, old_read, old_write);
>   	}
>   }
> diff --git a/drivers/gpu/drm/i915/i915_gem_render_state.c b/drivers/gpu/drm/i915/i915_gem_render_state.c
> index 630e748c991d..d5a87c4ff0f7 100644
> --- a/drivers/gpu/drm/i915/i915_gem_render_state.c
> +++ b/drivers/gpu/drm/i915/i915_gem_render_state.c
> @@ -221,7 +221,7 @@ int i915_gem_render_state_init(struct drm_i915_gem_request *req)
>   			goto out;
>   	}
>
> -	i915_vma_move_to_active(i915_gem_obj_to_ggtt(so.obj), req);
> +	i915_vma_move_to_active(i915_gem_obj_to_ggtt(so.obj), req, 0);
>
>   out:
>   	i915_gem_render_state_fini(&so);
>
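
One readability win in the execbuffer hunk above is that the write decision
is now folded into the exec-entry flags before a single call into the helper,
instead of being open-coded after it. The domain-to-flag folding can be
sketched in isolation (a simplified model for illustration; the struct names
are hypothetical and the EXEC_OBJECT_WRITE value is assumed from the uapi
headers of this era):

```c
#include <assert.h>

#define EXEC_OBJECT_WRITE (1u << 2)	/* assumed uapi value, for illustration */

struct fake_entry {
	unsigned flags;
};

struct fake_base {
	unsigned read_domains, write_domain;
	unsigned pending_read_domains, pending_write_domain;
};

/* Mirrors the reworked loop in i915_gem_execbuffer_move_to_active():
 * latch the pending write domain, record writers as an exec-entry flag,
 * and keep the old read domains visible for pure readers, so that one
 * i915_vma_move_to_active(vma, req, flags) call can handle everything. */
void compute_flags(struct fake_base *b, struct fake_entry *e)
{
	b->write_domain = b->pending_write_domain;
	if (b->write_domain)
		e->flags |= EXEC_OBJECT_WRITE;
	else
		b->pending_read_domains |= b->read_domains;
	b->read_domains = b->pending_read_domains;
}
```

With the flag computed up front, the read, write and fence stages all live in
the helper, which is exactly what lets a later patch reorder how requests
reach the retirement list.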
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

Thread overview: 37+ messages
2015-12-14 11:36 [PATCH 01/11] drm/i915: Introduce drm_i915_gem_request_node for request tracking Chris Wilson
2015-12-14 11:36 ` [PATCH 02/11] drm/i915: Refactor activity tracking for requests Chris Wilson
2015-12-16 17:16   ` Tvrtko Ursulin
2015-12-16 17:31     ` Chris Wilson
2015-12-14 11:36 ` [PATCH 03/11] drm/i915: Rename vma->*_list to *_link for consistency Chris Wilson
2015-12-17 11:14   ` Tvrtko Ursulin
2015-12-17 11:24     ` Chris Wilson
2015-12-17 11:45       ` Chris Wilson
2015-12-14 11:36 ` [PATCH 04/11] drm/i915: Amalgamate GGTT/ppGTT vma debug list walkers Chris Wilson
2015-12-17 11:21   ` Tvrtko Ursulin
2015-12-14 11:36 ` [PATCH 05/11] drm/i915: Reduce the pointer dance of i915_is_ggtt() Chris Wilson
2015-12-17 11:31   ` Tvrtko Ursulin
2015-12-14 11:36 ` [PATCH 06/11] drm/i915: Store owning file on the i915_address_space Chris Wilson
2015-12-17 11:52   ` Tvrtko Ursulin
2015-12-17 13:25     ` Chris Wilson
2015-12-14 11:36 ` [PATCH 07/11] drm/i915: i915_vma_move_to_active prep patch Chris Wilson
2015-12-17 12:04   ` Tvrtko Ursulin [this message]
2015-12-14 11:36 ` [PATCH 08/11] drm/i915: Track active vma requests Chris Wilson
2015-12-17 12:26   ` Tvrtko Ursulin
2015-12-14 11:36 ` [PATCH 09/11] drm/i915: Release vma when the handle is closed Chris Wilson
2015-12-17 13:46   ` Tvrtko Ursulin
2015-12-17 14:11     ` Chris Wilson
2015-12-17 14:21     ` Chris Wilson
2015-12-17 14:32       ` Tvrtko Ursulin
2015-12-14 11:36 ` [PATCH 10/11] drm/i915: Mark the context and address space as closed Chris Wilson
2015-12-17 12:37   ` Tvrtko Ursulin
2015-12-17 12:39     ` Tvrtko Ursulin
2015-12-17 12:48     ` Chris Wilson
2015-12-17 13:26       ` Tvrtko Ursulin
2015-12-17 14:15   ` Tvrtko Ursulin
2015-12-17 14:26     ` Chris Wilson
2015-12-17 14:35       ` Tvrtko Ursulin
2015-12-14 11:36 ` [PATCH 11/11] Revert "drm/i915: Clean up associated VMAs on context destruction" Chris Wilson
2015-12-14 15:58 ` [PATCH 01/11] drm/i915: Introduce drm_i915_gem_request_node for request tracking Tvrtko Ursulin
2015-12-14 16:11   ` Chris Wilson
2015-12-15 10:51 ` [PATCH v2] drm/i915: Introduce drm_i915_gem_request_active " Chris Wilson
2015-12-17 14:48 ` ✗ failure: UK.CI.checkpatch.pl Patchwork
