public inbox for intel-gfx@lists.freedesktop.org
From: Daniel Vetter <daniel@ffwll.ch>
To: Chris Wilson <chris@chris-wilson.co.uk>
Cc: intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH 13/16] drm/i915: Free RPS boosts for all laggards
Date: Thu, 21 May 2015 14:56:37 +0200	[thread overview]
Message-ID: <20150521125637.GC15256@phenom.ffwll.local> (raw)
In-Reply-To: <1430138487-22541-14-git-send-email-chris@chris-wilson.co.uk>

On Mon, Apr 27, 2015 at 01:41:24PM +0100, Chris Wilson wrote:
> If the client stalls on a congested request, chosen to be 20ms old to
> match throttling, allow the client a free RPS boost.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

Merged up to this one with an s/rq/req/ per the patch I've just submitted.
I'm not sure about the two follow-up ilk patches, maybe hunt for a few
acks? Jesse might still care ;-)

Thanks, Daniel

> ---
>  drivers/gpu/drm/i915/i915_gem.c  |  2 +-
>  drivers/gpu/drm/i915/intel_drv.h |  3 ++-
>  drivers/gpu/drm/i915/intel_pm.c  | 19 +++++++++++++++----
>  3 files changed, 18 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 43ac75834e61..83bccb9f62d6 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -1247,7 +1247,7 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
>  		jiffies + nsecs_to_jiffies_timeout((u64)*timeout) : 0;
>  
>  	if (INTEL_INFO(dev_priv)->gen >= 6)
> -		gen6_rps_boost(dev_priv, rps);
> +		gen6_rps_boost(dev_priv, rps, req->emitted_jiffies);
>  
>  	/* Record current time in case interrupted by signal, or wedged */
>  	trace_i915_gem_request_wait_begin(req);
> diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
> index 9eb0a911343a..34cfa61f3321 100644
> --- a/drivers/gpu/drm/i915/intel_drv.h
> +++ b/drivers/gpu/drm/i915/intel_drv.h
> @@ -1361,7 +1361,8 @@ void gen6_rps_busy(struct drm_i915_private *dev_priv);
>  void gen6_rps_reset_ei(struct drm_i915_private *dev_priv);
>  void gen6_rps_idle(struct drm_i915_private *dev_priv);
>  void gen6_rps_boost(struct drm_i915_private *dev_priv,
> -		    struct intel_rps_client *rps);
> +		    struct intel_rps_client *rps,
> +		    unsigned long submitted);
>  void intel_queue_rps_boost_for_request(struct drm_device *dev,
>  				       struct drm_i915_gem_request *rq);
>  void ilk_wm_get_hw_state(struct drm_device *dev);
> diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> index dcc52f928650..850e02e1c7eb 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -4122,10 +4122,17 @@ void gen6_rps_idle(struct drm_i915_private *dev_priv)
>  }
>  
>  void gen6_rps_boost(struct drm_i915_private *dev_priv,
> -		    struct intel_rps_client *rps)
> +		    struct intel_rps_client *rps,
> +		    unsigned long submitted)
>  {
>  	u32 val;
>  
> +	/* Force a RPS boost (and don't count it against the client) if
> +	 * the GPU is severely congested.
> +	 */
> +	if (rps && time_after(jiffies, submitted + msecs_to_jiffies(20)))
> +		rps = NULL;
> +
>  	mutex_lock(&dev_priv->rps.hw_lock);
>  	val = dev_priv->rps.max_freq_softlimit;
>  	if (dev_priv->rps.enabled &&
> @@ -6825,11 +6832,12 @@ struct request_boost {
>  static void __intel_rps_boost_work(struct work_struct *work)
>  {
>  	struct request_boost *boost = container_of(work, struct request_boost, work);
> +	struct drm_i915_gem_request *rq = boost->rq;
>  
> -	if (!i915_gem_request_completed(boost->rq, true))
> -		gen6_rps_boost(to_i915(boost->rq->ring->dev), NULL);
> +	if (!i915_gem_request_completed(rq, true))
> +		gen6_rps_boost(to_i915(rq->ring->dev), NULL, rq->emitted_jiffies);
>  
> -	i915_gem_request_unreference__unlocked(boost->rq);
> +	i915_gem_request_unreference__unlocked(rq);
>  	kfree(boost);
>  }
>  
> @@ -6841,6 +6849,9 @@ void intel_queue_rps_boost_for_request(struct drm_device *dev,
>  	if (rq == NULL || INTEL_INFO(dev)->gen < 6)
>  		return;
>  
> +	if (i915_gem_request_completed(rq, true))
> +		return;
> +
>  	boost = kmalloc(sizeof(*boost), GFP_ATOMIC);
>  	if (boost == NULL)
>  		return;
> -- 
> 2.1.4
> 
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/intel-gfx

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Thread overview: 38+ messages
2015-04-27 12:41 RPS tuning Chris Wilson
2015-04-27 12:41 ` [PATCH 01/16] drm/i915: Drop i915_gem_obj_is_pinned() from set-cache-level Chris Wilson
2015-04-29 14:50   ` Tvrtko Ursulin
2015-04-29 15:15     ` Chris Wilson
2015-04-27 12:41 ` [PATCH 02/16] drm/i915: Only remove objects pinned to the display from the available aperture Chris Wilson
2015-04-29 15:05   ` Tvrtko Ursulin
2015-04-29 15:48     ` Chris Wilson
2015-04-27 12:41 ` [PATCH 03/16] drm/i915: Remove domain flubbing from i915_gem_object_finish_gpu() Chris Wilson
2015-05-11 16:43   ` Daniel Vetter
2015-04-27 12:41 ` [PATCH 04/16] drm/i915: Ensure cache flushes prior to doing CS flips Chris Wilson
2015-05-11 16:46   ` Daniel Vetter
2015-04-27 12:41 ` [PATCH 05/16] drm/i915: Fix race on unreferencing the wrong mmio-flip-request Chris Wilson
2015-05-11 16:51   ` Daniel Vetter
2015-05-11 20:23     ` Chris Wilson
2015-05-12  8:43       ` Daniel Vetter
2015-04-27 12:41 ` [PATCH 06/16] drm/i915: Implement inter-engine read-read optimisations Chris Wilson
2015-04-29 13:51   ` Tvrtko Ursulin
2015-04-27 12:41 ` [PATCH 07/16] drm/i915: Inline check required for object syncing prior to execbuf Chris Wilson
2015-04-29 14:03   ` Tvrtko Ursulin
2015-04-29 14:22     ` Chris Wilson
2015-04-27 12:41 ` [PATCH 08/16] drm/i915: Add RPS thresholds to debugfs/i915_frequency_info Chris Wilson
2015-05-04 14:36   ` Daniel Vetter
2015-04-27 12:41 ` [PATCH 09/16] drm/i915: Limit ring synchronisation (sw sempahores) RPS boosts Chris Wilson
2015-05-04 14:38   ` Daniel Vetter
2015-05-04 14:46     ` Daniel Vetter
2015-04-27 12:41 ` [PATCH 10/16] drm/i915: Limit mmio flip " Chris Wilson
2015-04-27 12:41 ` [PATCH 11/16] drm/i915: Convert RPS tracking to a intel_rps_client struct Chris Wilson
2015-04-27 12:41 ` [PATCH 12/16] drm/i915: Don't downclock whilst we have clients waiting for GPU results Chris Wilson
2015-04-27 12:41 ` [PATCH 13/16] drm/i915: Free RPS boosts for all laggards Chris Wilson
2015-05-21 12:50   ` Daniel Vetter
2015-05-21 12:56   ` Daniel Vetter [this message]
2015-04-27 12:41 ` [PATCH 14/16] drm/i915: Make the RPS interface gen agnostic Chris Wilson
2015-04-27 12:41 ` [PATCH 15/16] drm/i915, intel_ips: Enable GPU wait-boosting with IPS Chris Wilson
2015-04-27 12:41 ` [PATCH 16/16] drm/i915: Allow RPS waitboosting to use max GPU frequency Chris Wilson
2015-05-04 14:51   ` Daniel Vetter
2015-05-04 14:58     ` Chris Wilson
2015-05-21 12:55   ` Daniel Vetter
2015-05-21 13:05     ` Chris Wilson
