public inbox for intel-gfx@lists.freedesktop.org
From: "Ville Syrjälä" <ville.syrjala@linux.intel.com>
To: Daniel Vetter <daniel@ffwll.ch>
Cc: intel-gfx@lists.freedesktop.org
Subject: Re: [RFC][PATCH 3/3] drm/i915: Make RPS enable immediate
Date: Tue, 24 May 2016 13:14:53 +0300	[thread overview]
Message-ID: <20160524101453.GA4329@intel.com> (raw)
In-Reply-To: <20160524082719.GP27098@phenom.ffwll.local>

On Tue, May 24, 2016 at 10:27:19AM +0200, Daniel Vetter wrote:
> On Mon, May 23, 2016 at 05:09:43PM +0300, ville.syrjala@linux.intel.com wrote:
> > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > 
> > On SNB (at least) it's dangerous to hang the GPU with an infinite
> > batch buffer loop while RPS is disabled. The only thing that keeps
> > the system going in such circumstances are the internal RPS timers,
> > so we should never feed the GPU with RPS disabled unless we want to
> > risk a total system hang.
> > 
> > Previously we didn't fully disable RPS, but that changed in
> > commit 2030d684f7b3 ("drm/i915/bxt: Explicitly clear the Turbo control register")
> > which is probably why we didn't see the problem so often before. But
> > even before that we were at the mercy of the BIOS for the initial
> > RPS state, so if the BIOS didn't enable RPS a GPU hang immediately
> > upon boot could have been fatal.
> > 
> > To avoid the problems let's just make the RPS enable immediate.
> > This renders the delayed_resume_work useless actually. We could perhaps
> > just move the ring freq table initialization to the work and do the
> > other stuff synchronously?
> > 
> > Fixes: 2030d684f7b3 ("drm/i915/bxt: Explicitly clear the Turbo control register")
> > Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > ---
> > This is more an RFC at this point. Perhaps we might want to keep the delayed
> > work but just for the ring freq table update (which is the main reason this whole
> > thing exists in the first place). Another factor is that wait_for() is not
> > exactly optimal currently, so it makes the operation slower than it needs to
> > be. Would need some hard numbers to see if it's worth keeping the delayed work
> > or not I suppose.
> 
> Loading the ring freq tables takes forever, so definitely want to keep
> that.

It only takes forever because wait_for() sucks.

With the current sleep duration of 1000-2000 us:
[  308.231364] rps init took 5533 us
[  308.266322] ring freq init took 34952 us

With the sleep duration reduced to 100-200 us:
[  155.367588] rps init took 679 us
[  155.371100] ring freq init took 3509 us

So looks like someone just failed to root cause the slowness, and then
went on to optimize the wrong thing.

> I'd just split rps setup in two parts and push the schedule_work
> down a bit. Long term we can go overboard with async (and maybe only
> stall for all the GT setup in the very first batchbuffer).
> -Daniel
> 
> > 
> >  drivers/gpu/drm/i915/intel_pm.c | 1 +
> >  1 file changed, 1 insertion(+)
> > 
> > diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> > index 817c84c22782..e65c5c2b9b4e 100644
> > --- a/drivers/gpu/drm/i915/intel_pm.c
> > +++ b/drivers/gpu/drm/i915/intel_pm.c
> > @@ -6505,6 +6505,7 @@ void intel_enable_gt_powersave(struct drm_i915_private *dev_priv)
> >  		if (schedule_delayed_work(&dev_priv->rps.delayed_resume_work,
> >  					   round_jiffies_up_relative(HZ)))
> >  			intel_runtime_pm_get_noresume(dev_priv);
> > +		flush_delayed_work(&dev_priv->rps.delayed_resume_work);
> >  	}
> >  }
> >  
> > -- 
> > 2.7.4
> > 
> > _______________________________________________
> > Intel-gfx mailing list
> > Intel-gfx@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/intel-gfx
> 
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

-- 
Ville Syrjälä
Intel OTC


Thread overview: 14+ messages
2016-05-23 14:09 [PATCH 1/3] drm/i915: Never fully mask the the EI up rps interrupt on SNB/IVB ville.syrjala
2016-05-23 14:09 ` [PATCH 2/3] drm/i915: Don't frob with RPS around GPU reset ville.syrjala
2016-05-23 14:32   ` Chris Wilson
2016-05-24  8:24     ` Daniel Vetter
2016-05-24  8:36       ` Chris Wilson
2016-05-23 14:09 ` [RFC][PATCH 3/3] drm/i915: Make RPS enable immediate ville.syrjala
2016-05-23 14:33   ` Chris Wilson
2016-05-24  8:27   ` Daniel Vetter
2016-05-24 10:14     ` Ville Syrjälä [this message]
2016-05-24 10:43       ` Chris Wilson
2016-05-23 14:30 ` [PATCH 1/3] drm/i915: Never fully mask the the EI up rps interrupt on SNB/IVB Chris Wilson
2016-05-23 14:34   ` Ville Syrjälä
2016-05-23 14:42 ` [PATCH v2 " ville.syrjala
2016-05-23 16:29 ` ✗ Ro.CI.BAT: warning for series starting with [v2,1/3] drm/i915: Never fully mask the the EI up rps interrupt on SNB/IVB (rev2) Patchwork
