From: Daniel Vetter <daniel@ffwll.ch>
To: "Ville Syrjälä" <ville.syrjala@linux.intel.com>
Cc: intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH 2/5] drm/i915: Initialise min/max frequencies before updating RPS registers
Date: Thu, 7 Nov 2013 19:13:54 +0100 [thread overview]
Message-ID: <20131107181354.GX14082@phenom.ffwll.local> (raw)
In-Reply-To: <20131107174348.GZ5986@intel.com>
On Thu, Nov 07, 2013 at 07:43:48PM +0200, Ville Syrjälä wrote:
> On Wed, Nov 06, 2013 at 01:56:26PM -0200, Rodrigo Vivi wrote:
> > From: Chris Wilson <chris@chris-wilson.co.uk>
> >
> > The RPS register writing routines use the current value of min/max to
> > set certain limits and interrupt gating. If we set those afterwards, we
> > risk setting up the hw incorrectly and losing power management events,
> > and worse, trigger some internal assertions.
> >
> > Reorder the calling sequences to be correct, and remove the then
> > unrequired clamping from inside set_rps(). And for a bonus, fix the bug
> > of calling gen6_set_rps() from Valleyview.
> >
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > Signed-off-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
> > Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > CC: Ville Syrjälä <ville.syrjala@linux.intel.com>
>
> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
>
> > ---
> > drivers/gpu/drm/i915/i915_debugfs.c | 2 +-
> > drivers/gpu/drm/i915/i915_sysfs.c | 16 ++++++++--------
> > drivers/gpu/drm/i915/intel_pm.c | 19 +++++--------------
> > 3 files changed, 14 insertions(+), 23 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
> > index 7008aac..5b28602 100644
> > --- a/drivers/gpu/drm/i915/i915_debugfs.c
> > +++ b/drivers/gpu/drm/i915/i915_debugfs.c
> > @@ -2758,7 +2758,7 @@ i915_max_freq_set(void *data, u64 val)
> > if (IS_VALLEYVIEW(dev)) {
> > val = vlv_freq_opcode(dev_priv->mem_freq, val);
> > dev_priv->rps.max_delay = val;
> > - gen6_set_rps(dev, val);
> > + valleyview_set_rps(dev, val);
> > } else {
> > do_div(val, GT_FREQUENCY_MULTIPLIER);
> > dev_priv->rps.max_delay = val;
> > diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c
> > index cef38fd..46291c4 100644
> > --- a/drivers/gpu/drm/i915/i915_sysfs.c
> > +++ b/drivers/gpu/drm/i915/i915_sysfs.c
> > @@ -342,15 +342,15 @@ static ssize_t gt_max_freq_mhz_store(struct device *kdev,
> > DRM_DEBUG("User requested overclocking to %d\n",
> > val * GT_FREQUENCY_MULTIPLIER);
> >
> > + dev_priv->rps.max_delay = val;
> > +
> > if (dev_priv->rps.cur_delay > val) {
> > - if (IS_VALLEYVIEW(dev_priv->dev))
> > - valleyview_set_rps(dev_priv->dev, val);
> > + if (IS_VALLEYVIEW(dev))
> > + valleyview_set_rps(dev, val);
> > else
> > - gen6_set_rps(dev_priv->dev, val);
> > + gen6_set_rps(dev, val);
Extracting all these IS_VLV blocks and having one abstract entry point
would be a nice follow-up ...
Queued for -next, thanks for the patch.
-Daniel
> > }
> >
> > - dev_priv->rps.max_delay = val;
> > -
> > mutex_unlock(&dev_priv->rps.hw_lock);
> >
> > return count;
> > @@ -411,15 +411,15 @@ static ssize_t gt_min_freq_mhz_store(struct device *kdev,
> > return -EINVAL;
> > }
> >
> > + dev_priv->rps.min_delay = val;
> > +
> > if (dev_priv->rps.cur_delay < val) {
> > if (IS_VALLEYVIEW(dev))
> > valleyview_set_rps(dev, val);
> > else
> > - gen6_set_rps(dev_priv->dev, val);
> > + gen6_set_rps(dev, val);
> > }
> >
> > - dev_priv->rps.min_delay = val;
> > -
> > mutex_unlock(&dev_priv->rps.hw_lock);
> >
> > return count;
> > diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> > index 09ac9e7..830865e 100644
> > --- a/drivers/gpu/drm/i915/intel_pm.c
> > +++ b/drivers/gpu/drm/i915/intel_pm.c
> > @@ -3414,26 +3414,19 @@ static void ironlake_disable_drps(struct drm_device *dev)
> > * ourselves, instead of doing a rmw cycle (which might result in us clearing
> > * all limits and the gpu stuck at whatever frequency it is at atm).
> > */
> > -static u32 gen6_rps_limits(struct drm_i915_private *dev_priv, u8 *val)
> > +static u32 gen6_rps_limits(struct drm_i915_private *dev_priv, u8 val)
> > {
> > u32 limits;
> >
> > - limits = 0;
> > -
> > - if (*val >= dev_priv->rps.max_delay)
> > - *val = dev_priv->rps.max_delay;
> > - limits |= dev_priv->rps.max_delay << 24;
> > -
> > /* Only set the down limit when we've reached the lowest level to avoid
> > * getting more interrupts, otherwise leave this clear. This prevents a
> > * race in the hw when coming out of rc6: There's a tiny window where
> > * the hw runs at the minimal clock before selecting the desired
> > * frequency, if the down threshold expires in that window we will not
> > * receive a down interrupt. */
> > - if (*val <= dev_priv->rps.min_delay) {
> > - *val = dev_priv->rps.min_delay;
> > + limits = dev_priv->rps.max_delay << 24;
> > + if (val <= dev_priv->rps.min_delay)
> > limits |= dev_priv->rps.min_delay << 16;
> > - }
> >
> > return limits;
> > }
> > @@ -3533,7 +3526,6 @@ static void gen6_set_rps_thresholds(struct drm_i915_private *dev_priv, u8 val)
> > void gen6_set_rps(struct drm_device *dev, u8 val)
> > {
> > struct drm_i915_private *dev_priv = dev->dev_private;
> > - u32 limits = gen6_rps_limits(dev_priv, &val);
> >
> > WARN_ON(!mutex_is_locked(&dev_priv->rps.hw_lock));
> > WARN_ON(val > dev_priv->rps.max_delay);
> > @@ -3556,7 +3548,8 @@ void gen6_set_rps(struct drm_device *dev, u8 val)
> > /* Make sure we continue to get interrupts
> > * until we hit the minimum or maximum frequencies.
> > */
> > - I915_WRITE(GEN6_RP_INTERRUPT_LIMITS, limits);
> > + I915_WRITE(GEN6_RP_INTERRUPT_LIMITS,
> > + gen6_rps_limits(dev_priv, val));
> >
> > POSTING_READ(GEN6_RPNSWREQ);
> >
> > @@ -3620,8 +3613,6 @@ void valleyview_set_rps(struct drm_device *dev, u8 val)
> > {
> > struct drm_i915_private *dev_priv = dev->dev_private;
> >
> > - gen6_rps_limits(dev_priv, &val);
> > -
> > WARN_ON(!mutex_is_locked(&dev_priv->rps.hw_lock));
> > WARN_ON(val > dev_priv->rps.max_delay);
> > WARN_ON(val < dev_priv->rps.min_delay);
> > --
> > 1.8.3.1
>
> --
> Ville Syrjälä
> Intel OTC
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/intel-gfx
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
Thread overview: 12+ messages
2013-11-06 15:56 [PATCH 0/5] drm-intel-collector - update Rodrigo Vivi
2013-11-06 15:56 ` [PATCH 1/5] drm/i915: Asynchronously perform the set-base for a simple modeset Rodrigo Vivi
2013-11-06 15:56 ` [PATCH 2/5] drm/i915: Initialise min/max frequencies before updating RPS registers Rodrigo Vivi
2013-11-07 17:43 ` Ville Syrjälä
2013-11-07 18:13 ` Daniel Vetter [this message]
2013-11-06 15:56 ` [PATCH 3/5] drm/i915: Fix gen3/4 vblank counter wraparound Rodrigo Vivi
2013-11-07 16:16 ` Imre Deak
2013-11-07 16:20 ` Daniel Vetter
2013-11-06 15:56 ` [PATCH 4/5] drm/i915: Use frame counter for intel_wait_for_vblank() on CTG Rodrigo Vivi
2013-11-07 16:39 ` Imre Deak
2013-11-06 15:56 ` [PATCH 5/5] drm/i915: Require HW contexts (when possible) Rodrigo Vivi
2013-11-07 8:35 ` Daniel Vetter