From: Imre Deak <imre.deak@intel.com>
To: Rodrigo Vivi <rodrigo.vivi@gmail.com>,
	patrik.jakobsson@intel.com,
	intel-gfx <intel-gfx@lists.freedesktop.org>
Subject: Re: Fwd: [PATCH] drm/i915: Avoid vblank counter for gen9+
Date: Thu, 18 Feb 2016 19:55:38 +0200	[thread overview]
Message-ID: <1455818138.7638.29.camel@intel.com> (raw)
In-Reply-To: <CABVU7+s3FP0eyBKoauVbR4aPmHOAqbS1kUQM2N0fJ4JQvNz5YQ@mail.gmail.com>

On Thu, 2016-02-18 at 08:56 -0800, Rodrigo Vivi wrote:
> Imre, Patrik, do you know if I'm missing something, or what I'm doing
> wrong with this power domain handling for vblanks, meant to avoid DC
> states while we need a reliable frame counter in place?

The WARN is due to the spin_lock() in drm_vblank_enable(): you can't
call the power domain functions from atomic context, because of the
mutex the power domain and runtime PM frameworks use.
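
One way to avoid that would be to take the reference from a context
that may sleep, before the DRM core grabs its vblank spinlock and calls
into the driver's enable_vblank hook. A rough sketch of the idea (the
wrapper name and POWER_DOMAIN_VBLANK are hypothetical, just for
illustration):

	/* Hypothetical wrapper, called instead of drm_crtc_vblank_get()
	 * from sleepable context. Taking the power reference here keeps
	 * the mutex / runtime PM path out of the atomic enable_vblank
	 * hook. */
	static int i915_crtc_vblank_get(struct drm_crtc *crtc)
	{
		struct drm_i915_private *dev_priv = crtc->dev->dev_private;
		int ret;

		/* may sleep: runtime PM get + power well mutex */
		intel_display_power_get(dev_priv, POWER_DOMAIN_VBLANK);

		ret = drm_crtc_vblank_get(crtc);
		if (ret)
			intel_display_power_put(dev_priv, POWER_DOMAIN_VBLANK);

		return ret;
	}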

--Imre

> 
> Do you have better ideas?
> 
> Thanks,
> Rodrigo.
> 
> ---------- Forwarded message ----------
> From: Rodrigo Vivi <rodrigo.vivi@gmail.com>
> Date: Wed, Feb 17, 2016 at 3:14 PM
> Subject: Re: [Intel-gfx] [PATCH] drm/i915: Avoid vblank counter for
> gen9+
> To: Daniel Vetter <daniel@ffwll.ch>, Patrik Jakobsson
> <patrik.jakobsson@linux.intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>, intel-gfx
> <intel-gfx@lists.freedesktop.org>
> 
> 
> On Tue, Feb 16, 2016 at 7:50 AM, Daniel Vetter <daniel@ffwll.ch>
> wrote:
> > On Thu, Feb 11, 2016 at 09:00:47AM -0800, Rodrigo Vivi wrote:
> > > The frame counter register is read-only, so the DMC cannot
> > > restore it after exiting DC5 and DC6.
> > > 
> > > The easiest way to go is to avoid the counter and use vblank
> > > interrupts on this platform and on all following ones, since
> > > DMC is here to stay. At least until we can change this register
> > > to read-write.
> > > 
> > > Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > 
> > Now my comments also in public:
> > - Do we still get reasonable dc5 residency with this - it means we'll
> >   keep vblank irq running forever.
> > 
> > - I'm a bit unclear on what exactly this fixes - have you tested that
> >   long-lasting vblank waits are still accurate? Just want to make sure
> >   we don't just paper over the issue and desktops can still get stuck
> >   waiting for a vblank.
> 
> Apparently not... so please just ignore this patch for now; after a
> while with that patch I was seeing the issue again...
> 
> > 
> > Just a bit surprised that the only problem is the frame counter, and
> > not that vblanks stop happening too.
> > 
> > We need to also know these details for the proper fix, which will
> > involve grabbing power well references (might need a new one for
> > vblank interrupts) to make sure.
> 
> Yeah, I liked this idea... so, combining a power domain reference with
> a vblank count restore once we know DC-off is blocked, we could work
> around this case... something like:
> 
> diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> index 25a8937..2b18778 100644
> --- a/drivers/gpu/drm/i915/i915_irq.c
> +++ b/drivers/gpu/drm/i915/i915_irq.c
> @@ -2743,7 +2743,10 @@ static int gen8_enable_vblank(struct drm_device *dev, unsigned int pipe)
>         struct drm_i915_private *dev_priv = dev->dev_private;
>         unsigned long irqflags;
> 
> +       intel_display_power_get(dev_priv, POWER_DOMAIN_VBLANK);
> +
>         spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
> +       dev->vblank[pipe].last = g4x_get_vblank_counter(dev, pipe);
>         bdw_enable_pipe_irq(dev_priv, pipe, GEN8_PIPE_VBLANK);
>         spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);
> 
> @@ -2796,6 +2799,8 @@ static void gen8_disable_vblank(struct drm_device *dev, unsigned int pipe)
>         spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
>         bdw_disable_pipe_irq(dev_priv, pipe, GEN8_PIPE_VBLANK);
>         spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);
> +
> +       intel_display_power_put(dev_priv, POWER_DOMAIN_VBLANK);
>  }
> 
> where POWER_DOMAIN_VBLANK is part of:
> #define SKL_DISPLAY_DC_OFF_POWER_DOMAINS (              \
>         BIT(POWER_DOMAIN_VBLANK) |                      \
> 
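> Fully spelled out, the wiring would look roughly like the sketch below.
> The other domain bits in the DC-off mask are from memory and should be
> treated as assumptions, not the exact tree:
> 
>         /* Hypothetical new domain, added to enum intel_display_power_domain */
>         POWER_DOMAIN_VBLANK,
> 
>         /* DC off (no DC5/DC6) must be held while a vblank reference is
>          * outstanding, so the frame counter keeps counting. */
>         #define SKL_DISPLAY_DC_OFF_POWER_DOMAINS (              \
>                 SKL_DISPLAY_POWERWELL_2_POWER_DOMAINS |         \
>                 BIT(POWER_DOMAIN_MODESET) |                     \
>                 BIT(POWER_DOMAIN_VBLANK) |                      \
>                 BIT(POWER_DOMAIN_AUX_A) |                       \
>                 BIT(POWER_DOMAIN_INIT))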
> 
> However, my dmesg gets flooded with:
> 
> 
> [   69.025562] BUG: sleeping function called from invalid context at drivers/base/power/runtime.c:955
> [   69.025576] in_atomic(): 1, irqs_disabled(): 1, pid: 995, name: Xorg
> [   69.025582] Preemption disabled at:[<ffffffff815a1fee>] drm_vblank_get+0x4e/0xd0
> 
> [   69.025619] CPU: 3 PID: 995 Comm: Xorg Tainted: G     U  W       4.5.0-rc4+ #11
> [   69.025628] Hardware name: Intel Corporation Kabylake Client platform/Skylake U DDR3L RVP7, BIOS KBLSE2R1.R00.X019.B01.1512230743 12/23/2015
> [   69.025637]  0000000000000000 ffff88003f0bfbb0 ffffffff8148e983 0000000000000000
> [   69.025653]  ffff880085b04200 ffff88003f0bfbd0 ffffffff81133ece ffffffff81d77f23
> [   69.025667]  00000000000003bb ffff88003f0bfbf8 ffffffff81133f89 ffff88016913a098
> [   69.025680] Call Trace:
> [   69.025697]  [<ffffffff8148e983>] dump_stack+0x65/0x92
> [   69.025711]  [<ffffffff81133ece>] ___might_sleep+0x10e/0x180
> [   69.025722]  [<ffffffff81133f89>] __might_sleep+0x49/0x80
> [   69.025739]  [<ffffffff815d21d9>] __pm_runtime_resume+0x79/0x80
> [   69.025841]  [<ffffffffa005ebc8>] intel_runtime_pm_get+0x28/0x90 [i915]
> [   69.025924]  [<ffffffffa005ec49>] intel_display_power_get+0x19/0x50 [i915]
> [   69.025995]  [<ffffffffa004aea4>] gen8_enable_vblank+0x34/0xc0 [i915]
> [   69.026016]  [<ffffffff815a1f46>] drm_vblank_enable+0x76/0xd0
> 
> 
> 
> 
> Another thing I searched the spec for was an interrupt to tell us when
> we come back from DC5 or DC6, or when the power well gets re-enabled,
> so we would be able to restore the DRM last counter... but I couldn't
> find any...
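> 
> In the absence of such an interrupt, one alternative (untested; the
> function name and hook point are made up) might be to re-seed the
> stored counters wherever we already know the power well just came back
> up, mirroring the bookkeeping in the snippet above:
> 
>         /* Hypothetical: re-seed the stored frame counter for each pipe
>          * once display power is back, so the next HW read doesn't look
>          * like a huge jump forward or backwards. */
>         static void skl_restore_vblank_counters(struct drm_device *dev)
>         {
>                 struct drm_i915_private *dev_priv = dev->dev_private;
>                 unsigned long irqflags;
>                 enum pipe pipe;
> 
>                 for_each_pipe(dev_priv, pipe) {
>                         spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
>                         dev->vblank[pipe].last =
>                                 g4x_get_vblank_counter(dev, pipe);
>                         spin_unlock_irqrestore(&dev_priv->irq_lock,
>                                                irqflags);
>                 }
>         }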
> 
> 
> Any other idea?
> 
> 
> > 
> > Cheers, Daniel
> > 
> > > ---
> > >  drivers/gpu/drm/i915/i915_irq.c | 7 +++++--
> > >  1 file changed, 5 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> > > index 25a8937..c294a4b 100644
> > > --- a/drivers/gpu/drm/i915/i915_irq.c
> > > +++ b/drivers/gpu/drm/i915/i915_irq.c
> > > @@ -4556,7 +4556,10 @@ void intel_irq_init(struct drm_i915_private *dev_priv)
> > > 
> > >       pm_qos_add_request(&dev_priv->pm_qos, PM_QOS_CPU_DMA_LATENCY, PM_QOS_DEFAULT_VALUE);
> > > 
> > > -     if (IS_GEN2(dev_priv)) {
> > > +     if (INTEL_INFO(dev_priv)->gen >= 9) {
> > > +             dev->max_vblank_count = 0;
> > > +             dev->driver->get_vblank_counter = g4x_get_vblank_counter;
> > > +     } else if (IS_GEN2(dev_priv)) {
> > >               dev->max_vblank_count = 0;
> > >               dev->driver->get_vblank_counter = i8xx_get_vblank_counter;
> > >       } else if (IS_G4X(dev_priv) || INTEL_INFO(dev_priv)->gen >= 5) {
> > > @@ -4572,7 +4575,7 @@ void intel_irq_init(struct drm_i915_private *dev_priv)
> > >        * Gen2 doesn't have a hardware frame counter and so depends on
> > >        * vblank interrupts to produce sane vblank seuquence numbers.
> > >        */
> > > -     if (!IS_GEN2(dev_priv))
> > > +     if (!IS_GEN2(dev_priv) && INTEL_INFO(dev_priv)->gen < 9)
> > >               dev->vblank_disable_immediate = true;
> > > 
> > >       dev->driver->get_vblank_timestamp = i915_get_vblank_timestamp;
> > > --
> > > 2.4.3
> > > 
> > 
> > --
> > Daniel Vetter
> > Software Engineer, Intel Corporation
> > http://blog.ffwll.ch
> 
> 
> 
> --
> Rodrigo Vivi
> Blog: http://blog.vivi.eng.br
> 
> 