From: Lukas Wunner <lukas@wunner.de>
To: Imre Deak <imre.deak@intel.com>
Cc: intel-gfx@lists.freedesktop.org,
"Rafael J. Wysocki" <rafael.j.wysocki@intel.com>,
Bjorn Helgaas <bhelgaas@google.com>,
linux-pci@vger.kernel.org
Subject: Re: [PATCH v2 1/2] PCI / PM: Add needs_resume flag to avoid suspend complete optimization
Date: Mon, 24 Apr 2017 22:42:42 +0200
Message-ID: <20170424204242.GA9933@wunner.de>
In-Reply-To: <20170424200230.GA9910@wunner.de>
On Mon, Apr 24, 2017 at 10:02:30PM +0200, Lukas Wunner wrote:
> On Mon, Apr 24, 2017 at 05:27:42PM +0300, Imre Deak wrote:
> > Some drivers - like i915 - may not support the system suspend direct
> > complete optimization due to differences between their runtime and system
> > suspend sequences. Add a flag that, when set, resumes the device before
> > calling the driver's system suspend handlers, which effectively disables
> > the optimization.
>
> FWIW, there are at least two alternative solutions to this problem which
> do not require changes to the PCI core:
>
> (1) Add a ->prepare hook to i915_pm_ops which calls pm_runtime_get_sync()
> and a ->complete hook which calls pm_runtime_put().
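
For reference, alternative (1) might have looked roughly like this (untested
sketch, pm_runtime_get_sync() error handling omitted for brevity, hooks wired
up as .prepare/.complete in i915_pm_ops):

	static int i915_pm_prepare(struct device *kdev)
	{
		/* Hold a runtime PM ref across the system sleep transition. */
		pm_runtime_get_sync(kdev);
		return 0;
	}

	static void i915_pm_complete(struct device *kdev)
	{
		/* Drop the ref acquired in ->prepare. */
		pm_runtime_put(kdev);
	}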
Thinking a bit more about this, it's even simpler: The PM core acquires
a runtime PM ref in device_prepare() and releases it in device_complete(),
so it's sufficient to just call pm_runtime_resume() in a ->prepare hook
that's newly added to i915. No ->complete hook is necessary. The tentative
patch below, based on drm-intel-fixes, would replace both of your patches.
Thanks,
Lukas
-- >8 --
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 5c089b3..6ef156b 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -1820,6 +1820,12 @@ void i915_reset(struct drm_i915_private *dev_priv)
 	goto wakeup;
 }
 
+static int i915_pm_prepare(struct device *kdev)
+{
+	pm_runtime_resume(kdev);
+	return 0;
+}
+
 static int i915_pm_suspend(struct device *kdev)
 {
 	struct pci_dev *pdev = to_pci_dev(kdev);
@@ -2451,6 +2457,7 @@ static int intel_runtime_resume(struct device *kdev)
 	 * S0ix (via system suspend) and S3 event handlers [PMSG_SUSPEND,
 	 * PMSG_RESUME]
 	 */
+	.prepare = i915_pm_prepare,
 	.suspend = i915_pm_suspend,
 	.suspend_late = i915_pm_suspend_late,
 	.resume_early = i915_pm_resume_early,
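
For background, and if I'm reading v4.11 correctly: pci_pm_prepare() returns
the result of pci_dev_keep_suspended(), which refuses the direct-complete
optimization along the lines of

	if (!pm_runtime_suspended(dev)
	    || pci_target_state(pci_dev) != pci_dev->current_state
	    || platform_pci_need_resume(pci_dev))
		return false;

so a device that is runtime active by the end of ->prepare never takes the
direct-complete path. Calling pm_runtime_resume() from the driver's ->prepare
is therefore sufficient to defeat the optimization.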