From: "Ville Syrjälä" <ville.syrjala@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>,
David Weinehall <david.weinehall@linux.intel.com>,
intel-gfx@lists.freedesktop.org, imre.deak@linux.intel.com
Subject: Re: [PATCH v2 3/3] drm/i915: optimise intel_runtime_pm_{get, put}
Date: Fri, 18 Nov 2016 16:15:37 +0200 [thread overview]
Message-ID: <20161118141537.GK31595@intel.com> (raw)
In-Reply-To: <20161118140040.GG28142@nuc-i3427.alporthouse.com>
On Fri, Nov 18, 2016 at 02:00:40PM +0000, Chris Wilson wrote:
> On Fri, Nov 18, 2016 at 03:36:47PM +0200, David Weinehall wrote:
> > Benchmarking shows that on resume we spend quite a bit of time
> > just taking and dropping these references, leaving us with two
> > options: either rewrite the code not to take these references more
> > than once, which would be a rather invasive change since the
> > involved functions are used from other places, or optimise
> > intel_runtime_pm_{get,put}(). This patch does the latter.
> > Initial benchmarking indicates improvements of a couple
> > of milliseconds on resume.
> >
> > Original patch by Chris, with slight fixes by me.
> >
> > v2: Fix missing return value (Patchwork)
> > Remove extra atomic_dec() (Chris)
> >
> > Signed-off-by: David Weinehall <david.weinehall@linux.intel.com>
> > CC: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Imre Deak <imre.deak@linux.intel.com>
>
> I'm happy with this. Not amused that it apparently saves quite a bit of
> overhead with frequent pm_runtime calls.
We could eliminate some of those calls entirely by moving them from
intel_display_power_{get,put}() into the always on well enable/disable
hooks. But I'm not sure how much of this overhead originates from the
power well code as opposed to some gem/etc. stuff.
>
> Imre?
> -Chris
>
> --
> Chris Wilson, Intel Open Source Technology Centre
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
--
Ville Syrjälä
Intel OTC
Thread overview: 12+ messages
2016-11-18 13:36 [PATCH v2 0/3] Resume time optimisation David Weinehall
2016-11-18 13:36 ` [PATCH v2 1/3] drm/i915: Cleanup i915_gem_restore_gtt_mappings() David Weinehall
2016-11-18 13:58 ` Chris Wilson
2016-11-21 12:21 ` David Weinehall
2016-11-21 12:30 ` Chris Wilson
2016-11-28 11:24 ` David Weinehall
2016-11-18 13:36 ` [PATCH v2 2/3] drm/i915: Take runtime pm in i915_gem_resume() David Weinehall
2016-11-18 13:36 ` [PATCH v2 3/3] drm/i915: optimise intel_runtime_pm_{get, put} David Weinehall
2016-11-18 14:00 ` Chris Wilson
2016-11-18 14:15 ` Ville Syrjälä [this message]
2016-11-18 15:41 ` Imre Deak
2016-11-18 14:54 ` ✗ Fi.CI.BAT: failure for Resume time optimisation (rev2) Patchwork