public inbox for intel-gfx@lists.freedesktop.org
* [PATCH 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled.
@ 2018-01-03 20:39 Dhinakaran Pandiyan
  2018-01-03 20:39 ` [PATCH 2/5] drm/vblank: Restoring vblank counts after device runtime PM events Dhinakaran Pandiyan
                   ` (5 more replies)
  0 siblings, 6 replies; 22+ messages in thread
From: Dhinakaran Pandiyan @ 2018-01-03 20:39 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter, Dhinakaran Pandiyan, Rodrigo Vivi

Updating the vblank counts requires register reads, and these reads may not
return meaningful values after vblank interrupts are disabled, since the
device may enter a low power state. A further improvement would be to let
the driver save the vblank counts before entering a low power state, but
that is left for the future.

Also, disable vblanks after reading the HW counter in the case where
_crtc_vblank_off() is disabling vblanks.
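The ordering this patch enforces can be pictured with a small user-space sketch (all names here are hypothetical, not the actual drm_vblank ones):

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the ordering fix: the HW frame counter only returns
 * meaningful values while vblanks are enabled, so the software count
 * must be updated from it *before* the disable, not after. */
struct toy_crtc {
	int enabled;
	uint32_t hw_frames;   /* what the register would read back */
	uint32_t sw_count;    /* the saved vblank count */
};

static uint32_t toy_read_hw_counter(const struct toy_crtc *c)
{
	return c->enabled ? c->hw_frames : 0; /* garbage once powered down */
}

static void toy_disable_and_save(struct toy_crtc *c)
{
	c->sw_count = toy_read_hw_counter(c); /* read first... */
	c->enabled = 0;                       /* ...then disable */
}
```

With the old order (disable first), the saved count would be taken from a potentially powered-down device.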

Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
---
 drivers/gpu/drm/drm_vblank.c | 23 +++++++++--------------
 1 file changed, 9 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
index 32d9bcf5be7f..7eee82c06ed8 100644
--- a/drivers/gpu/drm/drm_vblank.c
+++ b/drivers/gpu/drm/drm_vblank.c
@@ -347,23 +347,14 @@ void drm_vblank_disable_and_save(struct drm_device *dev, unsigned int pipe)
 	spin_lock_irqsave(&dev->vblank_time_lock, irqflags);
 
 	/*
-	 * Only disable vblank interrupts if they're enabled. This avoids
-	 * calling the ->disable_vblank() operation in atomic context with the
-	 * hardware potentially runtime suspended.
-	 */
-	if (vblank->enabled) {
-		__disable_vblank(dev, pipe);
-		vblank->enabled = false;
-	}
-
-	/*
-	 * Always update the count and timestamp to maintain the
+	 * Update the count and timestamp to maintain the
 	 * appearance that the counter has been ticking all along until
 	 * this time. This makes the count account for the entire time
 	 * between drm_crtc_vblank_on() and drm_crtc_vblank_off().
 	 */
 	drm_update_vblank_count(dev, pipe, false);
-
+	__disable_vblank(dev, pipe);
+	vblank->enabled = false;
 	spin_unlock_irqrestore(&dev->vblank_time_lock, irqflags);
 }
 
@@ -1122,8 +1113,12 @@ void drm_crtc_vblank_off(struct drm_crtc *crtc)
 		      pipe, vblank->enabled, vblank->inmodeset);
 
 	/* Avoid redundant vblank disables without previous
-	 * drm_crtc_vblank_on(). */
-	if (drm_core_check_feature(dev, DRIVER_ATOMIC) || !vblank->inmodeset)
+	 * drm_crtc_vblank_on() and only disable them if they're enabled. This
+	 * avoids calling the ->disable_vblank() operation in atomic context
+	 * with the hardware potentially runtime suspended.
+	 */
+	if ((drm_core_check_feature(dev, DRIVER_ATOMIC) || !vblank->inmodeset) &&
+	    vblank->enabled)
 		drm_vblank_disable_and_save(dev, pipe);
 
 	wake_up(&vblank->queue);
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


* [PATCH 2/5] drm/vblank: Restoring vblank counts after device runtime PM events.
  2018-01-03 20:39 [PATCH 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Dhinakaran Pandiyan
@ 2018-01-03 20:39 ` Dhinakaran Pandiyan
  2018-01-03 20:55   ` Chris Wilson
  2018-01-03 20:39 ` [PATCH 3/5] drm/i915: Enable vblanks after verifying power domain states Dhinakaran Pandiyan
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 22+ messages in thread
From: Dhinakaran Pandiyan @ 2018-01-03 20:39 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter, Dhinakaran Pandiyan, Rodrigo Vivi

The HW frame counter can get reset when the device enters low power
states, and this messes up any subsequent vblank count updates. So,
compute the vblank interrupts missed during the low power state using
timestamps. This is similar to _crtc_vblank_on(), except that it does not
enable vblank interrupts, because this function is expected to be called
from the driver's _enable_vblank() vfunc.
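The core of the computation, on this reading of the patch, is dividing the elapsed time by the frame duration, rounded to nearest; a minimal user-space sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the missed-vblank estimate made by drm_crtc_vblank_restore():
 * the number of frames elapsed during the low power state is the time
 * since the last recorded vblank divided by the frame duration, rounded
 * to the closest integer (mirroring DIV_ROUND_CLOSEST_ULL). */
static uint32_t missed_vblanks(uint64_t diff_ns, uint64_t framedur_ns)
{
	if (!framedur_ns)
		return 1; /* no frame duration known: fall back to one tick */
	return (uint32_t)((diff_ns + framedur_ns / 2) / framedur_ns);
}
```

This is only the arithmetic; the real function also re-reads the HW counter in a retry loop to get a consistent counter/timestamp pair.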

Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
---
 drivers/gpu/drm/drm_vblank.c | 33 +++++++++++++++++++++++++++++++++
 include/drm/drm_vblank.h     |  1 +
 2 files changed, 34 insertions(+)

diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
index 7eee82c06ed8..494e2cff6e55 100644
--- a/drivers/gpu/drm/drm_vblank.c
+++ b/drivers/gpu/drm/drm_vblank.c
@@ -1230,6 +1230,39 @@ void drm_crtc_vblank_on(struct drm_crtc *crtc)
 }
 EXPORT_SYMBOL(drm_crtc_vblank_on);
 
+void drm_crtc_vblank_restore(struct drm_device *dev, unsigned int pipe)
+{
+	ktime_t t_vblank;
+	struct drm_vblank_crtc *vblank;
+	int framedur_ns;
+	u64 diff_ns;
+	u32 cur_vblank, diff = 1;
+	int count = DRM_TIMESTAMP_MAXRETRIES;
+
+	if (WARN_ON(pipe >= dev->num_crtcs))
+		return;
+
+	vblank = &dev->vblank[pipe];
+	WARN_ONCE((drm_debug & DRM_UT_VBL) && !vblank->framedur_ns,
+		  "Cannot compute missed vblanks without frame duration\n");
+	framedur_ns = vblank->framedur_ns;
+
+	do {
+		cur_vblank = __get_vblank_counter(dev, pipe);
+		drm_get_last_vbltimestamp(dev, pipe, &t_vblank, false);
+	} while (cur_vblank != __get_vblank_counter(dev, pipe) && --count > 0);
+
+	diff_ns = ktime_to_ns(ktime_sub(t_vblank, vblank->time));
+	if (framedur_ns)
+		diff = DIV_ROUND_CLOSEST_ULL(diff_ns, framedur_ns);
+
+
+	DRM_DEBUG_VBL("missed %d vblanks in %lld ns, frame duration=%d ns, hw_diff=%d\n",
+		      diff, diff_ns, framedur_ns, cur_vblank - vblank->last);
+	store_vblank(dev, pipe, diff, t_vblank, cur_vblank);
+}
+EXPORT_SYMBOL(drm_crtc_vblank_restore);
+
 static void drm_legacy_vblank_pre_modeset(struct drm_device *dev,
 					  unsigned int pipe)
 {
diff --git a/include/drm/drm_vblank.h b/include/drm/drm_vblank.h
index 848b463a0af5..aafcbef91bd7 100644
--- a/include/drm/drm_vblank.h
+++ b/include/drm/drm_vblank.h
@@ -180,6 +180,7 @@ void drm_crtc_vblank_off(struct drm_crtc *crtc);
 void drm_crtc_vblank_reset(struct drm_crtc *crtc);
 void drm_crtc_vblank_on(struct drm_crtc *crtc);
 u32 drm_crtc_accurate_vblank_count(struct drm_crtc *crtc);
+void drm_crtc_vblank_restore(struct drm_device *dev, unsigned int pipe);
 
 bool drm_calc_vbltimestamp_from_scanoutpos(struct drm_device *dev,
 					   unsigned int pipe, int *max_error,
-- 
2.11.0



* [PATCH 3/5] drm/i915: Enable vblanks after verifying power domain states.
  2018-01-03 20:39 [PATCH 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Dhinakaran Pandiyan
  2018-01-03 20:39 ` [PATCH 2/5] drm/vblank: Restoring vblank counts after device runtime PM events Dhinakaran Pandiyan
@ 2018-01-03 20:39 ` Dhinakaran Pandiyan
  2018-01-03 20:57   ` Chris Wilson
  2018-01-04 11:35   ` Maarten Lankhorst
  2018-01-03 20:40 ` [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts Dhinakaran Pandiyan
                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 22+ messages in thread
From: Dhinakaran Pandiyan @ 2018-01-03 20:39 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter, Dhinakaran Pandiyan, Rodrigo Vivi

Since we want to allow a non-blocking power domain for vblanks, the power
domain use count and power well use count will not be updated atomically
inside the power domain mutex (see next patch). This breaks the init-time
verification that sum(power_domain_use_count) == power_well_use_count, so
do not enable vblanks until that verification is done.
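The invariant being protected here can be pictured as follows (a simplified sketch, not the i915 code):

```c
#include <assert.h>

#define NUM_DOMAINS 3

/* Simplified model of the verify-state invariant: a power well's use
 * count must equal the sum of the use counts of all domains mapped onto
 * that well. Enabling vblanks before the check could bump a domain
 * count outside the mutex and break the comparison. */
struct toy_well {
	unsigned long domains; /* bitmask of domains on this well */
	int count;             /* well refcount */
};

static int toy_verify_well(const struct toy_well *well,
			   const int domain_use_count[NUM_DOMAINS])
{
	int d, sum = 0;

	for (d = 0; d < NUM_DOMAINS; d++)
		if (well->domains & (1UL << d))
			sum += domain_use_count[d];
	return sum == well->count;
}
```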

Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
---
 drivers/gpu/drm/i915/intel_display.c | 24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 0cd355978ab4..7bc874b8dac7 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -14739,6 +14739,24 @@ static bool has_pch_trancoder(struct drm_i915_private *dev_priv,
 		(HAS_PCH_LPT_H(dev_priv) && pch_transcoder == PIPE_A);
 }
 
+static void modeset_enable_vblanks(struct drm_i915_private *dev_priv)
+{
+	enum pipe pipe;
+
+	for_each_pipe(dev_priv, pipe) {
+		struct intel_crtc *crtc;
+
+		crtc = intel_get_crtc_for_pipe(dev_priv, pipe);
+
+		/* restore vblank interrupts to correct state */
+		drm_crtc_vblank_reset(&crtc->base);
+
+		if (crtc->active)
+			drm_crtc_vblank_on(&crtc->base);
+	}
+}
+
+
 static void intel_sanitize_crtc(struct intel_crtc *crtc,
 				struct drm_modeset_acquire_ctx *ctx)
 {
@@ -14754,13 +14772,9 @@ static void intel_sanitize_crtc(struct intel_crtc *crtc,
 			   I915_READ(reg) & ~PIPECONF_FRAME_START_DELAY_MASK);
 	}
 
-	/* restore vblank interrupts to correct state */
-	drm_crtc_vblank_reset(&crtc->base);
 	if (crtc->active) {
 		struct intel_plane *plane;
 
-		drm_crtc_vblank_on(&crtc->base);
-
 		/* Disable everything but the primary plane */
 		for_each_intel_plane_on_crtc(dev, crtc, plane) {
 			const struct intel_plane_state *plane_state =
@@ -15147,6 +15161,8 @@ intel_modeset_setup_hw_state(struct drm_device *dev,
 
 	intel_power_domains_verify_state(dev_priv);
 
+	modeset_enable_vblanks(dev_priv);
+
 	intel_fbc_init_pipe_state(dev_priv);
 }
 
-- 
2.11.0



* [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts
  2018-01-03 20:39 [PATCH 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Dhinakaran Pandiyan
  2018-01-03 20:39 ` [PATCH 2/5] drm/vblank: Restoring vblank counts after device runtime PM events Dhinakaran Pandiyan
  2018-01-03 20:39 ` [PATCH 3/5] drm/i915: Enable vblanks after verifying power domain states Dhinakaran Pandiyan
@ 2018-01-03 20:40 ` Dhinakaran Pandiyan
  2018-01-05  2:08   ` Rodrigo Vivi
  2018-01-03 20:40 ` [PATCH 5/5] drm/i915: Use the vblank power domain to disallow or disable DC states Dhinakaran Pandiyan
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 22+ messages in thread
From: Dhinakaran Pandiyan @ 2018-01-03 20:40 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter, Dhinakaran Pandiyan, Rodrigo Vivi

When DC states are enabled and PSR is active, the hardware enters DC5/DC6
states, which reset the frame counter. The frame counter reset messes up
the vblank counting logic because the diff between the new frame counter
value and the previous one is negative, and this negative diff gets
applied as an unsigned value to the vblank count. We cannot reject negative
diffs altogether because they can arise from legitimate frame counter
overflows when there is a long period with vblanks disabled. So, this
approach lets the driver notice a DC state toggle between a vblank disable
and enable and fill in the missed vblanks.
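The corruption described above comes from modular arithmetic on the counter diff; a small sketch of the failure mode (simplified from the drm_vblank bookkeeping):

```c
#include <assert.h>
#include <stdint.h>

/* Illustration of the problem: the vblank bookkeeping adds the raw
 * difference between two HW frame counter reads to the software count.
 * A counter reset (new < old) therefore shows up as a huge unsigned
 * diff, not as a small negative correction. */
static uint32_t toy_apply_diff(uint32_t sw_count, uint32_t hw_old,
			       uint32_t hw_new)
{
	uint32_t diff = hw_new - hw_old; /* wraps modulo 2^32 */
	return sw_count + diff;
}
```

A legitimate 32-bit counter overflow produces the same "new < old" signature, which is why negative diffs cannot simply be rejected.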

But, in order to disable DC states while vblank interrupts are required,
the DC_OFF power well has to be disabled in an atomic context. So,
introduce a new VBLANK power domain that can be acquired and released in
atomic contexts, with these changes:
1) The _vblank_get() and _vblank_put() methods skip the power_domain mutex
and use a spin lock for the DC power well.
2) power_domains->domain_use_count is converted to an atomic_t array so
that it can be updated outside of the power domain mutex.

v3: Squash the domain_use_count atomic_t conversion (Maarten)
v2: Fix deadlock by switching to an irqsave spinlock.
    Implement an atomic version of get_if_enabled.
    Modify power_domain_verify_state to check power well use count and
    enabled status atomically.
    Rewrite intel_power_well_{get,put}.

Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
---
 drivers/gpu/drm/i915/i915_debugfs.c     |   2 +-
 drivers/gpu/drm/i915/i915_drv.h         |  19 +++-
 drivers/gpu/drm/i915/intel_display.h    |   1 +
 drivers/gpu/drm/i915/intel_drv.h        |   3 +
 drivers/gpu/drm/i915/intel_runtime_pm.c | 195 ++++++++++++++++++++++++++++----
 5 files changed, 199 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index d81cb2513069..5a7ce734de02 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -2746,7 +2746,7 @@ static int i915_power_domain_info(struct seq_file *m, void *unused)
 		for_each_power_domain(power_domain, power_well->domains)
 			seq_printf(m, "  %-23s %d\n",
 				 intel_display_power_domain_str(power_domain),
-				 power_domains->domain_use_count[power_domain]);
+				 atomic_read(&power_domains->domain_use_count[power_domain]));
 	}
 
 	mutex_unlock(&power_domains->lock);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index caebd5825279..61a635f03af7 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1032,6 +1032,23 @@ struct i915_power_well {
 			bool has_fuses:1;
 		} hsw;
 	};
+
+	/* Lock to serialize access to count, hw_enabled and ops, used for
+	 * power wells that have supports_atomic_ctx set to true.
+	 */
+	spinlock_t lock;
+
+	/* Indicates that the get/put methods for this power well can be called
+	 * in atomic contexts, requires .ops to not sleep. This is valid
+	 * only for the DC_OFF power well currently.
+	 */
+	bool supports_atomic_ctx;
+
+	/* DC_OFF power well was disabled since the last time vblanks were
+	 * disabled.
+	 */
+	bool dc_off_disabled;
+
 	const struct i915_power_well_ops *ops;
 };
 
@@ -1045,7 +1062,7 @@ struct i915_power_domains {
 	int power_well_count;
 
 	struct mutex lock;
-	int domain_use_count[POWER_DOMAIN_NUM];
+	atomic_t domain_use_count[POWER_DOMAIN_NUM];
 	struct i915_power_well *power_wells;
 };
 
diff --git a/drivers/gpu/drm/i915/intel_display.h b/drivers/gpu/drm/i915/intel_display.h
index a0d2b6169361..3e9671ff6f79 100644
--- a/drivers/gpu/drm/i915/intel_display.h
+++ b/drivers/gpu/drm/i915/intel_display.h
@@ -172,6 +172,7 @@ enum intel_display_power_domain {
 	POWER_DOMAIN_AUX_C,
 	POWER_DOMAIN_AUX_D,
 	POWER_DOMAIN_GMBUS,
+	POWER_DOMAIN_VBLANK,
 	POWER_DOMAIN_MODESET,
 	POWER_DOMAIN_GT_IRQ,
 	POWER_DOMAIN_INIT,
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index 30f791f89d64..164e62cb047b 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -1797,6 +1797,9 @@ bool intel_display_power_get_if_enabled(struct drm_i915_private *dev_priv,
 					enum intel_display_power_domain domain);
 void intel_display_power_put(struct drm_i915_private *dev_priv,
 			     enum intel_display_power_domain domain);
+void intel_display_power_vblank_get(struct drm_i915_private *dev_priv,
+				    bool *needs_restore);
+void intel_display_power_vblank_put(struct drm_i915_private *dev_priv);
 
 static inline void
 assert_rpm_device_not_suspended(struct drm_i915_private *dev_priv)
diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c
index d758da6156a8..93b324dcc55d 100644
--- a/drivers/gpu/drm/i915/intel_runtime_pm.c
+++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
@@ -56,6 +56,19 @@ static struct i915_power_well *
 lookup_power_well(struct drm_i915_private *dev_priv,
 		  enum i915_power_well_id power_well_id);
 
+/* Power wells with supports_atomic_ctx set are protected by their own
+ * spin lock; all other wells are covered by the power domains mutex and
+ * need no extra locking here.
+ */
+#define power_well_lock(power_well, flags) do {			\
+	if (likely(power_well->supports_atomic_ctx))		\
+		spin_lock_irqsave(&power_well->lock, flags);	\
+} while (0)
+
+#define power_well_unlock(power_well, flags) do {			\
+	if (likely(power_well->supports_atomic_ctx))			\
+		spin_unlock_irqrestore(&power_well->lock, flags);	\
+} while (0)
+
 const char *
 intel_display_power_domain_str(enum intel_display_power_domain domain)
 {
@@ -126,6 +139,8 @@ intel_display_power_domain_str(enum intel_display_power_domain domain)
 		return "AUX_D";
 	case POWER_DOMAIN_GMBUS:
 		return "GMBUS";
+	case POWER_DOMAIN_VBLANK:
+		return "VBLANK";
 	case POWER_DOMAIN_INIT:
 		return "INIT";
 	case POWER_DOMAIN_MODESET:
@@ -141,6 +156,9 @@ intel_display_power_domain_str(enum intel_display_power_domain domain)
 static void intel_power_well_enable(struct drm_i915_private *dev_priv,
 				    struct i915_power_well *power_well)
 {
+	if (power_well->supports_atomic_ctx)
+		assert_spin_locked(&power_well->lock);
+
 	DRM_DEBUG_KMS("enabling %s\n", power_well->name);
 	power_well->ops->enable(dev_priv, power_well);
 	power_well->hw_enabled = true;
@@ -149,26 +167,43 @@ static void intel_power_well_enable(struct drm_i915_private *dev_priv,
 static void intel_power_well_disable(struct drm_i915_private *dev_priv,
 				     struct i915_power_well *power_well)
 {
+	if (power_well->supports_atomic_ctx)
+		assert_spin_locked(&power_well->lock);
+
 	DRM_DEBUG_KMS("disabling %s\n", power_well->name);
 	power_well->hw_enabled = false;
 	power_well->ops->disable(dev_priv, power_well);
 }
 
-static void intel_power_well_get(struct drm_i915_private *dev_priv,
+static void __intel_power_well_get(struct drm_i915_private *dev_priv,
 				 struct i915_power_well *power_well)
 {
 	if (!power_well->count++)
 		intel_power_well_enable(dev_priv, power_well);
 }
 
+static void intel_power_well_get(struct drm_i915_private *dev_priv,
+				 struct i915_power_well *power_well)
+{
+	unsigned long flags = 0;
+
+	power_well_lock(power_well, flags);
+	__intel_power_well_get(dev_priv, power_well);
+	power_well_unlock(power_well, flags);
+}
+
 static void intel_power_well_put(struct drm_i915_private *dev_priv,
 				 struct i915_power_well *power_well)
 {
+	unsigned long flags = 0;
+
+	power_well_lock(power_well, flags);
 	WARN(!power_well->count, "Use count on power well %s is already zero",
 	     power_well->name);
 
 	if (!--power_well->count)
 		intel_power_well_disable(dev_priv, power_well);
+	power_well_unlock(power_well, flags);
 }
 
 /**
@@ -736,6 +771,7 @@ static void gen9_dc_off_power_well_disable(struct drm_i915_private *dev_priv,
 		skl_enable_dc6(dev_priv);
 	else if (dev_priv->csr.allowed_dc_mask & DC_STATE_EN_UPTO_DC5)
 		gen9_enable_dc5(dev_priv);
+	power_well->dc_off_disabled = true;
 }
 
 static void i9xx_power_well_sync_hw_noop(struct drm_i915_private *dev_priv,
@@ -1453,6 +1489,58 @@ static void chv_pipe_power_well_disable(struct drm_i915_private *dev_priv,
 	chv_set_pipe_power_well(dev_priv, power_well, false);
 }
 
+void intel_display_power_vblank_get(struct drm_i915_private *dev_priv,
+				    bool *needs_restore)
+{
+	struct i915_power_domains *power_domains  = &dev_priv->power_domains;
+	struct i915_power_well *power_well;
+	enum intel_display_power_domain domain = POWER_DOMAIN_VBLANK;
+
+	*needs_restore = false;
+
+	if (!HAS_CSR(dev_priv))
+		return;
+
+	if (!(HAS_PSR(dev_priv) && dev_priv->psr.sink_support))
+		return;
+
+	intel_runtime_pm_get_noresume(dev_priv);
+
+	for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain)) {
+		unsigned long flags = 0;
+
+		power_well_lock(power_well, flags);
+		__intel_power_well_get(dev_priv, power_well);
+		*needs_restore = power_well->dc_off_disabled;
+		power_well->dc_off_disabled = false;
+		power_well_unlock(power_well, flags);
+	}
+
+	atomic_inc(&power_domains->domain_use_count[domain]);
+}
+
+void intel_display_power_vblank_put(struct drm_i915_private *dev_priv)
+{
+	struct i915_power_domains *power_domains = &dev_priv->power_domains;
+	struct i915_power_well *power_well;
+	enum intel_display_power_domain domain = POWER_DOMAIN_VBLANK;
+
+	if (!HAS_CSR(dev_priv))
+		return;
+
+	if (!(HAS_PSR(dev_priv) && dev_priv->psr.sink_support))
+		return;
+
+	WARN(atomic_dec_return(&power_domains->domain_use_count[domain]) < 0,
+	     "Use count on domain %s was already zero\n",
+	     intel_display_power_domain_str(domain));
+
+	for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain))
+		intel_power_well_put(dev_priv, power_well);
+
+	intel_runtime_pm_put(dev_priv);
+}
+
 static void
 __intel_display_power_get_domain(struct drm_i915_private *dev_priv,
 				 enum intel_display_power_domain domain)
@@ -1463,7 +1551,7 @@ __intel_display_power_get_domain(struct drm_i915_private *dev_priv,
 	for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain))
 		intel_power_well_get(dev_priv, power_well);
 
-	power_domains->domain_use_count[domain]++;
+	atomic_inc(&power_domains->domain_use_count[domain]);
 }
 
 /**
@@ -1492,6 +1580,38 @@ void intel_display_power_get(struct drm_i915_private *dev_priv,
 	mutex_unlock(&power_domains->lock);
 }
 
+static bool dc_off_get_if_enabled(struct drm_i915_private *dev_priv,
+				  enum intel_display_power_domain domain)
+{
+	struct i915_power_well *power_well;
+	bool is_enabled;
+	unsigned long flags = 0;
+
+	power_well = lookup_power_well(dev_priv, SKL_DISP_PW_DC_OFF);
+	if (!power_well || !(power_well->domains & domain))
+		return true;
+
+	power_well_lock(power_well, flags);
+	is_enabled = power_well->hw_enabled;
+	if (is_enabled)
+		__intel_power_well_get(dev_priv, power_well);
+	power_well_unlock(power_well, flags);
+
+	return is_enabled;
+}
+
+static void dc_off_put(struct drm_i915_private *dev_priv,
+		       enum intel_display_power_domain domain)
+{
+	struct i915_power_well *power_well;
+
+	power_well = lookup_power_well(dev_priv, SKL_DISP_PW_DC_OFF);
+	if (!power_well || !(power_well->domains & domain))
+		return;
+
+	intel_power_well_put(dev_priv, power_well);
+}
+
 /**
  * intel_display_power_get_if_enabled - grab a reference for an enabled display power domain
  * @dev_priv: i915 device instance
@@ -1508,20 +1628,24 @@ bool intel_display_power_get_if_enabled(struct drm_i915_private *dev_priv,
 					enum intel_display_power_domain domain)
 {
 	struct i915_power_domains *power_domains = &dev_priv->power_domains;
-	bool is_enabled;
+	bool is_enabled = false;
 
 	if (!intel_runtime_pm_get_if_in_use(dev_priv))
 		return false;
 
 	mutex_lock(&power_domains->lock);
 
+	if (!dc_off_get_if_enabled(dev_priv, domain))
+		goto out;
+
 	if (__intel_display_power_is_enabled(dev_priv, domain)) {
 		__intel_display_power_get_domain(dev_priv, domain);
 		is_enabled = true;
-	} else {
-		is_enabled = false;
 	}
 
+	dc_off_put(dev_priv, domain);
+
+out:
 	mutex_unlock(&power_domains->lock);
 
 	if (!is_enabled)
@@ -1549,10 +1673,9 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
 
 	mutex_lock(&power_domains->lock);
 
-	WARN(!power_domains->domain_use_count[domain],
-	     "Use count on domain %s is already zero\n",
+	WARN(atomic_dec_return(&power_domains->domain_use_count[domain]) < 0,
+	     "Use count on domain %s was already zero\n",
 	     intel_display_power_domain_str(domain));
-	power_domains->domain_use_count[domain]--;
 
 	for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain))
 		intel_power_well_put(dev_priv, power_well);
@@ -1720,6 +1843,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
 	BIT_ULL(POWER_DOMAIN_GT_IRQ) |			\
 	BIT_ULL(POWER_DOMAIN_MODESET) |			\
 	BIT_ULL(POWER_DOMAIN_AUX_A) |			\
+	BIT_ULL(POWER_DOMAIN_VBLANK) |			\
 	BIT_ULL(POWER_DOMAIN_INIT))
 
 #define BXT_DISPLAY_POWERWELL_2_POWER_DOMAINS (		\
@@ -1743,6 +1867,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
 	BIT_ULL(POWER_DOMAIN_MODESET) |			\
 	BIT_ULL(POWER_DOMAIN_AUX_A) |			\
 	BIT_ULL(POWER_DOMAIN_GMBUS) |			\
+	BIT_ULL(POWER_DOMAIN_VBLANK) |			\
 	BIT_ULL(POWER_DOMAIN_INIT))
 #define BXT_DPIO_CMN_A_POWER_DOMAINS (			\
 	BIT_ULL(POWER_DOMAIN_PORT_DDI_A_LANES) |		\
@@ -1803,6 +1928,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
 	BIT_ULL(POWER_DOMAIN_MODESET) |			\
 	BIT_ULL(POWER_DOMAIN_AUX_A) |			\
 	BIT_ULL(POWER_DOMAIN_GMBUS) |			\
+	BIT_ULL(POWER_DOMAIN_VBLANK) |			\
 	BIT_ULL(POWER_DOMAIN_INIT))
 
 #define CNL_DISPLAY_POWERWELL_2_POWER_DOMAINS (		\
@@ -1850,6 +1976,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
 	CNL_DISPLAY_POWERWELL_2_POWER_DOMAINS |		\
 	BIT_ULL(POWER_DOMAIN_MODESET) |			\
 	BIT_ULL(POWER_DOMAIN_AUX_A) |			\
+	BIT_ULL(POWER_DOMAIN_VBLANK) |			\
 	BIT_ULL(POWER_DOMAIN_INIT))
 
 static const struct i915_power_well_ops i9xx_always_on_power_well_ops = {
@@ -2083,9 +2210,12 @@ bool intel_display_power_well_is_enabled(struct drm_i915_private *dev_priv,
 {
 	struct i915_power_well *power_well;
 	bool ret;
+	unsigned long flags = 0;
 
 	power_well = lookup_power_well(dev_priv, power_well_id);
+	power_well_lock(power_well, flags);
 	ret = power_well->ops->is_enabled(dev_priv, power_well);
+	power_well_unlock(power_well, flags);
 
 	return ret;
 }
@@ -2120,6 +2250,8 @@ static struct i915_power_well skl_power_wells[] = {
 		.domains = SKL_DISPLAY_DC_OFF_POWER_DOMAINS,
 		.ops = &gen9_dc_off_power_well_ops,
 		.id = SKL_DISP_PW_DC_OFF,
+		.supports_atomic_ctx = true,
+		.dc_off_disabled = false,
 	},
 	{
 		.name = "power well 2",
@@ -2180,6 +2312,8 @@ static struct i915_power_well bxt_power_wells[] = {
 		.domains = BXT_DISPLAY_DC_OFF_POWER_DOMAINS,
 		.ops = &gen9_dc_off_power_well_ops,
 		.id = SKL_DISP_PW_DC_OFF,
+		.supports_atomic_ctx = true,
+		.dc_off_disabled = false,
 	},
 	{
 		.name = "power well 2",
@@ -2235,6 +2369,8 @@ static struct i915_power_well glk_power_wells[] = {
 		.domains = GLK_DISPLAY_DC_OFF_POWER_DOMAINS,
 		.ops = &gen9_dc_off_power_well_ops,
 		.id = SKL_DISP_PW_DC_OFF,
+		.supports_atomic_ctx = true,
+		.dc_off_disabled = false,
 	},
 	{
 		.name = "power well 2",
@@ -2359,6 +2495,8 @@ static struct i915_power_well cnl_power_wells[] = {
 		.domains = CNL_DISPLAY_DC_OFF_POWER_DOMAINS,
 		.ops = &gen9_dc_off_power_well_ops,
 		.id = SKL_DISP_PW_DC_OFF,
+		.supports_atomic_ctx = true,
+		.dc_off_disabled = false,
 	},
 	{
 		.name = "power well 2",
@@ -2487,6 +2625,7 @@ static void assert_power_well_ids_unique(struct drm_i915_private *dev_priv)
 int intel_power_domains_init(struct drm_i915_private *dev_priv)
 {
 	struct i915_power_domains *power_domains = &dev_priv->power_domains;
+	struct i915_power_well *power_well;
 
 	i915_modparams.disable_power_well =
 		sanitize_disable_power_well_option(dev_priv,
@@ -2524,6 +2663,10 @@ int intel_power_domains_init(struct drm_i915_private *dev_priv)
 		set_power_wells(power_domains, i9xx_always_on_power_well);
 	}
 
+	for_each_power_well(dev_priv, power_well)
+		if (power_well->supports_atomic_ctx)
+			spin_lock_init(&power_well->lock);
+
 	assert_power_well_ids_unique(dev_priv);
 
 	return 0;
@@ -2571,9 +2714,13 @@ static void intel_power_domains_sync_hw(struct drm_i915_private *dev_priv)
 
 	mutex_lock(&power_domains->lock);
 	for_each_power_well(dev_priv, power_well) {
+		unsigned long flags = 0;
+
+		power_well_lock(power_well, flags);
 		power_well->ops->sync_hw(dev_priv, power_well);
 		power_well->hw_enabled = power_well->ops->is_enabled(dev_priv,
 								     power_well);
+		power_well_unlock(power_well, flags);
 	}
 	mutex_unlock(&power_domains->lock);
 }
@@ -3046,21 +3193,23 @@ void intel_power_domains_suspend(struct drm_i915_private *dev_priv)
 		bxt_display_core_uninit(dev_priv);
 }
 
-static void intel_power_domains_dump_info(struct drm_i915_private *dev_priv)
+static void intel_power_domains_dump_info(struct drm_i915_private *dev_priv,
+					  const int *power_well_use)
 {
 	struct i915_power_domains *power_domains = &dev_priv->power_domains;
 	struct i915_power_well *power_well;
+	int i = 0;
 
 	for_each_power_well(dev_priv, power_well) {
 		enum intel_display_power_domain domain;
 
 		DRM_DEBUG_DRIVER("%-25s %d\n",
-				 power_well->name, power_well->count);
+				 power_well->name, power_well_use[i++]);
 
 		for_each_power_domain(domain, power_well->domains)
 			DRM_DEBUG_DRIVER("  %-23s %d\n",
 					 intel_display_power_domain_str(domain),
-					 power_domains->domain_use_count[domain]);
+					 atomic_read(&power_domains->domain_use_count[domain]));
 	}
 }
 
@@ -3079,6 +3228,7 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv)
 	struct i915_power_domains *power_domains = &dev_priv->power_domains;
 	struct i915_power_well *power_well;
 	bool dump_domain_info;
+	int power_well_use[dev_priv->power_domains.power_well_count];
 
 	mutex_lock(&power_domains->lock);
 
@@ -3087,6 +3237,14 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv)
 		enum intel_display_power_domain domain;
 		int domains_count;
 		bool enabled;
+		int well_count;
+		unsigned long flags = 0;
+
+		power_well_lock(power_well, flags);
+		well_count = power_well->count;
+		enabled = power_well->ops->is_enabled(dev_priv, power_well);
+		power_well_unlock(power_well, flags);
+		power_well_use[power_well - power_domains->power_wells] = well_count;
 
 		/*
 		 * Power wells not belonging to any domain (like the MISC_IO
@@ -3096,20 +3254,19 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv)
 		if (!power_well->domains)
 			continue;
 
-		enabled = power_well->ops->is_enabled(dev_priv, power_well);
-		if ((power_well->count || power_well->always_on) != enabled)
+
+		if ((well_count || power_well->always_on) != enabled)
 			DRM_ERROR("power well %s state mismatch (refcount %d/enabled %d)",
-				  power_well->name, power_well->count, enabled);
+				  power_well->name, well_count, enabled);
 
 		domains_count = 0;
 		for_each_power_domain(domain, power_well->domains)
-			domains_count += power_domains->domain_use_count[domain];
+			domains_count += atomic_read(&power_domains->domain_use_count[domain]);
 
-		if (power_well->count != domains_count) {
+		if (well_count != domains_count) {
 			DRM_ERROR("power well %s refcount/domain refcount mismatch "
 				  "(refcount %d/domains refcount %d)\n",
-				  power_well->name, power_well->count,
-				  domains_count);
+				  power_well->name, well_count, domains_count);
 			dump_domain_info = true;
 		}
 	}
@@ -3118,7 +3275,7 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv)
 		static bool dumped;
 
 		if (!dumped) {
-			intel_power_domains_dump_info(dev_priv);
+			intel_power_domains_dump_info(dev_priv, power_well_use);
 			dumped = true;
 		}
 	}
-- 
2.11.0



* [PATCH 5/5] drm/i915: Use the vblank power domain to disallow or disable DC states.
  2018-01-03 20:39 [PATCH 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Dhinakaran Pandiyan
                   ` (2 preceding siblings ...)
  2018-01-03 20:40 ` [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts Dhinakaran Pandiyan
@ 2018-01-03 20:40 ` Dhinakaran Pandiyan
  2018-01-03 20:53 ` [PATCH 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Chris Wilson
  2018-01-03 21:01 ` ✓ Fi.CI.BAT: success for series starting with [1/5] " Patchwork
  5 siblings, 0 replies; 22+ messages in thread
From: Dhinakaran Pandiyan @ 2018-01-03 20:40 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter, Dhinakaran Pandiyan, Rodrigo Vivi

Disable DC states before enabling vblank interrupts and, conversely,
enable DC states after disabling them. Since the frame counter may have
been reset between disabling and enabling, use drm_crtc_vblank_restore()
to compute the missed vblanks.

Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
---
 drivers/gpu/drm/i915/i915_irq.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 3517c6548e2c..88b4ceac55d0 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -2963,6 +2963,11 @@ static int gen8_enable_vblank(struct drm_device *dev, unsigned int pipe)
 {
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	unsigned long irqflags;
+	bool needs_restore = false;
+
+	intel_display_power_vblank_get(dev_priv, &needs_restore);
+	if (needs_restore)
+		drm_crtc_vblank_restore(dev, pipe);
 
 	spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
 	bdw_enable_pipe_irq(dev_priv, pipe, GEN8_PIPE_VBLANK);
@@ -3015,6 +3020,7 @@ static void gen8_disable_vblank(struct drm_device *dev, unsigned int pipe)
 	spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
 	bdw_disable_pipe_irq(dev_priv, pipe, GEN8_PIPE_VBLANK);
 	spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);
+	intel_display_power_vblank_put(dev_priv);
 }
 
 static void ibx_irq_reset(struct drm_i915_private *dev_priv)
-- 
2.11.0


* Re: [PATCH 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled.
  2018-01-03 20:39 [PATCH 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Dhinakaran Pandiyan
                   ` (3 preceding siblings ...)
  2018-01-03 20:40 ` [PATCH 5/5] drm/i915: Use the vblank power domain to disallow or disable DC states Dhinakaran Pandiyan
@ 2018-01-03 20:53 ` Chris Wilson
  2018-01-03 21:01 ` ✓ Fi.CI.BAT: success for series starting with [1/5] " Patchwork
  5 siblings, 0 replies; 22+ messages in thread
From: Chris Wilson @ 2018-01-03 20:53 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter, Dhinakaran Pandiyan, Rodrigo Vivi

Quoting Dhinakaran Pandiyan (2018-01-03 20:39:57)
> Updating the vblank counts requires register reads and these reads may not
> return meaningful values after the vblank interrupts are disabled as the
> device may go to low power state. An additional change would be to allow
> the driver to save the vblank counts before entering a low power state, but
> that's for the future.
> 
> Also, disable vblanks after reading the HW counter in the case where
> _crtc_vblank_off() is disabling vblanks.
> 
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> ---
>  drivers/gpu/drm/drm_vblank.c | 23 +++++++++--------------
>  1 file changed, 9 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> index 32d9bcf5be7f..7eee82c06ed8 100644
> --- a/drivers/gpu/drm/drm_vblank.c
> +++ b/drivers/gpu/drm/drm_vblank.c
> @@ -347,23 +347,14 @@ void drm_vblank_disable_and_save(struct drm_device *dev, unsigned int pipe)
>         spin_lock_irqsave(&dev->vblank_time_lock, irqflags);
>  
>         /*
> -        * Only disable vblank interrupts if they're enabled. This avoids
> -        * calling the ->disable_vblank() operation in atomic context with the
> -        * hardware potentially runtime suspended.
> -        */
> -       if (vblank->enabled) {
> -               __disable_vblank(dev, pipe);
> -               vblank->enabled = false;
> -       }
> -
> -       /*
> -        * Always update the count and timestamp to maintain the
> +        * Update the count and timestamp to maintain the
>          * appearance that the counter has been ticking all along until
>          * this time. This makes the count account for the entire time
>          * between drm_crtc_vblank_on() and drm_crtc_vblank_off().
>          */
>         drm_update_vblank_count(dev, pipe, false);
> -
> +       __disable_vblank(dev, pipe);
> +       vblank->enabled = false;
>         spin_unlock_irqrestore(&dev->vblank_time_lock, irqflags);
>  }
>  
> @@ -1122,8 +1113,12 @@ void drm_crtc_vblank_off(struct drm_crtc *crtc)
>                       pipe, vblank->enabled, vblank->inmodeset);
>  
>         /* Avoid redundant vblank disables without previous
> -        * drm_crtc_vblank_on(). */
> -       if (drm_core_check_feature(dev, DRIVER_ATOMIC) || !vblank->inmodeset)
> +        * drm_crtc_vblank_on() and only disable them if they're enabled. This
> +        * avoids calling the ->disable_vblank() operation in atomic context
> +        * with the hardware potentially runtime suspended.
> +        */
> +       if ((drm_core_check_feature(dev, DRIVER_ATOMIC) || !vblank->inmodeset) &&
> +           vblank->enabled)

Outside of the spinlock protecting vblank->enabled.
-Chris
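Chris's point: vblank->enabled is protected by dev->vblank_time_lock, so testing it before taking that lock races with a concurrent drm_vblank_disable_and_save(). A minimal userspace model of the check-under-lock pattern being asked for (the struct and names are illustrative stand-ins, not the real DRM types):

```c
#include <pthread.h>
#include <stdbool.h>

/* Illustrative stand-in for struct drm_vblank_crtc. */
struct vblank {
	pthread_mutex_t time_lock;  /* models dev->vblank_time_lock */
	bool enabled;
};

/*
 * Disable vblanks only if they are enabled, deciding *under the lock*
 * so the check cannot race with a concurrent enable/disable path.
 * Returns true if this call actually performed the disable.
 */
static bool vblank_disable_if_enabled(struct vblank *vbl)
{
	bool did_disable = false;

	pthread_mutex_lock(&vbl->time_lock);
	if (vbl->enabled) {
		/* the hardware disable would happen here */
		vbl->enabled = false;
		did_disable = true;
	}
	pthread_mutex_unlock(&vbl->time_lock);
	return did_disable;
}
```

Because the test and the state change sit in one critical section, two racing callers cannot both observe enabled == true.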

* Re: [PATCH 2/5] drm/vblank: Restoring vblank counts after device runtime PM events.
  2018-01-03 20:39 ` [PATCH 2/5] drm/vblank: Restoring vblank counts after device runtime PM events Dhinakaran Pandiyan
@ 2018-01-03 20:55   ` Chris Wilson
  0 siblings, 0 replies; 22+ messages in thread
From: Chris Wilson @ 2018-01-03 20:55 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter, Dhinakaran Pandiyan, Rodrigo Vivi

Quoting Dhinakaran Pandiyan (2018-01-03 20:39:58)
> The HW frame counter can get reset when devices enters low power
> states and this messes up any following vblank count updates. So, compute
> the missed vblank interrupts for that low power state duration using time
> stamps. This is similar to _crtc_vblank_on() except that it doesn't enable
> vblank interrupts because this function is expected to be called from
> the driver _enable_vblank() vfunc.
> 
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> ---
>  drivers/gpu/drm/drm_vblank.c | 33 +++++++++++++++++++++++++++++++++
>  include/drm/drm_vblank.h     |  1 +
>  2 files changed, 34 insertions(+)
> 
> diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> index 7eee82c06ed8..494e2cff6e55 100644
> --- a/drivers/gpu/drm/drm_vblank.c
> +++ b/drivers/gpu/drm/drm_vblank.c
> @@ -1230,6 +1230,39 @@ void drm_crtc_vblank_on(struct drm_crtc *crtc)
>  }
>  EXPORT_SYMBOL(drm_crtc_vblank_on);
>  
> +void drm_crtc_vblank_restore(struct drm_device *dev, unsigned int pipe)

drm_crtc_* should be taking the drm_crtc.
-Chris
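For reference, the convention Chris is pointing at: drm_crtc_*() helpers take the struct drm_crtc itself and derive the device and pipe index internally (via crtc->dev and drm_crtc_index()). A stub sketch of the suggested signature, with made-up minimal types standing in for the real DRM structs:

```c
#include <stddef.h>

/* Stub stand-ins for the real DRM types. */
struct drm_device { int dummy; };
struct drm_crtc {
	struct drm_device *dev;
	unsigned int index;  /* models drm_crtc_index(crtc) */
};

/* Per-pipe restore helper, as in the patch under review. */
static void __vblank_restore(struct drm_device *dev, unsigned int pipe)
{
	(void)dev;
	(void)pipe;
	/* the timestamp-based count restore would happen here */
}

/*
 * Suggested shape: take the crtc and derive dev and pipe internally,
 * mirroring the other drm_crtc_vblank_*() helpers.
 */
static unsigned int drm_crtc_vblank_restore_sketch(struct drm_crtc *crtc)
{
	__vblank_restore(crtc->dev, crtc->index);
	return crtc->index;  /* returned only so the sketch is testable */
}
```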

* Re: [PATCH 3/5] drm/i915: Enable vblanks after verifying power domain states.
  2018-01-03 20:39 ` [PATCH 3/5] drm/i915: Enable vblanks after verifying power domain states Dhinakaran Pandiyan
@ 2018-01-03 20:57   ` Chris Wilson
  2018-01-04 21:25     ` Pandiyan, Dhinakaran
  2018-01-04 11:35   ` Maarten Lankhorst
  1 sibling, 1 reply; 22+ messages in thread
From: Chris Wilson @ 2018-01-03 20:57 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter, Dhinakaran Pandiyan, Rodrigo Vivi

Quoting Dhinakaran Pandiyan (2018-01-03 20:39:59)
> Since we want to allow for a non-blocking power domain for vblanks,
> the power domain use count and power well use count will not be updated
> atomically inside the power domain mutex (see next patch). This affects
> verifying if sum(power_domain_use_count) == power_well_use_count at
> init time. So do not enable vblanks until this verification is done.
> 
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> ---
>  drivers/gpu/drm/i915/intel_display.c | 24 ++++++++++++++++++++----
>  1 file changed, 20 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
> index 0cd355978ab4..7bc874b8dac7 100644
> --- a/drivers/gpu/drm/i915/intel_display.c
> +++ b/drivers/gpu/drm/i915/intel_display.c
> @@ -14739,6 +14739,24 @@ static bool has_pch_trancoder(struct drm_i915_private *dev_priv,
>                 (HAS_PCH_LPT_H(dev_priv) && pch_transcoder == PIPE_A);
>  }
>  
> +static void modeset_enable_vblanks(struct drm_i915_private *dev_priv)
> +{
> +       enum pipe pipe;
> +
> +       for_each_pipe(dev_priv, pipe) {

for_each_intel_crtc()
-Chris
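The suggestion is to iterate the crtcs themselves rather than iterating pipe numbers and looking each crtc up. A toy sketch of that shape (a plain array in place of the real for_each_intel_crtc() list walk, names invented):

```c
#include <stddef.h>

/* Toy stand-in for intel_crtc. */
struct crtc { int active; int vblank_on; };

/*
 * Walking the crtcs directly (as for_each_intel_crtc() does) avoids
 * the per-pipe intel_get_crtc_for_pipe() lookup of the original
 * for_each_pipe() loop.  Returns how many crtcs got vblanks enabled.
 */
static int enable_vblanks(struct crtc *crtcs, size_t n)
{
	int enabled = 0;

	for (size_t i = 0; i < n; i++) {
		/* reset vblank state for every crtc ... */
		crtcs[i].vblank_on = 0;
		/* ... and turn vblanks back on only for active ones */
		if (crtcs[i].active) {
			crtcs[i].vblank_on = 1;
			enabled++;
		}
	}
	return enabled;
}
```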

* ✓ Fi.CI.BAT: success for series starting with [1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled.
  2018-01-03 20:39 [PATCH 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Dhinakaran Pandiyan
                   ` (4 preceding siblings ...)
  2018-01-03 20:53 ` [PATCH 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Chris Wilson
@ 2018-01-03 21:01 ` Patchwork
  5 siblings, 0 replies; 22+ messages in thread
From: Patchwork @ 2018-01-03 21:01 UTC (permalink / raw)
  To: Dhinakaran Pandiyan; +Cc: intel-gfx

== Series Details ==

Series: series starting with [1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled.
URL   : https://patchwork.freedesktop.org/series/35959/
State : success

== Summary ==

Series 35959v1 series starting with [1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled.
https://patchwork.freedesktop.org/api/1.0/series/35959/revisions/1/mbox/

Test gem_exec_reloc:
        Subgroup basic-write-cpu:
                incomplete -> PASS       (fi-byt-j1900)
Test kms_psr_sink_crc:
        Subgroup psr_basic:
                pass       -> DMESG-WARN (fi-skl-6700hq) fdo#101144

fdo#101144 https://bugs.freedesktop.org/show_bug.cgi?id=101144

fi-bdw-5557u     total:288  pass:267  dwarn:0   dfail:0   fail:0   skip:21  time:433s
fi-bdw-gvtdvm    total:288  pass:264  dwarn:0   dfail:0   fail:0   skip:24  time:439s
fi-blb-e6850     total:288  pass:223  dwarn:1   dfail:0   fail:0   skip:64  time:380s
fi-bsw-n3050     total:288  pass:242  dwarn:0   dfail:0   fail:0   skip:46  time:491s
fi-bwr-2160      total:288  pass:183  dwarn:0   dfail:0   fail:0   skip:105 time:279s
fi-bxt-dsi       total:288  pass:258  dwarn:0   dfail:0   fail:0   skip:30  time:491s
fi-bxt-j4205     total:288  pass:259  dwarn:0   dfail:0   fail:0   skip:29  time:493s
fi-byt-j1900     total:288  pass:253  dwarn:0   dfail:0   fail:0   skip:35  time:477s
fi-byt-n2820     total:288  pass:249  dwarn:0   dfail:0   fail:0   skip:39  time:468s
fi-elk-e7500     total:224  pass:168  dwarn:9   dfail:1   fail:0   skip:45 
fi-glk-1         total:288  pass:260  dwarn:0   dfail:0   fail:0   skip:28  time:521s
fi-hsw-4770      total:288  pass:261  dwarn:0   dfail:0   fail:0   skip:27  time:410s
fi-hsw-4770r     total:288  pass:261  dwarn:0   dfail:0   fail:0   skip:27  time:413s
fi-ilk-650       total:288  pass:228  dwarn:0   dfail:0   fail:0   skip:60  time:423s
fi-ivb-3520m     total:288  pass:259  dwarn:0   dfail:0   fail:0   skip:29  time:474s
fi-ivb-3770      total:288  pass:255  dwarn:0   dfail:0   fail:0   skip:33  time:430s
fi-kbl-7500u     total:288  pass:263  dwarn:1   dfail:0   fail:0   skip:24  time:478s
fi-kbl-7560u     total:288  pass:268  dwarn:1   dfail:0   fail:0   skip:19  time:507s
fi-kbl-7567u     total:288  pass:268  dwarn:0   dfail:0   fail:0   skip:20  time:464s
fi-kbl-r         total:288  pass:260  dwarn:1   dfail:0   fail:0   skip:27  time:515s
fi-pnv-d510      total:288  pass:222  dwarn:1   dfail:0   fail:0   skip:65  time:586s
fi-skl-6260u     total:288  pass:268  dwarn:0   dfail:0   fail:0   skip:20  time:444s
fi-skl-6600u     total:288  pass:260  dwarn:1   dfail:0   fail:0   skip:27  time:521s
fi-skl-6700hq    total:288  pass:261  dwarn:1   dfail:0   fail:0   skip:26  time:539s
fi-skl-6700k2    total:288  pass:264  dwarn:0   dfail:0   fail:0   skip:24  time:505s
fi-skl-6770hq    total:288  pass:268  dwarn:0   dfail:0   fail:0   skip:20  time:486s
fi-skl-gvtdvm    total:288  pass:265  dwarn:0   dfail:0   fail:0   skip:23  time:443s
fi-snb-2520m     total:288  pass:248  dwarn:0   dfail:0   fail:0   skip:40  time:542s
fi-snb-2600      total:288  pass:248  dwarn:0   dfail:0   fail:0   skip:40  time:414s
Blacklisted hosts:
fi-cfl-s2        total:288  pass:262  dwarn:0   dfail:0   fail:0   skip:26  time:577s
fi-glk-dsi       total:288  pass:258  dwarn:0   dfail:0   fail:0   skip:30  time:478s

d26e7804b83cf7d3754323d02afa5ff04326d33d drm-tip: 2018y-01m-03d-17h-48m-30s UTC integration manifest
ce54eb2bad9e drm/i915: Use the vblank power domain to disallow or disable DC states.
1028987b2353 drm/i915: Introduce a non-blocking power domain for vblank interrupts
5f157a49993f drm/i915: Enable vblanks after verifying power domain states.
556cdbf0cced drm/vblank: Restoring vblank counts after device runtime PM events.
8668247cf2d1 drm/vblank: Do not update vblank counts if vblanks are already disabled.

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_7601/issues.html

* Re: [PATCH 3/5] drm/i915: Enable vblanks after verifying power domain states.
  2018-01-03 20:39 ` [PATCH 3/5] drm/i915: Enable vblanks after verifying power domain states Dhinakaran Pandiyan
  2018-01-03 20:57   ` Chris Wilson
@ 2018-01-04 11:35   ` Maarten Lankhorst
  2018-01-04 21:51     ` Pandiyan, Dhinakaran
  2018-01-06  9:51     ` Dhinakaran Pandiyan
  1 sibling, 2 replies; 22+ messages in thread
From: Maarten Lankhorst @ 2018-01-04 11:35 UTC (permalink / raw)
  To: Dhinakaran Pandiyan, intel-gfx; +Cc: Daniel Vetter, Rodrigo Vivi

Op 03-01-18 om 21:39 schreef Dhinakaran Pandiyan:
> Since we want to allow for a non-blocking power domain for vblanks,
> the power domain use count and power well use count will not be updated
> atomically inside the power domain mutex (see next patch). This affects
> verifying if sum(power_domain_use_count) == power_well_use_count at
> init time. So do not enable vblanks until this verification is done.
>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> ---
>  drivers/gpu/drm/i915/intel_display.c | 24 ++++++++++++++++++++----
>  1 file changed, 20 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
> index 0cd355978ab4..7bc874b8dac7 100644
> --- a/drivers/gpu/drm/i915/intel_display.c
> +++ b/drivers/gpu/drm/i915/intel_display.c
> @@ -14739,6 +14739,24 @@ static bool has_pch_trancoder(struct drm_i915_private *dev_priv,
>  		(HAS_PCH_LPT_H(dev_priv) && pch_transcoder == PIPE_A);
>  }
>  
> +static void modeset_enable_vblanks(struct drm_i915_private *dev_priv)
> +{
> +	enum pipe pipe;
> +
> +	for_each_pipe(dev_priv, pipe) {
> +		struct intel_crtc *crtc;
> +
> +		crtc = intel_get_crtc_for_pipe(dev_priv, pipe);
> +
> +		/* restore vblank interrupts to correct state */
> +		drm_crtc_vblank_reset(&crtc->base);
> +
> +		if (crtc->active)
> +			drm_crtc_vblank_on(&crtc->base);
> +	}
> +}
> +
> +
>  static void intel_sanitize_crtc(struct intel_crtc *crtc,
>  				struct drm_modeset_acquire_ctx *ctx)
>  {
> @@ -14754,13 +14772,9 @@ static void intel_sanitize_crtc(struct intel_crtc *crtc,
>  			   I915_READ(reg) & ~PIPECONF_FRAME_START_DELAY_MASK);
>  	}
>  
> -	/* restore vblank interrupts to correct state */
> -	drm_crtc_vblank_reset(&crtc->base);
>  	if (crtc->active) {
>  		struct intel_plane *plane;
>  
> -		drm_crtc_vblank_on(&crtc->base);
> -
>  		/* Disable everything but the primary plane */
>  		for_each_intel_plane_on_crtc(dev, crtc, plane) {
>  			const struct intel_plane_state *plane_state =
> @@ -15147,6 +15161,8 @@ intel_modeset_setup_hw_state(struct drm_device *dev,
>  
>  	intel_power_domains_verify_state(dev_priv);
>  
> +	modeset_enable_vblanks(dev_priv);
> +
>  	intel_fbc_init_pipe_state(dev_priv);
>  }
>  

intel_pre_disable_primary_noatomic calls wait_for_vblank, which will fail with this patch.
Wouldn't it be better to make intel_power_domains_verify_state work correctly with the vblank irq?

~Maarten


* Re: [PATCH 3/5] drm/i915: Enable vblanks after verifying power domain states.
  2018-01-03 20:57   ` Chris Wilson
@ 2018-01-04 21:25     ` Pandiyan, Dhinakaran
  0 siblings, 0 replies; 22+ messages in thread
From: Pandiyan, Dhinakaran @ 2018-01-04 21:25 UTC (permalink / raw)
  To: chris@chris-wilson.co.uk
  Cc: daniel.vetter@ffwll.ch, intel-gfx@lists.freedesktop.org,
	Vivi, Rodrigo



On Wed, 2018-01-03 at 20:57 +0000, Chris Wilson wrote:
> Quoting Dhinakaran Pandiyan (2018-01-03 20:39:59)
> > Since we want to allow for a non-blocking power domain for vblanks,
> > the power domain use count and power well use count will not be updated
> > atomically inside the power domain mutex (see next patch). This affects
> > verifying if sum(power_domain_use_count) == power_well_use_count at
> > init time. So do not enable vblanks until this verification is done.
> > 
> > Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> > Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> > Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> > ---
> >  drivers/gpu/drm/i915/intel_display.c | 24 ++++++++++++++++++++----
> >  1 file changed, 20 insertions(+), 4 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
> > index 0cd355978ab4..7bc874b8dac7 100644
> > --- a/drivers/gpu/drm/i915/intel_display.c
> > +++ b/drivers/gpu/drm/i915/intel_display.c
> > @@ -14739,6 +14739,24 @@ static bool has_pch_trancoder(struct drm_i915_private *dev_priv,
> >                 (HAS_PCH_LPT_H(dev_priv) && pch_transcoder == PIPE_A);
> >  }
> >  
> > +static void modeset_enable_vblanks(struct drm_i915_private *dev_priv)
> > +{
> > +       enum pipe pipe;
> > +
> > +       for_each_pipe(dev_priv, pipe) {
> 
> for_each_intel_crtc()
> -Chris



Thanks for your comments, I'll fix them up in the next version if the
overall approach (disabling DC off) is Acked.
-DK




* Re: [PATCH 3/5] drm/i915: Enable vblanks after verifying power domain states.
  2018-01-04 11:35   ` Maarten Lankhorst
@ 2018-01-04 21:51     ` Pandiyan, Dhinakaran
  2018-01-05 11:23       ` Maarten Lankhorst
  2018-01-06  9:51     ` Dhinakaran Pandiyan
  1 sibling, 1 reply; 22+ messages in thread
From: Pandiyan, Dhinakaran @ 2018-01-04 21:51 UTC (permalink / raw)
  To: maarten.lankhorst@linux.intel.com
  Cc: daniel.vetter@ffwll.ch, intel-gfx@lists.freedesktop.org,
	Vivi, Rodrigo


On Thu, 2018-01-04 at 12:35 +0100, Maarten Lankhorst wrote:
> Wouldn't it be better to make intel_power_domains_verify_state work
> correctly with the vblank irq?

I tried to :) Since I changed the domain_use_count to atomic_t and moved
it outside of the locks, verify_state became racy. Let me take another
look.

-DK

* Re: [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts
  2018-01-03 20:40 ` [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts Dhinakaran Pandiyan
@ 2018-01-05  2:08   ` Rodrigo Vivi
  2018-01-08 19:59     ` Pandiyan, Dhinakaran
  0 siblings, 1 reply; 22+ messages in thread
From: Rodrigo Vivi @ 2018-01-05  2:08 UTC (permalink / raw)
  To: Dhinakaran Pandiyan; +Cc: Daniel Vetter, intel-gfx

On Wed, Jan 03, 2018 at 08:40:00PM +0000, Dhinakaran Pandiyan wrote:
> When DC states are enabled and PSR is active, the hardware enters DC5/DC6
> states resulting in the frame counter resetting. The frame counter reset
> messes up the vblank counting logic as the diff between the new frame
> counter value and the previous is negative, and this negative diff gets
> applied as an unsigned value to the vblank count. We cannot reject negative
> diffs altogether because they can arise from legitimate frame counter
> overflows when there is a long period with vblank disabled. So, this
> approach allows for the driver to notice a DC state toggle between a vblank
> disable and enable and fill in the missed vblanks.
> 
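The "negative diff applied as an unsigned value" in the paragraph above is ordinary modulo-2^32 arithmetic and is easy to demonstrate outside the kernel; the sample counter values below are made up:

```c
#include <stdint.h>

/*
 * The vblank code computes something like diff = cur - last in
 * unsigned arithmetic.  A legitimate counter overflow (cur wrapped
 * past 2^32) still yields a small, correct diff, but a counter
 * *reset* (e.g. after DC5/DC6 entry) yields a negative difference
 * that wraps to an enormous unsigned diff instead.
 */
static uint32_t vblank_diff(uint32_t cur, uint32_t last)
{
	return cur - last;  /* well-defined modulo 2^32 */
}
```

This is why a reset cannot be distinguished from an overflow by the diff alone, and why the series falls back to timestamps to fill in the missed vblanks.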
> But, in order to disable DC states when vblank interrupts are required,
> the DC_OFF power well has to be disabled in an atomic context. So,
> introduce a new VBLANK power domain that can be acquired and released in
> atomic contexts with these changes -
> 1)  _vblank_get() and _vblank_put() methods skip the power_domain mutex
> and use a spin lock for the DC power well.
> 2) power_domains->domain_use_count is converted to an atomic_t array so
> that it can be updated outside of the power domain mutex.
> 
> v3: Squash domain_use_count atomic_t conversion (Maarten)
> v2: Fix deadlock by switching irqsave spinlock.
>     Implement atomic version of get_if_enabled.
>     Modify power_domain_verify_state to check power well use count and
> enabled status atomically.
>     Rewrite of intel_power_well_{get,put}
> 
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> ---
>  drivers/gpu/drm/i915/i915_debugfs.c     |   2 +-
>  drivers/gpu/drm/i915/i915_drv.h         |  19 +++-
>  drivers/gpu/drm/i915/intel_display.h    |   1 +
>  drivers/gpu/drm/i915/intel_drv.h        |   3 +
>  drivers/gpu/drm/i915/intel_runtime_pm.c | 195 ++++++++++++++++++++++++++++----
>  5 files changed, 199 insertions(+), 21 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
> index d81cb2513069..5a7ce734de02 100644
> --- a/drivers/gpu/drm/i915/i915_debugfs.c
> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
> @@ -2746,7 +2746,7 @@ static int i915_power_domain_info(struct seq_file *m, void *unused)
>  		for_each_power_domain(power_domain, power_well->domains)
>  			seq_printf(m, "  %-23s %d\n",
>  				 intel_display_power_domain_str(power_domain),
> -				 power_domains->domain_use_count[power_domain]);
> +				 atomic_read(&power_domains->domain_use_count[power_domain]));
>  	}
>  
>  	mutex_unlock(&power_domains->lock);
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index caebd5825279..61a635f03af7 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -1032,6 +1032,23 @@ struct i915_power_well {
>  			bool has_fuses:1;
>  		} hsw;
>  	};
> +
> +	/* Lock to serialize access to count, hw_enabled and ops, used for
> +	 * power wells that have supports_atomix_ctx set to True.
> +	 */
> +	spinlock_t lock;
> +
> +	/* Indicates that the get/put methods for this power well can be called
> +	 * in atomic contexts, requires .ops to not sleep. This is valid
> +	 * only for the DC_OFF power well currently.
> +	 */
> +	bool supports_atomic_ctx;
> +
> +	/* DC_OFF power well was disabled since the last time vblanks were
> +	 * disabled.
> +	 */
> +	bool dc_off_disabled;
> +
>  	const struct i915_power_well_ops *ops;
>  };
>  
> @@ -1045,7 +1062,7 @@ struct i915_power_domains {
>  	int power_well_count;
>  
>  	struct mutex lock;
> -	int domain_use_count[POWER_DOMAIN_NUM];
> +	atomic_t domain_use_count[POWER_DOMAIN_NUM];
>  	struct i915_power_well *power_wells;
>  };
>  
> diff --git a/drivers/gpu/drm/i915/intel_display.h b/drivers/gpu/drm/i915/intel_display.h
> index a0d2b6169361..3e9671ff6f79 100644
> --- a/drivers/gpu/drm/i915/intel_display.h
> +++ b/drivers/gpu/drm/i915/intel_display.h
> @@ -172,6 +172,7 @@ enum intel_display_power_domain {
>  	POWER_DOMAIN_AUX_C,
>  	POWER_DOMAIN_AUX_D,
>  	POWER_DOMAIN_GMBUS,
> +	POWER_DOMAIN_VBLANK,

Maybe we could start a new category of atomic domains and, on iterations where
it makes sense, go over both. Or we could make the domains an array with an
is_atomic bool, in case we can transform the atomic get/put to be really
generic, or turn it into .enable/.disable function pointers.

>  	POWER_DOMAIN_MODESET,
>  	POWER_DOMAIN_GT_IRQ,
>  	POWER_DOMAIN_INIT,
> diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
> index 30f791f89d64..164e62cb047b 100644
> --- a/drivers/gpu/drm/i915/intel_drv.h
> +++ b/drivers/gpu/drm/i915/intel_drv.h
> @@ -1797,6 +1797,9 @@ bool intel_display_power_get_if_enabled(struct drm_i915_private *dev_priv,
>  					enum intel_display_power_domain domain);
>  void intel_display_power_put(struct drm_i915_private *dev_priv,
>  			     enum intel_display_power_domain domain);
> +void intel_display_power_vblank_get(struct drm_i915_private *dev_priv,
> +				    bool *needs_restore);

Can we dump the needs_restore and always restore the counter?
If so, we could use the regular intel_power_domain_{get,put} functions and
build the generic atomic handling inside them.

> +void intel_display_power_vblank_put(struct drm_i915_private *dev_priv);
>  
>  static inline void
>  assert_rpm_device_not_suspended(struct drm_i915_private *dev_priv)
> diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c
> index d758da6156a8..93b324dcc55d 100644
> --- a/drivers/gpu/drm/i915/intel_runtime_pm.c
> +++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
> @@ -56,6 +56,19 @@ static struct i915_power_well *
>  lookup_power_well(struct drm_i915_private *dev_priv,
>  		  enum i915_power_well_id power_well_id);
>  
> +/* Optimize for the case when this is called from atomic contexts,
> + * although the case is unlikely.
> + */
> +#define power_well_lock(power_well, flags) do {			\
> +	if (likely(power_well->supports_atomic_ctx))		\
> +		spin_lock_irqsave(&power_well->lock, flags);	\
> +	} while (0)
> +
> +#define power_well_unlock(power_well, flags) do {			\
> +	if (likely(power_well->supports_atomic_ctx))			\
> +		spin_unlock_irqrestore(&power_well->lock, flags);	\
> +	} while (0)
> +
>  const char *
>  intel_display_power_domain_str(enum intel_display_power_domain domain)
>  {
> @@ -126,6 +139,8 @@ intel_display_power_domain_str(enum intel_display_power_domain domain)
>  		return "AUX_D";
>  	case POWER_DOMAIN_GMBUS:
>  		return "GMBUS";
> +	case POWER_DOMAIN_VBLANK:
> +		return "VBLANK";
>  	case POWER_DOMAIN_INIT:
>  		return "INIT";
>  	case POWER_DOMAIN_MODESET:
> @@ -141,6 +156,9 @@ intel_display_power_domain_str(enum intel_display_power_domain domain)
>  static void intel_power_well_enable(struct drm_i915_private *dev_priv,
>  				    struct i915_power_well *power_well)
>  {
> +	if (power_well->supports_atomic_ctx)
> +		assert_spin_locked(&power_well->lock);
> +
>  	DRM_DEBUG_KMS("enabling %s\n", power_well->name);
>  	power_well->ops->enable(dev_priv, power_well);
>  	power_well->hw_enabled = true;
> @@ -149,26 +167,43 @@ static void intel_power_well_enable(struct drm_i915_private *dev_priv,
>  static void intel_power_well_disable(struct drm_i915_private *dev_priv,
>  				     struct i915_power_well *power_well)
>  {
> +	if (power_well->supports_atomic_ctx)
> +		assert_spin_locked(&power_well->lock);
> +
>  	DRM_DEBUG_KMS("disabling %s\n", power_well->name);
>  	power_well->hw_enabled = false;
>  	power_well->ops->disable(dev_priv, power_well);
>  }
>  
> -static void intel_power_well_get(struct drm_i915_private *dev_priv,
> +static void __intel_power_well_get(struct drm_i915_private *dev_priv,
>  				 struct i915_power_well *power_well)
>  {
>  	if (!power_well->count++)
>  		intel_power_well_enable(dev_priv, power_well);
>  }
>  
> +static void intel_power_well_get(struct drm_i915_private *dev_priv,
> +				 struct i915_power_well *power_well)
> +{
> +	unsigned long flags = 0;
> +
> +	power_well_lock(power_well, flags);
> +	__intel_power_well_get(dev_priv, power_well);
> +	power_well_unlock(power_well, flags);
> +}
> +
>  static void intel_power_well_put(struct drm_i915_private *dev_priv,
>  				 struct i915_power_well *power_well)
>  {
> +	unsigned long flags = 0;
> +
> +	power_well_lock(power_well, flags);
>  	WARN(!power_well->count, "Use count on power well %s is already zero",
>  	     power_well->name);
>  
>  	if (!--power_well->count)
>  		intel_power_well_disable(dev_priv, power_well);
> +	power_well_unlock(power_well, flags);
>  }
>  
>  /**
> @@ -736,6 +771,7 @@ static void gen9_dc_off_power_well_disable(struct drm_i915_private *dev_priv,
>  		skl_enable_dc6(dev_priv);
>  	else if (dev_priv->csr.allowed_dc_mask & DC_STATE_EN_UPTO_DC5)
>  		gen9_enable_dc5(dev_priv);
> +	power_well->dc_off_disabled = true;
>  }
>  
>  static void i9xx_power_well_sync_hw_noop(struct drm_i915_private *dev_priv,
> @@ -1453,6 +1489,58 @@ static void chv_pipe_power_well_disable(struct drm_i915_private *dev_priv,
>  	chv_set_pipe_power_well(dev_priv, power_well, false);
>  }
>  
> +void intel_display_power_vblank_get(struct drm_i915_private *dev_priv,
> +				    bool *needs_restore)
> +{
> +	struct i915_power_domains *power_domains  = &dev_priv->power_domains;
> +	struct i915_power_well *power_well;
> +	enum intel_display_power_domain domain = POWER_DOMAIN_VBLANK;
> +
> +	*needs_restore = false;
> +
> +	if (!HAS_CSR(dev_priv))
> +		return;
> +
> +	if (!(HAS_PSR(dev_priv) && dev_priv->psr.sink_support))
> +		return;
> +
> +	intel_runtime_pm_get_noresume(dev_priv);
> +
> +	for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain)) {
> +		unsigned long flags = 0;
> +
> +		power_well_lock(power_well, flags);
> +		__intel_power_well_get(dev_priv, power_well);
> +		*needs_restore = power_well->dc_off_disabled;
> +		power_well->dc_off_disabled = false;

It also seems that by always restoring we don't need to add this specific dc_off_disabled
variable inside the generic pw struct.

> +		power_well_unlock(power_well, flags);
> +	}
> +
> +	atomic_inc(&power_domains->domain_use_count[domain]);
> +}
> +
> +void intel_display_power_vblank_put(struct drm_i915_private *dev_priv)
> +{
> +	struct i915_power_domains *power_domains = &dev_priv->power_domains;
> +	struct i915_power_well *power_well;
> +	enum intel_display_power_domain domain = POWER_DOMAIN_VBLANK;
> +
> +	if (!HAS_CSR(dev_priv))
> +		return;
> +
> +	if (!(HAS_PSR(dev_priv) && dev_priv->psr.sink_support))
> +		return;

Can we remove these checks and do this for any vblank domain?
I believe this kind of association should live only in the domain<->well
mapping, rather than checking for the feature here.

> +
> +	WARN(atomic_dec_return(&power_domains->domain_use_count[domain]) < 0,
> +	     "Use count on domain %s was already zero\n",
> +	     intel_display_power_domain_str(domain));

Is the atomic safe enough here, or do we need a spinlock?

> +
> +	for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain))
> +		intel_power_well_put(dev_priv, power_well);
> +
> +	intel_runtime_pm_put(dev_priv);
> +}
> +
>  static void
>  __intel_display_power_get_domain(struct drm_i915_private *dev_priv,
>  				 enum intel_display_power_domain domain)
> @@ -1463,7 +1551,7 @@ __intel_display_power_get_domain(struct drm_i915_private *dev_priv,
>  	for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain))
>  		intel_power_well_get(dev_priv, power_well);
>  
> -	power_domains->domain_use_count[domain]++;
> +	atomic_inc(&power_domains->domain_use_count[domain]);
>  }
>  
>  /**
> @@ -1492,6 +1580,38 @@ void intel_display_power_get(struct drm_i915_private *dev_priv,
>  	mutex_unlock(&power_domains->lock);
>  }
>  
> +static bool dc_off_get_if_enabled(struct drm_i915_private *dev_priv,
> +				  enum intel_display_power_domain domain)
> +{
> +	struct i915_power_well *power_well;
> +	bool is_enabled;
> +	unsigned long flags = 0;
> +
> +	power_well = lookup_power_well(dev_priv, SKL_DISP_PW_DC_OFF);
> +	if (!power_well || !(power_well->domains & domain))
> +		return true;
> +
> +	power_well_lock(power_well, flags);
> +	is_enabled = power_well->hw_enabled;
> +	if (is_enabled)
> +		__intel_power_well_get(dev_priv, power_well);
> +	power_well_unlock(power_well, flags);
> +
> +	return is_enabled;
> +}
> +
> +static void dc_off_put(struct drm_i915_private *dev_priv,
> +		       enum intel_display_power_domain domain)
> +{
> +	struct i915_power_well *power_well;
> +
> +	power_well = lookup_power_well(dev_priv, SKL_DISP_PW_DC_OFF);
> +	if (!power_well || !(power_well->domains & domain))
> +		return;
> +
> +	intel_power_well_put(dev_priv, power_well);
> +}
> +
>  /**
>   * intel_display_power_get_if_enabled - grab a reference for an enabled display power domain
>   * @dev_priv: i915 device instance
> @@ -1508,20 +1628,24 @@ bool intel_display_power_get_if_enabled(struct drm_i915_private *dev_priv,
>  					enum intel_display_power_domain domain)
>  {
>  	struct i915_power_domains *power_domains = &dev_priv->power_domains;
> -	bool is_enabled;
> +	bool is_enabled = false;
>  
>  	if (!intel_runtime_pm_get_if_in_use(dev_priv))
>  		return false;
>  
>  	mutex_lock(&power_domains->lock);
>  
> +	if (!dc_off_get_if_enabled(dev_priv, domain))
> +		goto out;
> +
>  	if (__intel_display_power_is_enabled(dev_priv, domain)) {
>  		__intel_display_power_get_domain(dev_priv, domain);
>  		is_enabled = true;
> -	} else {
> -		is_enabled = false;
>  	}
>  
> +	dc_off_put(dev_priv, domain);
> +
> +out:
>  	mutex_unlock(&power_domains->lock);
>  
>  	if (!is_enabled)
> @@ -1549,10 +1673,9 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
>  
>  	mutex_lock(&power_domains->lock);
>  
> -	WARN(!power_domains->domain_use_count[domain],
> -	     "Use count on domain %s is already zero\n",
> +	WARN(atomic_dec_return(&power_domains->domain_use_count[domain]) < 0,
> +	     "Use count on domain %s was already zero\n",
>  	     intel_display_power_domain_str(domain));
> -	power_domains->domain_use_count[domain]--;
>  
>  	for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain))
>  		intel_power_well_put(dev_priv, power_well);
> @@ -1720,6 +1843,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
>  	BIT_ULL(POWER_DOMAIN_GT_IRQ) |			\
>  	BIT_ULL(POWER_DOMAIN_MODESET) |			\
>  	BIT_ULL(POWER_DOMAIN_AUX_A) |			\
> +	BIT_ULL(POWER_DOMAIN_VBLANK) |			\
>  	BIT_ULL(POWER_DOMAIN_INIT))
>  
>  #define BXT_DISPLAY_POWERWELL_2_POWER_DOMAINS (		\
> @@ -1743,6 +1867,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
>  	BIT_ULL(POWER_DOMAIN_MODESET) |			\
>  	BIT_ULL(POWER_DOMAIN_AUX_A) |			\
>  	BIT_ULL(POWER_DOMAIN_GMBUS) |			\
> +	BIT_ULL(POWER_DOMAIN_VBLANK) |			\
>  	BIT_ULL(POWER_DOMAIN_INIT))
>  #define BXT_DPIO_CMN_A_POWER_DOMAINS (			\
>  	BIT_ULL(POWER_DOMAIN_PORT_DDI_A_LANES) |		\
> @@ -1803,6 +1928,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
>  	BIT_ULL(POWER_DOMAIN_MODESET) |			\
>  	BIT_ULL(POWER_DOMAIN_AUX_A) |			\
>  	BIT_ULL(POWER_DOMAIN_GMBUS) |			\
> +	BIT_ULL(POWER_DOMAIN_VBLANK) |			\
>  	BIT_ULL(POWER_DOMAIN_INIT))
>  
>  #define CNL_DISPLAY_POWERWELL_2_POWER_DOMAINS (		\
> @@ -1850,6 +1976,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
>  	CNL_DISPLAY_POWERWELL_2_POWER_DOMAINS |		\
>  	BIT_ULL(POWER_DOMAIN_MODESET) |			\
>  	BIT_ULL(POWER_DOMAIN_AUX_A) |			\
> +	BIT_ULL(POWER_DOMAIN_VBLANK) |			\
>  	BIT_ULL(POWER_DOMAIN_INIT))
>  
>  static const struct i915_power_well_ops i9xx_always_on_power_well_ops = {
> @@ -2083,9 +2210,12 @@ bool intel_display_power_well_is_enabled(struct drm_i915_private *dev_priv,
>  {
>  	struct i915_power_well *power_well;
>  	bool ret;
> +	unsigned long flags = 0;
>  
>  	power_well = lookup_power_well(dev_priv, power_well_id);
> +	power_well_lock(power_well, flags);
>  	ret = power_well->ops->is_enabled(dev_priv, power_well);
> +	power_well_unlock(power_well, flags);
>  
>  	return ret;
>  }
> @@ -2120,6 +2250,8 @@ static struct i915_power_well skl_power_wells[] = {
>  		.domains = SKL_DISPLAY_DC_OFF_POWER_DOMAINS,
>  		.ops = &gen9_dc_off_power_well_ops,
>  		.id = SKL_DISP_PW_DC_OFF,
> +		.supports_atomic_ctx = true,
> +		.dc_off_disabled = false,
>  	},
>  	{
>  		.name = "power well 2",
> @@ -2180,6 +2312,8 @@ static struct i915_power_well bxt_power_wells[] = {
>  		.domains = BXT_DISPLAY_DC_OFF_POWER_DOMAINS,
>  		.ops = &gen9_dc_off_power_well_ops,
>  		.id = SKL_DISP_PW_DC_OFF,
> +		.supports_atomic_ctx = true,
> +		.dc_off_disabled = false,
>  	},
>  	{
>  		.name = "power well 2",
> @@ -2235,6 +2369,8 @@ static struct i915_power_well glk_power_wells[] = {
>  		.domains = GLK_DISPLAY_DC_OFF_POWER_DOMAINS,
>  		.ops = &gen9_dc_off_power_well_ops,
>  		.id = SKL_DISP_PW_DC_OFF,
> +		.supports_atomic_ctx = true,
> +		.dc_off_disabled = false,
>  	},
>  	{
>  		.name = "power well 2",
> @@ -2359,6 +2495,8 @@ static struct i915_power_well cnl_power_wells[] = {
>  		.domains = CNL_DISPLAY_DC_OFF_POWER_DOMAINS,
>  		.ops = &gen9_dc_off_power_well_ops,
>  		.id = SKL_DISP_PW_DC_OFF,
> +		.supports_atomic_ctx = true,
> +		.dc_off_disabled = false,
>  	},
>  	{
>  		.name = "power well 2",
> @@ -2487,6 +2625,7 @@ static void assert_power_well_ids_unique(struct drm_i915_private *dev_priv)
>  int intel_power_domains_init(struct drm_i915_private *dev_priv)
>  {
>  	struct i915_power_domains *power_domains = &dev_priv->power_domains;
> +	struct i915_power_well *power_well;
>  
>  	i915_modparams.disable_power_well =
>  		sanitize_disable_power_well_option(dev_priv,
> @@ -2524,6 +2663,10 @@ int intel_power_domains_init(struct drm_i915_private *dev_priv)
>  		set_power_wells(power_domains, i9xx_always_on_power_well);
>  	}
>  
> +	for_each_power_well(dev_priv, power_well)
> +		if (power_well->supports_atomic_ctx)
> +			spin_lock_init(&power_well->lock);
> +
>  	assert_power_well_ids_unique(dev_priv);
>  
>  	return 0;
> @@ -2571,9 +2714,13 @@ static void intel_power_domains_sync_hw(struct drm_i915_private *dev_priv)
>  
>  	mutex_lock(&power_domains->lock);
>  	for_each_power_well(dev_priv, power_well) {
> +		unsigned long flags = 0;
> +
> +		power_well_lock(power_well, flags);
>  		power_well->ops->sync_hw(dev_priv, power_well);
>  		power_well->hw_enabled = power_well->ops->is_enabled(dev_priv,
>  								     power_well);
> +		power_well_unlock(power_well, flags);
>  	}
>  	mutex_unlock(&power_domains->lock);
>  }
> @@ -3046,21 +3193,23 @@ void intel_power_domains_suspend(struct drm_i915_private *dev_priv)
>  		bxt_display_core_uninit(dev_priv);
>  }
>  
> -static void intel_power_domains_dump_info(struct drm_i915_private *dev_priv)
> +static void intel_power_domains_dump_info(struct drm_i915_private *dev_priv,
> +					  const int *power_well_use)
>  {
>  	struct i915_power_domains *power_domains = &dev_priv->power_domains;
>  	struct i915_power_well *power_well;
> +	int i = 0;
>  
>  	for_each_power_well(dev_priv, power_well) {
>  		enum intel_display_power_domain domain;
>  
>  		DRM_DEBUG_DRIVER("%-25s %d\n",
> -				 power_well->name, power_well->count);
> +				 power_well->name, power_well_use[i++]);
>  
>  		for_each_power_domain(domain, power_well->domains)
>  			DRM_DEBUG_DRIVER("  %-23s %d\n",
>  					 intel_display_power_domain_str(domain),
> -					 power_domains->domain_use_count[domain]);
> +					 atomic_read(&power_domains->domain_use_count[domain]));
>  	}
>  }
>  
> @@ -3079,6 +3228,7 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv)
>  	struct i915_power_domains *power_domains = &dev_priv->power_domains;
>  	struct i915_power_well *power_well;
>  	bool dump_domain_info;
> +	int power_well_use[dev_priv->power_domains.power_well_count];
>  
>  	mutex_lock(&power_domains->lock);
>  
> @@ -3087,6 +3237,14 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv)
>  		enum intel_display_power_domain domain;
>  		int domains_count;
>  		bool enabled;
> +		int well_count, i = 0;
> +		unsigned long flags = 0;
> +
> +		power_well_lock(power_well, flags);
> +		well_count = power_well->count;
> +		enabled = power_well->ops->is_enabled(dev_priv, power_well);
> +		power_well_unlock(power_well, flags);
> +		power_well_use[i++] = well_count;
>  
>  		/*
>  		 * Power wells not belonging to any domain (like the MISC_IO
> @@ -3096,20 +3254,19 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv)
>  		if (!power_well->domains)
>  			continue;
>  
> -		enabled = power_well->ops->is_enabled(dev_priv, power_well);
> -		if ((power_well->count || power_well->always_on) != enabled)
> +
> +		if ((well_count || power_well->always_on) != enabled)
>  			DRM_ERROR("power well %s state mismatch (refcount %d/enabled %d)",
> -				  power_well->name, power_well->count, enabled);
> +				  power_well->name, well_count, enabled);
>  
>  		domains_count = 0;
>  		for_each_power_domain(domain, power_well->domains)
> -			domains_count += power_domains->domain_use_count[domain];
> +			domains_count += atomic_read(&power_domains->domain_use_count[domain]);
>  
> -		if (power_well->count != domains_count) {
> +		if (well_count != domains_count) {
>  			DRM_ERROR("power well %s refcount/domain refcount mismatch "
>  				  "(refcount %d/domains refcount %d)\n",
> -				  power_well->name, power_well->count,
> -				  domains_count);
> +				  power_well->name, well_count, domains_count);
>  			dump_domain_info = true;
>  		}
>  	}
> @@ -3118,7 +3275,7 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv)
>  		static bool dumped;
>  
>  		if (!dumped) {
> -			intel_power_domains_dump_info(dev_priv);
> +			intel_power_domains_dump_info(dev_priv, power_well_use);
>  			dumped = true;
>  		}
>  	}
> -- 
> 2.11.0
> 

I will probably have more comments later, but I'm just doing a brain dump
now since I ended up forgetting to write it yesterday...

The approach here is good in general, and much better than the pre/post
hooks. But I believe we can do this in a more generic way rather than
deviating from the existing power well and domain handling.

Thanks,
Rodrigo.
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 3/5] drm/i915: Enable vblanks after verifying power domain states.
  2018-01-04 21:51     ` Pandiyan, Dhinakaran
@ 2018-01-05 11:23       ` Maarten Lankhorst
  2018-01-05 18:09         ` Rodrigo Vivi
  0 siblings, 1 reply; 22+ messages in thread
From: Maarten Lankhorst @ 2018-01-05 11:23 UTC (permalink / raw)
  To: Pandiyan, Dhinakaran
  Cc: daniel.vetter@ffwll.ch, intel-gfx@lists.freedesktop.org,
	Vivi, Rodrigo

On 04-01-18 at 22:51, Pandiyan, Dhinakaran wrote:
> On Thu, 2018-01-04 at 12:35 +0100, Maarten Lankhorst wrote:
>> Wouldn't it be better to make intel_power_domains_verify_state work
>> correctly with the vblank irq?
> I tried to :) Since I changed the domain_use_count to atomic_t and moved
> it outside of the locks, verify_state became racy. Let me take another
> look.
>
> -DK

It sucks that we end up with 2 ways to handle power domains..

I'm trying to think of a cleaner way, coming up with none. :(

It would have been nice if we could do something like a seqlock for
the refcounts, but that would prevent us from blocking too..

Is it only the DC off power well we care about?

~Maarten


* Re: [PATCH 3/5] drm/i915: Enable vblanks after verifying power domain states.
  2018-01-05 11:23       ` Maarten Lankhorst
@ 2018-01-05 18:09         ` Rodrigo Vivi
  2018-01-05 18:45           ` Pandiyan, Dhinakaran
  0 siblings, 1 reply; 22+ messages in thread
From: Rodrigo Vivi @ 2018-01-05 18:09 UTC (permalink / raw)
  To: Maarten Lankhorst
  Cc: intel-gfx@lists.freedesktop.org, Pandiyan, Dhinakaran,
	daniel.vetter@ffwll.ch

On Fri, Jan 05, 2018 at 11:23:54AM +0000, Maarten Lankhorst wrote:
> On 04-01-18 at 22:51, Pandiyan, Dhinakaran wrote:
> > On Thu, 2018-01-04 at 12:35 +0100, Maarten Lankhorst wrote:
> >> Wouldn't it be better to make intel_power_domains_verify_state work
> >> correctly with the vblank irq?
> > I tried to :) Since I changed the domain_use_count to atomic_t and moved
> > it outside of the locks, verify_state became racy. Let me take another
> > look.
> >
> > -DK
> 
> It sucks that we end up with 2 ways to handle power domains..

I also don't like that, but if we need to go with that I believe
we need to go with a generic approach.

> 
> I'm trying to think of a cleaner way, coming up with none. :(

me neither :(

> 
> It would have been nice if we could do something like a seqlock for
> the refcounts, but that would prevent us from blocking too..

The reader doesn't block and the writer doesn't wait for the reader, so I'm
not sure we could use this.

> 
> Is it only the DC off power well we care about?

It is the only call to any power well that comes from a spin-locked
region, so we can't sleep.

I think we looked at the option of changing the entire power well code to
spinlocks instead of mutexes, but we concluded it wasn't a viable option
either. I just can't remember why right now.

Thanks,
Rodrigo.

> 
> ~Maarten
> 

* Re: [PATCH 3/5] drm/i915: Enable vblanks after verifying power domain states.
  2018-01-05 18:09         ` Rodrigo Vivi
@ 2018-01-05 18:45           ` Pandiyan, Dhinakaran
  0 siblings, 0 replies; 22+ messages in thread
From: Pandiyan, Dhinakaran @ 2018-01-05 18:45 UTC (permalink / raw)
  To: Vivi, Rodrigo; +Cc: daniel.vetter@ffwll.ch, intel-gfx@lists.freedesktop.org




On Fri, 2018-01-05 at 10:09 -0800, Rodrigo Vivi wrote:
> On Fri, Jan 05, 2018 at 11:23:54AM +0000, Maarten Lankhorst wrote:
> > On 04-01-18 at 22:51, Pandiyan, Dhinakaran wrote:
> > > On Thu, 2018-01-04 at 12:35 +0100, Maarten Lankhorst wrote:
> > >> Wouldn't it be better to make intel_power_domains_verify_state work
> > >> correctly with the vblank irq?
> > > I tried to :) Since I changed the domain_use_count to atomic_t and moved
> > > it outside of the locks, verify_state became racy. Let me take another
> > > look.
> > >
> > > -DK
> > 
> > It sucks that we end up with 2 ways to handle power domains..
> 
> I also don't like that, but if we need to go with that I believe
> we need to go with a generic approach.
> 
> > 
> > I'm trying to think of a cleaner way, coming up with none. :(
> 
> me neither :(
> 
> > 
> > It would have been nice if we could do something like a seqlock for
> > the refcounts, but that would prevent us from blocking too..
> 
> The reader doesn't block and the writer doesn't wait for the reader, so I'm
> not sure we could use this.
> 
> > 
> > Is it only the DC off power well we care about?
> 

Yeah, that is sufficient to deal with frame counter resets.

> It is the only call to any power well that comes from a spin-locked
> region, so we can't sleep.
> 
> I think we looked at the option of changing the entire power well code to
> spinlocks instead of mutexes, but we concluded it wasn't a viable option
> either. I just can't remember why right now.

Enable/disable for the other power wells use wait_for_register(), which sleeps.

> 
> Thanks,
> Rodrigo.
> 
> > 
> > ~Maarten
> > 

* Re: [PATCH 3/5] drm/i915: Enable vblanks after verifying power domain states.
  2018-01-04 11:35   ` Maarten Lankhorst
  2018-01-04 21:51     ` Pandiyan, Dhinakaran
@ 2018-01-06  9:51     ` Dhinakaran Pandiyan
  2018-01-08 14:43       ` Maarten Lankhorst
  1 sibling, 1 reply; 22+ messages in thread
From: Dhinakaran Pandiyan @ 2018-01-06  9:51 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter, Dhinakaran Pandiyan, Rodrigo Vivi

On Thursday, January 4, 2018 12:35:48 PM PST Maarten Lankhorst wrote:
> On 03-01-18 at 21:39, Dhinakaran Pandiyan wrote:
> > Since we want to allow for a non-blocking power domain for vblanks,
> > the power domain use count and power well use count will not be updated
> > atomically inside the power domain mutex (see next patch). This affects
> > verifying if sum(power_domain_use_count) == power_well_use_count at
> > init time. So do not enable vblanks until this verification is done.
> > 
> > Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> > Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> > Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> > ---
> > 
> >  drivers/gpu/drm/i915/intel_display.c | 24 ++++++++++++++++++++----
> >  1 file changed, 20 insertions(+), 4 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
> > index 0cd355978ab4..7bc874b8dac7 100644
> > --- a/drivers/gpu/drm/i915/intel_display.c
> > +++ b/drivers/gpu/drm/i915/intel_display.c
> > @@ -14739,6 +14739,24 @@ static bool has_pch_trancoder(struct drm_i915_private *dev_priv,
> >  		(HAS_PCH_LPT_H(dev_priv) && pch_transcoder == PIPE_A);
> >  
> >  }
> > 
> > +static void modeset_enable_vblanks(struct drm_i915_private *dev_priv)
> > +{
> > +	enum pipe pipe;
> > +
> > +	for_each_pipe(dev_priv, pipe) {
> > +		struct intel_crtc *crtc;
> > +
> > +		crtc = intel_get_crtc_for_pipe(dev_priv, pipe);
> > +
> > +		/* restore vblank interrupts to correct state */
> > +		drm_crtc_vblank_reset(&crtc->base);
> > +
> > +		if (crtc->active)
> > +			drm_crtc_vblank_on(&crtc->base);
> > +	}
> > +}
> > +
> > +
> > 
> >  static void intel_sanitize_crtc(struct intel_crtc *crtc,
> >  
> >  				struct drm_modeset_acquire_ctx *ctx)
> >  
> >  {
> > 
> > @@ -14754,13 +14772,9 @@ static void intel_sanitize_crtc(struct intel_crtc *crtc,
> >  			   I915_READ(reg) & ~PIPECONF_FRAME_START_DELAY_MASK);
> >  	
> >  	}
> > 
> > -	/* restore vblank interrupts to correct state */
> > -	drm_crtc_vblank_reset(&crtc->base);
> > 
> >  	if (crtc->active) {
> >  	
> >  		struct intel_plane *plane;
> > 
> > -		drm_crtc_vblank_on(&crtc->base);
> > -
> > 
> >  		/* Disable everything but the primary plane */
> >  		for_each_intel_plane_on_crtc(dev, crtc, plane) {
> >  		
> >  			const struct intel_plane_state *plane_state =
> > 
> > @@ -15147,6 +15161,8 @@ intel_modeset_setup_hw_state(struct drm_device *dev,
> >  	intel_power_domains_verify_state(dev_priv);
> > 
> > +	modeset_enable_vblanks(dev_priv);
> > +
> > 
> >  	intel_fbc_init_pipe_state(dev_priv);
> >  
> >  }
> 
> intel_pre_disable_primary_noatomic calls wait_for_vblank, which will fail
> with this patch. Wouldn't it be better to make
> intel_power_domains_verify_state work correctly with the vblank irq?

How about disabling vblanks just before power_domains_verify_state and 
enabling it back again?

Another option is to ignore the check in power_domains_verify_state  for 
DC_OFF. 

-DK

> 
> ~Maarten
> 



* Re: [PATCH 3/5] drm/i915: Enable vblanks after verifying power domain states.
  2018-01-06  9:51     ` Dhinakaran Pandiyan
@ 2018-01-08 14:43       ` Maarten Lankhorst
  2018-01-08 19:29         ` Pandiyan, Dhinakaran
  0 siblings, 1 reply; 22+ messages in thread
From: Maarten Lankhorst @ 2018-01-08 14:43 UTC (permalink / raw)
  To: Dhinakaran Pandiyan, intel-gfx
  Cc: Daniel Vetter, Dhinakaran Pandiyan, Rodrigo Vivi

On 06-01-18 at 10:51, Dhinakaran Pandiyan wrote:
> On Thursday, January 4, 2018 12:35:48 PM PST Maarten Lankhorst wrote:
>> On 03-01-18 at 21:39, Dhinakaran Pandiyan wrote:
>>> Since we want to allow for a non-blocking power domain for vblanks,
>>> the power domain use count and power well use count will not be updated
>>> atomically inside the power domain mutex (see next patch). This affects
>>> verifying if sum(power_domain_use_count) == power_well_use_count at
>>> init time. So do not enable vblanks until this verification is done.
>>>
>>> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
>>> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
>>> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
>>> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
>>> Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
>>> ---
>>>
>>>  drivers/gpu/drm/i915/intel_display.c | 24 ++++++++++++++++++++----
>>>  1 file changed, 20 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
>>> index 0cd355978ab4..7bc874b8dac7 100644
>>> --- a/drivers/gpu/drm/i915/intel_display.c
>>> +++ b/drivers/gpu/drm/i915/intel_display.c
>>> @@ -14739,6 +14739,24 @@ static bool has_pch_trancoder(struct drm_i915_private *dev_priv,
>>>  		(HAS_PCH_LPT_H(dev_priv) && pch_transcoder == PIPE_A);
>>>  
>>>  }
>>>
>>> +static void modeset_enable_vblanks(struct drm_i915_private *dev_priv)
>>> +{
>>> +	enum pipe pipe;
>>> +
>>> +	for_each_pipe(dev_priv, pipe) {
>>> +		struct intel_crtc *crtc;
>>> +
>>> +		crtc = intel_get_crtc_for_pipe(dev_priv, pipe);
>>> +
>>> +		/* restore vblank interrupts to correct state */
>>> +		drm_crtc_vblank_reset(&crtc->base);
>>> +
>>> +		if (crtc->active)
>>> +			drm_crtc_vblank_on(&crtc->base);
>>> +	}
>>> +}
>>> +
>>> +
>>>
>>>  static void intel_sanitize_crtc(struct intel_crtc *crtc,
>>>  
>>>  				struct drm_modeset_acquire_ctx *ctx)
>>>  
>>>  {
>>>
>>> @@ -14754,13 +14772,9 @@ static void intel_sanitize_crtc(struct intel_crtc *crtc,
>>>  			   I915_READ(reg) & ~PIPECONF_FRAME_START_DELAY_MASK);
>>>  	
>>>  	}
>>>
>>> -	/* restore vblank interrupts to correct state */
>>> -	drm_crtc_vblank_reset(&crtc->base);
>>>
>>>  	if (crtc->active) {
>>>  	
>>>  		struct intel_plane *plane;
>>>
>>> -		drm_crtc_vblank_on(&crtc->base);
>>> -
>>>
>>>  		/* Disable everything but the primary plane */
>>>  		for_each_intel_plane_on_crtc(dev, crtc, plane) {
>>>  		
>>>  			const struct intel_plane_state *plane_state =
>>>
>>> @@ -15147,6 +15161,8 @@ intel_modeset_setup_hw_state(struct drm_device *dev,
>>>  	intel_power_domains_verify_state(dev_priv);
>>>
>>> +	modeset_enable_vblanks(dev_priv);
>>> +
>>>
>>>  	intel_fbc_init_pipe_state(dev_priv);
>>>  
>>>  }
>> intel_pre_disable_primary_noatomic calls wait_for_vblank, which will fail
>> with this patch. Wouldn't it be better to make
>> intel_power_domains_verify_state work correctly with the vblank irq?
> How about disabling vblanks just before power_domains_verify_state and 
> enabling it back again?
>
> Another option is to ignore the check in power_domains_verify_state  for 
> DC_OFF. 
That would be better, or at least assume it's at most 1 more than calculated..

~Maarten

* Re: [PATCH 3/5] drm/i915: Enable vblanks after verifying power domain states.
  2018-01-08 14:43       ` Maarten Lankhorst
@ 2018-01-08 19:29         ` Pandiyan, Dhinakaran
  0 siblings, 0 replies; 22+ messages in thread
From: Pandiyan, Dhinakaran @ 2018-01-08 19:29 UTC (permalink / raw)
  To: maarten.lankhorst@linux.intel.com
  Cc: daniel.vetter@ffwll.ch, intel-gfx@lists.freedesktop.org,
	Vivi, Rodrigo

On Mon, 2018-01-08 at 15:43 +0100, Maarten Lankhorst wrote:
> On 06-01-18 at 10:51, Dhinakaran Pandiyan wrote:
> > On Thursday, January 4, 2018 12:35:48 PM PST Maarten Lankhorst wrote:
> >> On 03-01-18 at 21:39, Dhinakaran Pandiyan wrote:
> >>> Since we want to allow for a non-blocking power domain for vblanks,
> >>> the power domain use count and power well use count will not be updated
> >>> atomically inside the power domain mutex (see next patch). This affects
> >>> verifying if sum(power_domain_use_count) == power_well_use_count at
> >>> init time. So do not enable vblanks until this verification is done.
> >>>
> >>> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> >>> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
> >>> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> >>> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> >>> Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> >>> ---
> >>>
> >>>  drivers/gpu/drm/i915/intel_display.c | 24 ++++++++++++++++++++----
> >>>  1 file changed, 20 insertions(+), 4 deletions(-)
> >>>
> >>> diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
> >>> index 0cd355978ab4..7bc874b8dac7 100644
> >>> --- a/drivers/gpu/drm/i915/intel_display.c
> >>> +++ b/drivers/gpu/drm/i915/intel_display.c
> >>> @@ -14739,6 +14739,24 @@ static bool has_pch_trancoder(struct drm_i915_private *dev_priv,
> >>>  		(HAS_PCH_LPT_H(dev_priv) && pch_transcoder == PIPE_A);
> >>>  
> >>>  }
> >>>
> >>> +static void modeset_enable_vblanks(struct drm_i915_private *dev_priv)
> >>> +{
> >>> +	enum pipe pipe;
> >>> +
> >>> +	for_each_pipe(dev_priv, pipe) {
> >>> +		struct intel_crtc *crtc;
> >>> +
> >>> +		crtc = intel_get_crtc_for_pipe(dev_priv, pipe);
> >>> +
> >>> +		/* restore vblank interrupts to correct state */
> >>> +		drm_crtc_vblank_reset(&crtc->base);
> >>> +
> >>> +		if (crtc->active)
> >>> +			drm_crtc_vblank_on(&crtc->base);
> >>> +	}
> >>> +}
> >>> +
> >>> +
> >>>
> >>>  static void intel_sanitize_crtc(struct intel_crtc *crtc,
> >>>  
> >>>  				struct drm_modeset_acquire_ctx *ctx)
> >>>  
> >>>  {
> >>>
> >>> @@ -14754,13 +14772,9 @@ static void intel_sanitize_crtc(struct intel_crtc *crtc,
> >>>  			   I915_READ(reg) & ~PIPECONF_FRAME_START_DELAY_MASK);
> >>>  	
> >>>  	}
> >>>
> >>> -	/* restore vblank interrupts to correct state */
> >>> -	drm_crtc_vblank_reset(&crtc->base);
> >>>
> >>>  	if (crtc->active) {
> >>>  	
> >>>  		struct intel_plane *plane;
> >>>
> >>> -		drm_crtc_vblank_on(&crtc->base);
> >>> -
> >>>
> >>>  		/* Disable everything but the primary plane */
> >>>  		for_each_intel_plane_on_crtc(dev, crtc, plane) {
> >>>  		
> >>>  			const struct intel_plane_state *plane_state =
> >>>
> >>> @@ -15147,6 +15161,8 @@ intel_modeset_setup_hw_state(struct drm_device *dev,
> >>>  	intel_power_domains_verify_state(dev_priv);
> >>>
> >>> +	modeset_enable_vblanks(dev_priv);
> >>> +
> >>>
> >>>  	intel_fbc_init_pipe_state(dev_priv);
> >>>  
> >>>  }
> >> intel_pre_disable_primary_noatomic calls wait_for_vblank, which will fail
> >> with this patch. Wouldn't it be better to make
> >> intel_power_domains_verify_state work correctly with the vblank irq?
> > How about disabling vblanks just before power_domains_verify_state and 
> > enabling it back again?
> >
> > Another option is to ignore the check in power_domains_verify_state  for 
> > DC_OFF. 
> That would be better, or at least assume it's at most 1 more than calculated..

Sounds good.
-DK

> 
> ~Maarten

* Re: [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts
  2018-01-05  2:08   ` Rodrigo Vivi
@ 2018-01-08 19:59     ` Pandiyan, Dhinakaran
  2018-01-09  9:38       ` Maarten Lankhorst
  0 siblings, 1 reply; 22+ messages in thread
From: Pandiyan, Dhinakaran @ 2018-01-08 19:59 UTC (permalink / raw)
  To: Vivi, Rodrigo
  Cc: daniel.vetter@ffwll.ch, intel-gfx@lists.freedesktop.org,
	Zanoni, Paulo R


On Thu, 2018-01-04 at 18:08 -0800, Rodrigo Vivi wrote:
> On Wed, Jan 03, 2018 at 08:40:00PM +0000, Dhinakaran Pandiyan wrote:
> > When DC states are enabled and PSR is active, the hardware enters DC5/DC6
> > states resulting in the frame counter resetting. The frame counter reset
> > messes up the vblank counting logic as the diff between the new frame
> > counter value and the previous is negative, and this negative diff gets
> > applied as an unsigned value to the vblank count. We cannot reject negative
> > diffs altogether because they can arise from legitimate frame counter
> > overflows when there is a long period with vblank disabled. So, this
> > approach allows for the driver to notice a DC state toggle between a vblank
> > disable and enable and fill in the missed vblanks.
> > 
> > But, in order to disable DC states when vblank interrupts are required,
> > the DC_OFF power well has to be disabled in an atomic context. So,
> > introduce a new VBLANK power domain that can be acquired and released in
> > atomic contexts with these changes -
> > 1)  _vblank_get() and _vblank_put() methods skip the power_domain mutex
> > and use a spin lock for the DC power well.
> > 2) power_domains->domain_use_count is converted to an atomic_t array so
> > that it can be updated outside of the power domain mutex.
> > 
> > v3: Squash domain_use_count atomic_t conversion (Maarten)
> > v2: Fix deadlock by switching irqsave spinlock.
> >     Implement atomic version of get_if_enabled.
> >     Modify power_domain_verify_state to check power well use count and
> > enabled status atomically.
> >     Rewrite of intel_power_well_{get,put}
> > 
> > Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> > Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> > Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> > ---
> >  drivers/gpu/drm/i915/i915_debugfs.c     |   2 +-
> >  drivers/gpu/drm/i915/i915_drv.h         |  19 +++-
> >  drivers/gpu/drm/i915/intel_display.h    |   1 +
> >  drivers/gpu/drm/i915/intel_drv.h        |   3 +
> >  drivers/gpu/drm/i915/intel_runtime_pm.c | 195 ++++++++++++++++++++++++++++----
> >  5 files changed, 199 insertions(+), 21 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
> > index d81cb2513069..5a7ce734de02 100644
> > --- a/drivers/gpu/drm/i915/i915_debugfs.c
> > +++ b/drivers/gpu/drm/i915/i915_debugfs.c
> > @@ -2746,7 +2746,7 @@ static int i915_power_domain_info(struct seq_file *m, void *unused)
> >  		for_each_power_domain(power_domain, power_well->domains)
> >  			seq_printf(m, "  %-23s %d\n",
> >  				 intel_display_power_domain_str(power_domain),
> > -				 power_domains->domain_use_count[power_domain]);
> > +				 atomic_read(&power_domains->domain_use_count[power_domain]));
> >  	}
> >  
> >  	mutex_unlock(&power_domains->lock);
> > diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> > index caebd5825279..61a635f03af7 100644
> > --- a/drivers/gpu/drm/i915/i915_drv.h
> > +++ b/drivers/gpu/drm/i915/i915_drv.h
> > @@ -1032,6 +1032,23 @@ struct i915_power_well {
> >  			bool has_fuses:1;
> >  		} hsw;
> >  	};
> > +
> > +	/* Lock to serialize access to count, hw_enabled and ops, used for
> > +	 * power wells that have supports_atomic_ctx set to true.
> > +	 */
> > +	spinlock_t lock;
> > +
> > +	/* Indicates that the get/put methods for this power well can be called
> > +	 * in atomic contexts, requires .ops to not sleep. This is valid
> > +	 * only for the DC_OFF power well currently.
> > +	 */
> > +	bool supports_atomic_ctx;
> > +
> > +	/* DC_OFF power well was disabled since the last time vblanks were
> > +	 * disabled.
> > +	 */
> > +	bool dc_off_disabled;
> > +
> >  	const struct i915_power_well_ops *ops;
> >  };
> >  
> > @@ -1045,7 +1062,7 @@ struct i915_power_domains {
> >  	int power_well_count;
> >  
> >  	struct mutex lock;
> > -	int domain_use_count[POWER_DOMAIN_NUM];
> > +	atomic_t domain_use_count[POWER_DOMAIN_NUM];
> >  	struct i915_power_well *power_wells;
> >  };
> >  
> > diff --git a/drivers/gpu/drm/i915/intel_display.h b/drivers/gpu/drm/i915/intel_display.h
> > index a0d2b6169361..3e9671ff6f79 100644
> > --- a/drivers/gpu/drm/i915/intel_display.h
> > +++ b/drivers/gpu/drm/i915/intel_display.h
> > @@ -172,6 +172,7 @@ enum intel_display_power_domain {
> >  	POWER_DOMAIN_AUX_C,
> >  	POWER_DOMAIN_AUX_D,
> >  	POWER_DOMAIN_GMBUS,
> > +	POWER_DOMAIN_VBLANK,
> 
> Maybe we could start a new category of atomic domains and, on iterations where
> it makes sense, go over both; or make the domains an array with an is_atomic
> bool, in case we can transform the atomic get/put to be really generic, or into
> .enable/.disable function pointers.


The generic interfaces we've discussed until now to get/put vblank power
domain atomically are still not generic enough. We might as well accept
DC_OFF to be a special case and deal with it as such - there is no other
power well that we need to enable/disable atomically. We can always
rewrite the code in the future if a better idea comes up.
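As an aside, the special-case scheme the patch implements (only wells flagged `supports_atomic_ctx` take the spinlock, so their get/put can run where sleeping is not allowed) can be modeled in userspace C. This is an illustrative sketch, not kernel code: the C11 `atomic_flag` spinlock stands in for `spin_lock_irqsave()`, and the enable/disable ops are reduced to flag flips.

```c
#include <stdatomic.h>
#include <stdbool.h>

struct power_well {
    const char *name;
    int count;                 /* reference count, protected by lock */
    bool hw_enabled;
    bool supports_atomic_ctx;  /* true only for the DC_OFF well */
    atomic_flag lock;          /* stands in for spin_lock_irqsave() */
};

/* Only atomic-context wells take the per-well lock; the others rely on
 * the caller holding the power_domains mutex, as in the patch. */
static void power_well_lock(struct power_well *pw)
{
    if (pw->supports_atomic_ctx)
        while (atomic_flag_test_and_set(&pw->lock))
            ; /* spin */
}

static void power_well_unlock(struct power_well *pw)
{
    if (pw->supports_atomic_ctx)
        atomic_flag_clear(&pw->lock);
}

static void power_well_get(struct power_well *pw)
{
    power_well_lock(pw);
    if (!pw->count++)
        pw->hw_enabled = true;   /* ops->enable() */
    power_well_unlock(pw);
}

static void power_well_put(struct power_well *pw)
{
    power_well_lock(pw);
    if (!--pw->count)
        pw->hw_enabled = false;  /* ops->disable() */
    power_well_unlock(pw);
}
```

Keeping the lock per-well rather than reusing the domains mutex is what lets the vblank get/put path run in atomic context without touching any other power well.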

> 
> >  	POWER_DOMAIN_MODESET,
> >  	POWER_DOMAIN_GT_IRQ,
> >  	POWER_DOMAIN_INIT,
> > diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
> > index 30f791f89d64..164e62cb047b 100644
> > --- a/drivers/gpu/drm/i915/intel_drv.h
> > +++ b/drivers/gpu/drm/i915/intel_drv.h
> > @@ -1797,6 +1797,9 @@ bool intel_display_power_get_if_enabled(struct drm_i915_private *dev_priv,
> >  					enum intel_display_power_domain domain);
> >  void intel_display_power_put(struct drm_i915_private *dev_priv,
> >  			     enum intel_display_power_domain domain);
> > +void intel_display_power_vblank_get(struct drm_i915_private *dev_priv,
> > +				    bool *needs_restore);
> 
> can we dump the needs_restore and always restore the counter?
I checked this, seems possible.

> if so, we could use the regular intel_power_domain_{get,put} functions and build the generic atomic handling inside them.
> 
> > +void intel_display_power_vblank_put(struct drm_i915_private *dev_priv);
> >  
> >  static inline void
> >  assert_rpm_device_not_suspended(struct drm_i915_private *dev_priv)
> > diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c
> > index d758da6156a8..93b324dcc55d 100644
> > --- a/drivers/gpu/drm/i915/intel_runtime_pm.c
> > +++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
> > @@ -56,6 +56,19 @@ static struct i915_power_well *
> >  lookup_power_well(struct drm_i915_private *dev_priv,
> >  		  enum i915_power_well_id power_well_id);
> >  
> > +/* Optimize for the case when this is called from atomic contexts,
> > + * although the case is unlikely.
> > + */
> > +#define power_well_lock(power_well, flags) do {			\
> > +	if (likely(power_well->supports_atomic_ctx))		\
> > +		spin_lock_irqsave(&power_well->lock, flags);	\
> > +	} while (0)
> > +
> > +#define power_well_unlock(power_well, flags) do {			\
> > +	if (likely(power_well->supports_atomic_ctx))			\
> > +		spin_unlock_irqrestore(&power_well->lock, flags);	\
> > +	} while (0)
> > +
> >  const char *
> >  intel_display_power_domain_str(enum intel_display_power_domain domain)
> >  {
> > @@ -126,6 +139,8 @@ intel_display_power_domain_str(enum intel_display_power_domain domain)
> >  		return "AUX_D";
> >  	case POWER_DOMAIN_GMBUS:
> >  		return "GMBUS";
> > +	case POWER_DOMAIN_VBLANK:
> > +		return "VBLANK";
> >  	case POWER_DOMAIN_INIT:
> >  		return "INIT";
> >  	case POWER_DOMAIN_MODESET:
> > @@ -141,6 +156,9 @@ intel_display_power_domain_str(enum intel_display_power_domain domain)
> >  static void intel_power_well_enable(struct drm_i915_private *dev_priv,
> >  				    struct i915_power_well *power_well)
> >  {
> > +	if (power_well->supports_atomic_ctx)
> > +		assert_spin_locked(&power_well->lock);
> > +
> >  	DRM_DEBUG_KMS("enabling %s\n", power_well->name);
> >  	power_well->ops->enable(dev_priv, power_well);
> >  	power_well->hw_enabled = true;
> > @@ -149,26 +167,43 @@ static void intel_power_well_enable(struct drm_i915_private *dev_priv,
> >  static void intel_power_well_disable(struct drm_i915_private *dev_priv,
> >  				     struct i915_power_well *power_well)
> >  {
> > +	if (power_well->supports_atomic_ctx)
> > +		assert_spin_locked(&power_well->lock);
> > +
> >  	DRM_DEBUG_KMS("disabling %s\n", power_well->name);
> >  	power_well->hw_enabled = false;
> >  	power_well->ops->disable(dev_priv, power_well);
> >  }
> >  
> > -static void intel_power_well_get(struct drm_i915_private *dev_priv,
> > +static void __intel_power_well_get(struct drm_i915_private *dev_priv,
> >  				 struct i915_power_well *power_well)
> >  {
> >  	if (!power_well->count++)
> >  		intel_power_well_enable(dev_priv, power_well);
> >  }
> >  
> > +static void intel_power_well_get(struct drm_i915_private *dev_priv,
> > +				 struct i915_power_well *power_well)
> > +{
> > +	unsigned long flags = 0;
> > +
> > +	power_well_lock(power_well, flags);
> > +	__intel_power_well_get(dev_priv, power_well);
> > +	power_well_unlock(power_well, flags);
> > +}
> > +
> >  static void intel_power_well_put(struct drm_i915_private *dev_priv,
> >  				 struct i915_power_well *power_well)
> >  {
> > +	unsigned long flags = 0;
> > +
> > +	power_well_lock(power_well, flags);
> >  	WARN(!power_well->count, "Use count on power well %s is already zero",
> >  	     power_well->name);
> >  
> >  	if (!--power_well->count)
> >  		intel_power_well_disable(dev_priv, power_well);
> > +	power_well_unlock(power_well, flags);
> >  }
> >  
> >  /**
> > @@ -736,6 +771,7 @@ static void gen9_dc_off_power_well_disable(struct drm_i915_private *dev_priv,
> >  		skl_enable_dc6(dev_priv);
> >  	else if (dev_priv->csr.allowed_dc_mask & DC_STATE_EN_UPTO_DC5)
> >  		gen9_enable_dc5(dev_priv);
> > +	power_well->dc_off_disabled = true;
> >  }
> >  
> >  static void i9xx_power_well_sync_hw_noop(struct drm_i915_private *dev_priv,
> > @@ -1453,6 +1489,58 @@ static void chv_pipe_power_well_disable(struct drm_i915_private *dev_priv,
> >  	chv_set_pipe_power_well(dev_priv, power_well, false);
> >  }
> >  
> > +void intel_display_power_vblank_get(struct drm_i915_private *dev_priv,
> > +				    bool *needs_restore)
> > +{
> > +	struct i915_power_domains *power_domains  = &dev_priv->power_domains;
> > +	struct i915_power_well *power_well;
> > +	enum intel_display_power_domain domain = POWER_DOMAIN_VBLANK;
> > +
> > +	*needs_restore = false;
> > +
> > +	if (!HAS_CSR(dev_priv))
> > +		return;
> > +
> > +	if (!(HAS_PSR(dev_priv) && dev_priv->psr.sink_support))
> > +		return;
> > +
> > +	intel_runtime_pm_get_noresume(dev_priv);
> > +
> > +	for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain)) {
> > +		unsigned long flags = 0;
> > +
> > +		power_well_lock(power_well, flags);
> > +		__intel_power_well_get(dev_priv, power_well);
> > +		*needs_restore = power_well->dc_off_disabled;
> > +		power_well->dc_off_disabled = false;
> 
> it seems also that by always restoring we don't need to add this specific dc_off_disabled
> variable inside the generic pw struct.
> 
> > +		power_well_unlock(power_well, flags);
> > +	}
> > +
> > +	atomic_inc(&power_domains->domain_use_count[domain]);
> > +}
> > +
> > +void intel_display_power_vblank_put(struct drm_i915_private *dev_priv)
> > +{
> > +	struct i915_power_domains *power_domains = &dev_priv->power_domains;
> > +	struct i915_power_well *power_well;
> > +	enum intel_display_power_domain domain = POWER_DOMAIN_VBLANK;
> > +
> > +	if (!HAS_CSR(dev_priv))
> > +		return;
> > +
> > +	if (!(HAS_PSR(dev_priv) && dev_priv->psr.sink_support))
> > +		return;
> 
> Can we remove these checks and do this for any vblank domain?
> I believe this kind of dependency should live only in the domain<->well
> association, and we should not check for the feature here.

Do you recommend moving it up to the caller? I want to avoid acquiring
locks, updating ref counts etc, when it is not needed.

Related to your question, if my understanding is correct, the hardware
does not really enter DC5/6 without PSR when a pipe is enabled. So
instead of allowing DC5/6, we might as well disable it when there is no
PSR.


> 
> > +
> > +	WARN(atomic_dec_return(&power_domains->domain_use_count[domain]) < 0,
> > +	     "Use count on domain %s was already zero\n",
> > +	     intel_display_power_domain_str(domain));
> 
> is the atomic safe enough here? or we need a spinlock?

I believe it is safe, domain_use_count[x] does not affect any subsequent
operation. We can get away with modifying it outside the power well spin
lock.
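To illustrate why, here is a userspace C model of the lock-free domain refcount pattern under discussion: `atomic_fetch_sub()` returns the pre-decrement value, so `old - 1` is what the kernel's `atomic_dec_return()` would yield, and the underflow check mirrors the WARN in the patch. The WARN itself is modeled as a plain flag; this is a sketch of the pattern, not the driver code.

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int domain_use_count;  /* one slot of domain_use_count[] */
static bool underflow_warned;        /* stands in for WARN() firing */

static void domain_get(void)
{
    atomic_fetch_add(&domain_use_count, 1);  /* atomic_inc() */
}

static void domain_put(void)
{
    /* atomic_fetch_sub() returns the value before the subtraction, so
     * old - 1 is the post-decrement value atomic_dec_return() gives. */
    int old = atomic_fetch_sub(&domain_use_count, 1);

    if (old - 1 < 0)
        underflow_warned = true;  /* "Use count ... was already zero" */
}
```

Because each get/put is a single atomic read-modify-write, no ordering against the power well spinlock is needed; the count is pure bookkeeping, read later under the domains mutex.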
> 
> > +
> > +	for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain))
> > +		intel_power_well_put(dev_priv, power_well);
> > +
> > +	intel_runtime_pm_put(dev_priv);
> > +}
> > +
> >  static void
> >  __intel_display_power_get_domain(struct drm_i915_private *dev_priv,
> >  				 enum intel_display_power_domain domain)
> > @@ -1463,7 +1551,7 @@ __intel_display_power_get_domain(struct drm_i915_private *dev_priv,
> >  	for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain))
> >  		intel_power_well_get(dev_priv, power_well);
> >  
> > -	power_domains->domain_use_count[domain]++;
> > +	atomic_inc(&power_domains->domain_use_count[domain]);
> >  }
> >  
> >  /**
> > @@ -1492,6 +1580,38 @@ void intel_display_power_get(struct drm_i915_private *dev_priv,
> >  	mutex_unlock(&power_domains->lock);
> >  }
> >  
> > +static bool dc_off_get_if_enabled(struct drm_i915_private *dev_priv,
> > +				  enum intel_display_power_domain domain)
> > +{
> > +	struct i915_power_well *power_well;
> > +	bool is_enabled;
> > +	unsigned long flags = 0;
> > +
> > +	power_well = lookup_power_well(dev_priv, SKL_DISP_PW_DC_OFF);
> > +	if (!power_well || !(power_well->domains & domain))
> > +		return true;
> > +
> > +	power_well_lock(power_well, flags);
> > +	is_enabled = power_well->hw_enabled;
> > +	if (is_enabled)
> > +		__intel_power_well_get(dev_priv, power_well);
> > +	power_well_unlock(power_well, flags);
> > +
> > +	return is_enabled;
> > +}
> > +
> > +static void dc_off_put(struct drm_i915_private *dev_priv,
> > +		       enum intel_display_power_domain domain)
> > +{
> > +	struct i915_power_well *power_well;
> > +
> > +	power_well = lookup_power_well(dev_priv, SKL_DISP_PW_DC_OFF);
> > +	if (!power_well || !(power_well->domains & domain))
> > +		return;
> > +
> > +	intel_power_well_put(dev_priv, power_well);
> > +}
> > +
> >  /**
> >   * intel_display_power_get_if_enabled - grab a reference for an enabled display power domain
> >   * @dev_priv: i915 device instance
> > @@ -1508,20 +1628,24 @@ bool intel_display_power_get_if_enabled(struct drm_i915_private *dev_priv,
> >  					enum intel_display_power_domain domain)
> >  {
> >  	struct i915_power_domains *power_domains = &dev_priv->power_domains;
> > -	bool is_enabled;
> > +	bool is_enabled = false;
> >  
> >  	if (!intel_runtime_pm_get_if_in_use(dev_priv))
> >  		return false;
> >  
> >  	mutex_lock(&power_domains->lock);
> >  
> > +	if (!dc_off_get_if_enabled(dev_priv, domain))
> > +		goto out;
> > +
> >  	if (__intel_display_power_is_enabled(dev_priv, domain)) {
> >  		__intel_display_power_get_domain(dev_priv, domain);
> >  		is_enabled = true;
> > -	} else {
> > -		is_enabled = false;
> >  	}
> >  
> > +	dc_off_put(dev_priv, domain);
> > +
> > +out:
> >  	mutex_unlock(&power_domains->lock);
> >  
> >  	if (!is_enabled)
> > @@ -1549,10 +1673,9 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
> >  
> >  	mutex_lock(&power_domains->lock);
> >  
> > -	WARN(!power_domains->domain_use_count[domain],
> > -	     "Use count on domain %s is already zero\n",
> > +	WARN(atomic_dec_return(&power_domains->domain_use_count[domain]) < 0,
> > +	     "Use count on domain %s was already zero\n",
> >  	     intel_display_power_domain_str(domain));
> > -	power_domains->domain_use_count[domain]--;
> >  
> >  	for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain))
> >  		intel_power_well_put(dev_priv, power_well);
> > @@ -1720,6 +1843,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
> >  	BIT_ULL(POWER_DOMAIN_GT_IRQ) |			\
> >  	BIT_ULL(POWER_DOMAIN_MODESET) |			\
> >  	BIT_ULL(POWER_DOMAIN_AUX_A) |			\
> > +	BIT_ULL(POWER_DOMAIN_VBLANK) |			\
> >  	BIT_ULL(POWER_DOMAIN_INIT))
> >  
> >  #define BXT_DISPLAY_POWERWELL_2_POWER_DOMAINS (		\
> > @@ -1743,6 +1867,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
> >  	BIT_ULL(POWER_DOMAIN_MODESET) |			\
> >  	BIT_ULL(POWER_DOMAIN_AUX_A) |			\
> >  	BIT_ULL(POWER_DOMAIN_GMBUS) |			\
> > +	BIT_ULL(POWER_DOMAIN_VBLANK) |			\
> >  	BIT_ULL(POWER_DOMAIN_INIT))
> >  #define BXT_DPIO_CMN_A_POWER_DOMAINS (			\
> >  	BIT_ULL(POWER_DOMAIN_PORT_DDI_A_LANES) |		\
> > @@ -1803,6 +1928,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
> >  	BIT_ULL(POWER_DOMAIN_MODESET) |			\
> >  	BIT_ULL(POWER_DOMAIN_AUX_A) |			\
> >  	BIT_ULL(POWER_DOMAIN_GMBUS) |			\
> > +	BIT_ULL(POWER_DOMAIN_VBLANK) |			\
> >  	BIT_ULL(POWER_DOMAIN_INIT))
> >  
> >  #define CNL_DISPLAY_POWERWELL_2_POWER_DOMAINS (		\
> > @@ -1850,6 +1976,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
> >  	CNL_DISPLAY_POWERWELL_2_POWER_DOMAINS |		\
> >  	BIT_ULL(POWER_DOMAIN_MODESET) |			\
> >  	BIT_ULL(POWER_DOMAIN_AUX_A) |			\
> > +	BIT_ULL(POWER_DOMAIN_VBLANK) |			\
> >  	BIT_ULL(POWER_DOMAIN_INIT))
> >  
> >  static const struct i915_power_well_ops i9xx_always_on_power_well_ops = {
> > @@ -2083,9 +2210,12 @@ bool intel_display_power_well_is_enabled(struct drm_i915_private *dev_priv,
> >  {
> >  	struct i915_power_well *power_well;
> >  	bool ret;
> > +	unsigned long flags = 0;
> >  
> >  	power_well = lookup_power_well(dev_priv, power_well_id);
> > +	power_well_lock(power_well, flags);
> >  	ret = power_well->ops->is_enabled(dev_priv, power_well);
> > +	power_well_unlock(power_well, flags);
> >  
> >  	return ret;
> >  }
> > @@ -2120,6 +2250,8 @@ static struct i915_power_well skl_power_wells[] = {
> >  		.domains = SKL_DISPLAY_DC_OFF_POWER_DOMAINS,
> >  		.ops = &gen9_dc_off_power_well_ops,
> >  		.id = SKL_DISP_PW_DC_OFF,
> > +		.supports_atomic_ctx = true,
> > +		.dc_off_disabled = false,
> >  	},
> >  	{
> >  		.name = "power well 2",
> > @@ -2180,6 +2312,8 @@ static struct i915_power_well bxt_power_wells[] = {
> >  		.domains = BXT_DISPLAY_DC_OFF_POWER_DOMAINS,
> >  		.ops = &gen9_dc_off_power_well_ops,
> >  		.id = SKL_DISP_PW_DC_OFF,
> > +		.supports_atomic_ctx = true,
> > +		.dc_off_disabled = false,
> >  	},
> >  	{
> >  		.name = "power well 2",
> > @@ -2235,6 +2369,8 @@ static struct i915_power_well glk_power_wells[] = {
> >  		.domains = GLK_DISPLAY_DC_OFF_POWER_DOMAINS,
> >  		.ops = &gen9_dc_off_power_well_ops,
> >  		.id = SKL_DISP_PW_DC_OFF,
> > +		.supports_atomic_ctx = true,
> > +		.dc_off_disabled = false,
> >  	},
> >  	{
> >  		.name = "power well 2",
> > @@ -2359,6 +2495,8 @@ static struct i915_power_well cnl_power_wells[] = {
> >  		.domains = CNL_DISPLAY_DC_OFF_POWER_DOMAINS,
> >  		.ops = &gen9_dc_off_power_well_ops,
> >  		.id = SKL_DISP_PW_DC_OFF,
> > +		.supports_atomic_ctx = true,
> > +		.dc_off_disabled = false,
> >  	},
> >  	{
> >  		.name = "power well 2",
> > @@ -2487,6 +2625,7 @@ static void assert_power_well_ids_unique(struct drm_i915_private *dev_priv)
> >  int intel_power_domains_init(struct drm_i915_private *dev_priv)
> >  {
> >  	struct i915_power_domains *power_domains = &dev_priv->power_domains;
> > +	struct i915_power_well *power_well;
> >  
> >  	i915_modparams.disable_power_well =
> >  		sanitize_disable_power_well_option(dev_priv,
> > @@ -2524,6 +2663,10 @@ int intel_power_domains_init(struct drm_i915_private *dev_priv)
> >  		set_power_wells(power_domains, i9xx_always_on_power_well);
> >  	}
> >  
> > +	for_each_power_well(dev_priv, power_well)
> > +		if (power_well->supports_atomic_ctx)
> > +			spin_lock_init(&power_well->lock);
> > +
> >  	assert_power_well_ids_unique(dev_priv);
> >  
> >  	return 0;
> > @@ -2571,9 +2714,13 @@ static void intel_power_domains_sync_hw(struct drm_i915_private *dev_priv)
> >  
> >  	mutex_lock(&power_domains->lock);
> >  	for_each_power_well(dev_priv, power_well) {
> > +		unsigned long flags = 0;
> > +
> > +		power_well_lock(power_well, flags);
> >  		power_well->ops->sync_hw(dev_priv, power_well);
> >  		power_well->hw_enabled = power_well->ops->is_enabled(dev_priv,
> >  								     power_well);
> > +		power_well_unlock(power_well, flags);
> >  	}
> >  	mutex_unlock(&power_domains->lock);
> >  }
> > @@ -3046,21 +3193,23 @@ void intel_power_domains_suspend(struct drm_i915_private *dev_priv)
> >  		bxt_display_core_uninit(dev_priv);
> >  }
> >  
> > -static void intel_power_domains_dump_info(struct drm_i915_private *dev_priv)
> > +static void intel_power_domains_dump_info(struct drm_i915_private *dev_priv,
> > +					  const int *power_well_use)
> >  {
> >  	struct i915_power_domains *power_domains = &dev_priv->power_domains;
> >  	struct i915_power_well *power_well;
> > +	int i = 0;
> >  
> >  	for_each_power_well(dev_priv, power_well) {
> >  		enum intel_display_power_domain domain;
> >  
> >  		DRM_DEBUG_DRIVER("%-25s %d\n",
> > -				 power_well->name, power_well->count);
> > +				 power_well->name, power_well_use[i++]);
> >  
> >  		for_each_power_domain(domain, power_well->domains)
> >  			DRM_DEBUG_DRIVER("  %-23s %d\n",
> >  					 intel_display_power_domain_str(domain),
> > -					 power_domains->domain_use_count[domain]);
> > +					 atomic_read(&power_domains->domain_use_count[domain]));
> >  	}
> >  }
> >  
> > @@ -3079,6 +3228,7 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv)
> >  	struct i915_power_domains *power_domains = &dev_priv->power_domains;
> >  	struct i915_power_well *power_well;
> >  	bool dump_domain_info;
> > +	int power_well_use[dev_priv->power_domains.power_well_count];
> >  
> >  	mutex_lock(&power_domains->lock);
> >  
> > @@ -3087,6 +3237,14 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv)
> >  		enum intel_display_power_domain domain;
> >  		int domains_count;
> >  		bool enabled;
> > +		int well_count, i = 0;
> > +		unsigned long flags = 0;
> > +
> > +		power_well_lock(power_well, flags);
> > +		well_count = power_well->count;
> > +		enabled = power_well->ops->is_enabled(dev_priv, power_well);
> > +		power_well_unlock(power_well, flags);
> > +		power_well_use[i++] = well_count;
> >  
> >  		/*
> >  		 * Power wells not belonging to any domain (like the MISC_IO
> > @@ -3096,20 +3254,19 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv)
> >  		if (!power_well->domains)
> >  			continue;
> >  
> > -		enabled = power_well->ops->is_enabled(dev_priv, power_well);
> > -		if ((power_well->count || power_well->always_on) != enabled)
> > +
> > +		if ((well_count || power_well->always_on) != enabled)
> >  			DRM_ERROR("power well %s state mismatch (refcount %d/enabled %d)",
> > -				  power_well->name, power_well->count, enabled);
> > +				  power_well->name, well_count, enabled);
> >  
> >  		domains_count = 0;
> >  		for_each_power_domain(domain, power_well->domains)
> > -			domains_count += power_domains->domain_use_count[domain];
> > +			domains_count += atomic_read(&power_domains->domain_use_count[domain]);
> >  
> > -		if (power_well->count != domains_count) {
> > +		if (well_count != domains_count) {
> >  			DRM_ERROR("power well %s refcount/domain refcount mismatch "
> >  				  "(refcount %d/domains refcount %d)\n",
> > -				  power_well->name, power_well->count,
> > -				  domains_count);
> > +				  power_well->name, well_count, domains_count);
> >  			dump_domain_info = true;
> >  		}
> >  	}
> > @@ -3118,7 +3275,7 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv)
> >  		static bool dumped;
> >  
> >  		if (!dumped) {
> > -			intel_power_domains_dump_info(dev_priv);
> > +			intel_power_domains_dump_info(dev_priv, power_well_use);
> >  			dumped = true;
> >  		}
> >  	}
> > -- 
> > 2.11.0
> > 
> 
> I will probably have more comments later, but just doing a brain dump now
> since I ended up forgetting to write it yesterday...
> 
> The approach here in general is good and much better than the pre/post hooks.
> But I believe we can do this in a more generic way rather than deviating from
> the initial power well and domain code.

I would have liked a generic approach (for display_power_{get,put}), but
I think this case is special enough that making it stand out is better.
 

> 
> Thanks,
> Rodrigo.
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts
  2018-01-08 19:59     ` Pandiyan, Dhinakaran
@ 2018-01-09  9:38       ` Maarten Lankhorst
  2018-01-16 19:56         ` Pandiyan, Dhinakaran
  0 siblings, 1 reply; 22+ messages in thread
From: Maarten Lankhorst @ 2018-01-09  9:38 UTC (permalink / raw)
  To: Pandiyan, Dhinakaran, Vivi, Rodrigo
  Cc: daniel.vetter@ffwll.ch, intel-gfx@lists.freedesktop.org,
	Zanoni, Paulo R

Op 08-01-18 om 20:59 schreef Pandiyan, Dhinakaran:
> On Thu, 2018-01-04 at 18:08 -0800, Rodrigo Vivi wrote:
>> On Wed, Jan 03, 2018 at 08:40:00PM +0000, Dhinakaran Pandiyan wrote:
>>> When DC states are enabled and PSR is active, the hardware enters DC5/DC6
>>> states resulting in the frame counter resetting. The frame counter reset
>>> mess up the vblank counting logic as the diff between the new frame
>>> counter value and the previous is negative, and this negative diff gets
>>> applied as an unsigned value to the vblank count. We cannot reject negative
>>> diffs altogether because they can arise from legitimate frame counter
>>> overflows when there is a long period with vblank disabled. So, this
>>> approach allows for the driver to notice a DC state toggle between a vblank
>>> disable and enable and fill in the missed vblanks.
>>>
>>> But, in order to disable DC states when vblank interrupts are required,
>>> the DC_OFF power well has to be disabled in an atomic context. So,
>>> introduce a new VBLANK power domain that can be acquired and released in
>>> atomic contexts with these changes -
>>> 1)  _vblank_get() and _vblank_put() methods skip the power_domain mutex
>>> and use a spin lock for the DC power well.
>>> 2) power_domains->domain_use_count is converted to an atomic_t array so
>>> that it can be updated outside of the power domain mutex.
>>>
>>> v3: Squash domain_use_count atomic_t conversion (Maarten)
>>> v2: Fix deadlock by switching irqsave spinlock.
>>>     Implement atomic version of get_if_enabled.
>>>     Modify power_domain_verify_state to check power well use count and
>>> enabled status atomically.
>>>     Rewrite of intel_power_well_{get,put}
>>>
>>> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
>>> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
>>> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
>>> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
>>> Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
>>> ---
>>>  drivers/gpu/drm/i915/i915_debugfs.c     |   2 +-
>>>  drivers/gpu/drm/i915/i915_drv.h         |  19 +++-
>>>  drivers/gpu/drm/i915/intel_display.h    |   1 +
>>>  drivers/gpu/drm/i915/intel_drv.h        |   3 +
>>>  drivers/gpu/drm/i915/intel_runtime_pm.c | 195 ++++++++++++++++++++++++++++----
>>>  5 files changed, 199 insertions(+), 21 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
>>> index d81cb2513069..5a7ce734de02 100644
>>> --- a/drivers/gpu/drm/i915/i915_debugfs.c
>>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
>>> @@ -2746,7 +2746,7 @@ static int i915_power_domain_info(struct seq_file *m, void *unused)
>>>  		for_each_power_domain(power_domain, power_well->domains)
>>>  			seq_printf(m, "  %-23s %d\n",
>>>  				 intel_display_power_domain_str(power_domain),
>>> -				 power_domains->domain_use_count[power_domain]);
>>> +				 atomic_read(&power_domains->domain_use_count[power_domain]));
>>>  	}
>>>  
>>>  	mutex_unlock(&power_domains->lock);
>>> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
>>> index caebd5825279..61a635f03af7 100644
>>> --- a/drivers/gpu/drm/i915/i915_drv.h
>>> +++ b/drivers/gpu/drm/i915/i915_drv.h
>>> @@ -1032,6 +1032,23 @@ struct i915_power_well {
>>>  			bool has_fuses:1;
>>>  		} hsw;
>>>  	};
>>> +
>>> +	/* Lock to serialize access to count, hw_enabled and ops, used for
>>> +	 * power wells that have supports_atomic_ctx set to true.
>>> +	 */
>>> +	spinlock_t lock;
>>> +
>>> +	/* Indicates that the get/put methods for this power well can be called
>>> +	 * in atomic contexts, requires .ops to not sleep. This is valid
>>> +	 * only for the DC_OFF power well currently.
>>> +	 */
>>> +	bool supports_atomic_ctx;
>>> +
>>> +	/* DC_OFF power well was disabled since the last time vblanks were
>>> +	 * disabled.
>>> +	 */
>>> +	bool dc_off_disabled;
>>> +
>>>  	const struct i915_power_well_ops *ops;
>>>  };
>>>  
>>> @@ -1045,7 +1062,7 @@ struct i915_power_domains {
>>>  	int power_well_count;
>>>  
>>>  	struct mutex lock;
>>> -	int domain_use_count[POWER_DOMAIN_NUM];
>>> +	atomic_t domain_use_count[POWER_DOMAIN_NUM];
>>>  	struct i915_power_well *power_wells;
>>>  };
>>>  
>>> diff --git a/drivers/gpu/drm/i915/intel_display.h b/drivers/gpu/drm/i915/intel_display.h
>>> index a0d2b6169361..3e9671ff6f79 100644
>>> --- a/drivers/gpu/drm/i915/intel_display.h
>>> +++ b/drivers/gpu/drm/i915/intel_display.h
>>> @@ -172,6 +172,7 @@ enum intel_display_power_domain {
>>>  	POWER_DOMAIN_AUX_C,
>>>  	POWER_DOMAIN_AUX_D,
>>>  	POWER_DOMAIN_GMBUS,
>>> +	POWER_DOMAIN_VBLANK,
>> Maybe we could start a new category of atomic domains and, on iterations where
>> it makes sense, go over both; or make the domains an array with an is_atomic
>> bool, in case we can transform the atomic get/put to be really generic, or into
>> .enable/.disable function pointers.
>
> The generic interfaces we've discussed until now to get/put vblank power
> domain atomically are still not generic enough. We might as well accept
> DC_OFF to be a special case and deal with it as such - there is no other
> power well that we need to enable/disable atomically. We can always
> rewrite the code in the future if a better idea comes up.
>
>>>  	POWER_DOMAIN_MODESET,
>>>  	POWER_DOMAIN_GT_IRQ,
>>>  	POWER_DOMAIN_INIT,
>>> diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
>>> index 30f791f89d64..164e62cb047b 100644
>>> --- a/drivers/gpu/drm/i915/intel_drv.h
>>> +++ b/drivers/gpu/drm/i915/intel_drv.h
>>> @@ -1797,6 +1797,9 @@ bool intel_display_power_get_if_enabled(struct drm_i915_private *dev_priv,
>>>  					enum intel_display_power_domain domain);
>>>  void intel_display_power_put(struct drm_i915_private *dev_priv,
>>>  			     enum intel_display_power_domain domain);
>>> +void intel_display_power_vblank_get(struct drm_i915_private *dev_priv,
>>> +				    bool *needs_restore);
>> can we dump the needs_restore and always restore the counter?
> I checked this, seems possible.
>
>> if so, we could use the regular intel_power_domain_{get,put} functions and build the generic atomic handling inside them.
>>
>>> +void intel_display_power_vblank_put(struct drm_i915_private *dev_priv);
>>>  
>>>  static inline void
>>>  assert_rpm_device_not_suspended(struct drm_i915_private *dev_priv)
>>> diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c
>>> index d758da6156a8..93b324dcc55d 100644
>>> --- a/drivers/gpu/drm/i915/intel_runtime_pm.c
>>> +++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
>>> @@ -56,6 +56,19 @@ static struct i915_power_well *
>>>  lookup_power_well(struct drm_i915_private *dev_priv,
>>>  		  enum i915_power_well_id power_well_id);
>>>  
>>> +/* Optimize for the case when this is called from atomic contexts,
>>> + * although the case is unlikely.
>>> + */
>>> +#define power_well_lock(power_well, flags) do {			\
>>> +	if (likely(power_well->supports_atomic_ctx))		\
>>> +		spin_lock_irqsave(&power_well->lock, flags);	\
>>> +	} while (0)
>>> +
>>> +#define power_well_unlock(power_well, flags) do {			\
>>> +	if (likely(power_well->supports_atomic_ctx))			\
>>> +		spin_unlock_irqrestore(&power_well->lock, flags);	\
>>> +	} while (0)
>>> +
>>>  const char *
>>>  intel_display_power_domain_str(enum intel_display_power_domain domain)
>>>  {
>>> @@ -126,6 +139,8 @@ intel_display_power_domain_str(enum intel_display_power_domain domain)
>>>  		return "AUX_D";
>>>  	case POWER_DOMAIN_GMBUS:
>>>  		return "GMBUS";
>>> +	case POWER_DOMAIN_VBLANK:
>>> +		return "VBLANK";
>>>  	case POWER_DOMAIN_INIT:
>>>  		return "INIT";
>>>  	case POWER_DOMAIN_MODESET:
>>> @@ -141,6 +156,9 @@ intel_display_power_domain_str(enum intel_display_power_domain domain)
>>>  static void intel_power_well_enable(struct drm_i915_private *dev_priv,
>>>  				    struct i915_power_well *power_well)
>>>  {
>>> +	if (power_well->supports_atomic_ctx)
>>> +		assert_spin_locked(&power_well->lock);
>>> +
>>>  	DRM_DEBUG_KMS("enabling %s\n", power_well->name);
>>>  	power_well->ops->enable(dev_priv, power_well);
>>>  	power_well->hw_enabled = true;
>>> @@ -149,26 +167,43 @@ static void intel_power_well_enable(struct drm_i915_private *dev_priv,
>>>  static void intel_power_well_disable(struct drm_i915_private *dev_priv,
>>>  				     struct i915_power_well *power_well)
>>>  {
>>> +	if (power_well->supports_atomic_ctx)
>>> +		assert_spin_locked(&power_well->lock);
>>> +
>>>  	DRM_DEBUG_KMS("disabling %s\n", power_well->name);
>>>  	power_well->hw_enabled = false;
>>>  	power_well->ops->disable(dev_priv, power_well);
>>>  }
>>>  
>>> -static void intel_power_well_get(struct drm_i915_private *dev_priv,
>>> +static void __intel_power_well_get(struct drm_i915_private *dev_priv,
>>>  				 struct i915_power_well *power_well)
>>>  {
>>>  	if (!power_well->count++)
>>>  		intel_power_well_enable(dev_priv, power_well);
>>>  }
>>>  
>>> +static void intel_power_well_get(struct drm_i915_private *dev_priv,
>>> +				 struct i915_power_well *power_well)
>>> +{
>>> +	unsigned long flags = 0;
>>> +
>>> +	power_well_lock(power_well, flags);
>>> +	__intel_power_well_get(dev_priv, power_well);
>>> +	power_well_unlock(power_well, flags);
>>> +}
>>> +
>>>  static void intel_power_well_put(struct drm_i915_private *dev_priv,
>>>  				 struct i915_power_well *power_well)
>>>  {
>>> +	unsigned long flags = 0;
>>> +
>>> +	power_well_lock(power_well, flags);
>>>  	WARN(!power_well->count, "Use count on power well %s is already zero",
>>>  	     power_well->name);
>>>  
>>>  	if (!--power_well->count)
>>>  		intel_power_well_disable(dev_priv, power_well);
>>> +	power_well_unlock(power_well, flags);
>>>  }
>>>  
>>>  /**
>>> @@ -736,6 +771,7 @@ static void gen9_dc_off_power_well_disable(struct drm_i915_private *dev_priv,
>>>  		skl_enable_dc6(dev_priv);
>>>  	else if (dev_priv->csr.allowed_dc_mask & DC_STATE_EN_UPTO_DC5)
>>>  		gen9_enable_dc5(dev_priv);
>>> +	power_well->dc_off_disabled = true;
>>>  }
>>>  
>>>  static void i9xx_power_well_sync_hw_noop(struct drm_i915_private *dev_priv,
>>> @@ -1453,6 +1489,58 @@ static void chv_pipe_power_well_disable(struct drm_i915_private *dev_priv,
>>>  	chv_set_pipe_power_well(dev_priv, power_well, false);
>>>  }
>>>  
>>> +void intel_display_power_vblank_get(struct drm_i915_private *dev_priv,
>>> +				    bool *needs_restore)
>>> +{
>>> +	struct i915_power_domains *power_domains  = &dev_priv->power_domains;
>>> +	struct i915_power_well *power_well;
>>> +	enum intel_display_power_domain domain = POWER_DOMAIN_VBLANK;
>>> +
>>> +	*needs_restore = false;
>>> +
>>> +	if (!HAS_CSR(dev_priv))
>>> +		return;
>>> +
>>> +	if (!(HAS_PSR(dev_priv) && dev_priv->psr.sink_support))
>>> +		return;
>>> +
>>> +	intel_runtime_pm_get_noresume(dev_priv);
>>> +
>>> +	for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain)) {
>>> +		unsigned long flags = 0;
>>> +
>>> +		power_well_lock(power_well, flags);
>>> +		__intel_power_well_get(dev_priv, power_well);
>>> +		*needs_restore = power_well->dc_off_disabled;
>>> +		power_well->dc_off_disabled = false;
>> It also seems that, by always restoring, we don't need to add this specific
>> dc_off_disabled variable to the generic power well struct.
>>
>>> +		power_well_unlock(power_well, flags);
>>> +	}
>>> +
>>> +	atomic_inc(&power_domains->domain_use_count[domain]);
>>> +}
>>> +
>>> +void intel_display_power_vblank_put(struct drm_i915_private *dev_priv)
>>> +{
>>> +	struct i915_power_domains *power_domains = &dev_priv->power_domains;
>>> +	struct i915_power_well *power_well;
>>> +	enum intel_display_power_domain domain = POWER_DOMAIN_VBLANK;
>>> +
>>> +	if (!HAS_CSR(dev_priv))
>>> +		return;
>>> +
>>> +	if (!(HAS_PSR(dev_priv) && dev_priv->psr.sink_support))
>>> +		return;
>> Can we remove these checks and do this for any vblank domain?
>> I believe this kind of coupling should live only in the domain<->well
>> association, not in a feature check here.
> Do you recommend moving it up to the caller? I want to avoid acquiring
> locks, updating ref counts, etc., when it is not needed.
>
> Related to your question, if my understanding is correct, the hardware
> does not really enter DC5/6 without PSR when a pipe is enabled. So
> instead of allowing DC5/6, we might as well disable it when there is no
> PSR.
>
>
>>> +
>>> +	WARN(atomic_dec_return(&power_domains->domain_use_count[domain]) < 0,
>>> +	     "Use count on domain %s was already zero\n",
>>> +	     intel_display_power_domain_str(domain));
>> Is the atomic safe enough here, or do we need a spinlock?
> I believe it is safe; domain_use_count[x] does not affect any subsequent
> operation, so we can get away with modifying it outside the power well
> spinlock.
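(For readers of the archive: the lock-free refcount pattern being debated here can be sketched in plain C11 atomics. This is a userspace stand-in for the kernel's atomic_t and WARN(atomic_dec_return(...) < 0, ...); the struct and function names are illustrative, not i915's.)

```c
#include <stdatomic.h>
#include <stdio.h>

/* Hypothetical stand-in for a per-domain use counter. */
typedef struct {
	atomic_int count;
} domain_ref;

static void domain_get(domain_ref *d)
{
	atomic_fetch_add(&d->count, 1);
}

/*
 * Returns the post-decrement value and warns (here: prints) on underflow.
 * No lock is needed for the underflow check itself: the decrement and the
 * value being tested come from a single atomic read-modify-write, so no
 * other thread can be observed "in between" them.
 */
static int domain_put(domain_ref *d)
{
	int newval = atomic_fetch_sub(&d->count, 1) - 1;

	if (newval < 0)
		fprintf(stderr, "use count was already zero\n");
	return newval;
}
```

This only shows why the decrement-and-test is self-consistent; as noted above, it holds because nothing else in the kernel code orders against the counter value.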
>>> +
>>> +	for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain))
>>> +		intel_power_well_put(dev_priv, power_well);
>>> +
>>> +	intel_runtime_pm_put(dev_priv);
>>> +}
>>> +
>>>  static void
>>>  __intel_display_power_get_domain(struct drm_i915_private *dev_priv,
>>>  				 enum intel_display_power_domain domain)
>>> @@ -1463,7 +1551,7 @@ __intel_display_power_get_domain(struct drm_i915_private *dev_priv,
>>>  	for_each_power_domain_well(dev_priv, power_well, BIT_ULL(domain))
>>>  		intel_power_well_get(dev_priv, power_well);
>>>  
>>> -	power_domains->domain_use_count[domain]++;
>>> +	atomic_inc(&power_domains->domain_use_count[domain]);
>>>  }
>>>  
>>>  /**
>>> @@ -1492,6 +1580,38 @@ void intel_display_power_get(struct drm_i915_private *dev_priv,
>>>  	mutex_unlock(&power_domains->lock);
>>>  }
>>>  
>>> +static bool dc_off_get_if_enabled(struct drm_i915_private *dev_priv,
>>> +				  enum intel_display_power_domain domain)
>>> +{
>>> +	struct i915_power_well *power_well;
>>> +	bool is_enabled;
>>> +	unsigned long flags = 0;
>>> +
>>> +	power_well = lookup_power_well(dev_priv, SKL_DISP_PW_DC_OFF);
>>> +	if (!power_well || !(power_well->domains & domain))
>>> +		return true;
>>> +
>>> +	power_well_lock(power_well, flags);
>>> +	is_enabled = power_well->hw_enabled;
>>> +	if (is_enabled)
>>> +		__intel_power_well_get(dev_priv, power_well);
>>> +	power_well_unlock(power_well, flags);
>>> +
>>> +	return is_enabled;
>>> +}
>>> +
>>> +static void dc_off_put(struct drm_i915_private *dev_priv,
>>> +		       enum intel_display_power_domain domain)
>>> +{
>>> +	struct i915_power_well *power_well;
>>> +
>>> +	power_well = lookup_power_well(dev_priv, SKL_DISP_PW_DC_OFF);
>>> +	if (!power_well || !(power_well->domains & domain))
>>> +		return;
>>> +
>>> +	intel_power_well_put(dev_priv, power_well);
>>> +}
>>> +
>>>  /**
>>>   * intel_display_power_get_if_enabled - grab a reference for an enabled display power domain
>>>   * @dev_priv: i915 device instance
>>> @@ -1508,20 +1628,24 @@ bool intel_display_power_get_if_enabled(struct drm_i915_private *dev_priv,
>>>  					enum intel_display_power_domain domain)
>>>  {
>>>  	struct i915_power_domains *power_domains = &dev_priv->power_domains;
>>> -	bool is_enabled;
>>> +	bool is_enabled = false;
>>>  
>>>  	if (!intel_runtime_pm_get_if_in_use(dev_priv))
>>>  		return false;
>>>  
>>>  	mutex_lock(&power_domains->lock);
>>>  
>>> +	if (!dc_off_get_if_enabled(dev_priv, domain))
>>> +		goto out;
>>> +
>>>  	if (__intel_display_power_is_enabled(dev_priv, domain)) {
>>>  		__intel_display_power_get_domain(dev_priv, domain);
>>>  		is_enabled = true;
>>> -	} else {
>>> -		is_enabled = false;
>>>  	}
>>>  
>>> +	dc_off_put(dev_priv, domain);
>>> +
>>> +out:
>>>  	mutex_unlock(&power_domains->lock);
>>>  
>>>  	if (!is_enabled)
>>> @@ -1549,10 +1673,9 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
>>>  
>>>  	mutex_lock(&power_domains->lock);
>>>  
>>> -	WARN(!power_domains->domain_use_count[domain],
>>> -	     "Use count on domain %s is already zero\n",
>>> +	WARN(atomic_dec_return(&power_domains->domain_use_count[domain]) < 0,
>>> +	     "Use count on domain %s was already zero\n",
>>>  	     intel_display_power_domain_str(domain));
>>> -	power_domains->domain_use_count[domain]--;
>>>  
>>>  	for_each_power_domain_well_rev(dev_priv, power_well, BIT_ULL(domain))
>>>  		intel_power_well_put(dev_priv, power_well);
>>> @@ -1720,6 +1843,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
>>>  	BIT_ULL(POWER_DOMAIN_GT_IRQ) |			\
>>>  	BIT_ULL(POWER_DOMAIN_MODESET) |			\
>>>  	BIT_ULL(POWER_DOMAIN_AUX_A) |			\
>>> +	BIT_ULL(POWER_DOMAIN_VBLANK) |			\
>>>  	BIT_ULL(POWER_DOMAIN_INIT))
>>>  
>>>  #define BXT_DISPLAY_POWERWELL_2_POWER_DOMAINS (		\
>>> @@ -1743,6 +1867,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
>>>  	BIT_ULL(POWER_DOMAIN_MODESET) |			\
>>>  	BIT_ULL(POWER_DOMAIN_AUX_A) |			\
>>>  	BIT_ULL(POWER_DOMAIN_GMBUS) |			\
>>> +	BIT_ULL(POWER_DOMAIN_VBLANK) |			\
>>>  	BIT_ULL(POWER_DOMAIN_INIT))
>>>  #define BXT_DPIO_CMN_A_POWER_DOMAINS (			\
>>>  	BIT_ULL(POWER_DOMAIN_PORT_DDI_A_LANES) |		\
>>> @@ -1803,6 +1928,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
>>>  	BIT_ULL(POWER_DOMAIN_MODESET) |			\
>>>  	BIT_ULL(POWER_DOMAIN_AUX_A) |			\
>>>  	BIT_ULL(POWER_DOMAIN_GMBUS) |			\
>>> +	BIT_ULL(POWER_DOMAIN_VBLANK) |			\
>>>  	BIT_ULL(POWER_DOMAIN_INIT))
>>>  
>>>  #define CNL_DISPLAY_POWERWELL_2_POWER_DOMAINS (		\
>>> @@ -1850,6 +1976,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
>>>  	CNL_DISPLAY_POWERWELL_2_POWER_DOMAINS |		\
>>>  	BIT_ULL(POWER_DOMAIN_MODESET) |			\
>>>  	BIT_ULL(POWER_DOMAIN_AUX_A) |			\
>>> +	BIT_ULL(POWER_DOMAIN_VBLANK) |			\
>>>  	BIT_ULL(POWER_DOMAIN_INIT))
>>>  
>>>  static const struct i915_power_well_ops i9xx_always_on_power_well_ops = {
>>> @@ -2083,9 +2210,12 @@ bool intel_display_power_well_is_enabled(struct drm_i915_private *dev_priv,
>>>  {
>>>  	struct i915_power_well *power_well;
>>>  	bool ret;
>>> +	unsigned long flags = 0;
>>>  
>>>  	power_well = lookup_power_well(dev_priv, power_well_id);
>>> +	power_well_lock(power_well, flags);
>>>  	ret = power_well->ops->is_enabled(dev_priv, power_well);
>>> +	power_well_unlock(power_well, flags);
>>>  
>>>  	return ret;
>>>  }
>>> @@ -2120,6 +2250,8 @@ static struct i915_power_well skl_power_wells[] = {
>>>  		.domains = SKL_DISPLAY_DC_OFF_POWER_DOMAINS,
>>>  		.ops = &gen9_dc_off_power_well_ops,
>>>  		.id = SKL_DISP_PW_DC_OFF,
>>> +		.supports_atomic_ctx = true,
>>> +		.dc_off_disabled = false,
>>>  	},
>>>  	{
>>>  		.name = "power well 2",
>>> @@ -2180,6 +2312,8 @@ static struct i915_power_well bxt_power_wells[] = {
>>>  		.domains = BXT_DISPLAY_DC_OFF_POWER_DOMAINS,
>>>  		.ops = &gen9_dc_off_power_well_ops,
>>>  		.id = SKL_DISP_PW_DC_OFF,
>>> +		.supports_atomic_ctx = true,
>>> +		.dc_off_disabled = false,
>>>  	},
>>>  	{
>>>  		.name = "power well 2",
>>> @@ -2235,6 +2369,8 @@ static struct i915_power_well glk_power_wells[] = {
>>>  		.domains = GLK_DISPLAY_DC_OFF_POWER_DOMAINS,
>>>  		.ops = &gen9_dc_off_power_well_ops,
>>>  		.id = SKL_DISP_PW_DC_OFF,
>>> +		.supports_atomic_ctx = true,
>>> +		.dc_off_disabled = false,
>>>  	},
>>>  	{
>>>  		.name = "power well 2",
>>> @@ -2359,6 +2495,8 @@ static struct i915_power_well cnl_power_wells[] = {
>>>  		.domains = CNL_DISPLAY_DC_OFF_POWER_DOMAINS,
>>>  		.ops = &gen9_dc_off_power_well_ops,
>>>  		.id = SKL_DISP_PW_DC_OFF,
>>> +		.supports_atomic_ctx = true,
>>> +		.dc_off_disabled = false,
>>>  	},
>>>  	{
>>>  		.name = "power well 2",
>>> @@ -2487,6 +2625,7 @@ static void assert_power_well_ids_unique(struct drm_i915_private *dev_priv)
>>>  int intel_power_domains_init(struct drm_i915_private *dev_priv)
>>>  {
>>>  	struct i915_power_domains *power_domains = &dev_priv->power_domains;
>>> +	struct i915_power_well *power_well;
>>>  
>>>  	i915_modparams.disable_power_well =
>>>  		sanitize_disable_power_well_option(dev_priv,
>>> @@ -2524,6 +2663,10 @@ int intel_power_domains_init(struct drm_i915_private *dev_priv)
>>>  		set_power_wells(power_domains, i9xx_always_on_power_well);
>>>  	}
>>>  
>>> +	for_each_power_well(dev_priv, power_well)
>>> +		if (power_well->supports_atomic_ctx)
>>> +			spin_lock_init(&power_well->lock);
>>> +
>>>  	assert_power_well_ids_unique(dev_priv);
>>>  
>>>  	return 0;
>>> @@ -2571,9 +2714,13 @@ static void intel_power_domains_sync_hw(struct drm_i915_private *dev_priv)
>>>  
>>>  	mutex_lock(&power_domains->lock);
>>>  	for_each_power_well(dev_priv, power_well) {
>>> +		unsigned long flags = 0;
>>> +
>>> +		power_well_lock(power_well, flags);
>>>  		power_well->ops->sync_hw(dev_priv, power_well);
>>>  		power_well->hw_enabled = power_well->ops->is_enabled(dev_priv,
>>>  								     power_well);
>>> +		power_well_unlock(power_well, flags);
>>>  	}
>>>  	mutex_unlock(&power_domains->lock);
>>>  }
>>> @@ -3046,21 +3193,23 @@ void intel_power_domains_suspend(struct drm_i915_private *dev_priv)
>>>  		bxt_display_core_uninit(dev_priv);
>>>  }
>>>  
>>> -static void intel_power_domains_dump_info(struct drm_i915_private *dev_priv)
>>> +static void intel_power_domains_dump_info(struct drm_i915_private *dev_priv,
>>> +					  const int *power_well_use)
>>>  {
>>>  	struct i915_power_domains *power_domains = &dev_priv->power_domains;
>>>  	struct i915_power_well *power_well;
>>> +	int i = 0;
>>>  
>>>  	for_each_power_well(dev_priv, power_well) {
>>>  		enum intel_display_power_domain domain;
>>>  
>>>  		DRM_DEBUG_DRIVER("%-25s %d\n",
>>> -				 power_well->name, power_well->count);
>>> +				 power_well->name, power_well_use[i++]);
>>>  
>>>  		for_each_power_domain(domain, power_well->domains)
>>>  			DRM_DEBUG_DRIVER("  %-23s %d\n",
>>>  					 intel_display_power_domain_str(domain),
>>> -					 power_domains->domain_use_count[domain]);
>>> +					 atomic_read(&power_domains->domain_use_count[domain]));
>>>  	}
>>>  }
>>>  
>>> @@ -3079,6 +3228,7 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv)
>>>  	struct i915_power_domains *power_domains = &dev_priv->power_domains;
>>>  	struct i915_power_well *power_well;
>>>  	bool dump_domain_info;
>>> +	int power_well_use[dev_priv->power_domains.power_well_count];
>>>  
>>>  	mutex_lock(&power_domains->lock);
>>>  
>>> @@ -3087,6 +3237,14 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv)
>>>  		enum intel_display_power_domain domain;
>>>  		int domains_count;
>>>  		bool enabled;
>>> +		int well_count, i = 0;
>>> +		unsigned long flags = 0;
>>> +
>>> +		power_well_lock(power_well, flags);
>>> +		well_count = power_well->count;
>>> +		enabled = power_well->ops->is_enabled(dev_priv, power_well);
>>> +		power_well_unlock(power_well, flags);
>>> +		power_well_use[i++] = well_count;
>>>  
>>>  		/*
>>>  		 * Power wells not belonging to any domain (like the MISC_IO
>>> @@ -3096,20 +3254,19 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv)
>>>  		if (!power_well->domains)
>>>  			continue;
>>>  
>>> -		enabled = power_well->ops->is_enabled(dev_priv, power_well);
>>> -		if ((power_well->count || power_well->always_on) != enabled)
>>> +
>>> +		if ((well_count || power_well->always_on) != enabled)
>>>  			DRM_ERROR("power well %s state mismatch (refcount %d/enabled %d)",
>>> -				  power_well->name, power_well->count, enabled);
>>> +				  power_well->name, well_count, enabled);
>>>  
>>>  		domains_count = 0;
>>>  		for_each_power_domain(domain, power_well->domains)
>>> -			domains_count += power_domains->domain_use_count[domain];
>>> +			domains_count += atomic_read(&power_domains->domain_use_count[domain]);
>>>  
>>> -		if (power_well->count != domains_count) {
>>> +		if (well_count != domains_count) {
>>>  			DRM_ERROR("power well %s refcount/domain refcount mismatch "
>>>  				  "(refcount %d/domains refcount %d)\n",
>>> -				  power_well->name, power_well->count,
>>> -				  domains_count);
>>> +				  power_well->name, well_count, domains_count);
>>>  			dump_domain_info = true;
>>>  		}
>>>  	}
>>> @@ -3118,7 +3275,7 @@ void intel_power_domains_verify_state(struct drm_i915_private *dev_priv)
>>>  		static bool dumped;
>>>  
>>>  		if (!dumped) {
>>> -			intel_power_domains_dump_info(dev_priv);
>>> +			intel_power_domains_dump_info(dev_priv, power_well_use);
>>>  			dumped = true;
>>>  		}
>>>  	}
>>> -- 
>>> 2.11.0
>>>
>> I will probably have more comments later, but just doing a brain dump now
>> since I ended up forgetting to write yesterday...
>>
>> The approach here in general is good, and much better than the pre/post hooks.
>> But I believe we can do this in a more generic way rather than deviating from
>> the initial power well and domain code.
> I would have liked a generic approach (for display_power_{get,put}), but
> I think this case is special enough that making it stand out is better.
Agreed, I can think of no other way myself without making the generic case too complicated. The whole runtime power management became way too complex for a special case.
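(Archive editor's sketch: the conditional-locking scheme the patch introduces with power_well_lock()/power_well_unlock() — take the lock only for wells flagged as reachable from atomic context — can be modeled in userspace as below. A pthread mutex stands in for the kernel spinlock, and all names are illustrative, not the real i915 ones.)

```c
#include <pthread.h>
#include <stdbool.h>

/* Toy model of a power well whose lock is optional. */
struct toy_power_well {
	bool supports_atomic_ctx;
	pthread_mutex_t lock;	/* stands in for the kernel spinlock */
	int count;
	bool hw_enabled;
};

/* Lock only wells that can be touched from atomic context, mirroring
 * the patch's power_well_lock()/power_well_unlock() macros. */
#define toy_well_lock(pw) do {				\
		if ((pw)->supports_atomic_ctx)		\
			pthread_mutex_lock(&(pw)->lock);	\
	} while (0)

#define toy_well_unlock(pw) do {			\
		if ((pw)->supports_atomic_ctx)		\
			pthread_mutex_unlock(&(pw)->lock);	\
	} while (0)

/* get/put mirror the shape of __intel_power_well_get() and
 * intel_power_well_put(): the first reference "enables" the well,
 * the last one "disables" it. */
static void toy_well_get(struct toy_power_well *pw)
{
	toy_well_lock(pw);
	if (!pw->count++)
		pw->hw_enabled = true;
	toy_well_unlock(pw);
}

static void toy_well_put(struct toy_power_well *pw)
{
	toy_well_lock(pw);
	if (!--pw->count)
		pw->hw_enabled = false;
	toy_well_unlock(pw);
}
```

The design choice being reviewed is visible here: wells that never see atomic context pay no locking cost, at the price of a per-well flag and macros that must be used consistently.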

~Maarten
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


* Re: [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts
  2018-01-09  9:38       ` Maarten Lankhorst
@ 2018-01-16 19:56         ` Pandiyan, Dhinakaran
  0 siblings, 0 replies; 22+ messages in thread
From: Pandiyan, Dhinakaran @ 2018-01-16 19:56 UTC (permalink / raw)
  To: maarten.lankhorst@linux.intel.com
  Cc: daniel.vetter@ffwll.ch, intel-gfx@lists.freedesktop.org,
	Zanoni, Paulo R, Vivi, Rodrigo

On Tue, 2018-01-09 at 10:38 +0100, Maarten Lankhorst wrote:
> Op 08-01-18 om 20:59 schreef Pandiyan, Dhinakaran:
> > On Thu, 2018-01-04 at 18:08 -0800, Rodrigo Vivi wrote:
<snip>
> >> I will probably have more comments later, but just doing a brain dump now
> >> since I ended up forgetting to write yesterday...
> >>
> >> The approach here in general is good, and much better than the pre/post hooks.
> >> But I believe we can do this in a more generic way rather than deviating from
> >> the initial power well and domain code.
> > I would have liked a generic approach (for display_power_{get,put}), but
> > I think this case is special enough that making it stand out is better.
> Agreed, I can think of no other way myself without making the generic case too complicated. The whole runtime power management became way too complex for a special case.
> 
I found out that DMC keeps the hardware out of DC5/6 when vblank
interrupts are enabled, which simplified the solution a lot
(https://patchwork.freedesktop.org/series/36435/). Thanks for your review
on this series; I would appreciate any feedback on the new one too :)

-DK
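(Archive editor's sketch: the dc_off_get_if_enabled() helper from the patch above follows a common "get-if-enabled" refcounting pattern — take a reference only when the well is already enabled, so the caller never powers it up itself. A minimal single-threaded model, with illustrative names rather than the real i915 ones; in the kernel the check-and-get runs under the well's spinlock.)

```c
#include <stdbool.h>

/* Toy model of a power well for the get-if-enabled pattern. */
struct toy_well {
	int count;
	bool hw_enabled;
};

/* Returns true and bumps the refcount iff the well is currently
 * enabled; otherwise leaves the well untouched and reports failure,
 * like dc_off_get_if_enabled() returning false. */
static bool toy_get_if_enabled(struct toy_well *w)
{
	if (!w->hw_enabled)
		return false;
	w->count++;
	return true;
}

/* Dropping the last reference disables the well again. */
static void toy_put(struct toy_well *w)
{
	if (!--w->count)
		w->hw_enabled = false;
}
```

A caller uses it as a guard: if toy_get_if_enabled() fails, bail out without touching hardware; if it succeeds, do the work and balance with toy_put().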



> ~Maarten


end of thread, other threads:[~2018-01-16 19:56 UTC | newest]

Thread overview: 22+ messages
2018-01-03 20:39 [PATCH 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Dhinakaran Pandiyan
2018-01-03 20:39 ` [PATCH 2/5] drm/vblank: Restoring vblank counts after device runtime PM events Dhinakaran Pandiyan
2018-01-03 20:55   ` Chris Wilson
2018-01-03 20:39 ` [PATCH 3/5] drm/i915: Enable vblanks after verifying power domain states Dhinakaran Pandiyan
2018-01-03 20:57   ` Chris Wilson
2018-01-04 21:25     ` Pandiyan, Dhinakaran
2018-01-04 11:35   ` Maarten Lankhorst
2018-01-04 21:51     ` Pandiyan, Dhinakaran
2018-01-05 11:23       ` Maarten Lankhorst
2018-01-05 18:09         ` Rodrigo Vivi
2018-01-05 18:45           ` Pandiyan, Dhinakaran
2018-01-06  9:51     ` Dhinakaran Pandiyan
2018-01-08 14:43       ` Maarten Lankhorst
2018-01-08 19:29         ` Pandiyan, Dhinakaran
2018-01-03 20:40 ` [PATCH 4/5] drm/i915: Introduce a non-blocking power domain for vblank interrupts Dhinakaran Pandiyan
2018-01-05  2:08   ` Rodrigo Vivi
2018-01-08 19:59     ` Pandiyan, Dhinakaran
2018-01-09  9:38       ` Maarten Lankhorst
2018-01-16 19:56         ` Pandiyan, Dhinakaran
2018-01-03 20:40 ` [PATCH 5/5] drm/i915: Use the vblank power domain disallow or disable DC states Dhinakaran Pandiyan
2018-01-03 20:53 ` [PATCH 1/5] drm/vblank: Do not update vblank counts if vblanks are already disabled Chris Wilson
2018-01-03 21:01 ` ✓ Fi.CI.BAT: success for series starting with [1/5] " Patchwork
