public inbox for intel-gfx@lists.freedesktop.org
* [PATCH 1/2] drm/i915/backlight: handle enabled but zero duty cycle at module load
@ 2016-08-23  8:10 Maarten Lankhorst
  2016-08-23  8:10 ` [PATCH] drm/i915: Fix botched merge that downgrades CSR versions Maarten Lankhorst
                   ` (6 more replies)
  0 siblings, 7 replies; 10+ messages in thread
From: Maarten Lankhorst @ 2016-08-23  8:10 UTC (permalink / raw)
  To: intel-gfx; +Cc: Jani Nikula

From: Jani Nikula <jani.nikula@intel.com>

Don't consider a backlight that is enabled but has a zero duty cycle to be
disabled. Clamp the level between min and max for sanity.

Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
 drivers/gpu/drm/i915/intel_panel.c | 40 +++++++++++++++++++++++---------------
 1 file changed, 24 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_panel.c b/drivers/gpu/drm/i915/intel_panel.c
index 96c65d77e886..c10e9b0405e8 100644
--- a/drivers/gpu/drm/i915/intel_panel.c
+++ b/drivers/gpu/drm/i915/intel_panel.c
@@ -1430,10 +1430,11 @@ static int lpt_setup_backlight(struct intel_connector *connector, enum pipe unus
 	panel->backlight.min = get_backlight_min_vbt(connector);
 
 	val = lpt_get_backlight(connector);
-	panel->backlight.level = intel_panel_compute_brightness(connector, val);
+	val = intel_panel_compute_brightness(connector, val);
+	panel->backlight.level = clamp(val, panel->backlight.min,
+				       panel->backlight.max);
 
-	panel->backlight.enabled = (pch_ctl1 & BLM_PCH_PWM_ENABLE) &&
-		panel->backlight.level != 0;
+	panel->backlight.enabled = pch_ctl1 & BLM_PCH_PWM_ENABLE;
 
 	return 0;
 }
@@ -1459,11 +1460,13 @@ static int pch_setup_backlight(struct intel_connector *connector, enum pipe unus
 	panel->backlight.min = get_backlight_min_vbt(connector);
 
 	val = pch_get_backlight(connector);
-	panel->backlight.level = intel_panel_compute_brightness(connector, val);
+	val = intel_panel_compute_brightness(connector, val);
+	panel->backlight.level = clamp(val, panel->backlight.min,
+				       panel->backlight.max);
 
 	cpu_ctl2 = I915_READ(BLC_PWM_CPU_CTL2);
 	panel->backlight.enabled = (cpu_ctl2 & BLM_PWM_ENABLE) &&
-		(pch_ctl1 & BLM_PCH_PWM_ENABLE) && panel->backlight.level != 0;
+		(pch_ctl1 & BLM_PCH_PWM_ENABLE);
 
 	return 0;
 }
@@ -1498,9 +1501,11 @@ static int i9xx_setup_backlight(struct intel_connector *connector, enum pipe unu
 	panel->backlight.min = get_backlight_min_vbt(connector);
 
 	val = i9xx_get_backlight(connector);
-	panel->backlight.level = intel_panel_compute_brightness(connector, val);
+	val = intel_panel_compute_brightness(connector, val);
+	panel->backlight.level = clamp(val, panel->backlight.min,
+				       panel->backlight.max);
 
-	panel->backlight.enabled = panel->backlight.level != 0;
+	panel->backlight.enabled = val != 0;
 
 	return 0;
 }
@@ -1530,10 +1535,11 @@ static int i965_setup_backlight(struct intel_connector *connector, enum pipe unu
 	panel->backlight.min = get_backlight_min_vbt(connector);
 
 	val = i9xx_get_backlight(connector);
-	panel->backlight.level = intel_panel_compute_brightness(connector, val);
+	val = intel_panel_compute_brightness(connector, val);
+	panel->backlight.level = clamp(val, panel->backlight.min,
+				       panel->backlight.max);
 
-	panel->backlight.enabled = (ctl2 & BLM_PWM_ENABLE) &&
-		panel->backlight.level != 0;
+	panel->backlight.enabled = ctl2 & BLM_PWM_ENABLE;
 
 	return 0;
 }
@@ -1562,10 +1568,11 @@ static int vlv_setup_backlight(struct intel_connector *connector, enum pipe pipe
 	panel->backlight.min = get_backlight_min_vbt(connector);
 
 	val = _vlv_get_backlight(dev_priv, pipe);
-	panel->backlight.level = intel_panel_compute_brightness(connector, val);
+	val = intel_panel_compute_brightness(connector, val);
+	panel->backlight.level = clamp(val, panel->backlight.min,
+				       panel->backlight.max);
 
-	panel->backlight.enabled = (ctl2 & BLM_PWM_ENABLE) &&
-		panel->backlight.level != 0;
+	panel->backlight.enabled = ctl2 & BLM_PWM_ENABLE;
 
 	return 0;
 }
@@ -1607,10 +1614,11 @@ bxt_setup_backlight(struct intel_connector *connector, enum pipe unused)
 		return -ENODEV;
 
 	val = bxt_get_backlight(connector);
-	panel->backlight.level = intel_panel_compute_brightness(connector, val);
+	val = intel_panel_compute_brightness(connector, val);
+	panel->backlight.level = clamp(val, panel->backlight.min,
+				       panel->backlight.max);
 
-	panel->backlight.enabled = (pwm_ctl & BXT_BLC_PWM_ENABLE) &&
-		panel->backlight.level != 0;
+	panel->backlight.enabled = pwm_ctl & BXT_BLC_PWM_ENABLE;
 
 	return 0;
 }
-- 
2.7.4

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH] drm/i915: Fix botched merge that downgrades CSR versions.
  2016-08-23  8:10 [PATCH 1/2] drm/i915/backlight: handle enabled but zero duty cycle at module load Maarten Lankhorst
@ 2016-08-23  8:10 ` Maarten Lankhorst
  2016-08-25 11:29   ` Jani Nikula
  2016-08-23  8:10 ` [PATCH] locking/mutex: Add waiter parameter to mutex_optimistic_spin() Maarten Lankhorst
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 10+ messages in thread
From: Maarten Lankhorst @ 2016-08-23  8:10 UTC (permalink / raw)
  To: intel-gfx; +Cc: drm-intel-fixes

Merge commit 5e580523d9128a4d8 reverted the version-bumping parts of
commit 4aa7fb9c3c4fa0. Bump the versions again and request the specific
firmware versions.

The currently recommended versions are: SKL 1.26, KBL 1.01 and BXT 1.07.

Cc: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=97242
Cc: drm-intel-fixes@lists.freedesktop.org
Fixes: 5e580523d912 ("Backmerge tag 'v4.7' into drm-next")
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
 drivers/gpu/drm/i915/intel_csr.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_csr.c b/drivers/gpu/drm/i915/intel_csr.c
index fb27d187876c..1ea0e1f43397 100644
--- a/drivers/gpu/drm/i915/intel_csr.c
+++ b/drivers/gpu/drm/i915/intel_csr.c
@@ -34,15 +34,15 @@
  * low-power state and comes back to normal.
  */
 
-#define I915_CSR_KBL "i915/kbl_dmc_ver1.bin"
+#define I915_CSR_KBL "i915/kbl_dmc_ver1_01.bin"
 MODULE_FIRMWARE(I915_CSR_KBL);
 #define KBL_CSR_VERSION_REQUIRED	CSR_VERSION(1, 1)
 
-#define I915_CSR_SKL "i915/skl_dmc_ver1.bin"
+#define I915_CSR_SKL "i915/skl_dmc_ver1_26.bin"
 MODULE_FIRMWARE(I915_CSR_SKL);
-#define SKL_CSR_VERSION_REQUIRED	CSR_VERSION(1, 23)
+#define SKL_CSR_VERSION_REQUIRED	CSR_VERSION(1, 26)
 
-#define I915_CSR_BXT "i915/bxt_dmc_ver1.bin"
+#define I915_CSR_BXT "i915/bxt_dmc_ver1_07.bin"
 MODULE_FIRMWARE(I915_CSR_BXT);
 #define BXT_CSR_VERSION_REQUIRED	CSR_VERSION(1, 7)
 
-- 
2.7.4


* [PATCH] locking/mutex: Add waiter parameter to mutex_optimistic_spin()
  2016-08-23  8:10 [PATCH 1/2] drm/i915/backlight: handle enabled but zero duty cycle at module load Maarten Lankhorst
  2016-08-23  8:10 ` [PATCH] drm/i915: Fix botched merge that downgrades CSR versions Maarten Lankhorst
@ 2016-08-23  8:10 ` Maarten Lankhorst
  2016-08-23  8:10 ` [PATCH 2/2] drm/i915: Enable fast modesetting again Maarten Lankhorst
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Maarten Lankhorst @ 2016-08-23  8:10 UTC (permalink / raw)
  To: intel-gfx; +Cc: Waiman Long, Waiman Long

From: Waiman Long <Waiman.Long@hpe.com>

This patch adds a new waiter parameter to the mutex_optimistic_spin()
function to prepare it for use by a waiter-spinner that doesn't need to
go into the OSQ, since there can only be one waiter-spinner: the head of
the wait queue.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 kernel/locking/mutex.c | 66 +++++++++++++++++++++++++++++++++++---------------
 1 file changed, 46 insertions(+), 20 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index a70b90d..875c925 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -273,11 +273,15 @@ static inline int mutex_can_spin_on_owner(struct mutex *lock)
 
 /*
  * Atomically try to take the lock when it is available
+ *
+ * For waiter-spinner, the count needs to be set to -1 first which will be
+ * cleared to 0 later on if the list becomes empty. For regular spinner,
+ * the count will be set to 0.
  */
-static inline bool mutex_try_to_acquire(struct mutex *lock)
+static inline bool mutex_try_to_acquire(struct mutex *lock, int waiter)
 {
 	return !mutex_is_locked(lock) &&
-		(atomic_cmpxchg_acquire(&lock->count, 1, 0) == 1);
+		(atomic_cmpxchg_acquire(&lock->count, 1, waiter ? -1 : 0) == 1);
 }
 
 /*
@@ -302,22 +306,42 @@ static inline bool mutex_try_to_acquire(struct mutex *lock)
  *
  * Returns true when the lock was taken, otherwise false, indicating
  * that we need to jump to the slowpath and sleep.
+ *
+ * The waiter flag is set to true if the spinner is a waiter in the wait
+ * queue. As the waiter has slept for a while, it should have priority to
+ * get the lock over the regular spinners. So going to wait at the end of
+ * the OSQ isn't fair to the waiter. Instead, it will spin on the lock
+ * directly and concurrently with the spinner at the head of the OSQ, if
+ * present. There may be a bit more cacheline contention in this case.
+ * The waiter also needs to set the lock to -1 instead of 0 on lock
+ * acquisition.
  */
 static bool mutex_optimistic_spin(struct mutex *lock,
-				  struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
+				  struct ww_acquire_ctx *ww_ctx,
+				  const bool use_ww_ctx, int waiter)
 {
 	struct task_struct *task = current;
+	bool acquired = false;
 
-	if (!mutex_can_spin_on_owner(lock))
-		goto done;
+	if (!waiter) {
+		/*
+		 * The purpose of the mutex_can_spin_on_owner() function is
+		 * to eliminate the overhead of osq_lock() and osq_unlock()
+		 * in case spinning isn't possible. As a waiter-spinner
+		 * is not going to take OSQ lock anyway, there is no need
+		 * to call mutex_can_spin_on_owner().
+		 */
+		if (!mutex_can_spin_on_owner(lock))
+			goto done;
 
-	/*
-	 * In order to avoid a stampede of mutex spinners trying to
-	 * acquire the mutex all at once, the spinners need to take a
-	 * MCS (queued) lock first before spinning on the owner field.
-	 */
-	if (!osq_lock(&lock->osq))
-		goto done;
+		/*
+		 * In order to avoid a stampede of mutex spinners trying to
+		 * acquire the mutex all at once, the spinners need to take a
+		 * MCS (queued) lock first before spinning on the owner field.
+		 */
+		if (!osq_lock(&lock->osq))
+			goto done;
+	}
 
 	while (true) {
 		struct task_struct *owner;
@@ -347,7 +371,7 @@ static bool mutex_optimistic_spin(struct mutex *lock,
 			break;
 
 		/* Try to acquire the mutex if it is unlocked. */
-		if (mutex_try_to_acquire(lock)) {
+		if (mutex_try_to_acquire(lock, waiter)) {
 			lock_acquired(&lock->dep_map, ip);
 
 			if (use_ww_ctx) {
@@ -358,8 +382,8 @@ static bool mutex_optimistic_spin(struct mutex *lock,
 			}
 
 			mutex_set_owner(lock);
-			osq_unlock(&lock->osq);
-			return true;
+			acquired = true;
+			break;
 		}
 
 		/*
@@ -380,14 +404,15 @@ static bool mutex_optimistic_spin(struct mutex *lock,
 		cpu_relax_lowlatency();
 	}
 
-	osq_unlock(&lock->osq);
+	if (!waiter)
+		osq_unlock(&lock->osq);
 done:
 	/*
 	 * If we fell out of the spin path because of need_resched(),
 	 * reschedule now, before we try-lock the mutex. This avoids getting
 	 * scheduled out right after we obtained the mutex.
 	 */
-	if (need_resched()) {
+	if (!acquired && need_resched()) {
 		/*
 		 * We _should_ have TASK_RUNNING here, but just in case
 		 * we do not, make it so, otherwise we might get stuck.
@@ -396,11 +421,12 @@ done:
 		schedule_preempt_disabled();
 	}
 
-	return false;
+	return acquired;
 }
 #else
 static bool mutex_optimistic_spin(struct mutex *lock,
-				  struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
+				  struct ww_acquire_ctx *ww_ctx,
+				  const bool use_ww_ctx, int waiter)
 {
 	return false;
 }
@@ -520,7 +546,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	preempt_disable();
 	mutex_acquire_nest(&lock->dep_map, subclass, 0, nest_lock, ip);
 
-	if (mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx)) {
+	if (mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, false)) {
 		/* got the lock, yay! */
 		preempt_enable();
 		return 0;
-- 
2.5.0


* [PATCH 2/2] drm/i915: Enable fast modesetting again.
  2016-08-23  8:10 [PATCH 1/2] drm/i915/backlight: handle enabled but zero duty cycle at module load Maarten Lankhorst
  2016-08-23  8:10 ` [PATCH] drm/i915: Fix botched merge that downgrades CSR versions Maarten Lankhorst
  2016-08-23  8:10 ` [PATCH] locking/mutex: Add waiter parameter to mutex_optimistic_spin() Maarten Lankhorst
@ 2016-08-23  8:10 ` Maarten Lankhorst
  2016-08-23  8:10 ` [PATCH] locking/mutex: Enable optimistic spinning of woken task in wait queue Maarten Lankhorst
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Maarten Lankhorst @ 2016-08-23  8:10 UTC (permalink / raw)
  To: intel-gfx; +Cc: Jani Nikula, Olof Johansson

This was attempted before in commit 7383123647566, but reverted when a
Chromebook Pixel (2015) regression was reported.

Jani wrote a patch to fix the backlight, but it has been completely
ignored by the original reporter of the bug.

We now have more testcases for fastset, so it should be possible to
enable it again with some confidence about regressions.

Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Olof Johansson <olof@lixom.net>
Testcase: kms_panel_fitting
---
 drivers/gpu/drm/i915/i915_params.c   | 5 -----
 drivers/gpu/drm/i915/i915_params.h   | 1 -
 drivers/gpu/drm/i915/intel_display.c | 3 +--
 3 files changed, 1 insertion(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_params.c b/drivers/gpu/drm/i915/i915_params.c
index 768ad89d9cd4..0f90138cd881 100644
--- a/drivers/gpu/drm/i915/i915_params.c
+++ b/drivers/gpu/drm/i915/i915_params.c
@@ -42,7 +42,6 @@ struct i915_params i915 __read_mostly = {
 	.preliminary_hw_support = IS_ENABLED(CONFIG_DRM_I915_PRELIMINARY_HW_SUPPORT),
 	.disable_power_well = -1,
 	.enable_ips = 1,
-	.fastboot = 0,
 	.prefault_disable = 0,
 	.load_detect_test = 0,
 	.force_reset_modeset_test = 0,
@@ -148,10 +147,6 @@ MODULE_PARM_DESC(disable_power_well,
 module_param_named_unsafe(enable_ips, i915.enable_ips, int, 0600);
 MODULE_PARM_DESC(enable_ips, "Enable IPS (default: true)");
 
-module_param_named(fastboot, i915.fastboot, bool, 0600);
-MODULE_PARM_DESC(fastboot,
-	"Try to skip unnecessary mode sets at boot time (default: false)");
-
 module_param_named_unsafe(prefault_disable, i915.prefault_disable, bool, 0600);
 MODULE_PARM_DESC(prefault_disable,
 	"Disable page prefaulting for pread/pwrite/reloc (default:false). "
diff --git a/drivers/gpu/drm/i915/i915_params.h b/drivers/gpu/drm/i915/i915_params.h
index 3a0dd78ddb38..749f6c48a69b 100644
--- a/drivers/gpu/drm/i915/i915_params.h
+++ b/drivers/gpu/drm/i915/i915_params.h
@@ -54,7 +54,6 @@ struct i915_params {
 	unsigned int inject_load_failure;
 	/* leave bools at the end to not create holes */
 	bool enable_hangcheck;
-	bool fastboot;
 	bool prefault_disable;
 	bool load_detect_test;
 	bool force_reset_modeset_test;
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 0af6ce3d7dde..dee2e85aedb3 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -13898,8 +13898,7 @@ static int intel_atomic_check(struct drm_device *dev,
 			return ret;
 		}
 
-		if (i915.fastboot &&
-		    intel_pipe_config_compare(dev,
+		if (intel_pipe_config_compare(dev,
 					to_intel_crtc_state(crtc->state),
 					pipe_config, true)) {
 			crtc_state->mode_changed = false;
-- 
2.7.4


* [PATCH] locking/mutex: Enable optimistic spinning of woken task in wait queue
  2016-08-23  8:10 [PATCH 1/2] drm/i915/backlight: handle enabled but zero duty cycle at module load Maarten Lankhorst
                   ` (2 preceding siblings ...)
  2016-08-23  8:10 ` [PATCH 2/2] drm/i915: Enable fast modesetting again Maarten Lankhorst
@ 2016-08-23  8:10 ` Maarten Lankhorst
  2016-08-23  8:10 ` [PATCH] locking/mutex: Ensure forward progress of waiter-spinner Maarten Lankhorst
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Maarten Lankhorst @ 2016-08-23  8:10 UTC (permalink / raw)
  To: intel-gfx; +Cc: Waiman Long

From: Waiman Long <Waiman.Long@hpe.com>

Ding Tianhong reported a live-lock situation where a constant stream
of incoming optimistic spinners blocked a task in the wait list from
getting the mutex.

This patch attempts to fix this live-lock condition by enabling the
woken task in the wait queue to enter into an optimistic spinning
loop itself in parallel with the regular spinners in the OSQ. This
should prevent the live-lock condition from happening.

Running the AIM7 benchmarks on a 4-socket E7-4820 v3 system (with ext4
filesystem), the additional spinning of the waiter-spinner improved
performance for the following workloads at high user count:

  Workload	% Improvement
  --------	-------------
  alltests	    3.9%
  disk		    3.4%
  fserver	    2.0%
  long		    3.8%
  new_fserver	   10.5%

The other workloads were about the same as before.

Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
---
 kernel/locking/mutex.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 875c925..24133c1 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -535,6 +535,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	struct task_struct *task = current;
 	struct mutex_waiter waiter;
 	unsigned long flags;
+	bool  acquired = false;	/* True if the lock is acquired */
 	int ret;
 
 	if (use_ww_ctx) {
@@ -571,7 +572,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 
 	lock_contended(&lock->dep_map, ip);
 
-	for (;;) {
+	while (!acquired) {
 		/*
 		 * Lets try to take the lock again - this is needed even if
 		 * we get here for the first time (shortly after failing to
@@ -606,6 +607,15 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		/* didn't get the lock, go to sleep: */
 		spin_unlock_mutex(&lock->wait_lock, flags);
 		schedule_preempt_disabled();
+
+		/*
+		 * Optimistically spinning on the mutex without the wait lock
+		 * The state has to be set to running to avoid another waker
+		 * spinning on the on_cpu flag while the woken waiter is
+		 * spinning on the mutex.
+		 */
+		acquired = mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx,
+						 true);
 		spin_lock_mutex(&lock->wait_lock, flags);
 	}
 	__set_task_state(task, TASK_RUNNING);
@@ -616,6 +626,9 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		atomic_set(&lock->count, 0);
 	debug_mutex_free_waiter(&waiter);
 
+	if (acquired)
+		goto unlock;
+
 skip_wait:
 	/* got the lock - cleanup and rejoice! */
 	lock_acquired(&lock->dep_map, ip);
@@ -626,6 +639,7 @@ skip_wait:
 		ww_mutex_set_context_slowpath(ww, ww_ctx);
 	}
 
+unlock:
 	spin_unlock_mutex(&lock->wait_lock, flags);
 	preempt_enable();
 	return 0;
-- 
2.5.0


* [PATCH] locking/mutex: Ensure forward progress of waiter-spinner
  2016-08-23  8:10 [PATCH 1/2] drm/i915/backlight: handle enabled but zero duty cycle at module load Maarten Lankhorst
                   ` (3 preceding siblings ...)
  2016-08-23  8:10 ` [PATCH] locking/mutex: Enable optimistic spinning of woken task in wait queue Maarten Lankhorst
@ 2016-08-23  8:10 ` Maarten Lankhorst
  2016-08-23  8:10 ` [PATCH] Avoid mutex starvation when optimistic spinning is disabled Maarten Lankhorst
  2016-08-25 11:27 ` [PATCH 1/2] drm/i915/backlight: handle enabled but zero duty cycle at module load Jani Nikula
  6 siblings, 0 replies; 10+ messages in thread
From: Maarten Lankhorst @ 2016-08-23  8:10 UTC (permalink / raw)
  To: intel-gfx; +Cc: Waiman Long

From: Waiman Long <Waiman.Long@hpe.com>

As both an optimistic spinner and a waiter-spinner (a woken task from
the wait queue that is spinning) can spin on the lock at the same time,
forward progress cannot be guaranteed for the waiter-spinner. It is
therefore possible, though unlikely, for the waiter-spinner to be
starved of the lock.

This patch adds a flag to indicate that a waiter-spinner is spinning,
giving it priority in acquiring the lock. A
waiter-spinner sets this flag while spinning. An optimistic spinner
will check this flag and yield if set. This essentially makes the
waiter-spinner jump to the head of the optimistic spinning queue to
acquire the lock.

There will be no increase in size for the mutex structure for 64-bit
architectures. For 32-bit architectures, there will be a size increase
of 4 bytes.

Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
---
 include/linux/mutex.h  |  1 +
 kernel/locking/mutex.c | 36 +++++++++++++++++++++++++++---------
 2 files changed, 28 insertions(+), 9 deletions(-)

diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index 2cb7531..f8e91ad 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -57,6 +57,7 @@ struct mutex {
 #endif
 #ifdef CONFIG_MUTEX_SPIN_ON_OWNER
 	struct optimistic_spin_queue osq; /* Spinner MCS lock */
+	int waiter_spinning;
 #endif
 #ifdef CONFIG_DEBUG_MUTEXES
 	void			*magic;
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 24133c1..3bcd25c 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -55,6 +55,7 @@ __mutex_init(struct mutex *lock, const char *name, struct lock_class_key *key)
 	mutex_clear_owner(lock);
 #ifdef CONFIG_MUTEX_SPIN_ON_OWNER
 	osq_lock_init(&lock->osq);
+	lock->waiter_spinning = false;
 #endif
 
 	debug_mutex_init(lock, name, key);
@@ -341,9 +342,21 @@ static bool mutex_optimistic_spin(struct mutex *lock,
 		 */
 		if (!osq_lock(&lock->osq))
 			goto done;
+	} else {
+		/*
+		 * Turn on the waiter spinning flag to discourage the spinner
+		 * from getting the lock.
+		 */
+		lock->waiter_spinning = true;
 	}
 
-	while (true) {
+	/*
+	 * The cpu_relax_lowlatency() call is a compiler barrier which forces
+	 * everything in this loop to be re-loaded. We don't need memory
+	 * barriers as we'll eventually observe the right values at the cost
+	 * of a few extra spins.
+	 */
+	for (;; cpu_relax_lowlatency()) {
 		struct task_struct *owner;
 
 		if (use_ww_ctx && ww_ctx->acquired > 0) {
@@ -363,6 +376,17 @@ static bool mutex_optimistic_spin(struct mutex *lock,
 		}
 
 		/*
+		 * For regular opt-spinner, it waits until the waiter_spinning
+		 * flag isn't set. This will ensure forward progress for
+		 * the waiter spinner.
+		 */
+		if (!waiter && READ_ONCE(lock->waiter_spinning)) {
+			if (need_resched())
+				break;
+			continue;
+		}
+
+		/*
 		 * If there's an owner, wait for it to either
 		 * release the lock or go to sleep.
 		 */
@@ -394,18 +418,12 @@ static bool mutex_optimistic_spin(struct mutex *lock,
 		 */
 		if (!owner && (need_resched() || rt_task(task)))
 			break;
-
-		/*
-		 * The cpu_relax() call is a compiler barrier which forces
-		 * everything in this loop to be re-loaded. We don't need
-		 * memory barriers as we'll eventually observe the right
-		 * values at the cost of a few extra spins.
-		 */
-		cpu_relax_lowlatency();
 	}
 
 	if (!waiter)
 		osq_unlock(&lock->osq);
+	else
+		lock->waiter_spinning = false;
 done:
 	/*
 	 * If we fell out of the spin path because of need_resched(),
-- 
2.5.0


* [PATCH] Avoid mutex starvation when optimistic spinning is disabled
  2016-08-23  8:10 [PATCH 1/2] drm/i915/backlight: handle enabled but zero duty cycle at module load Maarten Lankhorst
                   ` (4 preceding siblings ...)
  2016-08-23  8:10 ` [PATCH] locking/mutex: Ensure forward progress of waiter-spinner Maarten Lankhorst
@ 2016-08-23  8:10 ` Maarten Lankhorst
  2016-08-23  8:40   ` kbuild test robot
  2016-08-25 11:27 ` [PATCH 1/2] drm/i915/backlight: handle enabled but zero duty cycle at module load Jani Nikula
  6 siblings, 1 reply; 10+ messages in thread
From: Maarten Lankhorst @ 2016-08-23  8:10 UTC (permalink / raw)
  To: intel-gfx; +Cc: Jason Low

From: Jason Low <jason.low2@hpe.com>

On Tue, 2016-07-19 at 16:04 -0700, Jason Low wrote:
> Hi Imre,
>
> Here is a patch which prevents a thread from spending too much "time"
> waiting for a mutex in the !CONFIG_MUTEX_SPIN_ON_OWNER case.
>
> Would you like to try this out and see if this addresses the mutex
> starvation issue you are seeing in your workload when optimistic
> spinning is disabled?

Although it looks like it didn't take care of the 'lock stealing' case
in the slowpath. Here is the updated fixed version:

[imre: fixed trymutex, ww_mutex functions, reset yield_to_waiter only in
 the waiter thread.]
Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 include/linux/mutex.h  |  2 ++
 kernel/locking/mutex.c | 80 +++++++++++++++++++++++++++++++++++++++++++-------
 2 files changed, 71 insertions(+), 11 deletions(-)

diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index 2cb7531..c1ca68d 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -57,6 +57,8 @@ struct mutex {
 #endif
 #ifdef CONFIG_MUTEX_SPIN_ON_OWNER
 	struct optimistic_spin_queue osq; /* Spinner MCS lock */
+#else
+	bool yield_to_waiter; /* Prevent starvation when spinning disabled */
 #endif
 #ifdef CONFIG_DEBUG_MUTEXES
 	void			*magic;
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index a70b90d..ce70299 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -55,6 +55,8 @@ __mutex_init(struct mutex *lock, const char *name, struct lock_class_key *key)
 	mutex_clear_owner(lock);
 #ifdef CONFIG_MUTEX_SPIN_ON_OWNER
 	osq_lock_init(&lock->osq);
+#else
+	lock->yield_to_waiter = false;
 #endif
 
 	debug_mutex_init(lock, name, key);
@@ -71,6 +73,9 @@ EXPORT_SYMBOL(__mutex_init);
  */
 __visible void __sched __mutex_lock_slowpath(atomic_t *lock_count);
 
+
+static inline bool need_yield_to_waiter(struct mutex *lock);
+
 /**
  * mutex_lock - acquire the mutex
  * @lock: the mutex to be acquired
@@ -95,11 +100,15 @@ __visible void __sched __mutex_lock_slowpath(atomic_t *lock_count);
 void __sched mutex_lock(struct mutex *lock)
 {
 	might_sleep();
+
 	/*
 	 * The locking fastpath is the 1->0 transition from
 	 * 'unlocked' into 'locked' state.
 	 */
-	__mutex_fastpath_lock(&lock->count, __mutex_lock_slowpath);
+	if (!need_yield_to_waiter(lock))
+		__mutex_fastpath_lock(&lock->count, __mutex_lock_slowpath);
+	else
+		__mutex_lock_slowpath(&lock->count);
 	mutex_set_owner(lock);
 }
 
@@ -398,12 +407,41 @@ done:
 
 	return false;
 }
+
+static inline bool do_yield_to_waiter(struct mutex *lock, int loops)
+{
+	return false;
+}
+
+static inline bool need_yield_to_waiter(struct mutex *lock)
+{
+	return false;
+}
+
 #else
 static bool mutex_optimistic_spin(struct mutex *lock,
 				  struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
 {
 	return false;
 }
+
+#define MUTEX_MAX_WAIT 32
+
+static inline bool do_yield_to_waiter(struct mutex *lock, int loops)
+{
+	if (loops < MUTEX_MAX_WAIT)
+		return false;
+
+	if (!lock->yield_to_waiter)
+		lock->yield_to_waiter = true;
+
+	return true;
+}
+
+static inline bool need_yield_to_waiter(struct mutex *lock)
+{
+	return lock->yield_to_waiter;
+}
 #endif
 
 __visible __used noinline
@@ -509,7 +547,9 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	struct task_struct *task = current;
 	struct mutex_waiter waiter;
 	unsigned long flags;
+	bool yield_forced = false;
 	int ret;
+	int loop = 0;
 
 	if (use_ww_ctx) {
 		struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);
@@ -532,7 +572,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	 * Once more, try to acquire the lock. Only try-lock the mutex if
 	 * it is unlocked to reduce unnecessary xchg() operations.
 	 */
-	if (!mutex_is_locked(lock) &&
+	if (!need_yield_to_waiter(lock) && !mutex_is_locked(lock) &&
 	    (atomic_xchg_acquire(&lock->count, 0) == 1))
 		goto skip_wait;
 
@@ -546,6 +586,8 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	lock_contended(&lock->dep_map, ip);
 
 	for (;;) {
+		loop++;
+
 		/*
 		 * Lets try to take the lock again - this is needed even if
 		 * we get here for the first time (shortly after failing to
@@ -556,7 +598,8 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		 * other waiters. We only attempt the xchg if the count is
 		 * non-negative in order to avoid unnecessary xchg operations:
 		 */
-		if (atomic_read(&lock->count) >= 0 &&
+		if ((!need_yield_to_waiter(lock) || loop > 1) &&
+		    atomic_read(&lock->count) >= 0 &&
 		    (atomic_xchg_acquire(&lock->count, -1) == 1))
 			break;
 
@@ -581,6 +624,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		spin_unlock_mutex(&lock->wait_lock, flags);
 		schedule_preempt_disabled();
 		spin_lock_mutex(&lock->wait_lock, flags);
+		yield_forced = do_yield_to_waiter(lock, loop);
 	}
 	__set_task_state(task, TASK_RUNNING);
 
@@ -590,6 +634,9 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		atomic_set(&lock->count, 0);
 	debug_mutex_free_waiter(&waiter);
 
+	if (yield_forced)
+		lock->yield_to_waiter = false;
+
 skip_wait:
 	/* got the lock - cleanup and rejoice! */
 	lock_acquired(&lock->dep_map, ip);
@@ -789,10 +836,13 @@ __mutex_lock_interruptible_slowpath(struct mutex *lock);
  */
 int __sched mutex_lock_interruptible(struct mutex *lock)
 {
-	int ret;
+	int ret = 1;
 
 	might_sleep();
-	ret =  __mutex_fastpath_lock_retval(&lock->count);
+
+	if (!need_yield_to_waiter(lock))
+		ret =  __mutex_fastpath_lock_retval(&lock->count);
+
 	if (likely(!ret)) {
 		mutex_set_owner(lock);
 		return 0;
@@ -804,10 +854,13 @@ EXPORT_SYMBOL(mutex_lock_interruptible);
 
 int __sched mutex_lock_killable(struct mutex *lock)
 {
-	int ret;
+	int ret = 1;
 
 	might_sleep();
-	ret = __mutex_fastpath_lock_retval(&lock->count);
+
+	if (!need_yield_to_waiter(lock))
+		ret = __mutex_fastpath_lock_retval(&lock->count);
+
 	if (likely(!ret)) {
 		mutex_set_owner(lock);
 		return 0;
@@ -905,6 +958,9 @@ int __sched mutex_trylock(struct mutex *lock)
 {
 	int ret;
 
+	if (need_yield_to_waiter(lock))
+		return 0;
+
 	ret = __mutex_fastpath_trylock(&lock->count, __mutex_trylock_slowpath);
 	if (ret)
 		mutex_set_owner(lock);
@@ -917,11 +973,12 @@ EXPORT_SYMBOL(mutex_trylock);
 int __sched
 __ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 {
-	int ret;
+	int ret = 1;
 
 	might_sleep();
 
-	ret = __mutex_fastpath_lock_retval(&lock->base.count);
+	if (!need_yield_to_waiter(&lock->base))
+		ret = __mutex_fastpath_lock_retval(&lock->base.count);
 
 	if (likely(!ret)) {
 		ww_mutex_set_context_fastpath(lock, ctx);
@@ -935,11 +992,12 @@ EXPORT_SYMBOL(__ww_mutex_lock);
 int __sched
 __ww_mutex_lock_interruptible(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 {
-	int ret;
+	int ret = 1;
 
 	might_sleep();
 
-	ret = __mutex_fastpath_lock_retval(&lock->base.count);
+	if (!need_yield_to_waiter(&lock->base))
+		ret = __mutex_fastpath_lock_retval(&lock->base.count);
 
 	if (likely(!ret)) {
 		ww_mutex_set_context_fastpath(lock, ctx);
-- 
2.5.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH] Avoid mutex starvation when optimistic spinning is disabled
  2016-08-23  8:10 ` [PATCH] Avoid mutex starvation when optimistic spinning is disabled Maarten Lankhorst
@ 2016-08-23  8:40   ` kbuild test robot
  0 siblings, 0 replies; 10+ messages in thread
From: kbuild test robot @ 2016-08-23  8:40 UTC (permalink / raw)
  To: Maarten Lankhorst; +Cc: intel-gfx, Jason Low, kbuild-all

[-- Attachment #1: Type: text/plain, Size: 1562 bytes --]

Hi Jason,

[auto build test ERROR on tip/locking/core]
[also build test ERROR on v4.8-rc3 next-20160823]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
[We suggest using git (>= 2.9.0) format-patch --base=<commit> (or --base=auto for convenience) to record what (public, well-known) commit your patch series was built on]
[Check https://git-scm.com/docs/git-format-patch for more information]

url:    https://github.com/0day-ci/linux/commits/Maarten-Lankhorst/Avoid-mutex-starvation-when-optimistic-spinning-is-disabled/20160823-161801
config: x86_64-randconfig-x013-201634 (attached as .config)
compiler: gcc-6 (Debian 6.1.1-9) 6.1.1 20160705
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64 

All errors (new ones prefixed by >>):

   kernel/locking/mutex.c: In function '__mutex_lock_common':
>> kernel/locking/mutex.c:638:7: error: 'struct mutex' has no member named 'yield_to_waiter'
      lock->yield_to_waiter = false;
          ^~

vim +638 kernel/locking/mutex.c

   632		/* set it to 0 if there are no waiters left: */
   633		if (likely(list_empty(&lock->wait_list)))
   634			atomic_set(&lock->count, 0);
   635		debug_mutex_free_waiter(&waiter);
   636	
   637		if (yield_forced)
 > 638			lock->yield_to_waiter = false;
   639	
   640	skip_wait:
   641		/* got the lock - cleanup and rejoice! */

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/octet-stream, Size: 30321 bytes --]

[-- Attachment #3: Type: text/plain, Size: 160 bytes --]


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH 1/2] drm/i915/backlight: handle enabled but zero duty cycle at module load
  2016-08-23  8:10 [PATCH 1/2] drm/i915/backlight: handle enabled but zero duty cycle at module load Maarten Lankhorst
                   ` (5 preceding siblings ...)
  2016-08-23  8:10 ` [PATCH] Avoid mutex starvation when optimistic spinning is disabled Maarten Lankhorst
@ 2016-08-25 11:27 ` Jani Nikula
  6 siblings, 0 replies; 10+ messages in thread
From: Jani Nikula @ 2016-08-25 11:27 UTC (permalink / raw)
  To: Maarten Lankhorst, intel-gfx

On Tue, 23 Aug 2016, Maarten Lankhorst <maarten.lankhorst@linux.intel.com> wrote:
> From: Jani Nikula <jani.nikula@intel.com>
>
> Don't consider enabled but zero duty cycle backlight disabled. Clamp
> level between min and max for sanity.
>
> Signed-off-by: Jani Nikula <jani.nikula@intel.com>

I think this is the right thing to do.

In the future, perhaps we'll want to check for !enabled, and actually
enable the backlight too if backlight is present and the crtc is active.

Please just push this, and we'll see what happens.

BR,
Jani.


> ---
>  drivers/gpu/drm/i915/intel_panel.c | 40 +++++++++++++++++++++++---------------
>  1 file changed, 24 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_panel.c b/drivers/gpu/drm/i915/intel_panel.c
> index 96c65d77e886..c10e9b0405e8 100644
> --- a/drivers/gpu/drm/i915/intel_panel.c
> +++ b/drivers/gpu/drm/i915/intel_panel.c
> @@ -1430,10 +1430,11 @@ static int lpt_setup_backlight(struct intel_connector *connector, enum pipe unus
>  	panel->backlight.min = get_backlight_min_vbt(connector);
>  
>  	val = lpt_get_backlight(connector);
> -	panel->backlight.level = intel_panel_compute_brightness(connector, val);
> +	val = intel_panel_compute_brightness(connector, val);
> +	panel->backlight.level = clamp(val, panel->backlight.min,
> +				       panel->backlight.max);
>  
> -	panel->backlight.enabled = (pch_ctl1 & BLM_PCH_PWM_ENABLE) &&
> -		panel->backlight.level != 0;
> +	panel->backlight.enabled = pch_ctl1 & BLM_PCH_PWM_ENABLE;
>  
>  	return 0;
>  }
> @@ -1459,11 +1460,13 @@ static int pch_setup_backlight(struct intel_connector *connector, enum pipe unus
>  	panel->backlight.min = get_backlight_min_vbt(connector);
>  
>  	val = pch_get_backlight(connector);
> -	panel->backlight.level = intel_panel_compute_brightness(connector, val);
> +	val = intel_panel_compute_brightness(connector, val);
> +	panel->backlight.level = clamp(val, panel->backlight.min,
> +				       panel->backlight.max);
>  
>  	cpu_ctl2 = I915_READ(BLC_PWM_CPU_CTL2);
>  	panel->backlight.enabled = (cpu_ctl2 & BLM_PWM_ENABLE) &&
> -		(pch_ctl1 & BLM_PCH_PWM_ENABLE) && panel->backlight.level != 0;
> +		(pch_ctl1 & BLM_PCH_PWM_ENABLE);
>  
>  	return 0;
>  }
> @@ -1498,9 +1501,11 @@ static int i9xx_setup_backlight(struct intel_connector *connector, enum pipe unu
>  	panel->backlight.min = get_backlight_min_vbt(connector);
>  
>  	val = i9xx_get_backlight(connector);
> -	panel->backlight.level = intel_panel_compute_brightness(connector, val);
> +	val = intel_panel_compute_brightness(connector, val);
> +	panel->backlight.level = clamp(val, panel->backlight.min,
> +				       panel->backlight.max);
>  
> -	panel->backlight.enabled = panel->backlight.level != 0;
> +	panel->backlight.enabled = val != 0;
>  
>  	return 0;
>  }
> @@ -1530,10 +1535,11 @@ static int i965_setup_backlight(struct intel_connector *connector, enum pipe unu
>  	panel->backlight.min = get_backlight_min_vbt(connector);
>  
>  	val = i9xx_get_backlight(connector);
> -	panel->backlight.level = intel_panel_compute_brightness(connector, val);
> +	val = intel_panel_compute_brightness(connector, val);
> +	panel->backlight.level = clamp(val, panel->backlight.min,
> +				       panel->backlight.max);
>  
> -	panel->backlight.enabled = (ctl2 & BLM_PWM_ENABLE) &&
> -		panel->backlight.level != 0;
> +	panel->backlight.enabled = ctl2 & BLM_PWM_ENABLE;
>  
>  	return 0;
>  }
> @@ -1562,10 +1568,11 @@ static int vlv_setup_backlight(struct intel_connector *connector, enum pipe pipe
>  	panel->backlight.min = get_backlight_min_vbt(connector);
>  
>  	val = _vlv_get_backlight(dev_priv, pipe);
> -	panel->backlight.level = intel_panel_compute_brightness(connector, val);
> +	val = intel_panel_compute_brightness(connector, val);
> +	panel->backlight.level = clamp(val, panel->backlight.min,
> +				       panel->backlight.max);
>  
> -	panel->backlight.enabled = (ctl2 & BLM_PWM_ENABLE) &&
> -		panel->backlight.level != 0;
> +	panel->backlight.enabled = ctl2 & BLM_PWM_ENABLE;
>  
>  	return 0;
>  }
> @@ -1607,10 +1614,11 @@ bxt_setup_backlight(struct intel_connector *connector, enum pipe unused)
>  		return -ENODEV;
>  
>  	val = bxt_get_backlight(connector);
> -	panel->backlight.level = intel_panel_compute_brightness(connector, val);
> +	val = intel_panel_compute_brightness(connector, val);
> +	panel->backlight.level = clamp(val, panel->backlight.min,
> +				       panel->backlight.max);
>  
> -	panel->backlight.enabled = (pwm_ctl & BXT_BLC_PWM_ENABLE) &&
> -		panel->backlight.level != 0;
> +	panel->backlight.enabled = pwm_ctl & BXT_BLC_PWM_ENABLE;
>  
>  	return 0;
>  }

-- 
Jani Nikula, Intel Open Source Technology Center

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH] drm/i915: Fix botched merge that downgrades CSR versions.
  2016-08-23  8:10 ` [PATCH] drm/i915: Fix botched merge that downgrades CSR versions Maarten Lankhorst
@ 2016-08-25 11:29   ` Jani Nikula
  0 siblings, 0 replies; 10+ messages in thread
From: Jani Nikula @ 2016-08-25 11:29 UTC (permalink / raw)
  To: Maarten Lankhorst, intel-gfx; +Cc: drm-intel-fixes

On Tue, 23 Aug 2016, Maarten Lankhorst <maarten.lankhorst@linux.intel.com> wrote:
> Merge commit 5e580523d9128a4d8 reverts the version bumping parts of
> commit 4aa7fb9c3c4fa0. Bump the versions again and request the specific
> firmware version.
>
> The currently recommended versions are: SKL 1.26, KBL 1.01 and BXT 1.07.
>
> Cc: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
> Cc: Imre Deak <imre.deak@intel.com>
> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=97242
> Cc: drm-intel-fixes@lists.freedesktop.org
> Fixes: 5e580523d912 ("Backmerge tag 'v4.7' into drm-next")
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>

This is already in dinq as 536ab3ca19ef ("drm/i915: Fix botched merge
that downgrades CSR versions."), and cherry-picked to fixes as
177d91aaea4b ("drm/i915: Fix botched merge that downgrades CSR
versions."). It's also in my today's pull request to Dave.

BR,
Jani.


> ---
>  drivers/gpu/drm/i915/intel_csr.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_csr.c b/drivers/gpu/drm/i915/intel_csr.c
> index fb27d187876c..1ea0e1f43397 100644
> --- a/drivers/gpu/drm/i915/intel_csr.c
> +++ b/drivers/gpu/drm/i915/intel_csr.c
> @@ -34,15 +34,15 @@
>   * low-power state and comes back to normal.
>   */
>  
> -#define I915_CSR_KBL "i915/kbl_dmc_ver1.bin"
> +#define I915_CSR_KBL "i915/kbl_dmc_ver1_01.bin"
>  MODULE_FIRMWARE(I915_CSR_KBL);
>  #define KBL_CSR_VERSION_REQUIRED	CSR_VERSION(1, 1)
>  
> -#define I915_CSR_SKL "i915/skl_dmc_ver1.bin"
> +#define I915_CSR_SKL "i915/skl_dmc_ver1_26.bin"
>  MODULE_FIRMWARE(I915_CSR_SKL);
> -#define SKL_CSR_VERSION_REQUIRED	CSR_VERSION(1, 23)
> +#define SKL_CSR_VERSION_REQUIRED	CSR_VERSION(1, 26)
>  
> -#define I915_CSR_BXT "i915/bxt_dmc_ver1.bin"
> +#define I915_CSR_BXT "i915/bxt_dmc_ver1_07.bin"
>  MODULE_FIRMWARE(I915_CSR_BXT);
>  #define BXT_CSR_VERSION_REQUIRED	CSR_VERSION(1, 7)

-- 
Jani Nikula, Intel Open Source Technology Center
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2016-08-25 11:29 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-08-23  8:10 [PATCH 1/2] drm/i915/backlight: handle enabled but zero duty cycle at module load Maarten Lankhorst
2016-08-23  8:10 ` [PATCH] drm/i915: Fix botched merge that downgrades CSR versions Maarten Lankhorst
2016-08-25 11:29   ` Jani Nikula
2016-08-23  8:10 ` [PATCH] locking/mutex: Add waiter parameter to mutex_optimistic_spin() Maarten Lankhorst
2016-08-23  8:10 ` [PATCH 2/2] drm/i915: Enable fast modesetting again Maarten Lankhorst
2016-08-23  8:10 ` [PATCH] locking/mutex: Enable optimistic spinning of woken task in wait queue Maarten Lankhorst
2016-08-23  8:10 ` [PATCH] locking/mutex: Ensure forward progress of waiter-spinner Maarten Lankhorst
2016-08-23  8:10 ` [PATCH] Avoid mutex starvation when optimistic spinning is disabled Maarten Lankhorst
2016-08-23  8:40   ` kbuild test robot
2016-08-25 11:27 ` [PATCH 1/2] drm/i915/backlight: handle enabled but zero duty cycle at module load Jani Nikula

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox