Intel-GFX Archive on lore.kernel.org
From: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
To: intel-gfx@lists.freedesktop.org
Cc: Stanislav.Lisovskiy@intel.com, jani.saarinen@intel.com,
	ville.syrjala@linux.intel.com
Subject: [PATCH 3/3] drm/i915: Disable SAGV on bw init, to force QGV point recalculation
Date: Mon, 19 Feb 2024 11:18:52 +0200	[thread overview]
Message-ID: <20240219091852.23162-4-stanislav.lisovskiy@intel.com> (raw)
In-Reply-To: <20240219091852.23162-1-stanislav.lisovskiy@intel.com>

The problem is that on some platforms we can get the QGV point mask in the
wrong state on boot: the driver assumes it is set to 0 (i.e. all points
allowed), but in reality all points might be restricted, causing issues.
Let's disable SAGV initially to force a proper QGV point state.
If more QGV points are available, the driver will recalculate and update
them after the next commit.

v2: - Added trace to see which QGV/PSF GV point is used when SAGV is
      disabled.
v3: - Move force disable function to intel_bw_init in order to initialize
      bw state as well, so that hw/sw are immediately in sync after init.
v4: - Don't try sending the PCode request; it seems not to be possible at
      intel_bw_init. However, assigning bw->state to be restricted as if
      SAGV is off still forces the driver to send the PCode request on the
      next modeset, so the solution still works.
      We still need to address the case where no display is connected,
      which requires many more changes anyway.

v5: - Put the PCode request back and apply a temporary hack to make the
      request succeed (if there are 2 PSF GV points with the same BW, PCode
      accepts the request only if both points are restricted/unrestricted
      at the same time)
    - Fix argument order for adl_qgv_bw() (Ville Syrjälä)

Signed-off-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
---
 drivers/gpu/drm/i915/display/intel_bw.c | 63 +++++++++++++++++++++++--
 1 file changed, 59 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_bw.c b/drivers/gpu/drm/i915/display/intel_bw.c
index 7baa1c13eccd..d9f34dc66a83 100644
--- a/drivers/gpu/drm/i915/display/intel_bw.c
+++ b/drivers/gpu/drm/i915/display/intel_bw.c
@@ -162,7 +162,7 @@ int icl_pcode_restrict_qgv_points(struct drm_i915_private *dev_priv,
 				1);
 
 	if (ret < 0) {
-		drm_err(&dev_priv->drm, "Failed to disable qgv points (%d) points: 0x%x\n", ret, points_mask);
+		drm_err(&dev_priv->drm, "Failed to disable qgv points (%x) points: 0x%x\n", ret, points_mask);
 		return ret;
 	}
 
@@ -662,7 +662,7 @@ static unsigned int adl_psf_bw(struct drm_i915_private *i915,
 }
 
 static unsigned int adl_qgv_bw(struct drm_i915_private *i915,
-			       int qgv_point, int num_active_planes)
+			       int num_active_planes, int qgv_point)
 {
 	unsigned int idx;
 
@@ -833,7 +833,7 @@ static unsigned int icl_max_bw_qgv_point(struct drm_i915_private *i915,
 	for (i = 0; i < num_qgv_points; i++) {
 		unsigned int max_data_rate;
 
-		max_data_rate = adl_qgv_bw(i915, i, num_active_planes);
+		max_data_rate = adl_qgv_bw(i915, num_active_planes, i);
 
 		/*
 		 * We need to know which qgv point gives us
@@ -852,6 +852,58 @@ static unsigned int icl_max_bw_qgv_point(struct drm_i915_private *i915,
 	return max_bw_point;
 }
 
+/*
+ * For some reason we have to use a mask of PSF GV points instead of
+ * finding the single point which provides the highest bandwidth: PCode
+ * rejects the request if two PSF GV points which have the same bandwidth
+ * are not set/cleared at the same time.
+ */
+static unsigned int icl_max_bw_psf_gv_point_mask(struct drm_i915_private *i915)
+{
+	unsigned int num_psf_gv_points = i915->display.bw.max[0].num_psf_gv_points;
+	unsigned int max_bw = 0;
+	unsigned int max_bw_point_mask = 0;
+	int i;
+
+	for (i = 0; i < num_psf_gv_points; i++) {
+		unsigned int max_data_rate = adl_psf_bw(i915, i);
+
+		if (max_data_rate > max_bw) {
+			max_bw_point_mask = BIT(i);
+			max_bw = max_data_rate;
+		} else if (max_data_rate == max_bw)
+			max_bw_point_mask |= BIT(i);
+	}
+
+	return max_bw_point_mask;
+}
+
+static void icl_force_disable_sagv(struct drm_i915_private *i915, struct intel_bw_state *bw_state)
+{
+	unsigned int max_bw_qgv_point = icl_max_bw_qgv_point(i915, 0);
+	unsigned int max_bw_psf_gv_point_mask = icl_max_bw_psf_gv_point_mask(i915);
+	unsigned int qgv_points;
+	unsigned int psf_points;
+	int ret;
+
+	qgv_points = BIT(max_bw_qgv_point);
+	psf_points = max_bw_psf_gv_point_mask;
+
+	bw_state->qgv_points_mask = ~(ICL_PCODE_REQ_QGV_PT(qgv_points)|
+				      ADLS_PCODE_REQ_PSF_PT(psf_points)) &
+				      icl_qgv_points_mask(i915);
+
+	drm_dbg_kms(&i915->drm, "Forcing SAGV disable: mask %x\n", bw_state->qgv_points_mask);
+
+	ret = icl_pcode_restrict_qgv_points(i915, bw_state->qgv_points_mask);
+
+	if (ret)
+		drm_dbg_kms(&i915->drm, "Restricting GV points failed: %x\n", ret);
+	else
+		drm_dbg_kms(&i915->drm, "Restricting GV points succeeded\n");
+
+}
+
 static int mtl_find_qgv_points(struct drm_i915_private *i915,
 			       unsigned int data_rate,
 			       unsigned int num_active_planes,
@@ -943,7 +995,7 @@ static int icl_find_qgv_points(struct drm_i915_private *i915,
 	for (i = 0; i < num_qgv_points; i++) {
 		unsigned int max_data_rate;
 
-		max_data_rate = adl_qgv_bw(i915, i, num_active_planes);
+		max_data_rate = adl_qgv_bw(i915, num_active_planes, i);
 
 		if (max_data_rate >= data_rate)
 			qgv_points |= BIT(i);
@@ -1351,5 +1403,8 @@ int intel_bw_init(struct drm_i915_private *dev_priv)
 	intel_atomic_global_obj_init(dev_priv, &dev_priv->display.bw.obj,
 				     &state->base, &intel_bw_funcs);
 
+	if (DISPLAY_VER(dev_priv) < 14)
+		icl_force_disable_sagv(dev_priv, state);
+
 	return 0;
 }
-- 
2.37.3



Thread overview: 15+ messages
2024-02-19  9:18 [PATCH 0/3] QGV/SAGV related fixes Stanislav Lisovskiy
2024-02-19  9:18 ` [PATCH 1/3] drm/i915: Add meaningful traces for QGV point info error handling Stanislav Lisovskiy
2024-02-19  9:18 ` [PATCH 2/3] drm/i915: Extract code required to calculate max qgv/psf gv point Stanislav Lisovskiy
2024-02-19  9:18 ` Stanislav Lisovskiy [this message]
2024-02-19 14:48 ` ✗ Fi.CI.CHECKPATCH: warning for QGV/SAGV related fixes (rev6) Patchwork
2024-02-19 14:49 ` ✗ Fi.CI.SPARSE: " Patchwork
2024-02-19 15:09 ` ✗ Fi.CI.BAT: failure " Patchwork
  -- strict thread matches above, loose matches on Subject: below --
2024-02-20  9:31 [PATCH 0/3] QGV/SAGV related fixes Stanislav Lisovskiy
2024-02-20  9:31 ` [PATCH 3/3] drm/i915: Disable SAGV on bw init, to force QGV point recalculation Stanislav Lisovskiy
2024-03-20 22:33   ` Govindapillai, Vinod
2024-01-17 15:57 [PATCH 0/3] QGV/SAGV related fixes Stanislav Lisovskiy
2024-01-17 15:57 ` [PATCH 3/3] drm/i915: Disable SAGV on bw init, to force QGV point recalculation Stanislav Lisovskiy
2024-01-18  8:35   ` Ville Syrjälä
2024-01-18  8:50     ` Lisovskiy, Stanislav
2024-01-18  9:07       ` Ville Syrjälä
2024-01-18  9:26         ` Lisovskiy, Stanislav
2024-02-01 12:33         ` Lisovskiy, Stanislav
