From: "Souza, Jose" <jose.souza@intel.com>
To: "ville.syrjala@linux.intel.com" <ville.syrjala@linux.intel.com>
Cc: "intel-gfx@lists.freedesktop.org" <intel-gfx@lists.freedesktop.org>
Subject: Re: [Intel-gfx] [PATCH 08/16] drm/i915: Query the vswing levels per-lane for icl combo phy
Date: Mon, 1 Nov 2021 17:36:26 +0000
Message-ID: <2ec52bd3d17d55542a0873d4fd240d4577a418fd.camel@intel.com>
In-Reply-To: <YX+9QfQty2vn6yoP@intel.com>
On Mon, 2021-11-01 at 12:11 +0200, Ville Syrjälä wrote:
> On Fri, Oct 29, 2021 at 09:57:02PM +0000, Souza, Jose wrote:
> > On Wed, 2021-10-06 at 23:49 +0300, Ville Syrjala wrote:
> > > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > >
> > > Prepare for per-lane drive settings by querying the desired vswing
> > > level per-lane.
> > >
> > > Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > ---
> > > drivers/gpu/drm/i915/display/intel_ddi.c | 7 ++++++-
> > > 1 file changed, 6 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
> > > index aa789cabc55b..4c400f0e7347 100644
> > > --- a/drivers/gpu/drm/i915/display/intel_ddi.c
> > > +++ b/drivers/gpu/drm/i915/display/intel_ddi.c
> > > @@ -1039,7 +1039,6 @@ static void icl_ddi_combo_vswing_program(struct intel_encoder *encoder,
> > > const struct intel_crtc_state *crtc_state)
> > > {
> > > struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
> > > - int level = intel_ddi_level(encoder, crtc_state, 0);
> > > const struct intel_ddi_buf_trans *trans;
> > > enum phy phy = intel_port_to_phy(dev_priv, encoder->port);
> > > int n_entries, ln;
> > > @@ -1069,6 +1068,8 @@ static void icl_ddi_combo_vswing_program(struct intel_encoder *encoder,
> > >
> > > /* Program PORT_TX_DW2 */
> > > for (ln = 0; ln < 4; ln++) {
> > > + int level = intel_ddi_level(encoder, crtc_state, ln);
> > > +
> > > val = intel_de_read(dev_priv, ICL_PORT_TX_DW2_LN(ln, phy));
> > > val &= ~(SWING_SEL_LOWER_MASK | SWING_SEL_UPPER_MASK |
> > > RCOMP_SCALAR_MASK);
> > > @@ -1082,6 +1083,8 @@ static void icl_ddi_combo_vswing_program(struct intel_encoder *encoder,
> > > /* Program PORT_TX_DW4 */
> > > /* We cannot write to GRP. It would overwrite individual loadgen. */
> > > for (ln = 0; ln < 4; ln++) {
> > > + int level = intel_ddi_level(encoder, crtc_state, ln);
> > > +
> > > val = intel_de_read(dev_priv, ICL_PORT_TX_DW4_LN(ln, phy));
> > > val &= ~(POST_CURSOR_1_MASK | POST_CURSOR_2_MASK |
> > > CURSOR_COEFF_MASK);
> > > @@ -1093,6 +1096,8 @@ static void icl_ddi_combo_vswing_program(struct intel_encoder *encoder,
> > >
> > > /* Program PORT_TX_DW7 */
> > > for (ln = 0; ln < 4; ln++) {
> > > + int level = intel_ddi_level(encoder, crtc_state, ln);
> > > +
> > > val = intel_de_read(dev_priv, ICL_PORT_TX_DW7_LN(ln, phy));
> > > val &= ~N_SCALAR_MASK;
> > > val |= N_SCALAR(trans->entries[level].icl.dw7_n_scalar);
> >
> > The cover letter or one of the earlier patches should have some explanation of the reasons for this group-to-lane conversion.
>
> They do say it's for per-lane drive settings. Not really sure what to add
> to that.
>
> > Reading one of the later patches I understood it is because DP 2.0 allows a different level per lane, but it would be nice to know the reason for sure.
>
> It has always been a feature of DP, we just never implemented it for
> whatever reason.
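
Right, and the sink's adjust requests already come back per-lane in the
DPCD link status, so the levels can legitimately differ between lanes.
Just as an illustrative sketch (not part of this series;
read_per_lane_requests is a made-up name), the existing drm_dp helpers
expose it like this:

#include <linux/printk.h>
#include <drm/drm_dp_helper.h>

static void read_per_lane_requests(const u8 link_status[DP_LINK_STATUS_SIZE],
				   int lane_count)
{
	int lane;

	for (lane = 0; lane < lane_count; lane++) {
		u8 vswing = drm_dp_get_adjust_request_voltage(link_status, lane);
		u8 preemph = drm_dp_get_adjust_request_pre_emphasis(link_status, lane);

		/* each lane may request a different level */
		pr_debug("lane %d: vswing %d, pre-emphasis %d\n", lane,
			 vswing >> DP_TRAIN_VOLTAGE_SWING_SHIFT,
			 preemph >> DP_TRAIN_PRE_EMPHASIS_SHIFT);
	}
}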
>
> >
> > What if it is only using 2 lanes? Will programming disabled lanes cause any issue?
>
> Depends on whether the registers are available or not. For CHV I know
> the unused lanes will be fully powered off and you can't actually access
> the registers (and vlv_dpio_read() will actually WARN when it sees the
> ~0 value from an inaccessible register). For later platforms I don't
> actually know what happens. We don't have an equivalent of that CHV WARN
> but I would hope that we'd get an unclaimed reg warning if the register
> is inaccessible.
>
> Although I suppose there isn't any real harm in poking inaccessible
> registers. The reads should just return all 0s or all 1s, and the
> writes go to /dev/null.
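
Just to sketch what a readback check along the lines of that CHV WARN
could look like here (illustrative only, not part of the series; the
helper name is made up and it leans on i915's intel_de_read()):

static u32 icl_port_tx_dw2_read_checked(struct drm_i915_private *dev_priv,
					enum phy phy, int ln)
{
	u32 val = intel_de_read(dev_priv, ICL_PORT_TX_DW2_LN(ln, phy));

	/* a readback of all 1s typically means the lane is powered
	 * down and its registers are inaccessible */
	drm_WARN_ON(&dev_priv->drm, val == 0xffffffff);

	return val;
}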
>
Fair enough.
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Thread overview: 42+ messages
2021-10-06 20:49 [Intel-gfx] [PATCH 00/16] drm/i915: DP per-lane drive settings for icl+ Ville Syrjala
2021-10-06 20:49 ` [Intel-gfx] [PATCH 01/16] drm/i915: Remove pointless extra namespace from dkl/snps buf trans structs Ville Syrjala
2021-10-08 10:18 ` Jani Nikula
2021-10-06 20:49 ` [Intel-gfx] [PATCH 02/16] drm/i915: Shrink {icl_mg, tgl_dkl}_phy_ddi_buf_trans Ville Syrjala
2021-10-08 10:19 ` Jani Nikula
2021-10-06 20:49 ` [Intel-gfx] [PATCH 03/16] drm/i915: Use standard form terminating condition for lane for loops Ville Syrjala
2021-10-08 10:19 ` Jani Nikula
2021-10-06 20:49 ` [Intel-gfx] [PATCH 04/16] drm/i915: Add all per-lane register definitions for icl combo phy Ville Syrjala
2021-10-08 10:21 ` Jani Nikula
2021-10-08 10:29 ` Ville Syrjälä
2021-10-06 20:49 ` [Intel-gfx] [PATCH 05/16] drm/i915: Remove dead DKL_TX_LOADGEN_SHARING_PMD_DISABLE stuff Ville Syrjala
2021-10-08 10:23 ` Jani Nikula
2021-10-06 20:49 ` [Intel-gfx] [PATCH 06/16] drm/i915: Extract icl_combo_phy_loadgen_select() Ville Syrjala
2021-10-08 10:25 ` Jani Nikula
2021-10-06 20:49 ` [Intel-gfx] [PATCH 07/16] drm/i915: Stop using group access when progrmming icl combo phy TX Ville Syrjala
2021-10-29 21:53 ` Souza, Jose
2021-10-06 20:49 ` [Intel-gfx] [PATCH 08/16] drm/i915: Query the vswing levels per-lane for icl combo phy Ville Syrjala
2021-10-29 21:57 ` Souza, Jose
2021-11-01 10:11 ` Ville Syrjälä
2021-11-01 17:36 ` Souza, Jose [this message]
2021-10-06 20:49 ` [Intel-gfx] [PATCH 09/16] drm/i915: Query the vswing levels per-lane for icl mg phy Ville Syrjala
2021-10-29 21:59 ` Souza, Jose
2021-11-01 9:56 ` Ville Syrjälä
2021-10-06 20:49 ` [Intel-gfx] [PATCH 10/16] drm/i915: Query the vswing levels per-lane for tgl dkl phy Ville Syrjala
2021-10-29 21:59 ` Souza, Jose
2021-10-06 20:49 ` [Intel-gfx] [PATCH 11/16] drm/i915: Query the vswing levels per-lane for snps phy Ville Syrjala
2021-10-29 22:00 ` Souza, Jose
2021-10-06 20:49 ` [Intel-gfx] [PATCH 12/16] drm/i915: Enable per-lane drive settings for icl+ Ville Syrjala
2021-10-29 22:04 ` Souza, Jose
2021-10-06 20:49 ` [Intel-gfx] [PATCH 13/16] drm/i915: Use intel_de_rmw() for tgl dkl phy programming Ville Syrjala
2021-10-29 22:01 ` Souza, Jose
2021-10-06 20:49 ` [Intel-gfx] [PATCH 14/16] drm/i915: Use intel_de_rmw() for icl mg " Ville Syrjala
2021-10-29 22:02 ` Souza, Jose
2021-10-06 20:49 ` [Intel-gfx] [PATCH 15/16] drm/i915: Use intel_de_rmw() for icl combo " Ville Syrjala
2021-10-29 22:02 ` Souza, Jose
2021-10-06 20:49 ` [Intel-gfx] [PATCH 16/16] drm/i915: Fix icl+ combo phy static lane power down setup Ville Syrjala
2021-10-28 13:25 ` Imre Deak
2021-10-28 17:43 ` Jani Nikula
2021-10-07 0:18 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915: DP per-lane drive settings for icl+ Patchwork
2021-10-07 0:20 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2021-10-07 0:52 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2021-10-07 3:08 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork