From: Jani Nikula <jani.nikula@linux.intel.com>
To: Manasi Navare <manasi.d.navare@intel.com>,
intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: Daniel Vetter <daniel.vetter@intel.com>
Subject: Re: [PATCH 2/4] drm/i915: Compute sink's max lane count/link BW at Hotplug
Date: Thu, 08 Dec 2016 23:23:39 +0200 [thread overview]
Message-ID: <874m2eglno.fsf@intel.com> (raw)
In-Reply-To: <1480984058-552-3-git-send-email-manasi.d.navare@intel.com>
On Tue, 06 Dec 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> Sink capabilities are advertised through DPCD registers and get
> updated only on hotplug. So they should be computed only once, in the
> long pulse handler, and saved in the intel_dp structure for later
> use. For this reason two new fields, max_sink_lane_count and
> max_sink_link_bw, are added to the intel_dp structure.
>
> This also simplifies the fallback link rate/lane count logic used
> to handle link training failure: in that case, max_sink_link_bw and
> max_sink_lane_count can be recomputed to match the fallback values,
> lowering the advertised sink capabilities after the failed link
> training.
>
> Cc: Ville Syrjala <ville.syrjala@linux.intel.com>
> Cc: Jani Nikula <jani.nikula@linux.intel.com>
> Cc: Daniel Vetter <daniel.vetter@intel.com>
> Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>

Eventually we may want to call the fields *link* values, because that's
what they'll effectively be: transient values that reflect not the sink
or source capabilities, but the capabilities of the link.
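
Side note for anyone following the series: this caching is what makes
the fallback in patches 3/4 and 4/4 cheap. Just to illustrate the idea
rather than quote those patches (the helper name and the exact fallback
order below are my own invention, not the actual series code), lowering
the cached values on a link training failure could look roughly like
this, using the DP_LINK_BW_* codes from include/drm/drm_dp_helper.h:

	/* Hypothetical sketch, not the code from patch 3/4: step down
	 * through the link BW codes first, then halve the lane count
	 * and start over from the sink's real maximum BW.
	 */
	static void lower_cached_link_params(struct intel_dp *intel_dp,
					     int failed_link_bw,
					     u8 failed_lane_count)
	{
		if (failed_link_bw == DP_LINK_BW_5_4) {
			intel_dp->max_sink_link_bw = DP_LINK_BW_2_7;
		} else if (failed_link_bw == DP_LINK_BW_2_7) {
			intel_dp->max_sink_link_bw = DP_LINK_BW_1_62;
		} else if (failed_lane_count > 1) {
			intel_dp->max_sink_lane_count = failed_lane_count >> 1;
			/* re-read the sink's true max from the DPCD cache */
			intel_dp->max_sink_link_bw = intel_dp_max_link_bw(intel_dp);
		}
	}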
> ---
>  drivers/gpu/drm/i915/intel_dp.c  | 10 ++++++++--
>  drivers/gpu/drm/i915/intel_drv.h |  4 ++++
>  2 files changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> index db75bb9..434dc7d 100644
> --- a/drivers/gpu/drm/i915/intel_dp.c
> +++ b/drivers/gpu/drm/i915/intel_dp.c
> @@ -156,7 +156,7 @@ static u8 intel_dp_max_lane_count(struct intel_dp *intel_dp)
>  	u8 source_max, sink_max;
>  
>  	source_max = intel_dig_port->max_lanes;
> -	sink_max = drm_dp_max_lane_count(intel_dp->dpcd);
> +	sink_max = intel_dp->max_sink_lane_count;
>  
>  	return min(source_max, sink_max);
>  }
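
(For reference, the drm_dp_max_lane_count() call being replaced here is
just a thin reader of the cached receiver caps; if I remember the
drm_dp_helper.h definition right it is:

	static inline u8
	drm_dp_max_lane_count(const u8 dpcd[DP_RECEIVER_CAP_SIZE])
	{
		/* DP_MAX_LANE_COUNT is DPCD offset 0x002; the lane
		 * count lives in the low five bits. */
		return dpcd[DP_MAX_LANE_COUNT] & DP_MAX_LANE_COUNT_MASK;
	}

so caching its result once per long pulse loses nothing.)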
> @@ -213,7 +213,7 @@ intel_dp_sink_rates(struct intel_dp *intel_dp, const int **sink_rates)
>  
>  	*sink_rates = default_rates;
>  
> -	return (intel_dp_max_link_bw(intel_dp) >> 3) + 1;
> +	return (intel_dp->max_sink_link_bw >> 3) + 1;
>  }
>  
>  static int
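
The >> 3 above is worth spelling out for anyone reading along: the
three DPCD link BW codes happen to land in successive groups of eight,
so the shift maps each code to an index, and the +1 turns that into a
count of usable entries in default_rates[]:

	/*
	 * DP_LINK_BW_1_62 = 0x06: (0x06 >> 3) + 1 = 1 -> { 162000 }
	 * DP_LINK_BW_2_7  = 0x0a: (0x0a >> 3) + 1 = 2 -> { 162000, 270000 }
	 * DP_LINK_BW_5_4  = 0x14: (0x14 >> 3) + 1 = 3 -> { 162000, 270000, 540000 }
	 */

i.e. the function returns how many of the default rates (in kHz) the
sink supports.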
> @@ -4395,6 +4395,12 @@ intel_dp_long_pulse(struct intel_connector *intel_connector)
>  		      yesno(intel_dp_source_supports_hbr2(intel_dp)),
>  		      yesno(drm_dp_tps3_supported(intel_dp->dpcd)));
>  
> +	/* Set the max lane count for sink */
> +	intel_dp->max_sink_lane_count = drm_dp_max_lane_count(intel_dp->dpcd);
> +
> +	/* Set the max link BW for sink */
> +	intel_dp->max_sink_link_bw = intel_dp_max_link_bw(intel_dp);
> +
>  	intel_dp_print_rates(intel_dp);
>  
>  	intel_dp_read_desc(intel_dp);
> diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
> index fd77a3b..b6526ad 100644
> --- a/drivers/gpu/drm/i915/intel_drv.h
> +++ b/drivers/gpu/drm/i915/intel_drv.h
> @@ -906,6 +906,10 @@ struct intel_dp {
>  	/* sink rates as reported by DP_SUPPORTED_LINK_RATES */
>  	uint8_t num_sink_rates;
>  	int sink_rates[DP_MAX_SUPPORTED_RATES];
> +	/* Max lane count for the sink as per DPCD registers */
> +	uint8_t max_sink_lane_count;
> +	/* Max link BW for the sink as per DPCD registers */
> +	int max_sink_link_bw;
>  	/* sink or branch descriptor */
>  	struct intel_dp_desc desc;
>  	struct drm_dp_aux aux;
--
Jani Nikula, Intel Open Source Technology Center