From: Rodrigo Vivi <rodrigo.vivi@intel.com>
To: Nathan Ciobanu <nathan.d.ciobanu@linux.intel.com>
Cc: intel-gfx@lists.freedesktop.org, dhinakaran.pandiyan@intel.com,
	marc.herbert@intel.com
Subject: Re: [PATCH v3] drm/i915/dp: Give up link training clock recovery after 10 failed attempts
Date: Mon, 16 Jul 2018 15:34:58 -0700	[thread overview]
Message-ID: <20180716223458.GB2324@intel.com> (raw)
In-Reply-To: <1531764885-18185-1-git-send-email-nathan.d.ciobanu@linux.intel.com>

On Mon, Jul 16, 2018 at 11:14:45AM -0700, Nathan Ciobanu wrote:
> Limit the link training clock recovery loop to 10 failed attempts at
> LANEx_CR_DONE per DP 1.4 spec section 3.5.1.2.2.

Thanks... so this made me look at the DP 1.3 and DP 1.4 differences here.
Are we already addressing all the new conditions?

I wonder if that should be at the drm level.

I noticed that one of the differences, the 100us wait, is already
addressed there at the drm level...
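For context on that 100us point, here is a rough user-space sketch of the rule the drm-level helper applies. The register offset matches drm_dp_helper.h, but the helper name and the exact fallback behavior below are illustrative only; the real logic is drm_dp_link_train_clock_recovery_delay() in drivers/gpu/drm/drm_dp_helper.c:

```c
/* Sketch only: the kernel's drm_dp_link_train_clock_recovery_delay()
 * is the authoritative version of this rule. */
#define DP_TRAINING_AUX_RD_INTERVAL 0x00e

static int clock_recovery_delay_us(const unsigned char *dpcd)
{
	/* The low 7 bits of DPCD 0x00e give the AUX read interval;
	 * 0 means "use the spec default", which DP 1.4 pins at 100us
	 * for clock recovery, otherwise the value counts units of 4ms. */
	unsigned char rd_interval = dpcd[DP_TRAINING_AUX_RD_INTERVAL] & 0x7f;

	if (rd_interval == 0)
		return 100;             /* 100 us */
	return rd_interval * 4000;      /* rd_interval * 4 ms, in us */
}
```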

> Some USB-C MST hubs
> cause us to get stuck in this loop indefinitely requesting
> 
>     voltage swing: 0, pre-emphasis level: 2
>     voltage swing: 1, pre-emphasis level: 2
>     voltage swing: 0, pre-emphasis level: 3
> 
> over and over: max_vswing is never reached, drm_dp_clock_recovery_ok()
> never returns true and voltage_tries always gets reset to 1. The driver
> sends those values to the hub but the hub keeps requesting new values
> every time in the pattern shown above.
> 
> Changes in v2:
>     - updated commit message (DK, Manasi)
>     - defined DP_DP14_MAX_CR_TRIES (Marc)
>     - made the loop iterate for max 10 times (Rodrigo, Marc)
> 
> Changes in v3:
>     - changed error message to use DP_DP14_MAX_CR_TRIES
> 
> Signed-off-by: Nathan Ciobanu <nathan.d.ciobanu@linux.intel.com>
> ---
>  drivers/gpu/drm/i915/intel_dp_link_training.c | 7 +++++--
>  include/drm/drm_dp_helper.h                   | 1 +
>  2 files changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/intel_dp_link_training.c b/drivers/gpu/drm/i915/intel_dp_link_training.c
> index 4da6e33c7fa1..4bfba65c431c 100644
> --- a/drivers/gpu/drm/i915/intel_dp_link_training.c
> +++ b/drivers/gpu/drm/i915/intel_dp_link_training.c
> @@ -129,7 +129,7 @@ static bool intel_dp_link_max_vswing_reached(struct intel_dp *intel_dp)
>  intel_dp_link_training_clock_recovery(struct intel_dp *intel_dp)
>  {
>  	uint8_t voltage;
> -	int voltage_tries, max_vswing_tries;
> +	int voltage_tries, max_vswing_tries, cr_tries;
>  	uint8_t link_config[2];
>  	uint8_t link_bw, rate_select;
>  
> @@ -172,7 +172,7 @@ static bool intel_dp_link_max_vswing_reached(struct intel_dp *intel_dp)
>  
>  	voltage_tries = 1;
>  	max_vswing_tries = 0;
> -	for (;;) {
> +	for (cr_tries = 0; cr_tries < DP_DP14_MAX_CR_TRIES; ++cr_tries) {

I know that I was the one who asked you to make this a global here,
but only now, with better spec pointers and definitions, did I notice
that this limit comes from the DP 1.4 spec while this code runs for all
DP versions... So we need to change it in a way that checks the spec
version somehow...

and possibly at the drm level?!
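Something like the gate below is what I have in mind. This is purely a sketch: the helper name and the pre-1.4 fallback are made up, and only DP_DP14_MAX_CR_TRIES and the DPCD revision offset come from the patch and spec:

```c
#define DP_DPCD_REV          0x000  /* DPCD revision byte: 0x14 == DP 1.4 */
#define DP_DP14_MAX_CR_TRIES 10     /* from the patch */

/* Hypothetical helper: cap clock-recovery retries only for DP 1.4
 * sinks, where the spec mandates giving up after 10 failed
 * LANEx_CR_DONE attempts; return a negative value for older sinks to
 * mean "no spec-imposed limit", preserving the current behavior. */
static int max_cr_tries(const unsigned char *dpcd)
{
	if (dpcd[DP_DPCD_REV] >= 0x14)
		return DP_DP14_MAX_CR_TRIES;
	return -1;
}
```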

>  		uint8_t link_status[DP_LINK_STATUS_SIZE];
>  
>  		drm_dp_link_train_clock_recovery_delay(intel_dp->dpcd);
> @@ -216,6 +216,9 @@ static bool intel_dp_link_max_vswing_reached(struct intel_dp *intel_dp)
>  			++max_vswing_tries;
>  
>  	}
> +	DRM_ERROR("Failed clock recovery %d times, giving up!\n",
> +		  DP_DP14_MAX_CR_TRIES);
> +	return false;
>  }
>  
>  /*
> diff --git a/include/drm/drm_dp_helper.h b/include/drm/drm_dp_helper.h
> index c01564991a9f..2303ad8ed24e 100644
> --- a/include/drm/drm_dp_helper.h
> +++ b/include/drm/drm_dp_helper.h
> @@ -820,6 +820,7 @@
>  
>  #define DP_DP13_DPCD_REV                    0x2200
>  #define DP_DP13_MAX_LINK_RATE               0x2201
> +#define DP_DP14_MAX_CR_TRIES                10
>  
>  #define DP_DPRX_FEATURE_ENUMERATION_LIST    0x2210  /* DP 1.3 */
>  # define DP_GTC_CAP					(1 << 0)  /* DP 1.3 */
> -- 
> 1.9.1
> 
