* [PATCH v2 00/13] drm/i915/dp: link rate and lane count refactoring
From: Jani Nikula @ 2017-02-03 14:19 UTC (permalink / raw)
To: intel-gfx; +Cc: jani.nikula, dhinakaran.pandiyan
v2 of [1], rebased and with review comments addressed.
BR,
Jani.
[1] http://mid.mail-archive.com/cover.1485459621.git.jani.nikula@intel.com
Jani Nikula (13):
drm/i915/dp: use known correct array size in rate_to_index
drm/i915/dp: return errors from rate_to_index()
drm/i915/dp: rename rate_to_index() to intel_dp_rate_index() and reuse
drm/i915/dp: cache source rates at init
drm/i915/dp: generate and cache sink rate array for all DP, not just
eDP 1.4
drm/i915/dp: use the sink rates array for max sink rates
drm/i915/dp: cache common rates with sink rates
drm/i915/dp: do not limit rate seek when not needed
drm/i915/dp: don't call the link parameters sink parameters
drm/i915/dp: add functions for max common link rate and lane count
drm/i915/mst: use max link not sink lane count
drm/i915/dp: localize link rate index variable more
drm/i915/dp: use readb and writeb calls for single byte DPCD access
drivers/gpu/drm/i915/intel_dp.c | 284 ++++++++++++++------------
drivers/gpu/drm/i915/intel_dp_link_training.c | 3 +-
drivers/gpu/drm/i915/intel_dp_mst.c | 4 +-
drivers/gpu/drm/i915/intel_drv.h | 20 +-
4 files changed, 173 insertions(+), 138 deletions(-)
--
2.1.4
* [PATCH v2 01/13] drm/i915/dp: use known correct array size in rate_to_index
From: Jani Nikula @ 2017-02-03 14:19 UTC (permalink / raw)
To: intel-gfx; +Cc: jani.nikula, dhinakaran.pandiyan
I can't think of a real-world bug the hard-coded DP_MAX_SUPPORTED_RATES
loop bound could cause right now, but passing the actual array size will
be required in follow-up work. While at it, change the parameter order
to be slightly more sensible.
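For illustration only, a minimal standalone sketch of the lookup pattern
this moves to, with the caller passing the real array length (not the
i915 code itself; the rate values below are made up):

#include <stdio.h>

static int rate_to_index(const int *rates, int len, int rate)
{
	int i;

	for (i = 0; i < len; i++)
		if (rate == rates[i])
			break;

	return i;	/* == len when not found; patch 02 turns this into an error */
}

int main(void)
{
	const int sink_rates[] = { 162000, 270000, 540000 };
	const int len = sizeof(sink_rates) / sizeof(sink_rates[0]);

	printf("270000 -> index %d\n", rate_to_index(sink_rates, len, 270000));
	printf("123456 -> index %d (== len, not found)\n",
	       rate_to_index(sink_rates, len, 123456));

	return 0;
}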
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
drivers/gpu/drm/i915/intel_dp.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 2ab0192595c2..3b809b6d186d 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -1542,12 +1542,12 @@ bool intel_dp_read_desc(struct intel_dp *intel_dp)
return true;
}
-static int rate_to_index(int find, const int *rates)
+static int rate_to_index(const int *rates, int len, int rate)
{
- int i = 0;
+ int i;
- for (i = 0; i < DP_MAX_SUPPORTED_RATES; ++i)
- if (find == rates[i])
+ for (i = 0; i < len; i++)
+ if (rate == rates[i])
break;
return i;
@@ -1568,7 +1568,8 @@ intel_dp_max_link_rate(struct intel_dp *intel_dp)
int intel_dp_rate_select(struct intel_dp *intel_dp, int rate)
{
- return rate_to_index(rate, intel_dp->sink_rates);
+ return rate_to_index(intel_dp->sink_rates, intel_dp->num_sink_rates,
+ rate);
}
void intel_dp_compute_rate(struct intel_dp *intel_dp, int port_clock,
--
2.1.4
* [PATCH v2 02/13] drm/i915/dp: return errors from rate_to_index()
From: Jani Nikula @ 2017-02-03 14:19 UTC (permalink / raw)
To: intel-gfx; +Cc: jani.nikula, dhinakaran.pandiyan
We shouldn't silently use the first element if we can't find the rate
we're looking for. Make rate_to_index() more generally useful by
returning an error, and fall back to the first element in the caller,
with a big warning.
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
drivers/gpu/drm/i915/intel_dp.c | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 3b809b6d186d..75926a5b900f 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -1548,9 +1548,9 @@ static int rate_to_index(const int *rates, int len, int rate)
for (i = 0; i < len; i++)
if (rate == rates[i])
- break;
+ return i;
- return i;
+ return -1;
}
int
@@ -1568,8 +1568,13 @@ intel_dp_max_link_rate(struct intel_dp *intel_dp)
int intel_dp_rate_select(struct intel_dp *intel_dp, int rate)
{
- return rate_to_index(intel_dp->sink_rates, intel_dp->num_sink_rates,
- rate);
+ int i = rate_to_index(intel_dp->sink_rates, intel_dp->num_sink_rates,
+ rate);
+
+ if (WARN_ON(i < 0))
+ i = 0;
+
+ return i;
}
void intel_dp_compute_rate(struct intel_dp *intel_dp, int port_clock,
--
2.1.4
* [PATCH v2 03/13] drm/i915/dp: rename rate_to_index() to intel_dp_rate_index() and reuse
From: Jani Nikula @ 2017-02-03 14:19 UTC (permalink / raw)
To: intel-gfx; +Cc: jani.nikula, dhinakaran.pandiyan
Rename the function, move it to the top of the file, and reuse it in
intel_dp_link_rate_index(). If there ever was a reason to search in
reverse order here, there isn't one now.
The names may be slightly confusing for now, but
intel_dp_link_rate_index() will go away in follow-up patches.
v2: Use name intel_dp_rate_index (Dhinakaran)
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
drivers/gpu/drm/i915/intel_dp.c | 34 +++++++++++++++-------------------
1 file changed, 15 insertions(+), 19 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 75926a5b900f..ceae1f4c952a 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -266,6 +266,18 @@ static int intersect_rates(const int *source_rates, int source_len,
return k;
}
+/* return index of rate in rates array, or -1 if not found */
+static int intel_dp_rate_index(const int *rates, int len, int rate)
+{
+ int i;
+
+ for (i = 0; i < len; i++)
+ if (rate == rates[i])
+ return i;
+
+ return -1;
+}
+
static int intel_dp_common_rates(struct intel_dp *intel_dp,
int *common_rates)
{
@@ -284,15 +296,10 @@ static int intel_dp_link_rate_index(struct intel_dp *intel_dp,
int *common_rates, int link_rate)
{
int common_len;
- int index;
common_len = intel_dp_common_rates(intel_dp, common_rates);
- for (index = 0; index < common_len; index++) {
- if (link_rate == common_rates[common_len - index - 1])
- return common_len - index - 1;
- }
- return -1;
+ return intel_dp_rate_index(common_rates, common_len, link_rate);
}
int intel_dp_get_link_train_fallback_values(struct intel_dp *intel_dp,
@@ -1542,17 +1549,6 @@ bool intel_dp_read_desc(struct intel_dp *intel_dp)
return true;
}
-static int rate_to_index(const int *rates, int len, int rate)
-{
- int i;
-
- for (i = 0; i < len; i++)
- if (rate == rates[i])
- return i;
-
- return -1;
-}
-
int
intel_dp_max_link_rate(struct intel_dp *intel_dp)
{
@@ -1568,8 +1564,8 @@ intel_dp_max_link_rate(struct intel_dp *intel_dp)
int intel_dp_rate_select(struct intel_dp *intel_dp, int rate)
{
- int i = rate_to_index(intel_dp->sink_rates, intel_dp->num_sink_rates,
- rate);
+ int i = intel_dp_rate_index(intel_dp->sink_rates,
+ intel_dp->num_sink_rates, rate);
if (WARN_ON(i < 0))
i = 0;
--
2.1.4
* [PATCH v2 04/13] drm/i915/dp: cache source rates at init
From: Jani Nikula @ 2017-02-03 14:19 UTC (permalink / raw)
To: intel-gfx; +Cc: jani.nikula, dhinakaran.pandiyan
We need the source rates array so often that it makes sense to set it
once at init. This reduces function calls when we need the rates, making
the code easier to follow.
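As an illustration only, the "set once at init, read many times" pattern
looks roughly like this (simplified stand-in struct and platform flag,
not the actual i915 structures; rate tables are illustrative):

#include <stdbool.h>
#include <stdio.h>

static const int default_rates[] = { 162000, 270000, 540000 };
static const int skl_rates[] = { 162000, 216000, 270000, 324000, 432000, 540000 };

struct dp {
	const int *source_rates;	/* points at a static table */
	int num_source_rates;		/* cached alongside it */
	bool is_skl;			/* simplified platform check */
};

static void dp_set_source_rates(struct dp *dp)
{
	if (dp->is_skl) {
		dp->source_rates = skl_rates;
		dp->num_source_rates = sizeof(skl_rates) / sizeof(skl_rates[0]);
	} else {
		dp->source_rates = default_rates;
		dp->num_source_rates = sizeof(default_rates) / sizeof(default_rates[0]);
	}
}

int main(void)
{
	struct dp dp = { .is_skl = true };
	int i;

	dp_set_source_rates(&dp);	/* done once at init */

	/* later users just read the cached pointer and count */
	for (i = 0; i < dp.num_source_rates; i++)
		printf("source rate: %d\n", dp.source_rates[i]);

	return 0;
}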
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
drivers/gpu/drm/i915/intel_dp.c | 35 +++++++++++++++++++++--------------
drivers/gpu/drm/i915/intel_drv.h | 3 +++
2 files changed, 24 insertions(+), 14 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index ceae1f4c952a..2378f0651cbd 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -218,21 +218,25 @@ intel_dp_sink_rates(struct intel_dp *intel_dp, const int **sink_rates)
return (intel_dp->max_sink_link_bw >> 3) + 1;
}
-static int
-intel_dp_source_rates(struct intel_dp *intel_dp, const int **source_rates)
+static void
+intel_dp_set_source_rates(struct intel_dp *intel_dp)
{
struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
+ const int *source_rates;
int size;
+ /* This should only be done once */
+ WARN_ON(intel_dp->source_rates || intel_dp->num_source_rates);
+
if (IS_GEN9_LP(dev_priv)) {
- *source_rates = bxt_rates;
+ source_rates = bxt_rates;
size = ARRAY_SIZE(bxt_rates);
} else if (IS_GEN9_BC(dev_priv)) {
- *source_rates = skl_rates;
+ source_rates = skl_rates;
size = ARRAY_SIZE(skl_rates);
} else {
- *source_rates = default_rates;
+ source_rates = default_rates;
size = ARRAY_SIZE(default_rates);
}
@@ -240,7 +244,8 @@ intel_dp_source_rates(struct intel_dp *intel_dp, const int **source_rates)
if (!intel_dp_source_supports_hbr2(intel_dp))
size--;
- return size;
+ intel_dp->source_rates = source_rates;
+ intel_dp->num_source_rates = size;
}
static int intersect_rates(const int *source_rates, int source_len,
@@ -281,13 +286,13 @@ static int intel_dp_rate_index(const int *rates, int len, int rate)
static int intel_dp_common_rates(struct intel_dp *intel_dp,
int *common_rates)
{
- const int *source_rates, *sink_rates;
- int source_len, sink_len;
+ const int *sink_rates;
+ int sink_len;
sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
- source_len = intel_dp_source_rates(intel_dp, &source_rates);
- return intersect_rates(source_rates, source_len,
+ return intersect_rates(intel_dp->source_rates,
+ intel_dp->num_source_rates,
sink_rates, sink_len,
common_rates);
}
@@ -1497,16 +1502,16 @@ static void snprintf_int_array(char *str, size_t len,
static void intel_dp_print_rates(struct intel_dp *intel_dp)
{
- const int *source_rates, *sink_rates;
- int source_len, sink_len, common_len;
+ const int *sink_rates;
+ int sink_len, common_len;
int common_rates[DP_MAX_SUPPORTED_RATES];
char str[128]; /* FIXME: too big for stack? */
if ((drm_debug & DRM_UT_KMS) == 0)
return;
- source_len = intel_dp_source_rates(intel_dp, &source_rates);
- snprintf_int_array(str, sizeof(str), source_rates, source_len);
+ snprintf_int_array(str, sizeof(str),
+ intel_dp->source_rates, intel_dp->num_source_rates);
DRM_DEBUG_KMS("source rates: %s\n", str);
sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
@@ -5922,6 +5927,8 @@ intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
intel_dig_port->max_lanes, port_name(port)))
return false;
+ intel_dp_set_source_rates(intel_dp);
+
intel_dp->pps_pipe = INVALID_PIPE;
intel_dp->active_pipe = INVALID_PIPE;
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index 393f24340e74..f132d4aea1ad 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -930,6 +930,9 @@ struct intel_dp {
uint8_t psr_dpcd[EDP_PSR_RECEIVER_CAP_SIZE];
uint8_t downstream_ports[DP_MAX_DOWNSTREAM_PORTS];
uint8_t edp_dpcd[EDP_DISPLAY_CTL_CAP_SIZE];
+ /* source rates */
+ int num_source_rates;
+ const int *source_rates;
/* sink rates as reported by DP_SUPPORTED_LINK_RATES */
uint8_t num_sink_rates;
int sink_rates[DP_MAX_SUPPORTED_RATES];
--
2.1.4
* [PATCH v2 05/13] drm/i915/dp: generate and cache sink rate array for all DP, not just eDP 1.4
From: Jani Nikula @ 2017-02-03 14:19 UTC (permalink / raw)
To: intel-gfx; +Cc: jani.nikula, dhinakaran.pandiyan
There is some conflation related to sink rates, making this change more
complicated than it would otherwise have to be. There are three changes
here that are rather difficult to split up:
1) Use the intel_dp->sink_rates array for all DP, not just eDP 1.4. We
initialize it from DPCD on eDP 1.4 like before, but generate it based
on DP_MAX_LINK_RATE on other sinks (a minimal sketch of this follows
below). This reduces code complexity when we need to use the sink
rates; they are always in the sink_rates array.
2) Update the sink rate array whenever we read DPCD, and use the
information from there. This increases code readability when we need
the sink rates.
3) Disentangle fallback rate limiting from sink rates. In the code, the
max rate is a dynamic property of the *link*, not of the *sink*. Do
the limiting after intersecting the source and sink rates, which are
static properties of the devices.
This paves the way for follow-up refactoring that I've refrained from
doing here to keep this change as simple as it possibly can.
v2: introduce use_rate_select and handle non-conforming eDP (Ville)
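A minimal standalone sketch of point 1 above: derive the sink rate array
from the DP_MAX_LINK_RATE bw code. The DP_LINK_BW_* values mirror the
DisplayPort spec / drm_dp_helper.h; everything else is a simplified
stand-in for illustration, not the kernel code:

#include <stdio.h>

#define DP_LINK_BW_1_62	0x06
#define DP_LINK_BW_2_7	0x0a
#define DP_LINK_BW_5_4	0x14

static const int default_rates[] = { 162000, 270000, 540000 };

static int num_rates_for_bw_code(unsigned char link_bw_code)
{
	switch (link_bw_code) {
	case DP_LINK_BW_5_4:
		return 3;
	case DP_LINK_BW_2_7:
		return 2;
	case DP_LINK_BW_1_62:
	default:		/* unknown code: be conservative, RBR only */
		return 1;
	}
}

int main(void)
{
	unsigned char max_link_rate = DP_LINK_BW_2_7;	/* as if read from DPCD */
	int sink_rates[3], num_sink_rates, i;

	num_sink_rates = num_rates_for_bw_code(max_link_rate);
	for (i = 0; i < num_sink_rates; i++)
		sink_rates[i] = default_rates[i];	/* prefix of the default table */

	for (i = 0; i < num_sink_rates; i++)
		printf("sink rate: %d kHz\n", sink_rates[i]);

	return 0;
}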
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
drivers/gpu/drm/i915/intel_dp.c | 79 ++++++++++++++++++---------
drivers/gpu/drm/i915/intel_dp_link_training.c | 3 +-
drivers/gpu/drm/i915/intel_drv.h | 5 +-
3 files changed, 59 insertions(+), 28 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 2378f0651cbd..66efe8044ac9 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -133,6 +133,34 @@ static void vlv_steal_power_sequencer(struct drm_device *dev,
enum pipe pipe);
static void intel_dp_unset_edid(struct intel_dp *intel_dp);
+static int intel_dp_num_rates(u8 link_bw_code)
+{
+ switch (link_bw_code) {
+ default:
+ WARN(1, "invalid max DP link bw val %x, using 1.62Gbps\n",
+ link_bw_code);
+ case DP_LINK_BW_1_62:
+ return 1;
+ case DP_LINK_BW_2_7:
+ return 2;
+ case DP_LINK_BW_5_4:
+ return 3;
+ }
+}
+
+/* update sink rates from dpcd */
+static void intel_dp_set_sink_rates(struct intel_dp *intel_dp)
+{
+ int i, num_rates;
+
+ num_rates = intel_dp_num_rates(intel_dp->dpcd[DP_MAX_LINK_RATE]);
+
+ for (i = 0; i < num_rates; i++)
+ intel_dp->sink_rates[i] = default_rates[i];
+
+ intel_dp->num_sink_rates = num_rates;
+}
+
static int
intel_dp_max_link_bw(struct intel_dp *intel_dp)
{
@@ -205,19 +233,6 @@ intel_dp_downstream_max_dotclock(struct intel_dp *intel_dp)
return max_dotclk;
}
-static int
-intel_dp_sink_rates(struct intel_dp *intel_dp, const int **sink_rates)
-{
- if (intel_dp->num_sink_rates) {
- *sink_rates = intel_dp->sink_rates;
- return intel_dp->num_sink_rates;
- }
-
- *sink_rates = default_rates;
-
- return (intel_dp->max_sink_link_bw >> 3) + 1;
-}
-
static void
intel_dp_set_source_rates(struct intel_dp *intel_dp)
{
@@ -286,15 +301,22 @@ static int intel_dp_rate_index(const int *rates, int len, int rate)
static int intel_dp_common_rates(struct intel_dp *intel_dp,
int *common_rates)
{
- const int *sink_rates;
- int sink_len;
+ int max_rate = drm_dp_bw_code_to_link_rate(intel_dp->max_sink_link_bw);
+ int i, common_len;
- sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
+ common_len = intersect_rates(intel_dp->source_rates,
+ intel_dp->num_source_rates,
+ intel_dp->sink_rates,
+ intel_dp->num_sink_rates,
+ common_rates);
- return intersect_rates(intel_dp->source_rates,
- intel_dp->num_source_rates,
- sink_rates, sink_len,
- common_rates);
+ /* Limit results by potentially reduced max rate */
+ for (i = 0; i < common_len; i++) {
+ if (common_rates[common_len - i - 1] <= max_rate)
+ return common_len - i;
+ }
+
+ return 0;
}
static int intel_dp_link_rate_index(struct intel_dp *intel_dp,
@@ -1502,8 +1524,7 @@ static void snprintf_int_array(char *str, size_t len,
static void intel_dp_print_rates(struct intel_dp *intel_dp)
{
- const int *sink_rates;
- int sink_len, common_len;
+ int common_len;
int common_rates[DP_MAX_SUPPORTED_RATES];
char str[128]; /* FIXME: too big for stack? */
@@ -1514,8 +1535,8 @@ static void intel_dp_print_rates(struct intel_dp *intel_dp)
intel_dp->source_rates, intel_dp->num_source_rates);
DRM_DEBUG_KMS("source rates: %s\n", str);
- sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
- snprintf_int_array(str, sizeof(str), sink_rates, sink_len);
+ snprintf_int_array(str, sizeof(str),
+ intel_dp->sink_rates, intel_dp->num_sink_rates);
DRM_DEBUG_KMS("sink rates: %s\n", str);
common_len = intel_dp_common_rates(intel_dp, common_rates);
@@ -1581,7 +1602,8 @@ int intel_dp_rate_select(struct intel_dp *intel_dp, int rate)
void intel_dp_compute_rate(struct intel_dp *intel_dp, int port_clock,
uint8_t *link_bw, uint8_t *rate_select)
{
- if (intel_dp->num_sink_rates) {
+ /* eDP 1.4 rate select method. */
+ if (intel_dp->use_rate_select) {
*link_bw = 0;
*rate_select =
intel_dp_rate_select(intel_dp, port_clock);
@@ -3718,6 +3740,11 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp)
intel_dp->num_sink_rates = i;
}
+ if (intel_dp->num_sink_rates)
+ intel_dp->use_rate_select = true;
+ else
+ intel_dp_set_sink_rates(intel_dp);
+
return true;
}
@@ -3728,6 +3755,8 @@ intel_dp_get_dpcd(struct intel_dp *intel_dp)
if (!intel_dp_read_dpcd(intel_dp))
return false;
+ intel_dp_set_sink_rates(intel_dp);
+
if (drm_dp_dpcd_read(&intel_dp->aux, DP_SINK_COUNT,
&intel_dp->sink_count, 1) < 0)
return false;
diff --git a/drivers/gpu/drm/i915/intel_dp_link_training.c b/drivers/gpu/drm/i915/intel_dp_link_training.c
index 0048b520baf7..694ad0ffb523 100644
--- a/drivers/gpu/drm/i915/intel_dp_link_training.c
+++ b/drivers/gpu/drm/i915/intel_dp_link_training.c
@@ -146,7 +146,8 @@ intel_dp_link_training_clock_recovery(struct intel_dp *intel_dp)
link_config[1] |= DP_LANE_COUNT_ENHANCED_FRAME_EN;
drm_dp_dpcd_write(&intel_dp->aux, DP_LINK_BW_SET, link_config, 2);
- if (intel_dp->num_sink_rates)
+ /* eDP 1.4 rate select method. */
+ if (!link_bw)
drm_dp_dpcd_write(&intel_dp->aux, DP_LINK_RATE_SET,
&rate_select, 1);
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index f132d4aea1ad..3a6f092a2ec3 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -933,9 +933,10 @@ struct intel_dp {
/* source rates */
int num_source_rates;
const int *source_rates;
- /* sink rates as reported by DP_SUPPORTED_LINK_RATES */
- uint8_t num_sink_rates;
+ /* sink rates as reported by DP_MAX_LINK_RATE/DP_SUPPORTED_LINK_RATES */
+ int num_sink_rates;
int sink_rates[DP_MAX_SUPPORTED_RATES];
+ bool use_rate_select;
/* Max lane count for the sink as per DPCD registers */
uint8_t max_sink_lane_count;
/* Max link BW for the sink as per DPCD registers */
--
2.1.4
* [PATCH v2 06/13] drm/i915/dp: use the sink rates array for max sink rates
From: Jani Nikula @ 2017-02-03 14:19 UTC (permalink / raw)
To: intel-gfx; +Cc: jani.nikula, dhinakaran.pandiyan
Looking at DPCD DP_MAX_LINK_RATE may be completely bogus for eDP 1.4,
which is allowed to use the link rate select method and report 0 as the
max link rate. With this change, it also makes sense to store the max
rate as the actual rate rather than as a bw code.
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
drivers/gpu/drm/i915/intel_dp.c | 28 +++++++---------------------
drivers/gpu/drm/i915/intel_drv.h | 2 +-
2 files changed, 8 insertions(+), 22 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 66efe8044ac9..04bbd7c5cfe9 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -161,23 +161,9 @@ static void intel_dp_set_sink_rates(struct intel_dp *intel_dp)
intel_dp->num_sink_rates = num_rates;
}
-static int
-intel_dp_max_link_bw(struct intel_dp *intel_dp)
+static int intel_dp_max_sink_rate(struct intel_dp *intel_dp)
{
- int max_link_bw = intel_dp->dpcd[DP_MAX_LINK_RATE];
-
- switch (max_link_bw) {
- case DP_LINK_BW_1_62:
- case DP_LINK_BW_2_7:
- case DP_LINK_BW_5_4:
- break;
- default:
- WARN(1, "invalid max DP link bw val %x, using 1.62Gbps\n",
- max_link_bw);
- max_link_bw = DP_LINK_BW_1_62;
- break;
- }
- return max_link_bw;
+ return intel_dp->sink_rates[intel_dp->num_sink_rates - 1];
}
static u8 intel_dp_max_lane_count(struct intel_dp *intel_dp)
@@ -301,7 +287,7 @@ static int intel_dp_rate_index(const int *rates, int len, int rate)
static int intel_dp_common_rates(struct intel_dp *intel_dp,
int *common_rates)
{
- int max_rate = drm_dp_bw_code_to_link_rate(intel_dp->max_sink_link_bw);
+ int max_rate = intel_dp->max_sink_link_rate;
int i, common_len;
common_len = intersect_rates(intel_dp->source_rates,
@@ -339,10 +325,10 @@ int intel_dp_get_link_train_fallback_values(struct intel_dp *intel_dp,
common_rates,
link_rate);
if (link_rate_index > 0) {
- intel_dp->max_sink_link_bw = drm_dp_link_rate_to_bw_code(common_rates[link_rate_index - 1]);
+ intel_dp->max_sink_link_rate = common_rates[link_rate_index - 1];
intel_dp->max_sink_lane_count = lane_count;
} else if (lane_count > 1) {
- intel_dp->max_sink_link_bw = intel_dp_max_link_bw(intel_dp);
+ intel_dp->max_sink_link_rate = intel_dp_max_sink_rate(intel_dp);
intel_dp->max_sink_lane_count = lane_count >> 1;
} else {
DRM_ERROR("Link Training Unsuccessful\n");
@@ -4663,8 +4649,8 @@ intel_dp_long_pulse(struct intel_connector *intel_connector)
/* Set the max lane count for sink */
intel_dp->max_sink_lane_count = drm_dp_max_lane_count(intel_dp->dpcd);
- /* Set the max link BW for sink */
- intel_dp->max_sink_link_bw = intel_dp_max_link_bw(intel_dp);
+ /* Set the max link rate for sink */
+ intel_dp->max_sink_link_rate = intel_dp_max_sink_rate(intel_dp);
intel_dp_print_rates(intel_dp);
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index 3a6f092a2ec3..4493dad1fe77 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -940,7 +940,7 @@ struct intel_dp {
/* Max lane count for the sink as per DPCD registers */
uint8_t max_sink_lane_count;
/* Max link BW for the sink as per DPCD registers */
- int max_sink_link_bw;
+ int max_sink_link_rate;
/* sink or branch descriptor */
struct intel_dp_desc desc;
struct drm_dp_aux aux;
--
2.1.4
* [PATCH v2 07/13] drm/i915/dp: cache common rates with sink rates
From: Jani Nikula @ 2017-02-03 14:19 UTC (permalink / raw)
To: intel-gfx; +Cc: jani.nikula, dhinakaran.pandiyan
Now that source rates are static and sink rates are updated whenever
DPCD is updated, we can compute and cache their intersection whenever
the sink rates are updated. This reduces code complexity, as we don't
have to keep calling the intersection helpers, and gets rid of several
common rates arrays on the stack.
Limiting the common rates by a max link rate can be done by picking the
first N elements of the cached common rates.
v2: get rid of the local common_rates variable (Manasi)
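An illustrative standalone sketch of the idea, assuming sorted ascending
rate arrays with made-up values (not the kernel helpers themselves):

#include <stdio.h>

static int intersect_rates(const int *a, int a_len, const int *b, int b_len,
			   int *out)
{
	int i = 0, j = 0, k = 0;

	while (i < a_len && j < b_len) {
		if (a[i] == b[j]) {
			out[k++] = a[i];
			i++;
			j++;
		} else if (a[i] < b[j]) {
			i++;
		} else {
			j++;
		}
	}
	return k;
}

/* length of the cached common array when clamped to max_rate */
static int common_len_rate_limit(const int *common, int len, int max_rate)
{
	int i;

	for (i = len - 1; i >= 0; i--)
		if (common[i] <= max_rate)
			return i + 1;

	return 0;
}

int main(void)
{
	const int source[] = { 162000, 216000, 270000, 324000, 432000, 540000 };
	const int sink[] = { 162000, 270000, 540000 };
	int common[8], num_common, limited;

	num_common = intersect_rates(source, 6, sink, 3, common);	/* cache once */
	limited = common_len_rate_limit(common, num_common, 270000);	/* first N only */

	printf("common rates: %d, usable with a 270000 kHz cap: %d\n",
	       num_common, limited);
	return 0;
}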
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
drivers/gpu/drm/i915/intel_dp.c | 71 ++++++++++++++++++++++------------------
drivers/gpu/drm/i915/intel_drv.h | 3 ++
2 files changed, 42 insertions(+), 32 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 04bbd7c5cfe9..c458fa90aaad 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -284,17 +284,29 @@ static int intel_dp_rate_index(const int *rates, int len, int rate)
return -1;
}
-static int intel_dp_common_rates(struct intel_dp *intel_dp,
- int *common_rates)
+static void intel_dp_set_common_rates(struct intel_dp *intel_dp)
{
- int max_rate = intel_dp->max_sink_link_rate;
- int i, common_len;
+ WARN_ON(!intel_dp->num_source_rates || !intel_dp->num_sink_rates);
- common_len = intersect_rates(intel_dp->source_rates,
- intel_dp->num_source_rates,
- intel_dp->sink_rates,
- intel_dp->num_sink_rates,
- common_rates);
+ intel_dp->num_common_rates = intersect_rates(intel_dp->source_rates,
+ intel_dp->num_source_rates,
+ intel_dp->sink_rates,
+ intel_dp->num_sink_rates,
+ intel_dp->common_rates);
+
+ /* Paranoia, there should always be something in common. */
+ if (WARN_ON(intel_dp->num_common_rates == 0)) {
+ intel_dp->common_rates[0] = default_rates[0];
+ intel_dp->num_common_rates = 1;
+ }
+}
+
+/* get length of common rates potentially limited by max_rate */
+static int intel_dp_common_len_rate_limit(struct intel_dp *intel_dp,
+ int max_rate)
+{
+ const int *common_rates = intel_dp->common_rates;
+ int i, common_len = intel_dp->num_common_rates;
/* Limit results by potentially reduced max rate */
for (i = 0; i < common_len; i++) {
@@ -305,25 +317,23 @@ static int intel_dp_common_rates(struct intel_dp *intel_dp,
return 0;
}
-static int intel_dp_link_rate_index(struct intel_dp *intel_dp,
- int *common_rates, int link_rate)
+static int intel_dp_link_rate_index(struct intel_dp *intel_dp, int link_rate)
{
int common_len;
- common_len = intel_dp_common_rates(intel_dp, common_rates);
+ common_len = intel_dp_common_len_rate_limit(intel_dp,
+ intel_dp->max_sink_link_rate);
- return intel_dp_rate_index(common_rates, common_len, link_rate);
+ return intel_dp_rate_index(intel_dp->common_rates, common_len, link_rate);
}
int intel_dp_get_link_train_fallback_values(struct intel_dp *intel_dp,
int link_rate, uint8_t lane_count)
{
- int common_rates[DP_MAX_SUPPORTED_RATES];
+ const int *common_rates = intel_dp->common_rates;
int link_rate_index;
- link_rate_index = intel_dp_link_rate_index(intel_dp,
- common_rates,
- link_rate);
+ link_rate_index = intel_dp_link_rate_index(intel_dp, link_rate);
if (link_rate_index > 0) {
intel_dp->max_sink_link_rate = common_rates[link_rate_index - 1];
intel_dp->max_sink_lane_count = lane_count;
@@ -1510,8 +1520,6 @@ static void snprintf_int_array(char *str, size_t len,
static void intel_dp_print_rates(struct intel_dp *intel_dp)
{
- int common_len;
- int common_rates[DP_MAX_SUPPORTED_RATES];
char str[128]; /* FIXME: too big for stack? */
if ((drm_debug & DRM_UT_KMS) == 0)
@@ -1525,8 +1533,8 @@ static void intel_dp_print_rates(struct intel_dp *intel_dp)
intel_dp->sink_rates, intel_dp->num_sink_rates);
DRM_DEBUG_KMS("sink rates: %s\n", str);
- common_len = intel_dp_common_rates(intel_dp, common_rates);
- snprintf_int_array(str, sizeof(str), common_rates, common_len);
+ snprintf_int_array(str, sizeof(str),
+ intel_dp->common_rates, intel_dp->num_common_rates);
DRM_DEBUG_KMS("common rates: %s\n", str);
}
@@ -1564,14 +1572,14 @@ bool intel_dp_read_desc(struct intel_dp *intel_dp)
int
intel_dp_max_link_rate(struct intel_dp *intel_dp)
{
- int rates[DP_MAX_SUPPORTED_RATES] = {};
int len;
- len = intel_dp_common_rates(intel_dp, rates);
+ len = intel_dp_common_len_rate_limit(intel_dp,
+ intel_dp->max_sink_link_rate);
if (WARN_ON(len <= 0))
return 162000;
- return rates[len - 1];
+ return intel_dp->common_rates[len - 1];
}
int intel_dp_rate_select(struct intel_dp *intel_dp, int rate)
@@ -1640,11 +1648,11 @@ intel_dp_compute_config(struct intel_encoder *encoder,
int link_rate_index;
int bpp, mode_rate;
int link_avail, link_clock;
- int common_rates[DP_MAX_SUPPORTED_RATES] = {};
int common_len;
uint8_t link_bw, rate_select;
- common_len = intel_dp_common_rates(intel_dp, common_rates);
+ common_len = intel_dp_common_len_rate_limit(intel_dp,
+ intel_dp->max_sink_link_rate);
/* No common link rates between source and sink */
WARN_ON(common_len <= 0);
@@ -1682,7 +1690,6 @@ intel_dp_compute_config(struct intel_encoder *encoder,
/* Use values requested by Compliance Test Request */
if (intel_dp->compliance.test_type == DP_TEST_LINK_TRAINING) {
link_rate_index = intel_dp_link_rate_index(intel_dp,
- common_rates,
intel_dp->compliance.test_link_rate);
if (link_rate_index >= 0)
min_clock = max_clock = link_rate_index;
@@ -1690,7 +1697,7 @@ intel_dp_compute_config(struct intel_encoder *encoder,
}
DRM_DEBUG_KMS("DP link computation with max lane count %i "
"max bw %d pixel clock %iKHz\n",
- max_lane_count, common_rates[max_clock],
+ max_lane_count, intel_dp->common_rates[max_clock],
adjusted_mode->crtc_clock);
/* Walk through all bpp values. Luckily they're all nicely spaced with 2
@@ -1726,7 +1733,7 @@ intel_dp_compute_config(struct intel_encoder *encoder,
lane_count <= max_lane_count;
lane_count <<= 1) {
- link_clock = common_rates[clock];
+ link_clock = intel_dp->common_rates[clock];
link_avail = intel_dp_max_data_rate(link_clock,
lane_count);
@@ -1758,7 +1765,7 @@ intel_dp_compute_config(struct intel_encoder *encoder,
pipe_config->lane_count = lane_count;
pipe_config->pipe_bpp = bpp;
- pipe_config->port_clock = common_rates[clock];
+ pipe_config->port_clock = intel_dp->common_rates[clock];
intel_dp_compute_rate(intel_dp, pipe_config->port_clock,
&link_bw, &rate_select);
@@ -3725,6 +3732,7 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp)
}
intel_dp->num_sink_rates = i;
}
+ intel_dp_set_common_rates(intel_dp);
if (intel_dp->num_sink_rates)
intel_dp->use_rate_select = true;
@@ -3742,6 +3750,7 @@ intel_dp_get_dpcd(struct intel_dp *intel_dp)
return false;
intel_dp_set_sink_rates(intel_dp);
+ intel_dp_set_common_rates(intel_dp);
if (drm_dp_dpcd_read(&intel_dp->aux, DP_SINK_COUNT,
&intel_dp->sink_count, 1) < 0)
@@ -3964,7 +3973,6 @@ static uint8_t intel_dp_autotest_link_training(struct intel_dp *intel_dp)
{
int status = 0;
int min_lane_count = 1;
- int common_rates[DP_MAX_SUPPORTED_RATES] = {};
int link_rate_index, test_link_rate;
uint8_t test_lane_count, test_link_bw;
/* (DP CTS 1.2)
@@ -3993,7 +4001,6 @@ static uint8_t intel_dp_autotest_link_training(struct intel_dp *intel_dp)
/* Validate the requested link rate */
test_link_rate = drm_dp_bw_code_to_link_rate(test_link_bw);
link_rate_index = intel_dp_link_rate_index(intel_dp,
- common_rates,
test_link_rate);
if (link_rate_index < 0)
return DP_TEST_NAK;
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index 4493dad1fe77..bd358a668cc0 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -937,6 +937,9 @@ struct intel_dp {
int num_sink_rates;
int sink_rates[DP_MAX_SUPPORTED_RATES];
bool use_rate_select;
+ /* intersection of source and sink rates */
+ int num_common_rates;
+ int common_rates[DP_MAX_SUPPORTED_RATES];
/* Max lane count for the sink as per DPCD registers */
uint8_t max_sink_lane_count;
/* Max link BW for the sink as per DPCD registers */
--
2.1.4
* [PATCH v2 08/13] drm/i915/dp: do not limit rate seek when not needed
From: Jani Nikula @ 2017-02-03 14:19 UTC (permalink / raw)
To: intel-gfx; +Cc: jani.nikula, dhinakaran.pandiyan
In link training fallback, we're looking up a rate that we already know
is in the sorted array of common link rates, so there is no need to
limit the array by the max rate first. For the test request, the DP CTS
doesn't say we should limit the rate based on an earlier fallback
either.
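To illustrate the overall fallback policy the rate index lookup feeds
into, a standalone sketch; this is simplified (the real code tracks the
max_link_rate/max_link_lane_count fields rather than the live values,
and the return convention here is illustrative):

#include <stdio.h>

static int rate_index(const int *rates, int len, int rate)
{
	int i;

	for (i = 0; i < len; i++)
		if (rate == rates[i])
			return i;
	return -1;
}

static int fallback(const int *common, int num_common,
		    int *link_rate, int *lane_count)
{
	int index = rate_index(common, num_common, *link_rate);

	if (index > 0) {
		*link_rate = common[index - 1];		/* lower rate, same lanes */
	} else if (*lane_count > 1) {
		*link_rate = common[num_common - 1];	/* back to max rate... */
		*lane_count >>= 1;			/* ...with fewer lanes */
	} else {
		return -1;	/* nothing left to try */
	}
	return 0;
}

int main(void)
{
	const int common[] = { 162000, 270000, 540000 };
	int rate = 540000, lanes = 4;

	while (fallback(common, 3, &rate, &lanes) == 0)
		printf("retry at %d kHz x%d\n", rate, lanes);
	return 0;
}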
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
drivers/gpu/drm/i915/intel_dp.c | 31 ++++++++++++-------------------
1 file changed, 12 insertions(+), 19 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index c458fa90aaad..eaab56bff79d 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -317,25 +317,16 @@ static int intel_dp_common_len_rate_limit(struct intel_dp *intel_dp,
return 0;
}
-static int intel_dp_link_rate_index(struct intel_dp *intel_dp, int link_rate)
-{
- int common_len;
-
- common_len = intel_dp_common_len_rate_limit(intel_dp,
- intel_dp->max_sink_link_rate);
-
- return intel_dp_rate_index(intel_dp->common_rates, common_len, link_rate);
-}
-
int intel_dp_get_link_train_fallback_values(struct intel_dp *intel_dp,
int link_rate, uint8_t lane_count)
{
- const int *common_rates = intel_dp->common_rates;
- int link_rate_index;
+ int index;
- link_rate_index = intel_dp_link_rate_index(intel_dp, link_rate);
- if (link_rate_index > 0) {
- intel_dp->max_sink_link_rate = common_rates[link_rate_index - 1];
+ index = intel_dp_rate_index(intel_dp->common_rates,
+ intel_dp->num_common_rates,
+ link_rate);
+ if (index > 0) {
+ intel_dp->max_sink_link_rate = intel_dp->common_rates[index - 1];
intel_dp->max_sink_lane_count = lane_count;
} else if (lane_count > 1) {
intel_dp->max_sink_link_rate = intel_dp_max_sink_rate(intel_dp);
@@ -1689,8 +1680,9 @@ intel_dp_compute_config(struct intel_encoder *encoder,
/* Use values requested by Compliance Test Request */
if (intel_dp->compliance.test_type == DP_TEST_LINK_TRAINING) {
- link_rate_index = intel_dp_link_rate_index(intel_dp,
- intel_dp->compliance.test_link_rate);
+ link_rate_index = intel_dp_rate_index(intel_dp->common_rates,
+ intel_dp->num_common_rates,
+ intel_dp->compliance.test_link_rate);
if (link_rate_index >= 0)
min_clock = max_clock = link_rate_index;
min_lane_count = max_lane_count = intel_dp->compliance.test_lane_count;
@@ -4000,8 +3992,9 @@ static uint8_t intel_dp_autotest_link_training(struct intel_dp *intel_dp)
}
/* Validate the requested link rate */
test_link_rate = drm_dp_bw_code_to_link_rate(test_link_bw);
- link_rate_index = intel_dp_link_rate_index(intel_dp,
- test_link_rate);
+ link_rate_index = intel_dp_rate_index(intel_dp->common_rates,
+ intel_dp->num_common_rates,
+ test_link_rate);
if (link_rate_index < 0)
return DP_TEST_NAK;
--
2.1.4
* [PATCH v2 09/13] drm/i915/dp: don't call the link parameters sink parameters
From: Jani Nikula @ 2017-02-03 14:19 UTC (permalink / raw)
To: intel-gfx; +Cc: jani.nikula, dhinakaran.pandiyan
If we modify these on the fly depending on the link conditions, we
shouldn't pretend they are sink properties.
Some link vs. sink confusion still remains, but we'll take care of it
in follow-up patches.
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
drivers/gpu/drm/i915/intel_dp.c | 25 ++++++++++++-------------
drivers/gpu/drm/i915/intel_drv.h | 8 ++++----
2 files changed, 16 insertions(+), 17 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index eaab56bff79d..fada982dff8e 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -172,7 +172,7 @@ static u8 intel_dp_max_lane_count(struct intel_dp *intel_dp)
u8 source_max, sink_max;
source_max = intel_dig_port->max_lanes;
- sink_max = intel_dp->max_sink_lane_count;
+ sink_max = intel_dp->max_link_lane_count;
return min(source_max, sink_max);
}
@@ -326,11 +326,11 @@ int intel_dp_get_link_train_fallback_values(struct intel_dp *intel_dp,
intel_dp->num_common_rates,
link_rate);
if (index > 0) {
- intel_dp->max_sink_link_rate = intel_dp->common_rates[index - 1];
- intel_dp->max_sink_lane_count = lane_count;
+ intel_dp->max_link_rate = intel_dp->common_rates[index - 1];
+ intel_dp->max_link_lane_count = lane_count;
} else if (lane_count > 1) {
- intel_dp->max_sink_link_rate = intel_dp_max_sink_rate(intel_dp);
- intel_dp->max_sink_lane_count = lane_count >> 1;
+ intel_dp->max_link_rate = intel_dp_max_sink_rate(intel_dp);
+ intel_dp->max_link_lane_count = lane_count >> 1;
} else {
DRM_ERROR("Link Training Unsuccessful\n");
return -1;
@@ -1565,8 +1565,7 @@ intel_dp_max_link_rate(struct intel_dp *intel_dp)
{
int len;
- len = intel_dp_common_len_rate_limit(intel_dp,
- intel_dp->max_sink_link_rate);
+ len = intel_dp_common_len_rate_limit(intel_dp, intel_dp->max_link_rate);
if (WARN_ON(len <= 0))
return 162000;
@@ -1643,7 +1642,7 @@ intel_dp_compute_config(struct intel_encoder *encoder,
uint8_t link_bw, rate_select;
common_len = intel_dp_common_len_rate_limit(intel_dp,
- intel_dp->max_sink_link_rate);
+ intel_dp->max_link_rate);
/* No common link rates between source and sink */
WARN_ON(common_len <= 0);
@@ -3981,7 +3980,7 @@ static uint8_t intel_dp_autotest_link_training(struct intel_dp *intel_dp)
test_lane_count &= DP_MAX_LANE_COUNT_MASK;
/* Validate the requested lane count */
if (test_lane_count < min_lane_count ||
- test_lane_count > intel_dp->max_sink_lane_count)
+ test_lane_count > intel_dp->max_link_lane_count)
return DP_TEST_NAK;
status = drm_dp_dpcd_readb(&intel_dp->aux, DP_TEST_LINK_RATE,
@@ -4646,11 +4645,11 @@ intel_dp_long_pulse(struct intel_connector *intel_connector)
yesno(intel_dp_source_supports_hbr2(intel_dp)),
yesno(drm_dp_tps3_supported(intel_dp->dpcd)));
- /* Set the max lane count for sink */
- intel_dp->max_sink_lane_count = drm_dp_max_lane_count(intel_dp->dpcd);
+ /* Set the max lane count for link */
+ intel_dp->max_link_lane_count = drm_dp_max_lane_count(intel_dp->dpcd);
- /* Set the max link rate for sink */
- intel_dp->max_sink_link_rate = intel_dp_max_sink_rate(intel_dp);
+ /* Set the max link rate for link */
+ intel_dp->max_link_rate = intel_dp_max_sink_rate(intel_dp);
intel_dp_print_rates(intel_dp);
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index bd358a668cc0..2c4752d84f5b 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -940,10 +940,10 @@ struct intel_dp {
/* intersection of source and sink rates */
int num_common_rates;
int common_rates[DP_MAX_SUPPORTED_RATES];
- /* Max lane count for the sink as per DPCD registers */
- uint8_t max_sink_lane_count;
- /* Max link BW for the sink as per DPCD registers */
- int max_sink_link_rate;
+ /* Max lane count for the current link */
+ int max_link_lane_count;
+ /* Max rate for the current link */
+ int max_link_rate;
/* sink or branch descriptor */
struct intel_dp_desc desc;
struct drm_dp_aux aux;
--
2.1.4
* [PATCH v2 10/13] drm/i915/dp: add functions for max common link rate and lane count
From: Jani Nikula @ 2017-02-03 14:19 UTC (permalink / raw)
To: intel-gfx; +Cc: jani.nikula, dhinakaran.pandiyan
These are the theoretical maximums common to the source and sink, and
the maximums we should start with. They may be degraded in case of link
training failures; the dynamic link values are stored separately.
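A tiny standalone sketch of the split, using a simplified stand-in
struct rather than intel_dp itself: the common maximums are static
capabilities, while the link maximums start from them and can later be
lowered by fallback:

#include <stdio.h>

struct dp_link {
	/* static capabilities, from source and sink */
	int max_common_rate;
	int max_common_lane_count;
	/* current link limits, may be degraded by fallback */
	int max_link_rate;
	int max_link_lane_count;
};

static void dp_link_reset(struct dp_link *link)
{
	/* start every detection cycle from the theoretical maximums */
	link->max_link_rate = link->max_common_rate;
	link->max_link_lane_count = link->max_common_lane_count;
}

int main(void)
{
	struct dp_link link = {
		.max_common_rate = 540000,
		.max_common_lane_count = 4,
	};

	dp_link_reset(&link);
	printf("link starts at %d kHz x%d\n",
	       link.max_link_rate, link.max_link_lane_count);
	return 0;
}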
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
drivers/gpu/drm/i915/intel_dp.c | 29 +++++++++++++++++------------
1 file changed, 17 insertions(+), 12 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index fada982dff8e..916c07cc6ad6 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -161,22 +161,27 @@ static void intel_dp_set_sink_rates(struct intel_dp *intel_dp)
intel_dp->num_sink_rates = num_rates;
}
-static int intel_dp_max_sink_rate(struct intel_dp *intel_dp)
+/* Theoretical max between source and sink */
+static int intel_dp_max_common_rate(struct intel_dp *intel_dp)
{
- return intel_dp->sink_rates[intel_dp->num_sink_rates - 1];
+ return intel_dp->common_rates[intel_dp->num_common_rates - 1];
}
-static u8 intel_dp_max_lane_count(struct intel_dp *intel_dp)
+/* Theoretical max between source and sink */
+static int intel_dp_max_common_lane_count(struct intel_dp *intel_dp)
{
struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
- u8 source_max, sink_max;
-
- source_max = intel_dig_port->max_lanes;
- sink_max = intel_dp->max_link_lane_count;
+ int source_max = intel_dig_port->max_lanes;
+ int sink_max = drm_dp_max_lane_count(intel_dp->dpcd);
return min(source_max, sink_max);
}
+static int intel_dp_max_lane_count(struct intel_dp *intel_dp)
+{
+ return intel_dp->max_link_lane_count;
+}
+
int
intel_dp_link_required(int pixel_clock, int bpp)
{
@@ -329,7 +334,7 @@ int intel_dp_get_link_train_fallback_values(struct intel_dp *intel_dp,
intel_dp->max_link_rate = intel_dp->common_rates[index - 1];
intel_dp->max_link_lane_count = lane_count;
} else if (lane_count > 1) {
- intel_dp->max_link_rate = intel_dp_max_sink_rate(intel_dp);
+ intel_dp->max_link_rate = intel_dp_max_common_rate(intel_dp);
intel_dp->max_link_lane_count = lane_count >> 1;
} else {
DRM_ERROR("Link Training Unsuccessful\n");
@@ -4645,11 +4650,11 @@ intel_dp_long_pulse(struct intel_connector *intel_connector)
yesno(intel_dp_source_supports_hbr2(intel_dp)),
yesno(drm_dp_tps3_supported(intel_dp->dpcd)));
- /* Set the max lane count for link */
- intel_dp->max_link_lane_count = drm_dp_max_lane_count(intel_dp->dpcd);
+ /* Initial max link lane count */
+ intel_dp->max_link_lane_count = intel_dp_max_common_lane_count(intel_dp);
- /* Set the max link rate for link */
- intel_dp->max_link_rate = intel_dp_max_sink_rate(intel_dp);
+ /* Initial max link rate */
+ intel_dp->max_link_rate = intel_dp_max_common_rate(intel_dp);
intel_dp_print_rates(intel_dp);
--
2.1.4
* [PATCH v2 11/13] drm/i915/mst: use max link not sink lane count
From: Jani Nikula @ 2017-02-03 14:19 UTC (permalink / raw)
To: intel-gfx; +Cc: jani.nikula, dhinakaran.pandiyan
The source might not support as many lanes as the sink, or the link
training might have failed at higher lane counts. Take these into
account.
Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
drivers/gpu/drm/i915/intel_dp.c | 2 +-
drivers/gpu/drm/i915/intel_dp_mst.c | 4 ++--
drivers/gpu/drm/i915/intel_drv.h | 1 +
3 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 916c07cc6ad6..58ec70f316c5 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -177,7 +177,7 @@ static int intel_dp_max_common_lane_count(struct intel_dp *intel_dp)
return min(source_max, sink_max);
}
-static int intel_dp_max_lane_count(struct intel_dp *intel_dp)
+int intel_dp_max_lane_count(struct intel_dp *intel_dp)
{
return intel_dp->max_link_lane_count;
}
diff --git a/drivers/gpu/drm/i915/intel_dp_mst.c b/drivers/gpu/drm/i915/intel_dp_mst.c
index 6a85d388f936..8d047ec508ac 100644
--- a/drivers/gpu/drm/i915/intel_dp_mst.c
+++ b/drivers/gpu/drm/i915/intel_dp_mst.c
@@ -56,7 +56,7 @@ static bool intel_dp_mst_compute_config(struct intel_encoder *encoder,
* for MST we always configure max link bw - the spec doesn't
* seem to suggest we should do otherwise.
*/
- lane_count = drm_dp_max_lane_count(intel_dp->dpcd);
+ lane_count = intel_dp_max_lane_count(intel_dp);
pipe_config->lane_count = lane_count;
@@ -357,7 +357,7 @@ intel_dp_mst_mode_valid(struct drm_connector *connector,
int max_rate, mode_rate, max_lanes, max_link_clock;
max_link_clock = intel_dp_max_link_rate(intel_dp);
- max_lanes = drm_dp_max_lane_count(intel_dp->dpcd);
+ max_lanes = intel_dp_max_lane_count(intel_dp);
max_rate = intel_dp_max_data_rate(max_link_clock, max_lanes);
mode_rate = intel_dp_link_required(mode->clock, bpp);
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index 2c4752d84f5b..9f14c4a13cee 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -1468,6 +1468,7 @@ void intel_dp_add_properties(struct intel_dp *intel_dp, struct drm_connector *co
void intel_dp_mst_suspend(struct drm_device *dev);
void intel_dp_mst_resume(struct drm_device *dev);
int intel_dp_max_link_rate(struct intel_dp *intel_dp);
+int intel_dp_max_lane_count(struct intel_dp *intel_dp);
int intel_dp_rate_select(struct intel_dp *intel_dp, int rate);
void intel_dp_hot_plug(struct intel_encoder *intel_encoder);
void intel_power_sequencer_reset(struct drm_i915_private *dev_priv);
--
2.1.4
* [PATCH v2 12/13] drm/i915/dp: localize link rate index variable more
From: Jani Nikula @ 2017-02-03 14:19 UTC (permalink / raw)
To: intel-gfx; +Cc: jani.nikula, dhinakaran.pandiyan
Localize link_rate_index to the if block, and rename it to just index
to reduce indentation.
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
drivers/gpu/drm/i915/intel_dp.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 58ec70f316c5..90ae95f2ecb2 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -1640,7 +1640,6 @@ intel_dp_compute_config(struct intel_encoder *encoder,
/* Conveniently, the link BW constants become indices with a shift...*/
int min_clock = 0;
int max_clock;
- int link_rate_index;
int bpp, mode_rate;
int link_avail, link_clock;
int common_len;
@@ -1684,11 +1683,13 @@ intel_dp_compute_config(struct intel_encoder *encoder,
/* Use values requested by Compliance Test Request */
if (intel_dp->compliance.test_type == DP_TEST_LINK_TRAINING) {
- link_rate_index = intel_dp_rate_index(intel_dp->common_rates,
- intel_dp->num_common_rates,
- intel_dp->compliance.test_link_rate);
- if (link_rate_index >= 0)
- min_clock = max_clock = link_rate_index;
+ int index;
+
+ index = intel_dp_rate_index(intel_dp->common_rates,
+ intel_dp->num_common_rates,
+ intel_dp->compliance.test_link_rate);
+ if (index >= 0)
+ min_clock = max_clock = index;
min_lane_count = max_lane_count = intel_dp->compliance.test_lane_count;
}
DRM_DEBUG_KMS("DP link computation with max lane count %i "
--
2.1.4
* [PATCH v2 13/13] drm/i915/dp: use readb and writeb calls for single byte DPCD access
From: Jani Nikula @ 2017-02-03 14:19 UTC (permalink / raw)
To: intel-gfx; +Cc: jani.nikula, dhinakaran.pandiyan
This is what we have the readb and writeb variants for. Do some minor
return value and variable cleanup while at it.
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
drivers/gpu/drm/i915/intel_dp.c | 37 +++++++++++++++++--------------------
1 file changed, 17 insertions(+), 20 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 90ae95f2ecb2..fdd4abfc2380 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -3677,9 +3677,9 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp)
uint8_t frame_sync_cap;
dev_priv->psr.sink_support = true;
- drm_dp_dpcd_read(&intel_dp->aux,
- DP_SINK_DEVICE_AUX_FRAME_SYNC_CAP,
- &frame_sync_cap, 1);
+ drm_dp_dpcd_readb(&intel_dp->aux,
+ DP_SINK_DEVICE_AUX_FRAME_SYNC_CAP,
+ &frame_sync_cap);
dev_priv->psr.aux_frame_sync = frame_sync_cap ? true : false;
/* PSR2 needs frame sync as well */
dev_priv->psr.psr2_support = dev_priv->psr.aux_frame_sync;
@@ -3749,8 +3749,8 @@ intel_dp_get_dpcd(struct intel_dp *intel_dp)
intel_dp_set_sink_rates(intel_dp);
intel_dp_set_common_rates(intel_dp);
- if (drm_dp_dpcd_read(&intel_dp->aux, DP_SINK_COUNT,
- &intel_dp->sink_count, 1) < 0)
+ if (drm_dp_dpcd_readb(&intel_dp->aux, DP_SINK_COUNT,
+ &intel_dp->sink_count) <= 0)
return false;
/*
@@ -3787,7 +3787,7 @@ intel_dp_get_dpcd(struct intel_dp *intel_dp)
static bool
intel_dp_can_mst(struct intel_dp *intel_dp)
{
- u8 buf[1];
+ u8 mstm_cap;
if (!i915.enable_dp_mst)
return false;
@@ -3798,10 +3798,10 @@ intel_dp_can_mst(struct intel_dp *intel_dp)
if (intel_dp->dpcd[DP_DPCD_REV] < 0x12)
return false;
- if (drm_dp_dpcd_read(&intel_dp->aux, DP_MSTM_CAP, buf, 1) != 1)
+ if (drm_dp_dpcd_readb(&intel_dp->aux, DP_MSTM_CAP, &mstm_cap) != 1)
return false;
- return buf[0] & DP_MST_CAP;
+ return mstm_cap & DP_MST_CAP;
}
static void
@@ -3947,9 +3947,8 @@ int intel_dp_sink_crc(struct intel_dp *intel_dp, u8 *crc)
static bool
intel_dp_get_sink_irq(struct intel_dp *intel_dp, u8 *sink_irq_vector)
{
- return drm_dp_dpcd_read(&intel_dp->aux,
- DP_DEVICE_SERVICE_IRQ_VECTOR,
- sink_irq_vector, 1) == 1;
+ return drm_dp_dpcd_readb(&intel_dp->aux, DP_DEVICE_SERVICE_IRQ_VECTOR,
+ sink_irq_vector) == 1;
}
static bool
@@ -4012,13 +4011,13 @@ static uint8_t intel_dp_autotest_link_training(struct intel_dp *intel_dp)
static uint8_t intel_dp_autotest_video_pattern(struct intel_dp *intel_dp)
{
uint8_t test_pattern;
- uint16_t test_misc;
+ uint8_t test_misc;
__be16 h_width, v_height;
int status = 0;
/* Read the TEST_PATTERN (DP CTS 3.1.5) */
- status = drm_dp_dpcd_read(&intel_dp->aux, DP_TEST_PATTERN,
- &test_pattern, 1);
+ status = drm_dp_dpcd_readb(&intel_dp->aux, DP_TEST_PATTERN,
+ &test_pattern);
if (status <= 0) {
DRM_DEBUG_KMS("Test pattern read failed\n");
return DP_TEST_NAK;
@@ -4040,8 +4039,8 @@ static uint8_t intel_dp_autotest_video_pattern(struct intel_dp *intel_dp)
return DP_TEST_NAK;
}
- status = drm_dp_dpcd_read(&intel_dp->aux, DP_TEST_MISC0,
- &test_misc, 1);
+ status = drm_dp_dpcd_readb(&intel_dp->aux, DP_TEST_MISC0,
+ &test_misc);
if (status <= 0) {
DRM_DEBUG_KMS("TEST MISC read failed\n");
return DP_TEST_NAK;
@@ -4100,10 +4099,8 @@ static uint8_t intel_dp_autotest_edid(struct intel_dp *intel_dp)
*/
block += intel_connector->detect_edid->extensions;
- if (!drm_dp_dpcd_write(&intel_dp->aux,
- DP_TEST_EDID_CHECKSUM,
- &block->checksum,
- 1))
+ if (drm_dp_dpcd_writeb(&intel_dp->aux, DP_TEST_EDID_CHECKSUM,
+ block->checksum) <= 0)
DRM_DEBUG_KMS("Failed to write EDID checksum\n");
test_result = DP_TEST_ACK | DP_TEST_EDID_CHECKSUM_WRITE;
--
2.1.4
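For reference, the single-byte helpers this patch switches to are thin
wrappers around the block read/write calls. The sketch below is paraphrased
from include/drm/drm_dp_helper.h of roughly this era and may not match the
exact wrapper bodies in every kernel version:

static inline ssize_t drm_dp_dpcd_readb(struct drm_dp_aux *aux,
                                        unsigned int offset, u8 *valuep)
{
        /* single-byte convenience wrapper around drm_dp_dpcd_read() */
        return drm_dp_dpcd_read(aux, offset, valuep, 1);
}

static inline ssize_t drm_dp_dpcd_writeb(struct drm_dp_aux *aux,
                                         unsigned int offset, u8 value)
{
        /* single-byte convenience wrapper around drm_dp_dpcd_write() */
        return drm_dp_dpcd_write(aux, offset, &value, 1);
}

Both return the number of bytes transferred on success, i.e. 1 for a
successful single-byte access, which is what the "<= 0" and "!= 1" checks in
the hunks above rely on.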
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply related [flat|nested] 20+ messages in thread
* ✗ Fi.CI.BAT: failure for drm/i915/dp: link rate and lane count refactoring (rev2)
2017-02-03 14:19 [PATCH v2 00/13] drm/i915/dp: link rate and lane count refactoring Jani Nikula
` (12 preceding siblings ...)
2017-02-03 14:19 ` [PATCH v2 13/13] drm/i915/dp: use readb and writeb calls for single byte DPCD access Jani Nikula
@ 2017-02-03 16:56 ` Patchwork
2017-02-07 20:15 ` [PATCH v2 00/13] drm/i915/dp: link rate and lane count refactoring Manasi Navare
14 siblings, 0 replies; 20+ messages in thread
From: Patchwork @ 2017-02-03 16:56 UTC (permalink / raw)
To: Jani Nikula; +Cc: intel-gfx
== Series Details ==
Series: drm/i915/dp: link rate and lane count refactoring (rev2)
URL : https://patchwork.freedesktop.org/series/18359/
State : failure
== Summary ==
Series 18359v2 drm/i915/dp: link rate and lane count refactoring
https://patchwork.freedesktop.org/api/1.0/series/18359/revisions/2/mbox/
Test drv_hangman:
Subgroup error-state-basic:
pass -> INCOMPLETE (fi-hsw-4770)
Test drv_module_reload:
Subgroup basic-reload:
pass -> DMESG-WARN (fi-skl-6700hq)
Subgroup basic-reload-final:
pass -> DMESG-WARN (fi-skl-6700hq)
Subgroup basic-reload-inject:
pass -> DMESG-WARN (fi-skl-6700hq)
fi-bdw-5557u total:247 pass:233 dwarn:0 dfail:0 fail:0 skip:14
fi-bsw-n3050 total:247 pass:208 dwarn:0 dfail:0 fail:0 skip:39
fi-bxt-j4205 total:247 pass:225 dwarn:0 dfail:0 fail:0 skip:22
fi-bxt-t5700 total:78 pass:65 dwarn:0 dfail:0 fail:0 skip:12
fi-byt-j1900 total:247 pass:220 dwarn:0 dfail:0 fail:0 skip:27
fi-byt-n2820 total:247 pass:216 dwarn:0 dfail:0 fail:0 skip:31
fi-hsw-4770 total:5 pass:4 dwarn:0 dfail:0 fail:0 skip:0
fi-hsw-4770r total:247 pass:228 dwarn:0 dfail:0 fail:0 skip:19
fi-ilk-650 total:14 pass:13 dwarn:0 dfail:0 fail:0 skip:0
fi-ivb-3520m total:247 pass:226 dwarn:0 dfail:0 fail:0 skip:21
fi-ivb-3770 total:247 pass:226 dwarn:0 dfail:0 fail:0 skip:21
fi-kbl-7500u total:247 pass:224 dwarn:0 dfail:0 fail:2 skip:21
fi-skl-6260u total:247 pass:234 dwarn:0 dfail:0 fail:0 skip:13
fi-skl-6700hq total:247 pass:224 dwarn:3 dfail:0 fail:0 skip:20
fi-skl-6700k total:247 pass:222 dwarn:4 dfail:0 fail:0 skip:21
fi-skl-6770hq total:247 pass:234 dwarn:0 dfail:0 fail:0 skip:13
fi-snb-2520m total:247 pass:216 dwarn:0 dfail:0 fail:0 skip:31
fi-snb-2600 total:247 pass:215 dwarn:0 dfail:0 fail:0 skip:32
e7f4379a8f59bebb1bdefe0584e128dfdd27a86a drm-tip: 2017y-02m-03d-13h-03m-49s UTC integration manifest
39cbb96 drm/i915/dp: use readb and writeb calls for single byte DPCD access
7797cdc drm/i915/dp: localize link rate index variable more
2884d95 drm/i915/mst: use max link not sink lane count
5f7b654 drm/i915/dp: add functions for max common link rate and lane count
458bf0b drm/i915/dp: don't call the link parameters sink parameters
21e65cb drm/i915/dp: do not limit rate seek when not needed
471940a drm/i915/dp: cache common rates with sink rates
264ebd5 drm/i915/dp: use the sink rates array for max sink rates
c57d9bc drm/i915/dp: generate and cache sink rate array for all DP, not just eDP 1.4
6e710b4 drm/i915/dp: cache source rates at init
5a5c8cf1 drm/i915/dp: rename rate_to_index() to intel_dp_rate_index() and reuse
996f860 drm/i915/dp: return errors from rate_to_index()
5af7d1a drm/i915/dp: use known correct array size in rate_to_index
== Logs ==
For more details see: https://intel-gfx-ci.01.org/CI/Patchwork_3695/
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v2 05/13] drm/i915/dp: generate and cache sink rate array for all DP, not just eDP 1.4
2017-02-03 14:19 ` [PATCH v2 05/13] drm/i915/dp: generate and cache sink rate array for all DP, not just eDP 1.4 Jani Nikula
@ 2017-02-03 17:16 ` Ville Syrjälä
2017-02-04 9:10 ` Jani Nikula
0 siblings, 1 reply; 20+ messages in thread
From: Ville Syrjälä @ 2017-02-03 17:16 UTC (permalink / raw)
To: Jani Nikula; +Cc: intel-gfx, dhinakaran.pandiyan
On Fri, Feb 03, 2017 at 04:19:28PM +0200, Jani Nikula wrote:
> There is some conflation related to sink rates, making this change more
> complicated than it would otherwise have to be. There are three changes
> here that are rather difficult to split up:
>
> 1) Use the intel_dp->sink_rates array for all DP, not just eDP 1.4. We
> initialize it from DPCD on eDP 1.4 like before, but generate it based
> on DP_MAX_LINK_RATE on others. This reduces code complexity when we
> need to use the sink rates; they are all always in the sink_rates
> array.
>
> 2) Update the sink rate array whenever we read DPCD, and use the
> information from there. This increases code readability when we need
> the sink rates.
>
> 3) Disentangle fallback rate limiting from sink rates. In the code, the
> max rate is a dynamic property of the *link*, not of the *sink*. Do
> the limiting after intersecting the source and sink rates, which are
> static properties of the devices.
>
> This paves the way for follow-up refactoring that I've refrained from
> doing here to keep this change as simple as it possibly can be.
>
> v2: introduce use_rate_select and handle non-conforming eDP (Ville)
>
> Cc: Manasi Navare <manasi.d.navare@intel.com>
> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
> Signed-off-by: Jani Nikula <jani.nikula@intel.com>
> ---
> drivers/gpu/drm/i915/intel_dp.c | 79 ++++++++++++++++++---------
> drivers/gpu/drm/i915/intel_dp_link_training.c | 3 +-
> drivers/gpu/drm/i915/intel_drv.h | 5 +-
> 3 files changed, 59 insertions(+), 28 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> index 2378f0651cbd..66efe8044ac9 100644
> --- a/drivers/gpu/drm/i915/intel_dp.c
> +++ b/drivers/gpu/drm/i915/intel_dp.c
> @@ -133,6 +133,34 @@ static void vlv_steal_power_sequencer(struct drm_device *dev,
> enum pipe pipe);
> static void intel_dp_unset_edid(struct intel_dp *intel_dp);
>
> +static int intel_dp_num_rates(u8 link_bw_code)
> +{
> + switch (link_bw_code) {
> + default:
> + WARN(1, "invalid max DP link bw val %x, using 1.62Gbps\n",
> + link_bw_code);
> + case DP_LINK_BW_1_62:
> + return 1;
> + case DP_LINK_BW_2_7:
> + return 2;
> + case DP_LINK_BW_5_4:
> + return 3;
> + }
> +}
> +
> +/* update sink rates from dpcd */
> +static void intel_dp_set_sink_rates(struct intel_dp *intel_dp)
> +{
> + int i, num_rates;
> +
> + num_rates = intel_dp_num_rates(intel_dp->dpcd[DP_MAX_LINK_RATE]);
> +
> + for (i = 0; i < num_rates; i++)
> + intel_dp->sink_rates[i] = default_rates[i];
> +
> + intel_dp->num_sink_rates = num_rates;
> +}
> +
> static int
> intel_dp_max_link_bw(struct intel_dp *intel_dp)
> {
> @@ -205,19 +233,6 @@ intel_dp_downstream_max_dotclock(struct intel_dp *intel_dp)
> return max_dotclk;
> }
>
> -static int
> -intel_dp_sink_rates(struct intel_dp *intel_dp, const int **sink_rates)
> -{
> - if (intel_dp->num_sink_rates) {
> - *sink_rates = intel_dp->sink_rates;
> - return intel_dp->num_sink_rates;
> - }
> -
> - *sink_rates = default_rates;
> -
> - return (intel_dp->max_sink_link_bw >> 3) + 1;
> -}
> -
> static void
> intel_dp_set_source_rates(struct intel_dp *intel_dp)
> {
> @@ -286,15 +301,22 @@ static int intel_dp_rate_index(const int *rates, int len, int rate)
> static int intel_dp_common_rates(struct intel_dp *intel_dp,
> int *common_rates)
> {
> - const int *sink_rates;
> - int sink_len;
> + int max_rate = drm_dp_bw_code_to_link_rate(intel_dp->max_sink_link_bw);
> + int i, common_len;
>
> - sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
> + common_len = intersect_rates(intel_dp->source_rates,
> + intel_dp->num_source_rates,
> + intel_dp->sink_rates,
> + intel_dp->num_sink_rates,
> + common_rates);
>
> - return intersect_rates(intel_dp->source_rates,
> - intel_dp->num_source_rates,
> - sink_rates, sink_len,
> - common_rates);
> + /* Limit results by potentially reduced max rate */
> + for (i = 0; i < common_len; i++) {
> + if (common_rates[common_len - i - 1] <= max_rate)
> + return common_len - i;
> + }
> +
> + return 0;
> }
>
> static int intel_dp_link_rate_index(struct intel_dp *intel_dp,
> @@ -1502,8 +1524,7 @@ static void snprintf_int_array(char *str, size_t len,
>
> static void intel_dp_print_rates(struct intel_dp *intel_dp)
> {
> - const int *sink_rates;
> - int sink_len, common_len;
> + int common_len;
> int common_rates[DP_MAX_SUPPORTED_RATES];
> char str[128]; /* FIXME: too big for stack? */
>
> @@ -1514,8 +1535,8 @@ static void intel_dp_print_rates(struct intel_dp *intel_dp)
> intel_dp->source_rates, intel_dp->num_source_rates);
> DRM_DEBUG_KMS("source rates: %s\n", str);
>
> - sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
> - snprintf_int_array(str, sizeof(str), sink_rates, sink_len);
> + snprintf_int_array(str, sizeof(str),
> + intel_dp->sink_rates, intel_dp->num_sink_rates);
> DRM_DEBUG_KMS("sink rates: %s\n", str);
>
> common_len = intel_dp_common_rates(intel_dp, common_rates);
> @@ -1581,7 +1602,8 @@ int intel_dp_rate_select(struct intel_dp *intel_dp, int rate)
> void intel_dp_compute_rate(struct intel_dp *intel_dp, int port_clock,
> uint8_t *link_bw, uint8_t *rate_select)
> {
> - if (intel_dp->num_sink_rates) {
> + /* eDP 1.4 rate select method. */
> + if (intel_dp->use_rate_select) {
> *link_bw = 0;
> *rate_select =
> intel_dp_rate_select(intel_dp, port_clock);
> @@ -3718,6 +3740,11 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp)
> intel_dp->num_sink_rates = i;
> }
>
> + if (intel_dp->num_sink_rates)
> + intel_dp->use_rate_select = true;
> + else
> + intel_dp_set_sink_rates(intel_dp);
> +
> return true;
> }
>
> @@ -3728,6 +3755,8 @@ intel_dp_get_dpcd(struct intel_dp *intel_dp)
> if (!intel_dp_read_dpcd(intel_dp))
> return false;
>
> + intel_dp_set_sink_rates(intel_dp);
> +
Isn't that going to clobber whatever intel_edp_init_dpcd() set up?
> if (drm_dp_dpcd_read(&intel_dp->aux, DP_SINK_COUNT,
> &intel_dp->sink_count, 1) < 0)
> return false;
> diff --git a/drivers/gpu/drm/i915/intel_dp_link_training.c b/drivers/gpu/drm/i915/intel_dp_link_training.c
> index 0048b520baf7..694ad0ffb523 100644
> --- a/drivers/gpu/drm/i915/intel_dp_link_training.c
> +++ b/drivers/gpu/drm/i915/intel_dp_link_training.c
> @@ -146,7 +146,8 @@ intel_dp_link_training_clock_recovery(struct intel_dp *intel_dp)
> link_config[1] |= DP_LANE_COUNT_ENHANCED_FRAME_EN;
> drm_dp_dpcd_write(&intel_dp->aux, DP_LINK_BW_SET, link_config, 2);
>
> - if (intel_dp->num_sink_rates)
> + /* eDP 1.4 rate select method. */
> + if (!link_bw)
> drm_dp_dpcd_write(&intel_dp->aux, DP_LINK_RATE_SET,
> &rate_select, 1);
>
> diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
> index f132d4aea1ad..3a6f092a2ec3 100644
> --- a/drivers/gpu/drm/i915/intel_drv.h
> +++ b/drivers/gpu/drm/i915/intel_drv.h
> @@ -933,9 +933,10 @@ struct intel_dp {
> /* source rates */
> int num_source_rates;
> const int *source_rates;
> - /* sink rates as reported by DP_SUPPORTED_LINK_RATES */
> - uint8_t num_sink_rates;
> + /* sink rates as reported by DP_MAX_LINK_RATE/DP_SUPPORTED_LINK_RATES */
> + int num_sink_rates;
> int sink_rates[DP_MAX_SUPPORTED_RATES];
> + bool use_rate_select;
> /* Max lane count for the sink as per DPCD registers */
> uint8_t max_sink_lane_count;
> /* Max link BW for the sink as per DPCD registers */
> --
> 2.1.4
--
Ville Syrjälä
Intel OTC
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v2 13/13] drm/i915/dp: use readb and writeb calls for single byte DPCD access
2017-02-03 14:19 ` [PATCH v2 13/13] drm/i915/dp: use readb and writeb calls for single byte DPCD access Jani Nikula
@ 2017-02-03 17:25 ` Ville Syrjälä
0 siblings, 0 replies; 20+ messages in thread
From: Ville Syrjälä @ 2017-02-03 17:25 UTC (permalink / raw)
To: Jani Nikula; +Cc: intel-gfx, dhinakaran.pandiyan
On Fri, Feb 03, 2017 at 04:19:36PM +0200, Jani Nikula wrote:
> This is what we have the readb and writeb variants for. Do some minor
> return value and variable cleanup while at it.
>
> Cc: Manasi Navare <manasi.d.navare@intel.com>
> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
> Signed-off-by: Jani Nikula <jani.nikula@intel.com>
> ---
> drivers/gpu/drm/i915/intel_dp.c | 37 +++++++++++++++++--------------------
> 1 file changed, 17 insertions(+), 20 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> index 90ae95f2ecb2..fdd4abfc2380 100644
> --- a/drivers/gpu/drm/i915/intel_dp.c
> +++ b/drivers/gpu/drm/i915/intel_dp.c
> @@ -3677,9 +3677,9 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp)
> uint8_t frame_sync_cap;
>
> dev_priv->psr.sink_support = true;
> - drm_dp_dpcd_read(&intel_dp->aux,
> - DP_SINK_DEVICE_AUX_FRAME_SYNC_CAP,
> - &frame_sync_cap, 1);
> + drm_dp_dpcd_readb(&intel_dp->aux,
> + DP_SINK_DEVICE_AUX_FRAME_SYNC_CAP,
> + &frame_sync_cap);
> dev_priv->psr.aux_frame_sync = frame_sync_cap ? true : false;
> /* PSR2 needs frame sync as well */
> dev_priv->psr.psr2_support = dev_priv->psr.aux_frame_sync;
> @@ -3749,8 +3749,8 @@ intel_dp_get_dpcd(struct intel_dp *intel_dp)
> intel_dp_set_sink_rates(intel_dp);
> intel_dp_set_common_rates(intel_dp);
>
> - if (drm_dp_dpcd_read(&intel_dp->aux, DP_SINK_COUNT,
> - &intel_dp->sink_count, 1) < 0)
> + if (drm_dp_dpcd_readb(&intel_dp->aux, DP_SINK_COUNT,
> + &intel_dp->sink_count) <= 0)
As an additional change it would be nice to have a local variable
for this since we'll do '->sink_count = ->sink_count & SOMETHING'
further down. IMO it's somewhat confusing to first read the unmasked
value directly into the final location and then mask in place.
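A minimal sketch of that suggestion, assuming the masking further down uses
the existing DP_GET_SINK_COUNT() helper from drm_dp_helper.h (paraphrased,
not the final patch):

        u8 count;

        if (drm_dp_dpcd_readb(&intel_dp->aux, DP_SINK_COUNT, &count) <= 0)
                return false;

        /* mask the raw DPCD value in a local before storing it */
        intel_dp->sink_count = DP_GET_SINK_COUNT(count);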
> return false;
>
> /*
> @@ -3787,7 +3787,7 @@ intel_dp_get_dpcd(struct intel_dp *intel_dp)
> static bool
> intel_dp_can_mst(struct intel_dp *intel_dp)
> {
> - u8 buf[1];
> + u8 mstm_cap;
>
> if (!i915.enable_dp_mst)
> return false;
> @@ -3798,10 +3798,10 @@ intel_dp_can_mst(struct intel_dp *intel_dp)
> if (intel_dp->dpcd[DP_DPCD_REV] < 0x12)
> return false;
>
> - if (drm_dp_dpcd_read(&intel_dp->aux, DP_MSTM_CAP, buf, 1) != 1)
> + if (drm_dp_dpcd_readb(&intel_dp->aux, DP_MSTM_CAP, &mstm_cap) != 1)
> return false;
>
> - return buf[0] & DP_MST_CAP;
> + return mstm_cap & DP_MST_CAP;
> }
>
> static void
> @@ -3947,9 +3947,8 @@ int intel_dp_sink_crc(struct intel_dp *intel_dp, u8 *crc)
> static bool
> intel_dp_get_sink_irq(struct intel_dp *intel_dp, u8 *sink_irq_vector)
> {
> - return drm_dp_dpcd_read(&intel_dp->aux,
> - DP_DEVICE_SERVICE_IRQ_VECTOR,
> - sink_irq_vector, 1) == 1;
> + return drm_dp_dpcd_readb(&intel_dp->aux, DP_DEVICE_SERVICE_IRQ_VECTOR,
> + sink_irq_vector) == 1;
> }
>
> static bool
> @@ -4012,13 +4011,13 @@ static uint8_t intel_dp_autotest_link_training(struct intel_dp *intel_dp)
> static uint8_t intel_dp_autotest_video_pattern(struct intel_dp *intel_dp)
> {
> uint8_t test_pattern;
> - uint16_t test_misc;
> + uint8_t test_misc;
> __be16 h_width, v_height;
> int status = 0;
>
> /* Read the TEST_PATTERN (DP CTS 3.1.5) */
> - status = drm_dp_dpcd_read(&intel_dp->aux, DP_TEST_PATTERN,
> - &test_pattern, 1);
> + status = drm_dp_dpcd_readb(&intel_dp->aux, DP_TEST_PATTERN,
> + &test_pattern);
> if (status <= 0) {
> DRM_DEBUG_KMS("Test pattern read failed\n");
> return DP_TEST_NAK;
> @@ -4040,8 +4039,8 @@ static uint8_t intel_dp_autotest_video_pattern(struct intel_dp *intel_dp)
> return DP_TEST_NAK;
> }
>
> - status = drm_dp_dpcd_read(&intel_dp->aux, DP_TEST_MISC0,
> - &test_misc, 1);
> + status = drm_dp_dpcd_readb(&intel_dp->aux, DP_TEST_MISC0,
> + &test_misc);
> if (status <= 0) {
> DRM_DEBUG_KMS("TEST MISC read failed\n");
> return DP_TEST_NAK;
> @@ -4100,10 +4099,8 @@ static uint8_t intel_dp_autotest_edid(struct intel_dp *intel_dp)
> */
> block += intel_connector->detect_edid->extensions;
>
> - if (!drm_dp_dpcd_write(&intel_dp->aux,
> - DP_TEST_EDID_CHECKSUM,
> - &block->checksum,
> - 1))
> + if (drm_dp_dpcd_writeb(&intel_dp->aux, DP_TEST_EDID_CHECKSUM,
> + block->checksum) <= 0)
> DRM_DEBUG_KMS("Failed to write EDID checksum\n");
>
> test_result = DP_TEST_ACK | DP_TEST_EDID_CHECKSUM_WRITE;
> --
> 2.1.4
--
Ville Syrjälä
Intel OTC
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v2 05/13] drm/i915/dp: generate and cache sink rate array for all DP, not just eDP 1.4
2017-02-03 17:16 ` Ville Syrjälä
@ 2017-02-04 9:10 ` Jani Nikula
0 siblings, 0 replies; 20+ messages in thread
From: Jani Nikula @ 2017-02-04 9:10 UTC (permalink / raw)
To: Ville Syrjälä; +Cc: intel-gfx, dhinakaran.pandiyan
On Fri, 03 Feb 2017, Ville Syrjälä <ville.syrjala@linux.intel.com> wrote:
> On Fri, Feb 03, 2017 at 04:19:28PM +0200, Jani Nikula wrote:
>> There is some conflation related to sink rates, making this change more
>> complicated than it would otherwise have to be. There are three changes
>> here that are rather difficult to split up:
>>
>> 1) Use the intel_dp->sink_rates array for all DP, not just eDP 1.4. We
>> initialize it from DPCD on eDP 1.4 like before, but generate it based
>> on DP_MAX_LINK_RATE on others. This reduces code complexity when we
>> need to use the sink rates; they are all always in the sink_rates
>> array.
>>
>> 2) Update the sink rate array whenever we read DPCD, and use the
>> information from there. This increases code readability when we need
>> the sink rates.
>>
>> 3) Disentangle fallback rate limiting from sink rates. In the code, the
>> max rate is a dynamic property of the *link*, not of the *sink*. Do
>> the limiting after intersecting the source and sink rates, which are
>> static properties of the devices.
>>
>> This paves the way for follow-up refactoring that I've refrained from
>> doing here to keep this change as simple as it possibly can be.
>>
>> v2: introduce use_rate_select and handle non-conforming eDP (Ville)
>>
>> Cc: Manasi Navare <manasi.d.navare@intel.com>
>> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
>> Signed-off-by: Jani Nikula <jani.nikula@intel.com>
>> ---
>> drivers/gpu/drm/i915/intel_dp.c | 79 ++++++++++++++++++---------
>> drivers/gpu/drm/i915/intel_dp_link_training.c | 3 +-
>> drivers/gpu/drm/i915/intel_drv.h | 5 +-
>> 3 files changed, 59 insertions(+), 28 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
>> index 2378f0651cbd..66efe8044ac9 100644
>> --- a/drivers/gpu/drm/i915/intel_dp.c
>> +++ b/drivers/gpu/drm/i915/intel_dp.c
>> @@ -133,6 +133,34 @@ static void vlv_steal_power_sequencer(struct drm_device *dev,
>> enum pipe pipe);
>> static void intel_dp_unset_edid(struct intel_dp *intel_dp);
>>
>> +static int intel_dp_num_rates(u8 link_bw_code)
>> +{
>> + switch (link_bw_code) {
>> + default:
>> + WARN(1, "invalid max DP link bw val %x, using 1.62Gbps\n",
>> + link_bw_code);
>> + case DP_LINK_BW_1_62:
>> + return 1;
>> + case DP_LINK_BW_2_7:
>> + return 2;
>> + case DP_LINK_BW_5_4:
>> + return 3;
>> + }
>> +}
>> +
>> +/* update sink rates from dpcd */
>> +static void intel_dp_set_sink_rates(struct intel_dp *intel_dp)
>> +{
>> + int i, num_rates;
>> +
>> + num_rates = intel_dp_num_rates(intel_dp->dpcd[DP_MAX_LINK_RATE]);
>> +
>> + for (i = 0; i < num_rates; i++)
>> + intel_dp->sink_rates[i] = default_rates[i];
>> +
>> + intel_dp->num_sink_rates = num_rates;
>> +}
>> +
>> static int
>> intel_dp_max_link_bw(struct intel_dp *intel_dp)
>> {
>> @@ -205,19 +233,6 @@ intel_dp_downstream_max_dotclock(struct intel_dp *intel_dp)
>> return max_dotclk;
>> }
>>
>> -static int
>> -intel_dp_sink_rates(struct intel_dp *intel_dp, const int **sink_rates)
>> -{
>> - if (intel_dp->num_sink_rates) {
>> - *sink_rates = intel_dp->sink_rates;
>> - return intel_dp->num_sink_rates;
>> - }
>> -
>> - *sink_rates = default_rates;
>> -
>> - return (intel_dp->max_sink_link_bw >> 3) + 1;
>> -}
>> -
>> static void
>> intel_dp_set_source_rates(struct intel_dp *intel_dp)
>> {
>> @@ -286,15 +301,22 @@ static int intel_dp_rate_index(const int *rates, int len, int rate)
>> static int intel_dp_common_rates(struct intel_dp *intel_dp,
>> int *common_rates)
>> {
>> - const int *sink_rates;
>> - int sink_len;
>> + int max_rate = drm_dp_bw_code_to_link_rate(intel_dp->max_sink_link_bw);
>> + int i, common_len;
>>
>> - sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
>> + common_len = intersect_rates(intel_dp->source_rates,
>> + intel_dp->num_source_rates,
>> + intel_dp->sink_rates,
>> + intel_dp->num_sink_rates,
>> + common_rates);
>>
>> - return intersect_rates(intel_dp->source_rates,
>> - intel_dp->num_source_rates,
>> - sink_rates, sink_len,
>> - common_rates);
>> + /* Limit results by potentially reduced max rate */
>> + for (i = 0; i < common_len; i++) {
>> + if (common_rates[common_len - i - 1] <= max_rate)
>> + return common_len - i;
>> + }
>> +
>> + return 0;
>> }
>>
>> static int intel_dp_link_rate_index(struct intel_dp *intel_dp,
>> @@ -1502,8 +1524,7 @@ static void snprintf_int_array(char *str, size_t len,
>>
>> static void intel_dp_print_rates(struct intel_dp *intel_dp)
>> {
>> - const int *sink_rates;
>> - int sink_len, common_len;
>> + int common_len;
>> int common_rates[DP_MAX_SUPPORTED_RATES];
>> char str[128]; /* FIXME: too big for stack? */
>>
>> @@ -1514,8 +1535,8 @@ static void intel_dp_print_rates(struct intel_dp *intel_dp)
>> intel_dp->source_rates, intel_dp->num_source_rates);
>> DRM_DEBUG_KMS("source rates: %s\n", str);
>>
>> - sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
>> - snprintf_int_array(str, sizeof(str), sink_rates, sink_len);
>> + snprintf_int_array(str, sizeof(str),
>> + intel_dp->sink_rates, intel_dp->num_sink_rates);
>> DRM_DEBUG_KMS("sink rates: %s\n", str);
>>
>> common_len = intel_dp_common_rates(intel_dp, common_rates);
>> @@ -1581,7 +1602,8 @@ int intel_dp_rate_select(struct intel_dp *intel_dp, int rate)
>> void intel_dp_compute_rate(struct intel_dp *intel_dp, int port_clock,
>> uint8_t *link_bw, uint8_t *rate_select)
>> {
>> - if (intel_dp->num_sink_rates) {
>> + /* eDP 1.4 rate select method. */
>> + if (intel_dp->use_rate_select) {
>> *link_bw = 0;
>> *rate_select =
>> intel_dp_rate_select(intel_dp, port_clock);
>> @@ -3718,6 +3740,11 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp)
>> intel_dp->num_sink_rates = i;
>> }
>>
>> + if (intel_dp->num_sink_rates)
>> + intel_dp->use_rate_select = true;
>> + else
>> + intel_dp_set_sink_rates(intel_dp);
>> +
>> return true;
>> }
>>
>> @@ -3728,6 +3755,8 @@ intel_dp_get_dpcd(struct intel_dp *intel_dp)
>> if (!intel_dp_read_dpcd(intel_dp))
>> return false;
>>
>> + intel_dp_set_sink_rates(intel_dp);
>> +
>
> Isn't that going to clobber whatever intel_edp_init_dpcd() set up?
I think you're right, I missed the intel_dp_hpd_pulse ->
intel_dp_short_pulse -> intel_dp_get_dpcd path. Our eDP paths have been
separated from the DP paths quite a bit lately, but not completely.
I guess I'll need to make intel_dp_set_sink_rates() include all the eDP
stuff from intel_edp_init_dpcd(), to DTRT regardless of when it's
called.
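One possible shape for that, as a minimal sketch only: guard the generated
list on use_rate_select, reusing the names from the patch. The actual
follow-up may instead fold the eDP 1.4 handling into the function itself, as
described above.

/* hypothetical sketch, not the posted fix */
static void intel_dp_set_sink_rates(struct intel_dp *intel_dp)
{
        int i, num_rates;

        /* eDP 1.4: rates were already read from DP_SUPPORTED_LINK_RATES */
        if (intel_dp->use_rate_select)
                return;

        num_rates = intel_dp_num_rates(intel_dp->dpcd[DP_MAX_LINK_RATE]);

        for (i = 0; i < num_rates; i++)
                intel_dp->sink_rates[i] = default_rates[i];

        intel_dp->num_sink_rates = num_rates;
}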
BR,
Jani.
>
>> if (drm_dp_dpcd_read(&intel_dp->aux, DP_SINK_COUNT,
>> &intel_dp->sink_count, 1) < 0)
>> return false;
>> diff --git a/drivers/gpu/drm/i915/intel_dp_link_training.c b/drivers/gpu/drm/i915/intel_dp_link_training.c
>> index 0048b520baf7..694ad0ffb523 100644
>> --- a/drivers/gpu/drm/i915/intel_dp_link_training.c
>> +++ b/drivers/gpu/drm/i915/intel_dp_link_training.c
>> @@ -146,7 +146,8 @@ intel_dp_link_training_clock_recovery(struct intel_dp *intel_dp)
>> link_config[1] |= DP_LANE_COUNT_ENHANCED_FRAME_EN;
>> drm_dp_dpcd_write(&intel_dp->aux, DP_LINK_BW_SET, link_config, 2);
>>
>> - if (intel_dp->num_sink_rates)
>> + /* eDP 1.4 rate select method. */
>> + if (!link_bw)
>> drm_dp_dpcd_write(&intel_dp->aux, DP_LINK_RATE_SET,
>> &rate_select, 1);
>>
>> diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
>> index f132d4aea1ad..3a6f092a2ec3 100644
>> --- a/drivers/gpu/drm/i915/intel_drv.h
>> +++ b/drivers/gpu/drm/i915/intel_drv.h
>> @@ -933,9 +933,10 @@ struct intel_dp {
>> /* source rates */
>> int num_source_rates;
>> const int *source_rates;
>> - /* sink rates as reported by DP_SUPPORTED_LINK_RATES */
>> - uint8_t num_sink_rates;
>> + /* sink rates as reported by DP_MAX_LINK_RATE/DP_SUPPORTED_LINK_RATES */
>> + int num_sink_rates;
>> int sink_rates[DP_MAX_SUPPORTED_RATES];
>> + bool use_rate_select;
>> /* Max lane count for the sink as per DPCD registers */
>> uint8_t max_sink_lane_count;
>> /* Max link BW for the sink as per DPCD registers */
>> --
>> 2.1.4
--
Jani Nikula, Intel Open Source Technology Center
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v2 00/13] drm/i915/dp: link rate and lane count refactoring
2017-02-03 14:19 [PATCH v2 00/13] drm/i915/dp: link rate and lane count refactoring Jani Nikula
` (13 preceding siblings ...)
2017-02-03 16:56 ` ✗ Fi.CI.BAT: failure for drm/i915/dp: link rate and lane count refactoring (rev2) Patchwork
@ 2017-02-07 20:15 ` Manasi Navare
14 siblings, 0 replies; 20+ messages in thread
From: Manasi Navare @ 2017-02-07 20:15 UTC (permalink / raw)
To: Jani Nikula; +Cc: intel-gfx, dhinakaran.pandiyan
Hi Jani,
Thanks for this patch series. This definitely makes the handling of
lane count and link rate cleaner when dealing with link failures.
I have tested these patches with a compliance device along with my pending
DRM link-status patches, and the fallback works as expected.
They do not solve the problem of the max link rate/lane count getting reset
through the ->detect callback from fill_modes(), but that will be fixed in a
separate patch.
As far as these patches go, the fallback and compliance behave properly:
Tested-by: Manasi Navare <manasi.d.navare@intel.com>
Manasi
On Fri, Feb 03, 2017 at 04:19:23PM +0200, Jani Nikula wrote:
> v2 of [1], rebased and review addressed.
>
> BR,
> Jani.
>
>
> [1] http://mid.mail-archive.com/cover.1485459621.git.jani.nikula@intel.com
>
>
> Jani Nikula (13):
> drm/i915/dp: use known correct array size in rate_to_index
> drm/i915/dp: return errors from rate_to_index()
> drm/i915/dp: rename rate_to_index() to intel_dp_rate_index() and reuse
> drm/i915/dp: cache source rates at init
> drm/i915/dp: generate and cache sink rate array for all DP, not just
> eDP 1.4
> drm/i915/dp: use the sink rates array for max sink rates
> drm/i915/dp: cache common rates with sink rates
> drm/i915/dp: do not limit rate seek when not needed
> drm/i915/dp: don't call the link parameters sink parameters
> drm/i915/dp: add functions for max common link rate and lane count
> drm/i915/mst: use max link not sink lane count
> drm/i915/dp: localize link rate index variable more
> drm/i915/dp: use readb and writeb calls for single byte DPCD access
>
> drivers/gpu/drm/i915/intel_dp.c | 284 ++++++++++++++------------
> drivers/gpu/drm/i915/intel_dp_link_training.c | 3 +-
> drivers/gpu/drm/i915/intel_dp_mst.c | 4 +-
> drivers/gpu/drm/i915/intel_drv.h | 20 +-
> 4 files changed, 173 insertions(+), 138 deletions(-)
>
> --
> 2.1.4
>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v2 12/13] drm/i915/dp: localize link rate index variable more
2017-02-03 14:19 ` [PATCH v2 12/13] drm/i915/dp: localize link rate index variable more Jani Nikula
@ 2017-02-07 20:17 ` Manasi Navare
0 siblings, 0 replies; 20+ messages in thread
From: Manasi Navare @ 2017-02-07 20:17 UTC (permalink / raw)
To: Jani Nikula; +Cc: intel-gfx, dhinakaran.pandiyan
This has been tested to correctly execute the compliance request.
Reviewed-by: Manasi Navare <manasi.d.navare@intel.com>
Manasi
On Fri, Feb 03, 2017 at 04:19:35PM +0200, Jani Nikula wrote:
> Localize link_rate_index to the if block, and rename to just index to
> reduce indent.
>
> Cc: Manasi Navare <manasi.d.navare@intel.com>
> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
> Signed-off-by: Jani Nikula <jani.nikula@intel.com>
> ---
> drivers/gpu/drm/i915/intel_dp.c | 13 +++++++------
> 1 file changed, 7 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> index 58ec70f316c5..90ae95f2ecb2 100644
> --- a/drivers/gpu/drm/i915/intel_dp.c
> +++ b/drivers/gpu/drm/i915/intel_dp.c
> @@ -1640,7 +1640,6 @@ intel_dp_compute_config(struct intel_encoder *encoder,
> /* Conveniently, the link BW constants become indices with a shift...*/
> int min_clock = 0;
> int max_clock;
> - int link_rate_index;
> int bpp, mode_rate;
> int link_avail, link_clock;
> int common_len;
> @@ -1684,11 +1683,13 @@ intel_dp_compute_config(struct intel_encoder *encoder,
>
> /* Use values requested by Compliance Test Request */
> if (intel_dp->compliance.test_type == DP_TEST_LINK_TRAINING) {
> - link_rate_index = intel_dp_rate_index(intel_dp->common_rates,
> - intel_dp->num_common_rates,
> - intel_dp->compliance.test_link_rate);
> - if (link_rate_index >= 0)
> - min_clock = max_clock = link_rate_index;
> + int index;
> +
> + index = intel_dp_rate_index(intel_dp->common_rates,
> + intel_dp->num_common_rates,
> + intel_dp->compliance.test_link_rate);
> + if (index >= 0)
> + min_clock = max_clock = index;
> min_lane_count = max_lane_count = intel_dp->compliance.test_lane_count;
> }
> DRM_DEBUG_KMS("DP link computation with max lane count %i "
> --
> 2.1.4
>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply [flat|nested] 20+ messages in thread
end of thread, other threads: [~2017-02-07 20:18 UTC | newest]
Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-02-03 14:19 [PATCH v2 00/13] drm/i915/dp: link rate and lane count refactoring Jani Nikula
2017-02-03 14:19 ` [PATCH v2 01/13] drm/i915/dp: use known correct array size in rate_to_index Jani Nikula
2017-02-03 14:19 ` [PATCH v2 02/13] drm/i915/dp: return errors from rate_to_index() Jani Nikula
2017-02-03 14:19 ` [PATCH v2 03/13] drm/i915/dp: rename rate_to_index() to intel_dp_rate_index() and reuse Jani Nikula
2017-02-03 14:19 ` [PATCH v2 04/13] drm/i915/dp: cache source rates at init Jani Nikula
2017-02-03 14:19 ` [PATCH v2 05/13] drm/i915/dp: generate and cache sink rate array for all DP, not just eDP 1.4 Jani Nikula
2017-02-03 17:16 ` Ville Syrjälä
2017-02-04 9:10 ` Jani Nikula
2017-02-03 14:19 ` [PATCH v2 06/13] drm/i915/dp: use the sink rates array for max sink rates Jani Nikula
2017-02-03 14:19 ` [PATCH v2 07/13] drm/i915/dp: cache common rates with " Jani Nikula
2017-02-03 14:19 ` [PATCH v2 08/13] drm/i915/dp: do not limit rate seek when not needed Jani Nikula
2017-02-03 14:19 ` [PATCH v2 09/13] drm/i915/dp: don't call the link parameters sink parameters Jani Nikula
2017-02-03 14:19 ` [PATCH v2 10/13] drm/i915/dp: add functions for max common link rate and lane count Jani Nikula
2017-02-03 14:19 ` [PATCH v2 11/13] drm/i915/mst: use max link not sink " Jani Nikula
2017-02-03 14:19 ` [PATCH v2 12/13] drm/i915/dp: localize link rate index variable more Jani Nikula
2017-02-07 20:17 ` Manasi Navare
2017-02-03 14:19 ` [PATCH v2 13/13] drm/i915/dp: use readb and writeb calls for single byte DPCD access Jani Nikula
2017-02-03 17:25 ` Ville Syrjälä
2017-02-03 16:56 ` ✗ Fi.CI.BAT: failure for drm/i915/dp: link rate and lane count refactoring (rev2) Patchwork
2017-02-07 20:15 ` [PATCH v2 00/13] drm/i915/dp: link rate and lane count refactoring Manasi Navare