From: "Taylor, Clinton A" <clinton.a.taylor@intel.com>
To: "Sousa, Gustavo" <gustavo.sousa@intel.com>,
"intel-gfx@lists.freedesktop.org"
<intel-gfx@lists.freedesktop.org>
Subject: Re: [Intel-gfx] [PATCH 1/4] drm/i915/cx0: Add intel_cx0_get_owned_lane_mask()
Date: Wed, 2 Aug 2023 21:41:27 +0000 [thread overview]
Message-ID: <879b5cef429c19e943913a59a61aae87d41d92df.camel@intel.com> (raw)
In-Reply-To: <20230725212716.3060259-2-gustavo.sousa@intel.com>
On Tue, 2023-07-25 at 18:27 -0300, Gustavo Sousa wrote:
> There are more parts of C10/C20 programming that need to take owned
> lanes into account. Define the function intel_cx0_get_owned_lane_mask()
> and use it. There will be new users of that function in upcoming
> changes.
>
> BSpec: 64539
> Signed-off-by: Gustavo Sousa <gustavo.sousa@intel.com>
> ---
> drivers/gpu/drm/i915/display/intel_cx0_phy.c | 44 ++++++++++++--------
> 1 file changed, 27 insertions(+), 17 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/display/intel_cx0_phy.c b/drivers/gpu/drm/i915/display/intel_cx0_phy.c
> index 1b00ef2c6185..b903ceb0b56a 100644
> --- a/drivers/gpu/drm/i915/display/intel_cx0_phy.c
> +++ b/drivers/gpu/drm/i915/display/intel_cx0_phy.c
> @@ -46,6 +46,22 @@ static int lane_mask_to_lane(u8 lane_mask)
> return ilog2(lane_mask);
> }
>
> +static u8 intel_cx0_get_owned_lane_mask(struct drm_i915_private *i915,
> + struct intel_encoder *encoder)
> +{
> + struct intel_digital_port *dig_port = enc_to_dig_port(encoder);
> +
> + if (!intel_tc_port_in_dp_alt_mode(dig_port))
> + return INTEL_CX0_BOTH_LANES;
> +
> + /*
> + * In DP-alt with pin assignment D, only PHY lane 0 is owned
> + * by display and lane 1 is owned by USB.
> + */
lane_reversal is not being handled here. Do we need to take lane_reversal into account
when pin assignment D is being used?
-Clint
> + return intel_tc_port_fia_max_lane_count(dig_port) > 2
> + ? INTEL_CX0_BOTH_LANES : INTEL_CX0_LANE0;
> +}
> +
> static void
> assert_dc_off(struct drm_i915_private *i915)
> {
> @@ -2534,17 +2550,15 @@ static void intel_cx0_phy_lane_reset(struct drm_i915_private *i915,
> {
> enum port port = encoder->port;
> enum phy phy = intel_port_to_phy(i915, port);
> - bool both_lanes = intel_tc_port_fia_max_lane_count(enc_to_dig_port(encoder)) > 2;
> - u8 lane_mask = lane_reversal ? INTEL_CX0_LANE1 :
> - INTEL_CX0_LANE0;
> - u32 lane_pipe_reset = both_lanes ?
> - XELPDP_LANE_PIPE_RESET(0) |
> - XELPDP_LANE_PIPE_RESET(1) :
> - XELPDP_LANE_PIPE_RESET(0);
> - u32 lane_phy_current_status = both_lanes ?
> - XELPDP_LANE_PHY_CURRENT_STATUS(0) |
> - XELPDP_LANE_PHY_CURRENT_STATUS(1) :
> - XELPDP_LANE_PHY_CURRENT_STATUS(0);
> + u8 owned_lane_mask = intel_cx0_get_owned_lane_mask(i915, encoder);
> + u8 lane_mask = lane_reversal ? INTEL_CX0_LANE1 : INTEL_CX0_LANE0;
> + u32 lane_pipe_reset = owned_lane_mask == INTEL_CX0_BOTH_LANES
> + ? XELPDP_LANE_PIPE_RESET(0) | XELPDP_LANE_PIPE_RESET(1)
> + : XELPDP_LANE_PIPE_RESET(0);
> + u32 lane_phy_current_status = owned_lane_mask == INTEL_CX0_BOTH_LANES
> + ? (XELPDP_LANE_PHY_CURRENT_STATUS(0) |
> + XELPDP_LANE_PHY_CURRENT_STATUS(1))
> + : XELPDP_LANE_PHY_CURRENT_STATUS(0);
>
> if (__intel_de_wait_for_register(i915, XELPDP_PORT_BUF_CTL1(port),
> XELPDP_PORT_BUF_SOC_PHY_READY,
> @@ -2564,15 +2578,11 @@ static void intel_cx0_phy_lane_reset(struct drm_i915_private *i915,
> phy_name(phy), XELPDP_PORT_RESET_START_TIMEOUT_US);
>
> intel_de_rmw(i915, XELPDP_PORT_CLOCK_CTL(port),
> - intel_cx0_get_pclk_refclk_request(both_lanes ?
> - INTEL_CX0_BOTH_LANES :
> - INTEL_CX0_LANE0),
> + intel_cx0_get_pclk_refclk_request(owned_lane_mask),
> intel_cx0_get_pclk_refclk_request(lane_mask));
>
> if (__intel_de_wait_for_register(i915, XELPDP_PORT_CLOCK_CTL(port),
> - intel_cx0_get_pclk_refclk_ack(both_lanes ?
> - INTEL_CX0_BOTH_LANES :
> - INTEL_CX0_LANE0),
> + intel_cx0_get_pclk_refclk_ack(owned_lane_mask),
> intel_cx0_get_pclk_refclk_ack(lane_mask),
> XELPDP_REFCLK_ENABLE_TIMEOUT_US, 0, NULL))
> drm_warn(&i915->drm, "PHY %c failed to request refclk after %dus.\n",
Thread overview: 16+ messages
2023-07-25 21:27 [Intel-gfx] [PATCH 0/4] Fix C10/C20 implementation w.r.t. owned PHY lanes Gustavo Sousa
2023-07-25 21:27 ` [Intel-gfx] [PATCH 1/4] drm/i915/cx0: Add intel_cx0_get_owned_lane_mask() Gustavo Sousa
2023-08-02 21:41 ` Taylor, Clinton A [this message]
2023-08-03 14:02 ` Gustavo Sousa
2023-08-08 10:43 ` Kahola, Mika
2023-07-25 21:27 ` [Intel-gfx] [PATCH 2/4] drm/i915: Simplify intel_cx0_program_phy_lane() with loop Gustavo Sousa
2023-07-31 11:04 ` Jani Nikula
2023-07-31 12:58 ` Gustavo Sousa
2023-07-31 15:14 ` Jani Nikula
2023-07-31 16:03 ` Gustavo Sousa
2023-07-25 21:27 ` [Intel-gfx] [PATCH 3/4] drm/i915/cx0: Enable/disable TX only for owned PHY lanes Gustavo Sousa
2023-08-14 9:25 ` Kahola, Mika
2023-07-25 21:27 ` [Intel-gfx] [PATCH 4/4] drm/i915/cx0: Program vswing only for owned lanes Gustavo Sousa
2023-08-14 9:27 ` Kahola, Mika
2023-07-25 22:34 ` [Intel-gfx] ✓ Fi.CI.BAT: success for Fix C10/C20 implementation w.r.t. owned PHY lanes Patchwork
2023-07-26 4:56 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork