From: Manasi Navare <manasi.d.navare@intel.com>
To: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Cc: jani.nikula@intel.com, intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH 1/2] drm/dp/i915: Fix DP link rate math
Date: Thu, 10 Nov 2016 15:55:49 -0800
Message-ID: <20161110235549.GB4623@intel.com>
In-Reply-To: <1478755950-2778-1-git-send-email-dhinakaran.pandiyan@intel.com>
On Wed, Nov 09, 2016 at 09:32:29PM -0800, Dhinakaran Pandiyan wrote:
> We store DP link rates as link clock frequencies in kHz, just like all
> other clock values. But DP link rates in the DP spec are expressed in
> Gbps/lane, which seems to have led to some confusion.
>
> E.g., for HBR2:
> Max. data rate = 5.4 Gbps/lane x 4 lanes x 8/10 x 1/8 = 2160000 kBps
> where 8/10 accounts for the channel encoding and 1/8 converts bits to bytes
>
> Using the link clock frequency, as we do:
> Max. data rate = 540000 kHz * 4 lanes = 2160000 kSymbols/s
> Because each symbol carries 8 bits of data, this is 2160000 kBps, and
> there is no need to account for channel encoding here.
>
> But currently we compute 540000 kHz * 4 lanes * (8/10) = 1728000 kBps
>
> Similarly, when computing the link bandwidth required for a mode,
> there is a mysterious 1/10 term.
> This should simply be pixel_clock (kHz) * bpp * 1/8, giving the final
> result in kBps.
>
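To put concrete numbers on the arithmetic above, here is a minimal
standalone userspace sketch (illustration only, not part of the patch;
the helper names are invented for the example):

	#include <stdio.h>

	/* pixel_clock in kHz, bpp in bits per pixel; result in kBps */
	static int link_required_kBps(int pixel_clock, int bpp)
	{
		return (pixel_clock * bpp + 7) / 8;
	}

	/* link_clock is the symbol clock in kHz; one data byte is
	 * delivered per symbol per lane, so no 8/10 factor here */
	static int max_data_rate_kBps(int link_clock, int lanes)
	{
		return link_clock * lanes;
	}

	int main(void)
	{
		/* HBR2, 4 lanes, computed from the symbol clock */
		printf("%d kBps\n", max_data_rate_kBps(540000, 4));
		/* the same figure from the raw bit rate:
		 * 5.4 Gbps/lane x 4 lanes x 8/10 x 1/8 */
		printf("%lld kBps\n", 5400000LL * 4 * 8 / 10 / 8);
		/* e.g. 1920x1080@60 (~148500 kHz) at 24 bpp */
		printf("%d kBps needed\n", link_required_kBps(148500, 24));
		return 0;
	}

The first two prints both give 2160000 kBps, matching the commit
message.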
> Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> ---
> drivers/gpu/drm/i915/intel_dp.c | 28 +++++++++-------------------
> 1 file changed, 9 insertions(+), 19 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> index 8f313c1..7a9e122 100644
> --- a/drivers/gpu/drm/i915/intel_dp.c
> +++ b/drivers/gpu/drm/i915/intel_dp.c
> @@ -161,33 +161,23 @@ static u8 intel_dp_max_lane_count(struct intel_dp *intel_dp)
> return min(source_max, sink_max);
> }
>
> -/*
> - * The units on the numbers in the next two are... bizarre. Examples will
> - * make it clearer; this one parallels an example in the eDP spec.
> - *
> - * intel_dp_max_data_rate for one lane of 2.7GHz evaluates as:
> - *
> - * 270000 * 1 * 8 / 10 == 216000
> - *
> - * The actual data capacity of that configuration is 2.16Gbit/s, so the
> - * units are decakilobits. ->clock in a drm_display_mode is in kilohertz -
> - * or equivalently, kilopixels per second - so for 1680x1050R it'd be
> - * 119000. At 18bpp that's 2142000 kilobits per second.
> - *
> - * Thus the strange-looking division by 10 in intel_dp_link_required, to
> - * get the result in decakilobits instead of kilobits.
> - */
> -
> static int
> intel_dp_link_required(int pixel_clock, int bpp)
> {
> - return (pixel_clock * bpp + 9) / 10;
> + /* pixel_clock is in kHz; divide bpp by 8 to return the value in kBps */
> + return (pixel_clock * bpp + 7) / 8;
> }
>
> static int
> intel_dp_max_data_rate(int max_link_clock, int max_lanes)
> {
> - return (max_link_clock * max_lanes * 8) / 10;
> + /* max_link_clock is the link symbol clock (LS_Clk) in kHz, not the
> + * link rate that is generally expressed in Gbps. Since 8 bits of data
> + * are transmitted per LS_Clk cycle per lane, there is no need to
> + * account here for the channel encoding done in the PHY layer.
> + */
> +
max_link_clock here is the max link rate of the actual physical link of
the DP cable. The PHY layer will eventually encode the bits, generating
10 bits for every 8, so the code rate is 8/10 and the useful net rate
(the rate of useful bits) is link_rate * code rate = link_rate * 8/10.
The max available rate at the link layer should be this useful net rate,
so IMHO we should take this channel encoding into account.
Manasi
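For what it's worth, the two views agree on the net number once the
8b/10b encoding is counted exactly once; a minimal sketch with the HBR2
per-lane figures from the commit message (variable names invented):

	#include <assert.h>

	int main(void)
	{
		long long raw_kbps = 5400000;  /* 5.4 Gbps on the wire */
		long long ls_clk_khz = 540000; /* stored link clock */

		/* net rate from the raw bit rate: apply the 8/10 code
		 * rate, then convert bits to bytes */
		long long net_from_raw = raw_kbps * 8 / 10 / 8;

		/* net rate from the symbol clock: one data byte per
		 * symbol, so the encoding is already netted out */
		long long net_from_clk = ls_clk_khz;

		assert(net_from_raw == net_from_clk); /* 540000 kBps */
		return 0;
	}

The open question is only which of the two quantities max_link_clock
stores.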
> + return (max_link_clock * max_lanes);
> }
>
> static int
> --
> 2.7.4
>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx