public inbox for intel-xe@lists.freedesktop.org
From: "Govindapillai, Vinod" <vinod.govindapillai@intel.com>
To: "ville.syrjala@linux.intel.com" <ville.syrjala@linux.intel.com>
Cc: "intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>,
	"intel-gfx@lists.freedesktop.org"
	<intel-gfx@lists.freedesktop.org>,
	"Syrjala, Ville" <ville.syrjala@intel.com>
Subject: Re: [PATCH] drm/i915/bw: reduce the pm demand peak bw based on display data-rate
Date: Mon, 27 Apr 2026 13:19:31 +0000	[thread overview]
Message-ID: <337ad2d3ce11e4173eed338b9eee923bc0537b9e.camel@intel.com> (raw)
In-Reply-To: <ae9YpNyZcfE1k2_m@intel.com>

On Mon, 2026-04-27 at 15:37 +0300, Ville Syrjälä wrote:
> On Mon, Apr 27, 2026 at 12:11:16PM +0300, Vinod Govindapillai wrote:
> > In xe3+, the SoC can lower the fabric frequency when the display
> > needs less bandwidth than the minimum GV point. The threshold
> > has been defined as 20 GB/s. So if the required display data rate
> > is less than this threshold, the selected GV point is 0, and
> > the GV point peak bw is greater than 20 GB/s, we can set the
> > peak bw for the pm demand to this threshold. The current pcode
> > can handle this and adjust the fabric frequency accordingly.
> > 
> > Bspec: 68880
> > Signed-off-by: Vinod Govindapillai <vinod.govindapillai@intel.com>
> > ---
> >  drivers/gpu/drm/i915/display/intel_bw.c | 16 ++++++++++++++++
> >  1 file changed, 16 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/i915/display/intel_bw.c b/drivers/gpu/drm/i915/display/intel_bw.c
> > index 9c3a9bbb49f6..005761baca93 100644
> > --- a/drivers/gpu/drm/i915/display/intel_bw.c
> > +++ b/drivers/gpu/drm/i915/display/intel_bw.c
> > @@ -52,6 +52,8 @@ struct intel_qgv_point {
> >  
> >  #define DEPROGBWPCLIMIT		60
> >  
> > +#define XE3_PEAK_BW_THRESHOLD	20000
> > +
> >  struct intel_psf_gv_point {
> >  	u8 clk; /* clock in multiples of 16.6666 MHz */
> >  };
> > @@ -1045,6 +1047,7 @@ static int mtl_find_qgv_points(struct intel_display *display,
> >  	unsigned int best_rate = UINT_MAX;
> >  	unsigned int num_qgv_points = display->bw.max[0].num_qgv_points;
> >  	unsigned int qgv_peak_bw  = 0;
> > +	int qgv_point = num_qgv_points;
> >  	int i;
> >  	int ret;
> >  
> > @@ -1083,6 +1086,7 @@ static int mtl_find_qgv_points(struct intel_display *display,
> >  		if (max_data_rate - data_rate < best_rate) {
> >  			best_rate = max_data_rate - data_rate;
> >  			qgv_peak_bw = display->bw.max[bw_index].peakbw[i];
> > +			qgv_point = i;
> >  		}
> >  
> >  		drm_dbg_kms(display->drm, "QGV point %d: max bw %d required %d qgv_peak_bw: %d\n",
> > @@ -1102,6 +1106,18 @@ static int mtl_find_qgv_points(struct intel_display *display,
> >  		return -EINVAL;
> >  	}
> >  
> > +	/*
> > +	 * For xe3+, if display's required memory bw <= 20GB/s and the selected
> > +	 * peak bw of QGV[0] is >= 20 GB/s, we can reduce the peak bw for the
> > +	 * pm demand QCLK GV to 20GB/s
> > +	 */
> > +	if (DISPLAY_VER(display) >= 30 && data_rate <= XE3_PEAK_BW_THRESHOLD &&
> > +	    qgv_point == 0 && qgv_peak_bw >= XE3_PEAK_BW_THRESHOLD) {
> > +		qgv_peak_bw = XE3_PEAK_BW_THRESHOLD;
> > +		drm_dbg_kms(display->drm, "Low display data-rate. Reduce PM demand bw for QGV: %d",
> > +			    qgv_peak_bw);
> > +	}
> 
> I can't figure out what that does. If this is the thing I think it is,
> then the plan was to just add a new QGV point (in driver) for the
> lower frequency.
> 
qgv_peak_bw will be set to 20 GB/s on the next pmdemand request if the
condition matches:
new_bw_state->qgv_point_peakbw = DIV_ROUND_CLOSEST(qgv_peak_bw, 100);
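
(With the threshold value that conversion already yields the INT(200) the
Bspec asks for: XE3_PEAK_BW_THRESHOLD is 20000 MB/s, and
DIV_ROUND_CLOSEST(20000, 100) == 200, i.e. 20 GB/s expressed in 100s of MB/s.)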

Technically it could be a new driver-level QGV point as you mentioned.
But that would mean tweaking the existing QGV points related code in PTL+
and populating the rest of the struct intel_qgv_point members from qgv[0].
Also, we have the maximum number of qgv points defined as 8, so if there
were already 8 points defined in the BIOS, we would be in trouble I think.

Also, I did not see any such plan; if it was suggested somewhere, could
you please point me to it?

As per Bspec 68880, this is what I found:
"
If lowest unmasked GV point is point 0 AND GV point 0 bandwidth >= 20
GB/s AND display required memory bandwidth <= 20 GB/s, PM_DMD Bandwidth
for QCLK GV = INT(200)
Else, PM_DMD Bandwidth for QCLK GV = INT(GV point bandwidth / 100) in
100s of MB/s 
"

BR
Vinod

> > +
> >  	/* MTL PM DEMAND expects QGV BW parameter in multiples of 100 mbps */
> >  	new_bw_state->qgv_point_peakbw = DIV_ROUND_CLOSEST(qgv_peak_bw, 100);
> >  
> > -- 
> > 2.43.0
> 



Thread overview: 6+ messages
2026-04-27  9:11 [PATCH] drm/i915/bw: reduce the pm demand peak bw based on display data-rate Vinod Govindapillai
2026-04-27 11:29 ` ✓ CI.KUnit: success for " Patchwork
2026-04-27 12:25 ` ✗ Xe.CI.BAT: failure " Patchwork
2026-04-27 12:37 ` [PATCH] " Ville Syrjälä
2026-04-27 13:19   ` Govindapillai, Vinod [this message]
2026-04-27 13:28 ` ✗ Xe.CI.FULL: failure for " Patchwork
