Intel-GFX Archive on lore.kernel.org
From: "Ville Syrjälä" <ville.syrjala@linux.intel.com>
To: "Shankar, Uma" <uma.shankar@intel.com>
Cc: "intel-gfx@lists.freedesktop.org"
	<intel-gfx@lists.freedesktop.org>,
	"intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH 4/9] drm/i915/dmc: Extract dmc_load_program()
Date: Fri, 13 Jun 2025 17:18:27 +0300	[thread overview]
Message-ID: <aEwzM5v2lk2ySbMI@intel.com> (raw)
In-Reply-To: <DM4PR11MB6360E45A005DAB22EA0151ACF474A@DM4PR11MB6360.namprd11.prod.outlook.com>

On Thu, Jun 12, 2025 at 08:16:51PM +0000, Shankar, Uma wrote:
> 
> 
> > -----Original Message-----
> > From: Intel-gfx <intel-gfx-bounces@lists.freedesktop.org> On Behalf Of Ville
> > Syrjala
> > Sent: Wednesday, June 11, 2025 9:23 PM
> > To: intel-gfx@lists.freedesktop.org
> > Cc: intel-xe@lists.freedesktop.org
> > Subject: [PATCH 4/9] drm/i915/dmc: Extract dmc_load_program()
> > 
> > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > 
> > We'll be needing to reload the program for individual DMCs.
> > To make that possible pull the code to load the program for a single DMC into a
> > new function.
> > 
> > This does change the order of things during init/resume a bit; previously we loaded
> > the program RAM for all DMCs first, and then loaded the MMIO registers for all
> > DMCs. Now those operations will be interleaved between different DMCs.
> 
> Haven't found any documentation mandating this sequence, so should be ok.

I was also pondering the safety of the whole DMC reloading
process. I think it still has lots of potential races:
- we disable the event handlers first, and then reload the program,
  but there is no explicit sync in between to guarantee the DMC isn't
  still executing one of the old handlers. Not sure what such a sync
  would be though since there are various triggers. Maybe a vblank wait
  or two (if the pipe is active) would at least guarantee all the frame
  timing related triggers would be done.
- we load the firmware mmio list in the order specified by the firmware,
  which always seems to have the EVT_CTL before the EVT_HTP. That means
  we are enabling the events before the HTP is in place, potentially
  causing the DMC to start executing from some random location. We could
  eg. do two loops over the mmio list, first loop would do all the !EVT_CTL
  registers, and a second loop would do just the EVT_CTL registers.

Though I'm not sure the GOP even loads any DMC firmware, so perhaps none
of this really matters for normal use cases.

> 
> Reviewed-by: Uma Shankar <uma.shankar@intel.com>
> 
> > Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > ---
> >  drivers/gpu/drm/i915/display/intel_dmc.c | 78 +++++++++++++-----------
> >  1 file changed, 42 insertions(+), 36 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/display/intel_dmc.c
> > b/drivers/gpu/drm/i915/display/intel_dmc.c
> > index 5a43298cd0e7..331db28039db 100644
> > --- a/drivers/gpu/drm/i915/display/intel_dmc.c
> > +++ b/drivers/gpu/drm/i915/display/intel_dmc.c
> > @@ -432,25 +432,22 @@ static void disable_event_handler(struct intel_display *display,
> >  	intel_de_write(display, htp_reg, 0);
> >  }
> > 
> > -static void disable_all_event_handlers(struct intel_display *display)
> > +static void disable_all_event_handlers(struct intel_display *display,
> > +				       enum intel_dmc_id dmc_id)
> >  {
> > -	enum intel_dmc_id dmc_id;
> > +	int handler;
> > 
> >  	/* TODO: disable the event handlers on pre-GEN12 platforms as well */
> >  	if (DISPLAY_VER(display) < 12)
> >  		return;
> > 
> > -	for_each_dmc_id(dmc_id) {
> > -		int handler;
> > +	if (!has_dmc_id_fw(display, dmc_id))
> > +		return;
> > 
> > -		if (!has_dmc_id_fw(display, dmc_id))
> > -			continue;
> > -
> > -		for (handler = 0; handler < DMC_EVENT_HANDLER_COUNT_GEN12; handler++)
> > -			disable_event_handler(display,
> > -					      DMC_EVT_CTL(display, dmc_id, handler),
> > -					      DMC_EVT_HTP(display, dmc_id, handler));
> > -	}
> > +	for (handler = 0; handler < DMC_EVENT_HANDLER_COUNT_GEN12; handler++)
> > +		disable_event_handler(display,
> > +				      DMC_EVT_CTL(display, dmc_id, handler),
> > +				      DMC_EVT_HTP(display, dmc_id, handler));
> >  }
> > 
> >  static void adlp_pipedmc_clock_gating_wa(struct intel_display *display, bool enable)
> > @@ -578,6 +575,30 @@ static u32 dmc_mmiodata(struct intel_display *display,
> >  		return dmc->dmc_info[dmc_id].mmiodata[i];
> >  }
> > 
> > +static void dmc_load_program(struct intel_display *display,
> > +			     enum intel_dmc_id dmc_id)
> > +{
> > +	struct intel_dmc *dmc = display_to_dmc(display);
> > +	int i;
> > +
> > +	disable_all_event_handlers(display, dmc_id);
> > +
> > +	preempt_disable();
> > +
> > +	for (i = 0; i < dmc->dmc_info[dmc_id].dmc_fw_size; i++) {
> > +		intel_de_write_fw(display,
> > +				  DMC_PROGRAM(dmc->dmc_info[dmc_id].start_mmioaddr, i),
> > +				  dmc->dmc_info[dmc_id].payload[i]);
> > +	}
> > +
> > +	preempt_enable();
> > +
> > +	for (i = 0; i < dmc->dmc_info[dmc_id].mmio_count; i++) {
> > +		intel_de_write(display, dmc->dmc_info[dmc_id].mmioaddr[i],
> > +			       dmc_mmiodata(display, dmc, dmc_id, i));
> > +	}
> > +}
> > +
> >  void intel_dmc_enable_pipe(struct intel_display *display, enum pipe pipe)
> >  {
> >  	enum intel_dmc_id dmc_id = PIPE_TO_DMC_ID(pipe);
> > @@ -685,37 +706,17 @@ void intel_dmc_start_pkgc_exit_at_start_of_undelayed_vblank(struct intel_display
> >  void intel_dmc_load_program(struct intel_display *display)
> >  {
> >  	struct i915_power_domains *power_domains = &display->power.domains;
> > -	struct intel_dmc *dmc = display_to_dmc(display);
> >  	enum intel_dmc_id dmc_id;
> > -	u32 i;
> > 
> >  	if (!intel_dmc_has_payload(display))
> >  		return;
> > 
> > -	pipedmc_clock_gating_wa(display, true);
> > -
> > -	disable_all_event_handlers(display);
> > -
> >  	assert_display_rpm_held(display);
> > 
> > -	preempt_disable();
> > +	pipedmc_clock_gating_wa(display, true);
> > 
> > -	for_each_dmc_id(dmc_id) {
> > -		for (i = 0; i < dmc->dmc_info[dmc_id].dmc_fw_size; i++) {
> > -			intel_de_write_fw(display,
> > -					  DMC_PROGRAM(dmc->dmc_info[dmc_id].start_mmioaddr, i),
> > -					  dmc->dmc_info[dmc_id].payload[i]);
> > -		}
> > -	}
> > -
> > -	preempt_enable();
> > -
> > -	for_each_dmc_id(dmc_id) {
> > -		for (i = 0; i < dmc->dmc_info[dmc_id].mmio_count; i++) {
> > -			intel_de_write(display, dmc->dmc_info[dmc_id].mmioaddr[i],
> > -				       dmc_mmiodata(display, dmc, dmc_id, i));
> > -		}
> > -	}
> > +	for_each_dmc_id(dmc_id)
> > +		dmc_load_program(display, dmc_id);
> > 
> >  	power_domains->dc_state = 0;
> > 
> > @@ -733,11 +734,16 @@ void intel_dmc_load_program(struct intel_display *display)
> >   */
> >  void intel_dmc_disable_program(struct intel_display *display)
> >  {
> > +	enum intel_dmc_id dmc_id;
> > +
> >  	if (!intel_dmc_has_payload(display))
> >  		return;
> > 
> >  	pipedmc_clock_gating_wa(display, true);
> > -	disable_all_event_handlers(display);
> > +
> > +	for_each_dmc_id(dmc_id)
> > +		disable_all_event_handlers(display, dmc_id);
> > +
> >  	pipedmc_clock_gating_wa(display, false);
> >  }
> > 
> > --
> > 2.49.0
> 

-- 
Ville Syrjälä
Intel

Thread overview: 24+ messages
2025-06-11 15:52 [PATCH 0/9] drm/i915/dmc: Deal with loss of pipe DMC state Ville Syrjala
2025-06-11 15:52 ` [PATCH 1/9] drm/i915/dmc: Limit pipe DMC clock gating w/a to just ADL/DG2/MTL Ville Syrjala
2025-06-11 15:52 ` [PATCH 2/9] drm/i915/dmc: Parametrize MTL_PIPEDMC_GATING_DIS Ville Syrjala
2025-06-11 15:52 ` [PATCH 3/9] drm/i915/dmc: Shuffle code around Ville Syrjala
2025-06-11 15:52 ` [PATCH 4/9] drm/i915/dmc: Extract dmc_load_program() Ville Syrjala
2025-06-12 20:16   ` Shankar, Uma
2025-06-13 14:18     ` Ville Syrjälä [this message]
2025-06-17 18:58       ` Shankar, Uma
2025-06-11 15:52 ` [PATCH 5/9] drm/i915/dmc: Reload pipe DMC state on TGL when enabling pipe A Ville Syrjala
2025-06-12 20:32   ` Shankar, Uma
2025-06-13 14:09     ` Ville Syrjälä
2025-06-17 18:51       ` Shankar, Uma
2025-06-11 15:52 ` [PATCH 6/9] drm/i915/dmc: Reload pipe DMC MMIO registers for pipe C/D on PTL+ Ville Syrjala
2025-06-11 15:52 ` [PATCH 7/9] drm/i915/dmc: Assert DMC is loaded harder Ville Syrjala
2025-06-12 20:55   ` Shankar, Uma
2025-06-11 15:52 ` [PATCH 8/9] drm/i915/dmc: Pass crtc_state to intel_dmc_{enable, disable}_pipe() Ville Syrjala
2025-06-12 20:58   ` Shankar, Uma
2025-06-11 15:52 ` [PATCH 9/9] drm/i915/dmc: Do not enable the pipe DMC on TGL when PSR is possible Ville Syrjala
2025-06-12 21:02   ` Shankar, Uma
2025-06-11 17:37 ` ✗ Fi.CI.CHECKPATCH: warning for drm/i915/dmc: Deal with loss of pipe DMC state Patchwork
2025-06-11 17:59 ` ✓ i915.CI.BAT: success " Patchwork
2025-06-11 21:09 ` ✗ i915.CI.Full: failure " Patchwork
2025-06-14  9:13 ` ✓ i915.CI.BAT: success for drm/i915/dmc: Deal with loss of pipe DMC state (rev2) Patchwork
2025-06-14 11:01 ` ✗ i915.CI.Full: failure " Patchwork
