From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 13 Jun 2025 17:18:27 +0300
From: Ville Syrjälä
To: "Shankar, Uma"
Cc: "intel-gfx@lists.freedesktop.org",
 "intel-xe@lists.freedesktop.org"
Subject: Re: [PATCH 4/9] drm/i915/dmc: Extract dmc_load_program()
Message-ID:
References: <20250611155241.24191-1-ville.syrjala@linux.intel.com>
 <20250611155241.24191-5-ville.syrjala@linux.intel.com>

On Thu, Jun 12, 2025 at 08:16:51PM +0000, Shankar, Uma wrote:
> 
> 
> > -----Original Message-----
> > From: Intel-gfx On Behalf Of Ville Syrjala
> > Sent: Wednesday, June 11, 2025 9:23 PM
> > To: intel-gfx@lists.freedesktop.org
> > Cc: intel-xe@lists.freedesktop.org
> > Subject: [PATCH 4/9] drm/i915/dmc: Extract dmc_load_program()
> > 
> > From: Ville Syrjälä
> > 
> > We'll be needing to reload the program for individual DMCs.
> > To make that possible pull the code to load the program for a single
> > DMC into a new function.
> > 
> > This does change the order of things during init/resume a bit;
> > previously we loaded the program RAM for all DMCs first, and then
> > loaded the MMIO registers for all DMCs. Now those operations will be
> > interleaved between different DMCs.
> > Haven't found any documentation mandating this sequence, so should
> > be ok.

I was also pondering the safety of the whole DMC reloading process.
I think it still has lots of potential races:

- We disable the event handlers first, and then reload the program,
  but there is no explicit sync in between to guarantee the DMC isn't
  still executing one of the old handlers. Not sure what such a sync
  would be though, since there are various triggers. Maybe a vblank
  wait or two (if the pipe is active) would at least guarantee all the
  frame timing related triggers would be done.

- We load the firmware mmio list in the order specified by the
  firmware, which always seems to have the EVT_CTL before the EVT_HTP.
  That means we are enabling the events before the HTP is in place,
  potentially causing the DMC to start executing from some random
  location. We could e.g. do two loops over the mmio list: the first
  loop would do all the !EVT_CTL registers, and a second loop would do
  just the EVT_CTL registers.

Though I'm not sure the GOP even loads any DMC firmware, so perhaps
none of this really matters for normal use cases.
> 
> Reviewed-by: Uma Shankar
> 
> > Signed-off-by: Ville Syrjälä
> > ---
> >  drivers/gpu/drm/i915/display/intel_dmc.c | 78 +++++++++++++-----------
> >  1 file changed, 42 insertions(+), 36 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/display/intel_dmc.c b/drivers/gpu/drm/i915/display/intel_dmc.c
> > index 5a43298cd0e7..331db28039db 100644
> > --- a/drivers/gpu/drm/i915/display/intel_dmc.c
> > +++ b/drivers/gpu/drm/i915/display/intel_dmc.c
> > @@ -432,25 +432,22 @@ static void disable_event_handler(struct intel_display *display,
> >  	intel_de_write(display, htp_reg, 0);
> >  }
> > 
> > -static void disable_all_event_handlers(struct intel_display *display)
> > +static void disable_all_event_handlers(struct intel_display *display,
> > +				       enum intel_dmc_id dmc_id)
> >  {
> > -	enum intel_dmc_id dmc_id;
> > +	int handler;
> > 
> >  	/* TODO: disable the event handlers on pre-GEN12 platforms as well */
> >  	if (DISPLAY_VER(display) < 12)
> >  		return;
> > 
> > -	for_each_dmc_id(dmc_id) {
> > -		int handler;
> > +	if (!has_dmc_id_fw(display, dmc_id))
> > +		return;
> > 
> > -		if (!has_dmc_id_fw(display, dmc_id))
> > -			continue;
> > -
> > -		for (handler = 0; handler < DMC_EVENT_HANDLER_COUNT_GEN12; handler++)
> > -			disable_event_handler(display,
> > -					      DMC_EVT_CTL(display, dmc_id, handler),
> > -					      DMC_EVT_HTP(display, dmc_id, handler));
> > -	}
> > +	for (handler = 0; handler < DMC_EVENT_HANDLER_COUNT_GEN12; handler++)
> > +		disable_event_handler(display,
> > +				      DMC_EVT_CTL(display, dmc_id, handler),
> > +				      DMC_EVT_HTP(display, dmc_id, handler));
> >  }
> > 
> >  static void adlp_pipedmc_clock_gating_wa(struct intel_display *display, bool enable)
> > @@ -578,6 +575,30 @@ static u32 dmc_mmiodata(struct intel_display *display,
> >  	return dmc->dmc_info[dmc_id].mmiodata[i];
> >  }
> > 
> > +static void dmc_load_program(struct intel_display *display,
> > +			     enum intel_dmc_id dmc_id)
> > +{
> > +	struct intel_dmc *dmc = display_to_dmc(display);
> > +	int i;
> > +
> > +	disable_all_event_handlers(display, dmc_id);
> > +
> > +	preempt_disable();
> > +
> > +	for (i = 0; i < dmc->dmc_info[dmc_id].dmc_fw_size; i++) {
> > +		intel_de_write_fw(display,
> > +				  DMC_PROGRAM(dmc->dmc_info[dmc_id].start_mmioaddr, i),
> > +				  dmc->dmc_info[dmc_id].payload[i]);
> > +	}
> > +
> > +	preempt_enable();
> > +
> > +	for (i = 0; i < dmc->dmc_info[dmc_id].mmio_count; i++) {
> > +		intel_de_write(display, dmc->dmc_info[dmc_id].mmioaddr[i],
> > +			       dmc_mmiodata(display, dmc, dmc_id, i));
> > +	}
> > +}
> > +
> >  void intel_dmc_enable_pipe(struct intel_display *display, enum pipe pipe)
> >  {
> >  	enum intel_dmc_id dmc_id = PIPE_TO_DMC_ID(pipe);
> > @@ -685,37 +706,17 @@ void intel_dmc_start_pkgc_exit_at_start_of_undelayed_vblank(struct intel_display
> >  void intel_dmc_load_program(struct intel_display *display)
> >  {
> >  	struct i915_power_domains *power_domains = &display->power.domains;
> > -	struct intel_dmc *dmc = display_to_dmc(display);
> >  	enum intel_dmc_id dmc_id;
> > -	u32 i;
> > 
> >  	if (!intel_dmc_has_payload(display))
> >  		return;
> > 
> > -	pipedmc_clock_gating_wa(display, true);
> > -
> > -	disable_all_event_handlers(display);
> > -
> >  	assert_display_rpm_held(display);
> > 
> > -	preempt_disable();
> > +	pipedmc_clock_gating_wa(display, true);
> > 
> > -	for_each_dmc_id(dmc_id) {
> > -		for (i = 0; i < dmc->dmc_info[dmc_id].dmc_fw_size; i++) {
> > -			intel_de_write_fw(display,
> > -					  DMC_PROGRAM(dmc->dmc_info[dmc_id].start_mmioaddr, i),
> > -					  dmc->dmc_info[dmc_id].payload[i]);
> > -		}
> > -	}
> > -
> > -	preempt_enable();
> > -
> > -	for_each_dmc_id(dmc_id) {
> > -		for (i = 0; i < dmc->dmc_info[dmc_id].mmio_count; i++) {
> > -			intel_de_write(display, dmc->dmc_info[dmc_id].mmioaddr[i],
> > -				       dmc_mmiodata(display, dmc, dmc_id, i));
> > -		}
> > -	}
> > +	for_each_dmc_id(dmc_id)
> > +		dmc_load_program(display, dmc_id);
> > 
> >  	power_domains->dc_state = 0;
> > 
> > @@ -733,11 +734,16 @@ void intel_dmc_load_program(struct intel_display *display)
> >   */
> >  void intel_dmc_disable_program(struct intel_display *display)
> >  {
> > +	enum intel_dmc_id dmc_id;
> > +
> >  	if (!intel_dmc_has_payload(display))
> >  		return;
> > 
> >  	pipedmc_clock_gating_wa(display, true);
> > -	disable_all_event_handlers(display);
> > +
> > +	for_each_dmc_id(dmc_id)
> > +		disable_all_event_handlers(display, dmc_id);
> > +
> >  	pipedmc_clock_gating_wa(display, false);
> >  }
> > 
> > --
> > 2.49.0
> 

-- 
Ville Syrjälä
Intel