From: Jani Nikula <jani.nikula@linux.intel.com>
To: Nemesa Garg <nemesa.garg@intel.com>,
intel-gfx@lists.freedesktop.org, intel-xe@lists.freedesktop.org,
mika.kahola@intel.com
Cc: Nemesa Garg <nemesa.garg@intel.com>
Subject: Re: [PATCH] drm/i915/display: Implement wa_14024400148
Date: Tue, 15 Apr 2025 13:33:07 +0300 [thread overview]
Message-ID: <87ecxtsoik.fsf@intel.com> (raw)
In-Reply-To: <20250415094633.2465122-1-nemesa.garg@intel.com>
On Tue, 15 Apr 2025, Nemesa Garg <nemesa.garg@intel.com> wrote:
> The workaround recommends using the polling method
> to wait for pmdemand completion, to avoid timeouts.
>
> Signed-off-by: Nemesa Garg <nemesa.garg@intel.com>
> ---
> drivers/gpu/drm/i915/display/intel_pmdemand.c | 27 +++++++++++++++++--
> 1 file changed, 25 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/display/intel_pmdemand.c b/drivers/gpu/drm/i915/display/intel_pmdemand.c
> index d22b5469672d..610d05b73b99 100644
> --- a/drivers/gpu/drm/i915/display/intel_pmdemand.c
> +++ b/drivers/gpu/drm/i915/display/intel_pmdemand.c
> @@ -478,6 +478,22 @@ static bool intel_pmdemand_req_complete(struct intel_display *display)
> XELPDP_PMDEMAND_REQ_ENABLE);
> }
>
> +static void intel_pmdemand_poll(struct intel_display *display)
> +{
> + const unsigned int timeout_ms = 10;
> + u32 status;
> + int ret;
> +
> + ret = intel_de_wait_custom(display, XELPDP_INITIATE_PMDEMAND_REQUEST(1),
> + XELPDP_PMDEMAND_REQ_ENABLE, 0,
> + 50, timeout_ms, &status);
> +
> + if (ret == -ETIMEDOUT)
> + drm_err(display->drm,
> + "timeout within %ums (status 0x%08x)\n",
> + timeout_ms, status);
Imagine seeing "timeout within 10ms" in dmesg.
Timeout of what?
> +}
> +
> static void intel_pmdemand_wait(struct intel_display *display)
> {
> if (!wait_event_timeout(display->pmdemand.waitqueue,
> @@ -508,7 +524,11 @@ void intel_pmdemand_program_dbuf(struct intel_display *display,
> intel_de_rmw(display, XELPDP_INITIATE_PMDEMAND_REQUEST(1), 0,
> XELPDP_PMDEMAND_REQ_ENABLE);
>
> - intel_pmdemand_wait(display);
> + /* Wa_14024400148: for LNL, use the polling method */
> + if (DISPLAY_VER(display) == 20)
> + intel_pmdemand_poll(display);
> + else
> + intel_pmdemand_wait(display);
Please just hide this within intel_pmdemand_wait() instead of
duplicating it everywhere.
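i.e. something along these lines. This is a minimal, self-contained sketch of the suggested shape, not the real driver code: struct intel_display is reduced to a stub, and the two completion helpers are stand-ins (the real driver sleeps on display->pmdemand.waitqueue via wait_event_timeout() and polls XELPDP_INITIATE_PMDEMAND_REQUEST(1) via intel_de_wait_custom()); the return codes here are purely illustrative.

```c
#include <assert.h>

/* Stand-in for the driver's struct intel_display; only the display
 * version matters for this sketch. */
struct intel_display {
	int display_ver;
};

/* Stub for the polling path (the real code would use
 * intel_de_wait_custom() on the pmdemand request register). */
static int pmdemand_poll_for_completion(struct intel_display *display)
{
	return 1; /* illustrative: "completed via polling" */
}

/* Stub for the interrupt-driven path (the real code sleeps on
 * display->pmdemand.waitqueue). */
static int pmdemand_wait_for_interrupt(struct intel_display *display)
{
	return 2; /* illustrative: "completed via interrupt" */
}

/* Keep a single intel_pmdemand_wait() entry point and hide the
 * Wa_14024400148 version check inside it, so the call sites in
 * intel_pmdemand_program_dbuf() and intel_pmdemand_program_params()
 * stay unchanged. */
static int intel_pmdemand_wait(struct intel_display *display)
{
	/* Wa_14024400148: display version 20 (LNL) must poll. */
	if (display->display_ver == 20)
		return pmdemand_poll_for_completion(display);

	return pmdemand_wait_for_interrupt(display);
}
```

With this shape, both callers keep calling intel_pmdemand_wait() and the workaround stays in exactly one place.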
>
> unlock:
> mutex_unlock(&display->pmdemand.lock);
> @@ -617,7 +637,10 @@ intel_pmdemand_program_params(struct intel_display *display,
> intel_de_rmw(display, XELPDP_INITIATE_PMDEMAND_REQUEST(1), 0,
> XELPDP_PMDEMAND_REQ_ENABLE);
>
> - intel_pmdemand_wait(display);
> + if (DISPLAY_VER(display) == 20)
> + intel_pmdemand_poll(display);
> + else
> + intel_pmdemand_wait(display);
>
> unlock:
> mutex_unlock(&display->pmdemand.lock);
--
Jani Nikula, Intel
Thread overview: 3+ messages
2025-04-15 9:46 [PATCH] drm/i915/display: Implement wa_14024400148 Nemesa Garg
2025-04-15 10:33 ` Jani Nikula [this message]
2025-04-15 12:56 ` ✗ i915.CI.BAT: failure for " Patchwork