From: Rodrigo Vivi <rodrigo.vivi@intel.com>
To: Arnd Bergmann <arnd@kernel.org>
Cc: "Jani Nikula" <jani.nikula@linux.intel.com>,
"Joonas Lahtinen" <joonas.lahtinen@linux.intel.com>,
"Tvrtko Ursulin" <tursulin@ursulin.net>,
"David Airlie" <airlied@gmail.com>,
"Simona Vetter" <simona@ffwll.ch>,
"Arnd Bergmann" <arnd@arndb.de>,
"Ville Syrjälä" <ville.syrjala@linux.intel.com>,
"Vinod Govindapillai" <vinod.govindapillai@intel.com>,
"Suraj Kandpal" <suraj.kandpal@intel.com>,
"Mitul Golani" <mitulkumar.ajitkumar.golani@intel.com>,
"Stanislav Lisovskiy" <stanislav.lisovskiy@intel.com>,
intel-gfx@lists.freedesktop.org, intel-xe@lists.freedesktop.org,
dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] drm/i915/wm: reduce stack usage in skl_print_wm_changes()
Date: Tue, 24 Jun 2025 17:16:57 -0400
Message-ID: <aFsVydrSSuj-1M5S@intel.com>
In-Reply-To: <20250620113748.3869160-1-arnd@kernel.org>

On Fri, Jun 20, 2025 at 01:37:45PM +0200, Arnd Bergmann wrote:
> From: Arnd Bergmann <arnd@arndb.de>
>
> When KMSAN is enabled, this function causes rather excessive stack usage:
>
> drivers/gpu/drm/i915/display/skl_watermark.c:2977:1: error: stack frame size (1432) exceeds limit (1408) in 'skl_compute_wm' [-Werror,-Wframe-larger-than]
>
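(As a toy illustration of what the compiler is checking here, entirely separate
from the driver code: the diagnostic is the per-function frame-size limit,
-Wframe-larger-than=, which kernel builds drive from CONFIG_FRAME_WARN. A
made-up function whose locals exceed the limit, like the one below, trips the
same warning.)

/* standalone demo only, nothing to do with skl_watermark.c */
void demo_large_frame(void)
{
	volatile char scratch[2048];	/* larger than a 1408-byte frame limit */

	scratch[0] = 0;			/* keep the buffer from being optimized away */
}
/* e.g. building this file with -Wframe-larger-than=1408 flags demo_large_frame() */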
> This is apparently all caused by the varargs calls to drm_dbg_kms(). Since
> skl_print_wm_changes() gets inlined into skl_compute_wm(), that entire frame
> ends up in skl_compute_wm() itself, and any function called by skl_compute_wm()
> then adds its own stack usage on top of that.
>
> Move the worst bit into a separate function marked as noinline_for_stack to
> limit that to the one code path that actually needs it.
>
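For anyone skimming, the shape of the fix is essentially the pattern sketched
below (made-up names, not the actual skl_watermark.c code): the heavy varargs
debug prints, plus whatever temporaries KMSAN instrumentation adds for them,
move into a helper the compiler is told not to inline, so that frame only
exists while the helper runs instead of being folded into the caller's frame.

#include <linux/compiler.h>	/* noinline_for_stack */
#include <linux/printk.h>

struct demo_state {		/* hypothetical stand-in for the wm state */
	int a, b, c;
};

/* the big vararg call (and its instrumentation) gets its own frame here */
static noinline_for_stack void demo_print_state(const struct demo_state *s)
{
	pr_debug("state %d,%d,%d\n", s->a, s->b, s->c);
}

static void demo_compute(struct demo_state *s)
{
	/* ... real work keeps a small frame ... */
	demo_print_state(s);
}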
> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
> ---
> drivers/gpu/drm/i915/display/skl_watermark.c | 176 ++++++++++---------
> 1 file changed, 92 insertions(+), 84 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/display/skl_watermark.c b/drivers/gpu/drm/i915/display/skl_watermark.c
> index 2c2371574d6f..b7c92c718c8f 100644
> --- a/drivers/gpu/drm/i915/display/skl_watermark.c
> +++ b/drivers/gpu/drm/i915/display/skl_watermark.c
> @@ -2680,6 +2680,97 @@ static char enast(bool enable)
> return enable ? '*' : ' ';
> }
>
> +static noinline_for_stack void

On this I'm mostly trusting you and your compiler, but it looks safe
enough...
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
and pushing to drm-intel-next as well...
Thanks for all the patches

> +skl_print_plane_changes(struct intel_display *display,
> + struct intel_plane *plane,
> + const struct skl_plane_wm *old_wm,
> + const struct skl_plane_wm *new_wm)
> +{
> + drm_dbg_kms(display->drm,
> + "[PLANE:%d:%s] level %cwm0,%cwm1,%cwm2,%cwm3,%cwm4,%cwm5,%cwm6,%cwm7,%ctwm,%cswm,%cstwm"
> + " -> %cwm0,%cwm1,%cwm2,%cwm3,%cwm4,%cwm5,%cwm6,%cwm7,%ctwm,%cswm,%cstwm\n",
> + plane->base.base.id, plane->base.name,
> + enast(old_wm->wm[0].enable), enast(old_wm->wm[1].enable),
> + enast(old_wm->wm[2].enable), enast(old_wm->wm[3].enable),
> + enast(old_wm->wm[4].enable), enast(old_wm->wm[5].enable),
> + enast(old_wm->wm[6].enable), enast(old_wm->wm[7].enable),
> + enast(old_wm->trans_wm.enable),
> + enast(old_wm->sagv.wm0.enable),
> + enast(old_wm->sagv.trans_wm.enable),
> + enast(new_wm->wm[0].enable), enast(new_wm->wm[1].enable),
> + enast(new_wm->wm[2].enable), enast(new_wm->wm[3].enable),
> + enast(new_wm->wm[4].enable), enast(new_wm->wm[5].enable),
> + enast(new_wm->wm[6].enable), enast(new_wm->wm[7].enable),
> + enast(new_wm->trans_wm.enable),
> + enast(new_wm->sagv.wm0.enable),
> + enast(new_wm->sagv.trans_wm.enable));
> +
> + drm_dbg_kms(display->drm,
> + "[PLANE:%d:%s] lines %c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%4d"
> + " -> %c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%4d\n",
> + plane->base.base.id, plane->base.name,
> + enast(old_wm->wm[0].ignore_lines), old_wm->wm[0].lines,
> + enast(old_wm->wm[1].ignore_lines), old_wm->wm[1].lines,
> + enast(old_wm->wm[2].ignore_lines), old_wm->wm[2].lines,
> + enast(old_wm->wm[3].ignore_lines), old_wm->wm[3].lines,
> + enast(old_wm->wm[4].ignore_lines), old_wm->wm[4].lines,
> + enast(old_wm->wm[5].ignore_lines), old_wm->wm[5].lines,
> + enast(old_wm->wm[6].ignore_lines), old_wm->wm[6].lines,
> + enast(old_wm->wm[7].ignore_lines), old_wm->wm[7].lines,
> + enast(old_wm->trans_wm.ignore_lines), old_wm->trans_wm.lines,
> + enast(old_wm->sagv.wm0.ignore_lines), old_wm->sagv.wm0.lines,
> + enast(old_wm->sagv.trans_wm.ignore_lines), old_wm->sagv.trans_wm.lines,
> + enast(new_wm->wm[0].ignore_lines), new_wm->wm[0].lines,
> + enast(new_wm->wm[1].ignore_lines), new_wm->wm[1].lines,
> + enast(new_wm->wm[2].ignore_lines), new_wm->wm[2].lines,
> + enast(new_wm->wm[3].ignore_lines), new_wm->wm[3].lines,
> + enast(new_wm->wm[4].ignore_lines), new_wm->wm[4].lines,
> + enast(new_wm->wm[5].ignore_lines), new_wm->wm[5].lines,
> + enast(new_wm->wm[6].ignore_lines), new_wm->wm[6].lines,
> + enast(new_wm->wm[7].ignore_lines), new_wm->wm[7].lines,
> + enast(new_wm->trans_wm.ignore_lines), new_wm->trans_wm.lines,
> + enast(new_wm->sagv.wm0.ignore_lines), new_wm->sagv.wm0.lines,
> + enast(new_wm->sagv.trans_wm.ignore_lines), new_wm->sagv.trans_wm.lines);
> +
> + drm_dbg_kms(display->drm,
> + "[PLANE:%d:%s] blocks %4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%5d"
> + " -> %4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%5d\n",
> + plane->base.base.id, plane->base.name,
> + old_wm->wm[0].blocks, old_wm->wm[1].blocks,
> + old_wm->wm[2].blocks, old_wm->wm[3].blocks,
> + old_wm->wm[4].blocks, old_wm->wm[5].blocks,
> + old_wm->wm[6].blocks, old_wm->wm[7].blocks,
> + old_wm->trans_wm.blocks,
> + old_wm->sagv.wm0.blocks,
> + old_wm->sagv.trans_wm.blocks,
> + new_wm->wm[0].blocks, new_wm->wm[1].blocks,
> + new_wm->wm[2].blocks, new_wm->wm[3].blocks,
> + new_wm->wm[4].blocks, new_wm->wm[5].blocks,
> + new_wm->wm[6].blocks, new_wm->wm[7].blocks,
> + new_wm->trans_wm.blocks,
> + new_wm->sagv.wm0.blocks,
> + new_wm->sagv.trans_wm.blocks);
> +
> + drm_dbg_kms(display->drm,
> + "[PLANE:%d:%s] min_ddb %4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%5d"
> + " -> %4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%5d\n",
> + plane->base.base.id, plane->base.name,
> + old_wm->wm[0].min_ddb_alloc, old_wm->wm[1].min_ddb_alloc,
> + old_wm->wm[2].min_ddb_alloc, old_wm->wm[3].min_ddb_alloc,
> + old_wm->wm[4].min_ddb_alloc, old_wm->wm[5].min_ddb_alloc,
> + old_wm->wm[6].min_ddb_alloc, old_wm->wm[7].min_ddb_alloc,
> + old_wm->trans_wm.min_ddb_alloc,
> + old_wm->sagv.wm0.min_ddb_alloc,
> + old_wm->sagv.trans_wm.min_ddb_alloc,
> + new_wm->wm[0].min_ddb_alloc, new_wm->wm[1].min_ddb_alloc,
> + new_wm->wm[2].min_ddb_alloc, new_wm->wm[3].min_ddb_alloc,
> + new_wm->wm[4].min_ddb_alloc, new_wm->wm[5].min_ddb_alloc,
> + new_wm->wm[6].min_ddb_alloc, new_wm->wm[7].min_ddb_alloc,
> + new_wm->trans_wm.min_ddb_alloc,
> + new_wm->sagv.wm0.min_ddb_alloc,
> + new_wm->sagv.trans_wm.min_ddb_alloc);
> +}
> +
> static void
> skl_print_wm_changes(struct intel_atomic_state *state)
> {
> @@ -2709,7 +2800,6 @@ skl_print_wm_changes(struct intel_atomic_state *state)
>
> if (skl_ddb_entry_equal(old, new))
> continue;
> -
> drm_dbg_kms(display->drm,
> "[PLANE:%d:%s] ddb (%4d - %4d) -> (%4d - %4d), size %4d -> %4d\n",
> plane->base.base.id, plane->base.name,
> @@ -2727,89 +2817,7 @@ skl_print_wm_changes(struct intel_atomic_state *state)
> if (skl_plane_wm_equals(display, old_wm, new_wm))
> continue;
>
> - drm_dbg_kms(display->drm,
> - "[PLANE:%d:%s] level %cwm0,%cwm1,%cwm2,%cwm3,%cwm4,%cwm5,%cwm6,%cwm7,%ctwm,%cswm,%cstwm"
> - " -> %cwm0,%cwm1,%cwm2,%cwm3,%cwm4,%cwm5,%cwm6,%cwm7,%ctwm,%cswm,%cstwm\n",
> - plane->base.base.id, plane->base.name,
> - enast(old_wm->wm[0].enable), enast(old_wm->wm[1].enable),
> - enast(old_wm->wm[2].enable), enast(old_wm->wm[3].enable),
> - enast(old_wm->wm[4].enable), enast(old_wm->wm[5].enable),
> - enast(old_wm->wm[6].enable), enast(old_wm->wm[7].enable),
> - enast(old_wm->trans_wm.enable),
> - enast(old_wm->sagv.wm0.enable),
> - enast(old_wm->sagv.trans_wm.enable),
> - enast(new_wm->wm[0].enable), enast(new_wm->wm[1].enable),
> - enast(new_wm->wm[2].enable), enast(new_wm->wm[3].enable),
> - enast(new_wm->wm[4].enable), enast(new_wm->wm[5].enable),
> - enast(new_wm->wm[6].enable), enast(new_wm->wm[7].enable),
> - enast(new_wm->trans_wm.enable),
> - enast(new_wm->sagv.wm0.enable),
> - enast(new_wm->sagv.trans_wm.enable));
> -
> - drm_dbg_kms(display->drm,
> - "[PLANE:%d:%s] lines %c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%4d"
> - " -> %c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%3d,%c%4d\n",
> - plane->base.base.id, plane->base.name,
> - enast(old_wm->wm[0].ignore_lines), old_wm->wm[0].lines,
> - enast(old_wm->wm[1].ignore_lines), old_wm->wm[1].lines,
> - enast(old_wm->wm[2].ignore_lines), old_wm->wm[2].lines,
> - enast(old_wm->wm[3].ignore_lines), old_wm->wm[3].lines,
> - enast(old_wm->wm[4].ignore_lines), old_wm->wm[4].lines,
> - enast(old_wm->wm[5].ignore_lines), old_wm->wm[5].lines,
> - enast(old_wm->wm[6].ignore_lines), old_wm->wm[6].lines,
> - enast(old_wm->wm[7].ignore_lines), old_wm->wm[7].lines,
> - enast(old_wm->trans_wm.ignore_lines), old_wm->trans_wm.lines,
> - enast(old_wm->sagv.wm0.ignore_lines), old_wm->sagv.wm0.lines,
> - enast(old_wm->sagv.trans_wm.ignore_lines), old_wm->sagv.trans_wm.lines,
> - enast(new_wm->wm[0].ignore_lines), new_wm->wm[0].lines,
> - enast(new_wm->wm[1].ignore_lines), new_wm->wm[1].lines,
> - enast(new_wm->wm[2].ignore_lines), new_wm->wm[2].lines,
> - enast(new_wm->wm[3].ignore_lines), new_wm->wm[3].lines,
> - enast(new_wm->wm[4].ignore_lines), new_wm->wm[4].lines,
> - enast(new_wm->wm[5].ignore_lines), new_wm->wm[5].lines,
> - enast(new_wm->wm[6].ignore_lines), new_wm->wm[6].lines,
> - enast(new_wm->wm[7].ignore_lines), new_wm->wm[7].lines,
> - enast(new_wm->trans_wm.ignore_lines), new_wm->trans_wm.lines,
> - enast(new_wm->sagv.wm0.ignore_lines), new_wm->sagv.wm0.lines,
> - enast(new_wm->sagv.trans_wm.ignore_lines), new_wm->sagv.trans_wm.lines);
> -
> - drm_dbg_kms(display->drm,
> - "[PLANE:%d:%s] blocks %4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%5d"
> - " -> %4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%5d\n",
> - plane->base.base.id, plane->base.name,
> - old_wm->wm[0].blocks, old_wm->wm[1].blocks,
> - old_wm->wm[2].blocks, old_wm->wm[3].blocks,
> - old_wm->wm[4].blocks, old_wm->wm[5].blocks,
> - old_wm->wm[6].blocks, old_wm->wm[7].blocks,
> - old_wm->trans_wm.blocks,
> - old_wm->sagv.wm0.blocks,
> - old_wm->sagv.trans_wm.blocks,
> - new_wm->wm[0].blocks, new_wm->wm[1].blocks,
> - new_wm->wm[2].blocks, new_wm->wm[3].blocks,
> - new_wm->wm[4].blocks, new_wm->wm[5].blocks,
> - new_wm->wm[6].blocks, new_wm->wm[7].blocks,
> - new_wm->trans_wm.blocks,
> - new_wm->sagv.wm0.blocks,
> - new_wm->sagv.trans_wm.blocks);
> -
> - drm_dbg_kms(display->drm,
> - "[PLANE:%d:%s] min_ddb %4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%5d"
> - " -> %4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%4d,%5d\n",
> - plane->base.base.id, plane->base.name,
> - old_wm->wm[0].min_ddb_alloc, old_wm->wm[1].min_ddb_alloc,
> - old_wm->wm[2].min_ddb_alloc, old_wm->wm[3].min_ddb_alloc,
> - old_wm->wm[4].min_ddb_alloc, old_wm->wm[5].min_ddb_alloc,
> - old_wm->wm[6].min_ddb_alloc, old_wm->wm[7].min_ddb_alloc,
> - old_wm->trans_wm.min_ddb_alloc,
> - old_wm->sagv.wm0.min_ddb_alloc,
> - old_wm->sagv.trans_wm.min_ddb_alloc,
> - new_wm->wm[0].min_ddb_alloc, new_wm->wm[1].min_ddb_alloc,
> - new_wm->wm[2].min_ddb_alloc, new_wm->wm[3].min_ddb_alloc,
> - new_wm->wm[4].min_ddb_alloc, new_wm->wm[5].min_ddb_alloc,
> - new_wm->wm[6].min_ddb_alloc, new_wm->wm[7].min_ddb_alloc,
> - new_wm->trans_wm.min_ddb_alloc,
> - new_wm->sagv.wm0.min_ddb_alloc,
> - new_wm->sagv.trans_wm.min_ddb_alloc);
> + skl_print_plane_changes(display, plane, old_wm, new_wm);
> }
> }
> }
> --
> 2.39.5
>