* [PATCH] net: qed: reduce stack usage for TLV processing
@ 2025-06-20 13:09 Arnd Bergmann
2025-06-20 14:09 ` Alexander Lobakin
2025-06-23 13:30 ` patchwork-bot+netdevbpf
0 siblings, 2 replies; 3+ messages in thread
From: Arnd Bergmann @ 2025-06-20 13:09 UTC (permalink / raw)
To: Manish Chopra, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Nathan Chancellor
Cc: Arnd Bergmann, Nick Desaulniers, Bill Wendling, Justin Stitt,
netdev, linux-kernel, llvm
From: Arnd Bergmann <arnd@arndb.de>
clang gets confused by the code in qed_mfw_process_tlv_req() and
ends up spilling registers to the stack hundreds of times. When sanitizers
are enabled, this can exceed the stack frame warning limit:
drivers/net/ethernet/qlogic/qed/qed_mng_tlv.c:1244:5: error: stack frame size (1824) exceeds limit (1280) in 'qed_mfw_process_tlv_req' [-Werror,-Wframe-larger-than]
Apparently the problem is the complexity of qed_mfw_update_tlvs()
after inlining; marking the four main branches of that function
as noinline_for_stack makes the problem go away entirely, with
stack usage dropping to 100 bytes.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
If anyone feels adventurous and able to figure out what exactly goes
wrong in clang, I can provide preprocessed source files for debugging.
---
drivers/net/ethernet/qlogic/qed/qed_mng_tlv.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/qlogic/qed/qed_mng_tlv.c b/drivers/net/ethernet/qlogic/qed/qed_mng_tlv.c
index f55eed092f25..7d78f072b0a1 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_mng_tlv.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_mng_tlv.c
@@ -242,7 +242,7 @@ static int qed_mfw_get_tlv_group(u8 tlv_type, u8 *tlv_group)
}
/* Returns size of the data buffer or, -1 in case TLV data is not available. */
-static int
+static noinline_for_stack int
qed_mfw_get_gen_tlv_value(struct qed_drv_tlv_hdr *p_tlv,
struct qed_mfw_tlv_generic *p_drv_buf,
struct qed_tlv_parsed_buf *p_buf)
@@ -304,7 +304,7 @@ qed_mfw_get_gen_tlv_value(struct qed_drv_tlv_hdr *p_tlv,
return -1;
}
-static int
+static noinline_for_stack int
qed_mfw_get_eth_tlv_value(struct qed_drv_tlv_hdr *p_tlv,
struct qed_mfw_tlv_eth *p_drv_buf,
struct qed_tlv_parsed_buf *p_buf)
@@ -438,7 +438,7 @@ qed_mfw_get_tlv_time_value(struct qed_mfw_tlv_time *p_time,
return QED_MFW_TLV_TIME_SIZE;
}
-static int
+static noinline_for_stack int
qed_mfw_get_fcoe_tlv_value(struct qed_drv_tlv_hdr *p_tlv,
struct qed_mfw_tlv_fcoe *p_drv_buf,
struct qed_tlv_parsed_buf *p_buf)
@@ -1073,7 +1073,7 @@ qed_mfw_get_fcoe_tlv_value(struct qed_drv_tlv_hdr *p_tlv,
return -1;
}
-static int
+static noinline_for_stack int
qed_mfw_get_iscsi_tlv_value(struct qed_drv_tlv_hdr *p_tlv,
struct qed_mfw_tlv_iscsi *p_drv_buf,
struct qed_tlv_parsed_buf *p_buf)
--
2.39.5
^ permalink raw reply related [flat|nested] 3+ messages in thread
* Re: [PATCH] net: qed: reduce stack usage for TLV processing
2025-06-20 13:09 [PATCH] net: qed: reduce stack usage for TLV processing Arnd Bergmann
@ 2025-06-20 14:09 ` Alexander Lobakin
2025-06-23 13:30 ` patchwork-bot+netdevbpf
1 sibling, 0 replies; 3+ messages in thread
From: Alexander Lobakin @ 2025-06-20 14:09 UTC (permalink / raw)
To: Arnd Bergmann
Cc: Manish Chopra, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Nathan Chancellor, Arnd Bergmann,
Nick Desaulniers, Bill Wendling, Justin Stitt, netdev,
linux-kernel, llvm
From: Arnd Bergmann <arnd@kernel.org>
Date: Fri, 20 Jun 2025 15:09:53 +0200
> From: Arnd Bergmann <arnd@arndb.de>
>
> clang gets confused by the code in qed_mfw_process_tlv_req() and
> ends up spilling registers to the stack hundreds of times. When sanitizers
> are enabled, this can exceed the stack frame warning limit:
>
> drivers/net/ethernet/qlogic/qed/qed_mng_tlv.c:1244:5: error: stack frame size (1824) exceeds limit (1280) in 'qed_mfw_process_tlv_req' [-Werror,-Wframe-larger-than]
>
> Apparently the problem is the complexity of qed_mfw_update_tlvs()
> after inlining; marking the four main branches of that function
> as noinline_for_stack makes the problem go away entirely, with
> stack usage dropping to 100 bytes.
>
> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Thanks,
Olek
* Re: [PATCH] net: qed: reduce stack usage for TLV processing
2025-06-20 13:09 [PATCH] net: qed: reduce stack usage for TLV processing Arnd Bergmann
2025-06-20 14:09 ` Alexander Lobakin
@ 2025-06-23 13:30 ` patchwork-bot+netdevbpf
1 sibling, 0 replies; 3+ messages in thread
From: patchwork-bot+netdevbpf @ 2025-06-23 13:30 UTC (permalink / raw)
To: Arnd Bergmann
Cc: manishc, andrew+netdev, davem, edumazet, kuba, pabeni, nathan,
arnd, nick.desaulniers+lkml, morbo, justinstitt, netdev,
linux-kernel, llvm
Hello:
This patch was applied to netdev/net.git (main)
by David S. Miller <davem@davemloft.net>:
On Fri, 20 Jun 2025 15:09:53 +0200 you wrote:
> From: Arnd Bergmann <arnd@arndb.de>
>
> clang gets confused by the code in qed_mfw_process_tlv_req() and
> ends up spilling registers to the stack hundreds of times. When sanitizers
> are enabled, this can exceed the stack frame warning limit:
>
> drivers/net/ethernet/qlogic/qed/qed_mng_tlv.c:1244:5: error: stack frame size (1824) exceeds limit (1280) in 'qed_mfw_process_tlv_req' [-Werror,-Wframe-larger-than]
>
> [...]
Here is the summary with links:
- net: qed: reduce stack usage for TLV processing
https://git.kernel.org/netdev/net/c/95b6759a8183
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
end of thread, other threads:[~2025-06-23 13:29 UTC | newest]
Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-06-20 13:09 [PATCH] net: qed: reduce stack usage for TLV processing Arnd Bergmann
2025-06-20 14:09 ` Alexander Lobakin
2025-06-23 13:30 ` patchwork-bot+netdevbpf