From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman, patches@lists.linux.dev, Jamal Hadi Salim, Davide Caratti, Marcelo Ricardo Leitner, Paolo Abeni, Sasha Levin
Subject: [PATCH 5.15 100/146] net/sched: act_mirred: better wording on protection against excessive stack growth
Date: Tue, 28 Mar 2023 16:43:09 +0200
Message-Id: <20230328142606.850741197@linuxfoundation.org>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230328142602.660084725@linuxfoundation.org>
References: <20230328142602.660084725@linuxfoundation.org>
User-Agent: quilt/0.67
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org

From: Davide Caratti

[ Upstream commit 78dcdffe0418ac8f3f057f26fe71ccf4d8ed851f ]

With commit e2ca070f89ec ("net: sched: protect against stack overflow in TC act_mirred"), act_mirred protected itself against excessive stack growth using a per-CPU counter of nested calls to tcf_mirred_act(), capped at MIRRED_RECURSION_LIMIT. However, such protection does not detect recursion/loops when the packet is enqueued to the backlog (for example, when the mirred target device has RPS or skb timestamping enabled). Change the wording from "recursion" to "nesting" to make this clearer to readers.
CC: Jamal Hadi Salim
Signed-off-by: Davide Caratti
Reviewed-by: Marcelo Ricardo Leitner
Acked-by: Jamal Hadi Salim
Signed-off-by: Paolo Abeni
Stable-dep-of: ca22da2fbd69 ("act_mirred: use the backlog for nested calls to mirred ingress")
Signed-off-by: Sasha Levin
---
 net/sched/act_mirred.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/net/sched/act_mirred.c b/net/sched/act_mirred.c
index efc963ab995a3..b28d49495de09 100644
--- a/net/sched/act_mirred.c
+++ b/net/sched/act_mirred.c
@@ -28,8 +28,8 @@
 static LIST_HEAD(mirred_list);
 static DEFINE_SPINLOCK(mirred_list_lock);
 
-#define MIRRED_RECURSION_LIMIT 4
-static DEFINE_PER_CPU(unsigned int, mirred_rec_level);
+#define MIRRED_NEST_LIMIT 4
+static DEFINE_PER_CPU(unsigned int, mirred_nest_level);
 
 static bool tcf_mirred_is_act_redirect(int action)
 {
@@ -223,7 +223,7 @@ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
 	struct sk_buff *skb2 = skb;
 	bool m_mac_header_xmit;
 	struct net_device *dev;
-	unsigned int rec_level;
+	unsigned int nest_level;
 	int retval, err = 0;
 	bool use_reinsert;
 	bool want_ingress;
@@ -234,11 +234,11 @@ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
 	int mac_len;
 	bool at_nh;
 
-	rec_level = __this_cpu_inc_return(mirred_rec_level);
-	if (unlikely(rec_level > MIRRED_RECURSION_LIMIT)) {
+	nest_level = __this_cpu_inc_return(mirred_nest_level);
+	if (unlikely(nest_level > MIRRED_NEST_LIMIT)) {
 		net_warn_ratelimited("Packet exceeded mirred recursion limit on dev %s\n",
 				     netdev_name(skb->dev));
-		__this_cpu_dec(mirred_rec_level);
+		__this_cpu_dec(mirred_nest_level);
 		return TC_ACT_SHOT;
 	}
 
@@ -308,7 +308,7 @@ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
 			err = tcf_mirred_forward(res->ingress, skb);
 			if (err)
 				tcf_action_inc_overlimit_qstats(&m->common);
-			__this_cpu_dec(mirred_rec_level);
+			__this_cpu_dec(mirred_nest_level);
 			return TC_ACT_CONSUMED;
 		}
 	}
@@ -320,7 +320,7 @@ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
 		if (tcf_mirred_is_act_redirect(m_eaction))
 			retval = TC_ACT_SHOT;
 	}
-	__this_cpu_dec(mirred_rec_level);
+	__this_cpu_dec(mirred_nest_level);
 
 	return retval;
 }
-- 
2.39.2
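
For readers who want to see the pattern the renamed counter implements, here is a minimal, stand-alone C sketch of a nesting-limit guard. It is a user-space analogy only, not the kernel code: a thread-local counter stands in for the kernel's per-CPU mirred_nest_level, and the names NEST_LIMIT, nest_level and handle_packet() are hypothetical.

/*
 * Minimal user-space sketch of the nesting-limit pattern described in
 * the commit message above.  A thread-local counter stands in for the
 * kernel's per-CPU mirred_nest_level; NEST_LIMIT, nest_level and
 * handle_packet() are hypothetical names, not kernel symbols.
 */
#include <stdio.h>

#define NEST_LIMIT 4

static __thread unsigned int nest_level;

static int handle_packet(int depth)
{
	int ret = 0;

	/* Bump the nesting counter on entry and refuse to go deeper
	 * than NEST_LIMIT, analogous to returning TC_ACT_SHOT. */
	if (++nest_level > NEST_LIMIT) {
		fprintf(stderr, "nesting limit exceeded at depth %d\n", depth);
		nest_level--;
		return -1;
	}

	/* Re-entering the handler models a redirect whose processing
	 * lands back in the same action on the same CPU. */
	if (depth < 10)
		ret = handle_packet(depth + 1);

	/* Always drop the counter on every exit path. */
	nest_level--;
	return ret;
}

int main(void)
{
	/* With NEST_LIMIT == 4, the fifth nested call is rejected. */
	return handle_packet(1) == -1 ? 0 : 1;
}

With NEST_LIMIT set to 4, the fifth nested call is rejected, mirroring how tcf_mirred_act() shoots the packet once mirred_nest_level exceeds MIRRED_NEST_LIMIT; the counter is decremented on every return path, just as the kernel function decrements it before each return.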