From: Saeed Mahameed <saeed@kernel.org>
To: "David S. Miller" <davem@davemloft.net>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
Eric Dumazet <edumazet@google.com>
Cc: Saeed Mahameed <saeedm@nvidia.com>,
netdev@vger.kernel.org, Tariq Toukan <tariqt@nvidia.com>,
Vlad Buslov <vladbu@nvidia.com>, Roi Dayan <roid@nvidia.com>
Subject: [net V2 6/9] net/mlx5e: Check for NOT_READY flag state after locking
Date: Wed, 5 Jul 2023 10:57:54 -0700
Message-ID: <20230705175757.284614-7-saeed@kernel.org>
In-Reply-To: <20230705175757.284614-1-saeed@kernel.org>
From: Vlad Buslov <vladbu@nvidia.com>
Currently the check for the NOT_READY flag is performed before obtaining the
necessary lock. This opens a window for a race condition in which the flow is
concurrently removed from the unready_flows list by the workqueue task,
leading to a double removal from the list and a crash [0]. Fix the issue by
moving the flag check inside the critical section protected by the
uplink_priv->unready_flows_lock mutex.
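Schematically, the race is the following interleaving (a simplified sketch:
the right-hand side stands for the unready-flows reoffload work and the
helper bodies are reduced, not quoted verbatim from the driver):

  tc delete task                      unready flows workqueue task
  --------------                      ----------------------------
  flow_flag_test(flow, NOT_READY)
    -> true
                                      mutex_lock(&unready_flows_lock)
                                      unready_flow_del(flow)
                                        list_del(&flow->unready)
                                        flow_flag_clear(flow, NOT_READY)
                                      mutex_unlock(&unready_flows_lock)
  remove_unready_flow(flow)
    mutex_lock(&unready_flows_lock)
    unready_flow_del(flow)
      list_del(&flow->unready)  /* second list_del() hits the
                                 * LIST_POISON values left by the
                                 * first one, hence the fault on
                                 * 0xdead000000000108 below */
    mutex_unlock(&unready_flows_lock)

After the fix the flag test runs under the same mutex, so whichever task
takes the lock first both removes the flow from the list and clears the
flag, and the other task sees NOT_READY cleared and skips the removal.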
[0]:
[44376.389654] general protection fault, probably for non-canonical address 0xdead000000000108: 0000 [#1] SMP
[44376.391665] CPU: 7 PID: 59123 Comm: tc Not tainted 6.4.0-rc4+ #1
[44376.392984] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
[44376.395342] RIP: 0010:mlx5e_tc_del_fdb_flow+0xb3/0x340 [mlx5_core]
[44376.396857] Code: 00 48 8b b8 68 ce 02 00 e8 8a 4d 02 00 4c 8d a8 a8 01 00 00 4c 89 ef e8 8b 79 88 e1 48 8b 83 98 06 00 00 48 8b 93 90 06 00 00 <48> 89 42 08 48 89 10 48 b8 00 01 00 00 00 00 ad de 48 89 83 90 06
[44376.399167] RSP: 0018:ffff88812cc97570 EFLAGS: 00010246
[44376.399680] RAX: dead000000000122 RBX: ffff8881088e3800 RCX: ffff8881881bac00
[44376.400337] RDX: dead000000000100 RSI: ffff88812cc97500 RDI: ffff8881242f71b0
[44376.401001] RBP: ffff88811cbb0940 R08: 0000000000000400 R09: 0000000000000001
[44376.401663] R10: 0000000000000001 R11: 0000000000000000 R12: ffff88812c944000
[44376.402342] R13: ffff8881242f71a8 R14: ffff8881222b4000 R15: 0000000000000000
[44376.402999] FS: 00007f0451104800(0000) GS:ffff88852cb80000(0000) knlGS:0000000000000000
[44376.403787] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[44376.404343] CR2: 0000000000489108 CR3: 0000000123a79003 CR4: 0000000000370ea0
[44376.405004] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[44376.405665] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[44376.406339] Call Trace:
[44376.406651] <TASK>
[44376.406939] ? die_addr+0x33/0x90
[44376.407311] ? exc_general_protection+0x192/0x390
[44376.407795] ? asm_exc_general_protection+0x22/0x30
[44376.408292] ? mlx5e_tc_del_fdb_flow+0xb3/0x340 [mlx5_core]
[44376.408876] __mlx5e_tc_del_fdb_peer_flow+0xbc/0xe0 [mlx5_core]
[44376.409482] mlx5e_tc_del_flow+0x42/0x210 [mlx5_core]
[44376.410055] mlx5e_flow_put+0x25/0x50 [mlx5_core]
[44376.410529] mlx5e_delete_flower+0x24b/0x350 [mlx5_core]
[44376.411043] tc_setup_cb_reoffload+0x22/0x80
[44376.411462] fl_reoffload+0x261/0x2f0 [cls_flower]
[44376.411907] ? mlx5e_rep_indr_setup_ft_cb+0x160/0x160 [mlx5_core]
[44376.412481] ? mlx5e_rep_indr_setup_ft_cb+0x160/0x160 [mlx5_core]
[44376.413044] tcf_block_playback_offloads+0x76/0x170
[44376.413497] tcf_block_unbind+0x7b/0xd0
[44376.413881] tcf_block_setup+0x17d/0x1c0
[44376.414269] tcf_block_offload_cmd.isra.0+0xf1/0x130
[44376.414725] tcf_block_offload_unbind+0x43/0x70
[44376.415153] __tcf_block_put+0x82/0x150
[44376.415532] ingress_destroy+0x22/0x30 [sch_ingress]
[44376.415986] qdisc_destroy+0x3b/0xd0
[44376.416343] qdisc_graft+0x4d0/0x620
[44376.416706] tc_get_qdisc+0x1c9/0x3b0
[44376.417074] rtnetlink_rcv_msg+0x29c/0x390
[44376.419978] ? rep_movs_alternative+0x3a/0xa0
[44376.420399] ? rtnl_calcit.isra.0+0x120/0x120
[44376.420813] netlink_rcv_skb+0x54/0x100
[44376.421192] netlink_unicast+0x1f6/0x2c0
[44376.421573] netlink_sendmsg+0x232/0x4a0
[44376.421980] sock_sendmsg+0x38/0x60
[44376.422328] ____sys_sendmsg+0x1d0/0x1e0
[44376.422709] ? copy_msghdr_from_user+0x6d/0xa0
[44376.423127] ___sys_sendmsg+0x80/0xc0
[44376.423495] ? ___sys_recvmsg+0x8b/0xc0
[44376.423869] __sys_sendmsg+0x51/0x90
[44376.424226] do_syscall_64+0x3d/0x90
[44376.424587] entry_SYSCALL_64_after_hwframe+0x46/0xb0
[44376.425046] RIP: 0033:0x7f045134f887
[44376.425403] Code: 0a 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b9 0f 1f 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 b8 2e 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 51 c3 48 83 ec 28 89 54 24 1c 48 89 74 24 10
[44376.426914] RSP: 002b:00007ffd63a82b98 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
[44376.427592] RAX: ffffffffffffffda RBX: 000000006481955f RCX: 00007f045134f887
[44376.428195] RDX: 0000000000000000 RSI: 00007ffd63a82c00 RDI: 0000000000000003
[44376.428796] RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000000
[44376.429404] R10: 00007f0451208708 R11: 0000000000000246 R12: 0000000000000001
[44376.430039] R13: 0000000000409980 R14: 000000000047e538 R15: 0000000000485400
[44376.430644] </TASK>
[44376.433936] Modules linked in: mlx5_ib mlx5_core act_mirred act_tunnel_key cls_flower vxlan dummy sch_ingress openvswitch nsh rpcrdma rdma_ucm ib_iser libiscsi scsi_transport_iscsi ib_umad rdma_cm ib_ipoib iw_cm ib_cm ib_uverbs ib_core xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xt_addrtype iptable_nat nf_nat br_netfilter rpcsec_gss_krb5 auth_rpcgss oid_registry overlay zram zsmalloc fuse [last unloaded: mlx5_core]
[44376.433936] ---[ end trace 0000000000000000 ]---
[44376.434373] RIP: 0010:mlx5e_tc_del_fdb_flow+0xb3/0x340 [mlx5_core]
[44376.434951] Code: 00 48 8b b8 68 ce 02 00 e8 8a 4d 02 00 4c 8d a8 a8 01 00 00 4c 89 ef e8 8b 79 88 e1 48 8b 83 98 06 00 00 48 8b 93 90 06 00 00 <48> 89 42 08 48 89 10 48 b8 00 01 00 00 00 00 ad de 48 89 83 90 06
[44376.436452] RSP: 0018:ffff88812cc97570 EFLAGS: 00010246
[44376.436924] RAX: dead000000000122 RBX: ffff8881088e3800 RCX: ffff8881881bac00
[44376.437530] RDX: dead000000000100 RSI: ffff88812cc97500 RDI: ffff8881242f71b0
[44376.438179] RBP: ffff88811cbb0940 R08: 0000000000000400 R09: 0000000000000001
[44376.438786] R10: 0000000000000001 R11: 0000000000000000 R12: ffff88812c944000
[44376.439393] R13: ffff8881242f71a8 R14: ffff8881222b4000 R15: 0000000000000000
[44376.439998] FS: 00007f0451104800(0000) GS:ffff88852cb80000(0000) knlGS:0000000000000000
[44376.440714] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[44376.441225] CR2: 0000000000489108 CR3: 0000000123a79003 CR4: 0000000000370ea0
[44376.441843] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[44376.442471] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Fixes: ad86755b18d5 ("net/mlx5e: Protect unready flows with dedicated lock")
Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 41dc26800f48..8d0a3f69693e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -1639,7 +1639,8 @@ static void remove_unready_flow(struct mlx5e_tc_flow *flow)
uplink_priv = &rpriv->uplink_priv;
mutex_lock(&uplink_priv->unready_flows_lock);
- unready_flow_del(flow);
+ if (flow_flag_test(flow, NOT_READY))
+ unready_flow_del(flow);
mutex_unlock(&uplink_priv->unready_flows_lock);
}
@@ -1932,8 +1933,7 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv,
esw_attr = attr->esw_attr;
mlx5e_put_flow_tunnel_id(flow);
- if (flow_flag_test(flow, NOT_READY))
- remove_unready_flow(flow);
+ remove_unready_flow(flow);
if (mlx5e_is_offloaded_flow(flow)) {
if (flow_flag_test(flow, SLOW))
--
2.41.0