* [PATCH net 0/2] Fixes for miss to tc action series
@ 2023-04-26 12:14 Vlad Buslov
  2023-04-26 12:14 ` [PATCH net 1/2] net/sched: flower: fix filter idr initialization Vlad Buslov
  2023-04-26 12:14 ` [PATCH net 2/2] net/sched: flower: fix error handler on replace Vlad Buslov
  0 siblings, 2 replies; 20+ messages in thread

From: Vlad Buslov @ 2023-04-26 12:14 UTC (permalink / raw)
  To: davem, kuba
  Cc: netdev, jhs, xiyou.wangcong, jiri, marcelo.leitner, paulb,
      simon.horman, Vlad Buslov

Vlad Buslov (2):
  net/sched: flower: fix filter idr initialization
  net/sched: flower: fix error handler on replace

 net/sched/cls_flower.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

-- 
2.39.2

^ permalink raw reply	[flat|nested] 20+ messages in thread
* [PATCH net 1/2] net/sched: flower: fix filter idr initialization 2023-04-26 12:14 [PATCH net 0/2] Fixes for miss to tc action series Vlad Buslov @ 2023-04-26 12:14 ` Vlad Buslov 2023-04-26 14:25 ` Simon Horman ` (2 more replies) 2023-04-26 12:14 ` [PATCH net 2/2] net/sched: flower: fix error handler on replace Vlad Buslov 1 sibling, 3 replies; 20+ messages in thread From: Vlad Buslov @ 2023-04-26 12:14 UTC (permalink / raw) To: davem, kuba Cc: netdev, jhs, xiyou.wangcong, jiri, marcelo.leitner, paulb, simon.horman, Vlad Buslov The cited commit moved idr initialization too early in fl_change() which allows concurrent users to access the filter that is still being initialized and is in inconsistent state, which, in turn, can cause NULL pointer dereference [0]. Since there is no obvious way to fix the ordering without reverting the whole cited commit, alternative approach taken to first insert NULL pointer into idr in order to allocate the handle but still cause fl_get() to return NULL and prevent concurrent users from seeing the filter while providing miss-to-action infrastructure with valid handle id early in fl_change(). [ 152.434728] general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] SMP KASAN [ 152.436163] KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007] [ 152.437269] CPU: 4 PID: 3877 Comm: tc Not tainted 6.3.0-rc4+ #5 [ 152.438110] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014 [ 152.439644] RIP: 0010:fl_dump_key+0x8b/0x1d10 [cls_flower] [ 152.440461] Code: 01 f2 02 f2 c7 40 08 04 f2 04 f2 c7 40 0c 04 f3 f3 f3 65 48 8b 04 25 28 00 00 00 48 89 84 24 00 01 00 00 48 89 c8 48 c1 e8 03 <0f> b6 04 10 84 c0 74 08 3c 03 0f 8e 98 19 00 00 8b 13 85 d2 74 57 [ 152.442885] RSP: 0018:ffff88817a28f158 EFLAGS: 00010246 [ 152.443851] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000 [ 152.444826] RDX: dffffc0000000000 RSI: ffffffff8500ae80 RDI: ffff88810a987900 [ 152.445791] RBP: ffff888179d88240 R08: ffff888179d8845c R09: ffff888179d88240 [ 152.446780] R10: ffffed102f451e48 R11: 00000000fffffff2 R12: ffff88810a987900 [ 152.447741] R13: ffffffff8500ae80 R14: ffff88810a987900 R15: ffff888149b3c738 [ 152.448756] FS: 00007f5eb2a34800(0000) GS:ffff88881ec00000(0000) knlGS:0000000000000000 [ 152.449888] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 152.450685] CR2: 000000000046ad19 CR3: 000000010b0bd006 CR4: 0000000000370ea0 [ 152.451641] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 152.452628] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [ 152.453588] Call Trace: [ 152.454032] <TASK> [ 152.454447] ? netlink_sendmsg+0x7a1/0xcb0 [ 152.455109] ? sock_sendmsg+0xc5/0x190 [ 152.455689] ? ____sys_sendmsg+0x535/0x6b0 [ 152.456320] ? ___sys_sendmsg+0xeb/0x170 [ 152.456916] ? do_syscall_64+0x3d/0x90 [ 152.457529] ? entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 152.458321] ? ___sys_sendmsg+0xeb/0x170 [ 152.458958] ? __sys_sendmsg+0xb5/0x140 [ 152.459564] ? do_syscall_64+0x3d/0x90 [ 152.460122] ? entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 152.460852] ? fl_dump_key_options.part.0+0xea0/0xea0 [cls_flower] [ 152.461710] ? _raw_spin_lock+0x7a/0xd0 [ 152.462299] ? _raw_read_lock_irq+0x30/0x30 [ 152.462924] ? nla_put+0x15e/0x1c0 [ 152.463480] fl_dump+0x228/0x650 [cls_flower] [ 152.464112] ? fl_tmplt_dump+0x210/0x210 [cls_flower] [ 152.464854] ? __kmem_cache_alloc_node+0x1a7/0x330 [ 152.465592] ? 
nla_put+0x15e/0x1c0 [ 152.466160] tcf_fill_node+0x515/0x9a0 [ 152.466766] ? tc_setup_offload_action+0xf0/0xf0 [ 152.467463] ? __alloc_skb+0x13c/0x2a0 [ 152.468067] ? __build_skb_around+0x330/0x330 [ 152.468814] ? fl_get+0x107/0x1a0 [cls_flower] [ 152.469503] tc_del_tfilter+0x718/0x1330 [ 152.470115] ? is_bpf_text_address+0xa/0x20 [ 152.470765] ? tc_ctl_chain+0xee0/0xee0 [ 152.471335] ? __kernel_text_address+0xe/0x30 [ 152.471948] ? unwind_get_return_address+0x56/0xa0 [ 152.472639] ? __thaw_task+0x150/0x150 [ 152.473218] ? arch_stack_walk+0x98/0xf0 [ 152.473839] ? __stack_depot_save+0x35/0x4c0 [ 152.474501] ? stack_trace_save+0x91/0xc0 [ 152.475119] ? security_capable+0x51/0x90 [ 152.475741] rtnetlink_rcv_msg+0x2c1/0x9d0 [ 152.476387] ? rtnl_calcit.isra.0+0x2b0/0x2b0 [ 152.477042] ? __sys_sendmsg+0xb5/0x140 [ 152.477664] ? do_syscall_64+0x3d/0x90 [ 152.478255] ? entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 152.479010] ? __stack_depot_save+0x35/0x4c0 [ 152.479679] ? __stack_depot_save+0x35/0x4c0 [ 152.480346] netlink_rcv_skb+0x12c/0x360 [ 152.480929] ? rtnl_calcit.isra.0+0x2b0/0x2b0 [ 152.481517] ? do_syscall_64+0x3d/0x90 [ 152.482061] ? netlink_ack+0x1550/0x1550 [ 152.482612] ? rhashtable_walk_peek+0x170/0x170 [ 152.483262] ? kmem_cache_alloc_node+0x1af/0x390 [ 152.483875] ? _copy_from_iter+0x3d6/0xc70 [ 152.484528] netlink_unicast+0x553/0x790 [ 152.485168] ? netlink_attachskb+0x6a0/0x6a0 [ 152.485848] ? unwind_next_frame+0x11cc/0x1a10 [ 152.486538] ? arch_stack_walk+0x61/0xf0 [ 152.487169] netlink_sendmsg+0x7a1/0xcb0 [ 152.487799] ? netlink_unicast+0x790/0x790 [ 152.488355] ? iovec_from_user.part.0+0x4d/0x220 [ 152.488990] ? _raw_spin_lock+0x7a/0xd0 [ 152.489598] ? netlink_unicast+0x790/0x790 [ 152.490236] sock_sendmsg+0xc5/0x190 [ 152.490796] ____sys_sendmsg+0x535/0x6b0 [ 152.491394] ? import_iovec+0x7/0x10 [ 152.491964] ? kernel_sendmsg+0x30/0x30 [ 152.492561] ? __copy_msghdr+0x3c0/0x3c0 [ 152.493160] ? do_syscall_64+0x3d/0x90 [ 152.493706] ___sys_sendmsg+0xeb/0x170 [ 152.494283] ? may_open_dev+0xd0/0xd0 [ 152.494858] ? copy_msghdr_from_user+0x110/0x110 [ 152.495541] ? __handle_mm_fault+0x2678/0x4ad0 [ 152.496205] ? copy_page_range+0x2360/0x2360 [ 152.496862] ? __fget_light+0x57/0x520 [ 152.497449] ? mas_find+0x1c0/0x1c0 [ 152.498026] ? sockfd_lookup_light+0x1a/0x140 [ 152.498703] __sys_sendmsg+0xb5/0x140 [ 152.499306] ? __sys_sendmsg_sock+0x20/0x20 [ 152.499951] ? 
do_user_addr_fault+0x369/0xd80 [ 152.500595] do_syscall_64+0x3d/0x90 [ 152.501185] entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 152.501917] RIP: 0033:0x7f5eb294f887 [ 152.502494] Code: 0a 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b9 0f 1f 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 b8 2e 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 51 c3 48 83 ec 28 89 54 24 1c 48 89 74 24 10 [ 152.505008] RSP: 002b:00007ffd2c708f78 EFLAGS: 00000246 ORIG_RAX: 000000000000002e [ 152.506152] RAX: ffffffffffffffda RBX: 00000000642d9472 RCX: 00007f5eb294f887 [ 152.507134] RDX: 0000000000000000 RSI: 00007ffd2c708fe0 RDI: 0000000000000003 [ 152.508113] RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000000 [ 152.509119] R10: 00007f5eb2808708 R11: 0000000000000246 R12: 0000000000000001 [ 152.510068] R13: 0000000000000000 R14: 00007ffd2c70d1b8 R15: 0000000000485400 [ 152.511031] </TASK> [ 152.511444] Modules linked in: cls_flower sch_ingress openvswitch nsh mlx5_vdpa vringh vhost_iotlb vdpa mlx5_ib mlx5_core rpcrdma rdma_ucm ib_iser libiscsi scsi_transport_iscsi ib_umad rdma_cm ib_ipoib iw_cm ib_cm ib_uverbs ib_core xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xt_addrtype iptable_nat nf_nat br_netfilter overlay zram zsmalloc fuse [last unloaded: mlx5_core] [ 152.515720] ---[ end trace 0000000000000000 ]--- Fixes: 08a0063df3ae ("net/sched: flower: Move filter handle initialization earlier") Signed-off-by: Vlad Buslov <vladbu@nvidia.com> --- net/sched/cls_flower.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c index 475fe222a855..1844545bef37 100644 --- a/net/sched/cls_flower.c +++ b/net/sched/cls_flower.c @@ -2210,10 +2210,10 @@ static int fl_change(struct net *net, struct sk_buff *in_skb, spin_lock(&tp->lock); if (!handle) { handle = 1; - err = idr_alloc_u32(&head->handle_idr, fnew, &handle, + err = idr_alloc_u32(&head->handle_idr, NULL, &handle, INT_MAX, GFP_ATOMIC); } else { - err = idr_alloc_u32(&head->handle_idr, fnew, &handle, + err = idr_alloc_u32(&head->handle_idr, NULL, &handle, handle, GFP_ATOMIC); /* Filter with specified handle was concurrently @@ -2378,7 +2378,7 @@ static void fl_walk(struct tcf_proto *tp, struct tcf_walker *arg, rcu_read_lock(); idr_for_each_entry_continue_ul(&head->handle_idr, f, tmp, id) { /* don't return filters that are being deleted */ - if (!refcount_inc_not_zero(&f->refcnt)) + if (!f || !refcount_inc_not_zero(&f->refcnt)) continue; rcu_read_unlock(); -- 2.39.2 ^ permalink raw reply related [flat|nested] 20+ messages in thread
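The core idea of the patch above: the handle is still allocated early in fl_change(), but the pointer stored in the IDR is deliberately NULL, so fl_get() keeps returning "no filter" and fl_walk() (taught by the hunk above to skip NULL entries) never sees the half-initialized filter; the real pointer is only published once initialization is complete. The following minimal userspace sketch models that ordering; the two arrays and the helper names (reserve_handle, publish, lookup) are invented stand-ins for the kernel IDR API, not cls_flower code.

/* Minimal userspace model of the ordering used in the patch above: reserve
 * the handle with a NULL pointer, keep lookups returning "no filter" until
 * initialization is done, then publish the real pointer.  The two arrays
 * stand in for the kernel IDR; reserve_handle/publish/lookup are invented
 * names, not kernel APIs.
 */
#include <stdio.h>
#include <stdlib.h>

#define MAX_HANDLES 16

static int allocated[MAX_HANDLES];	/* models "handle exists in the IDR" */
static void *stored[MAX_HANDLES];	/* models the pointer stored for it  */

struct filter {
	unsigned int handle;
	int fully_initialized;
};

/* Like idr_alloc_u32(&head->handle_idr, NULL, ...): the handle is taken,
 * but the stored pointer is deliberately NULL so readers see nothing.
 * Returns 0 when no handle is free.
 */
static unsigned int reserve_handle(void)
{
	for (unsigned int h = 1; h < MAX_HANDLES; h++) {
		if (!allocated[h]) {
			allocated[h] = 1;
			stored[h] = NULL;
			return h;
		}
	}
	return 0;
}

/* Like the later idr insert/replace step: make the filter visible. */
static void publish(struct filter *f)
{
	stored[f->handle] = f;
}

/* Like fl_get(): a reserved-but-unpublished handle looks exactly like a
 * nonexistent one, so a concurrent dump or delete cannot reach the
 * half-initialized filter.
 */
static struct filter *lookup(unsigned int handle)
{
	return handle < MAX_HANDLES ? stored[handle] : NULL;
}

int main(void)
{
	struct filter *f = calloc(1, sizeof(*f));

	if (!f)
		return 1;

	f->handle = reserve_handle();	/* handle id known early, as fl_change() needs */
	printf("before publish: lookup(%u) = %p\n", f->handle, (void *)lookup(f->handle));

	f->fully_initialized = 1;	/* ...the long filter initialization goes here... */
	publish(f);
	printf("after publish:  lookup(%u) = %p\n", f->handle, (void *)lookup(f->handle));

	free(f);
	return 0;
}

Under this scheme a concurrent dump or delete issued between reservation and publication behaves exactly as if the filter did not exist yet, while the miss-to-action infrastructure still gets a valid handle id early in fl_change(), which is what the commit message above asks for.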
* Re: [PATCH net 1/2] net/sched: flower: fix filter idr initialization 2023-04-26 12:14 ` [PATCH net 1/2] net/sched: flower: fix filter idr initialization Vlad Buslov @ 2023-04-26 14:25 ` Simon Horman 2023-04-26 14:27 ` Pedro Tammela 2023-04-27 5:53 ` Paul Blakey 2 siblings, 0 replies; 20+ messages in thread From: Simon Horman @ 2023-04-26 14:25 UTC (permalink / raw) To: Vlad Buslov Cc: davem, kuba, netdev, jhs, xiyou.wangcong, jiri, marcelo.leitner, paulb On Wed, Apr 26, 2023 at 02:14:14PM +0200, Vlad Buslov wrote: > The cited commit moved idr initialization too early in fl_change() which > allows concurrent users to access the filter that is still being > initialized and is in inconsistent state, which, in turn, can cause NULL > pointer dereference [0]. Since there is no obvious way to fix the ordering > without reverting the whole cited commit, alternative approach taken to > first insert NULL pointer into idr in order to allocate the handle but > still cause fl_get() to return NULL and prevent concurrent users from > seeing the filter while providing miss-to-action infrastructure with valid > handle id early in fl_change(). > > [ 152.434728] general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] SMP KASAN > [ 152.436163] KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007] > [ 152.437269] CPU: 4 PID: 3877 Comm: tc Not tainted 6.3.0-rc4+ #5 > [ 152.438110] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014 > [ 152.439644] RIP: 0010:fl_dump_key+0x8b/0x1d10 [cls_flower] > [ 152.440461] Code: 01 f2 02 f2 c7 40 08 04 f2 04 f2 c7 40 0c 04 f3 f3 f3 65 48 8b 04 25 28 00 00 00 48 89 84 24 00 01 00 00 48 89 c8 48 c1 e8 03 <0f> b6 04 10 84 c0 74 08 3c 03 0f 8e 98 19 00 00 8b 13 85 d2 74 57 > [ 152.442885] RSP: 0018:ffff88817a28f158 EFLAGS: 00010246 > [ 152.443851] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000 > [ 152.444826] RDX: dffffc0000000000 RSI: ffffffff8500ae80 RDI: ffff88810a987900 > [ 152.445791] RBP: ffff888179d88240 R08: ffff888179d8845c R09: ffff888179d88240 > [ 152.446780] R10: ffffed102f451e48 R11: 00000000fffffff2 R12: ffff88810a987900 > [ 152.447741] R13: ffffffff8500ae80 R14: ffff88810a987900 R15: ffff888149b3c738 > [ 152.448756] FS: 00007f5eb2a34800(0000) GS:ffff88881ec00000(0000) knlGS:0000000000000000 > [ 152.449888] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 > [ 152.450685] CR2: 000000000046ad19 CR3: 000000010b0bd006 CR4: 0000000000370ea0 > [ 152.451641] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 > [ 152.452628] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 > [ 152.453588] Call Trace: > [ 152.454032] <TASK> > [ 152.454447] ? netlink_sendmsg+0x7a1/0xcb0 > [ 152.455109] ? sock_sendmsg+0xc5/0x190 > [ 152.455689] ? ____sys_sendmsg+0x535/0x6b0 > [ 152.456320] ? ___sys_sendmsg+0xeb/0x170 > [ 152.456916] ? do_syscall_64+0x3d/0x90 > [ 152.457529] ? entry_SYSCALL_64_after_hwframe+0x46/0xb0 > [ 152.458321] ? ___sys_sendmsg+0xeb/0x170 > [ 152.458958] ? __sys_sendmsg+0xb5/0x140 > [ 152.459564] ? do_syscall_64+0x3d/0x90 > [ 152.460122] ? entry_SYSCALL_64_after_hwframe+0x46/0xb0 > [ 152.460852] ? fl_dump_key_options.part.0+0xea0/0xea0 [cls_flower] > [ 152.461710] ? _raw_spin_lock+0x7a/0xd0 > [ 152.462299] ? _raw_read_lock_irq+0x30/0x30 > [ 152.462924] ? nla_put+0x15e/0x1c0 > [ 152.463480] fl_dump+0x228/0x650 [cls_flower] > [ 152.464112] ? fl_tmplt_dump+0x210/0x210 [cls_flower] > [ 152.464854] ? 
__kmem_cache_alloc_node+0x1a7/0x330 > [ 152.465592] ? nla_put+0x15e/0x1c0 > [ 152.466160] tcf_fill_node+0x515/0x9a0 > [ 152.466766] ? tc_setup_offload_action+0xf0/0xf0 > [ 152.467463] ? __alloc_skb+0x13c/0x2a0 > [ 152.468067] ? __build_skb_around+0x330/0x330 > [ 152.468814] ? fl_get+0x107/0x1a0 [cls_flower] > [ 152.469503] tc_del_tfilter+0x718/0x1330 > [ 152.470115] ? is_bpf_text_address+0xa/0x20 > [ 152.470765] ? tc_ctl_chain+0xee0/0xee0 > [ 152.471335] ? __kernel_text_address+0xe/0x30 > [ 152.471948] ? unwind_get_return_address+0x56/0xa0 > [ 152.472639] ? __thaw_task+0x150/0x150 > [ 152.473218] ? arch_stack_walk+0x98/0xf0 > [ 152.473839] ? __stack_depot_save+0x35/0x4c0 > [ 152.474501] ? stack_trace_save+0x91/0xc0 > [ 152.475119] ? security_capable+0x51/0x90 > [ 152.475741] rtnetlink_rcv_msg+0x2c1/0x9d0 > [ 152.476387] ? rtnl_calcit.isra.0+0x2b0/0x2b0 > [ 152.477042] ? __sys_sendmsg+0xb5/0x140 > [ 152.477664] ? do_syscall_64+0x3d/0x90 > [ 152.478255] ? entry_SYSCALL_64_after_hwframe+0x46/0xb0 > [ 152.479010] ? __stack_depot_save+0x35/0x4c0 > [ 152.479679] ? __stack_depot_save+0x35/0x4c0 > [ 152.480346] netlink_rcv_skb+0x12c/0x360 > [ 152.480929] ? rtnl_calcit.isra.0+0x2b0/0x2b0 > [ 152.481517] ? do_syscall_64+0x3d/0x90 > [ 152.482061] ? netlink_ack+0x1550/0x1550 > [ 152.482612] ? rhashtable_walk_peek+0x170/0x170 > [ 152.483262] ? kmem_cache_alloc_node+0x1af/0x390 > [ 152.483875] ? _copy_from_iter+0x3d6/0xc70 > [ 152.484528] netlink_unicast+0x553/0x790 > [ 152.485168] ? netlink_attachskb+0x6a0/0x6a0 > [ 152.485848] ? unwind_next_frame+0x11cc/0x1a10 > [ 152.486538] ? arch_stack_walk+0x61/0xf0 > [ 152.487169] netlink_sendmsg+0x7a1/0xcb0 > [ 152.487799] ? netlink_unicast+0x790/0x790 > [ 152.488355] ? iovec_from_user.part.0+0x4d/0x220 > [ 152.488990] ? _raw_spin_lock+0x7a/0xd0 > [ 152.489598] ? netlink_unicast+0x790/0x790 > [ 152.490236] sock_sendmsg+0xc5/0x190 > [ 152.490796] ____sys_sendmsg+0x535/0x6b0 > [ 152.491394] ? import_iovec+0x7/0x10 > [ 152.491964] ? kernel_sendmsg+0x30/0x30 > [ 152.492561] ? __copy_msghdr+0x3c0/0x3c0 > [ 152.493160] ? do_syscall_64+0x3d/0x90 > [ 152.493706] ___sys_sendmsg+0xeb/0x170 > [ 152.494283] ? may_open_dev+0xd0/0xd0 > [ 152.494858] ? copy_msghdr_from_user+0x110/0x110 > [ 152.495541] ? __handle_mm_fault+0x2678/0x4ad0 > [ 152.496205] ? copy_page_range+0x2360/0x2360 > [ 152.496862] ? __fget_light+0x57/0x520 > [ 152.497449] ? mas_find+0x1c0/0x1c0 > [ 152.498026] ? sockfd_lookup_light+0x1a/0x140 > [ 152.498703] __sys_sendmsg+0xb5/0x140 > [ 152.499306] ? __sys_sendmsg_sock+0x20/0x20 > [ 152.499951] ? 
do_user_addr_fault+0x369/0xd80 > [ 152.500595] do_syscall_64+0x3d/0x90 > [ 152.501185] entry_SYSCALL_64_after_hwframe+0x46/0xb0 > [ 152.501917] RIP: 0033:0x7f5eb294f887 > [ 152.502494] Code: 0a 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b9 0f 1f 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 b8 2e 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 51 c3 48 83 ec 28 89 54 24 1c 48 89 74 24 10 > [ 152.505008] RSP: 002b:00007ffd2c708f78 EFLAGS: 00000246 ORIG_RAX: 000000000000002e > [ 152.506152] RAX: ffffffffffffffda RBX: 00000000642d9472 RCX: 00007f5eb294f887 > [ 152.507134] RDX: 0000000000000000 RSI: 00007ffd2c708fe0 RDI: 0000000000000003 > [ 152.508113] RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000000 > [ 152.509119] R10: 00007f5eb2808708 R11: 0000000000000246 R12: 0000000000000001 > [ 152.510068] R13: 0000000000000000 R14: 00007ffd2c70d1b8 R15: 0000000000485400 > [ 152.511031] </TASK> > [ 152.511444] Modules linked in: cls_flower sch_ingress openvswitch nsh mlx5_vdpa vringh vhost_iotlb vdpa mlx5_ib mlx5_core rpcrdma rdma_ucm ib_iser libiscsi scsi_transport_iscsi ib_umad rdma_cm ib_ipoib iw_cm ib_cm ib_uverbs ib_core xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xt_addrtype iptable_nat nf_nat br_netfilter overlay zram zsmalloc fuse [last unloaded: mlx5_core] > [ 152.515720] ---[ end trace 0000000000000000 ]--- > > Fixes: 08a0063df3ae ("net/sched: flower: Move filter handle initialization earlier") > Signed-off-by: Vlad Buslov <vladbu@nvidia.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH net 1/2] net/sched: flower: fix filter idr initialization 2023-04-26 12:14 ` [PATCH net 1/2] net/sched: flower: fix filter idr initialization Vlad Buslov 2023-04-26 14:25 ` Simon Horman @ 2023-04-26 14:27 ` Pedro Tammela 2023-04-27 5:53 ` Paul Blakey 2 siblings, 0 replies; 20+ messages in thread From: Pedro Tammela @ 2023-04-26 14:27 UTC (permalink / raw) To: Vlad Buslov, davem, kuba Cc: netdev, jhs, xiyou.wangcong, jiri, marcelo.leitner, paulb, simon.horman On 26/04/2023 09:14, Vlad Buslov wrote: > The cited commit moved idr initialization too early in fl_change() which > allows concurrent users to access the filter that is still being > initialized and is in inconsistent state, which, in turn, can cause NULL > pointer dereference [0]. Since there is no obvious way to fix the ordering > without reverting the whole cited commit, alternative approach taken to > first insert NULL pointer into idr in order to allocate the handle but > still cause fl_get() to return NULL and prevent concurrent users from > seeing the filter while providing miss-to-action infrastructure with valid > handle id early in fl_change(). > > [ 152.434728] general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] SMP KASAN > [ 152.436163] KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007] > [ 152.437269] CPU: 4 PID: 3877 Comm: tc Not tainted 6.3.0-rc4+ #5 > [ 152.438110] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014 > [ 152.439644] RIP: 0010:fl_dump_key+0x8b/0x1d10 [cls_flower] > [ 152.440461] Code: 01 f2 02 f2 c7 40 08 04 f2 04 f2 c7 40 0c 04 f3 f3 f3 65 48 8b 04 25 28 00 00 00 48 89 84 24 00 01 00 00 48 89 c8 48 c1 e8 03 <0f> b6 04 10 84 c0 74 08 3c 03 0f 8e 98 19 00 00 8b 13 85 d2 74 57 > [ 152.442885] RSP: 0018:ffff88817a28f158 EFLAGS: 00010246 > [ 152.443851] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000 > [ 152.444826] RDX: dffffc0000000000 RSI: ffffffff8500ae80 RDI: ffff88810a987900 > [ 152.445791] RBP: ffff888179d88240 R08: ffff888179d8845c R09: ffff888179d88240 > [ 152.446780] R10: ffffed102f451e48 R11: 00000000fffffff2 R12: ffff88810a987900 > [ 152.447741] R13: ffffffff8500ae80 R14: ffff88810a987900 R15: ffff888149b3c738 > [ 152.448756] FS: 00007f5eb2a34800(0000) GS:ffff88881ec00000(0000) knlGS:0000000000000000 > [ 152.449888] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 > [ 152.450685] CR2: 000000000046ad19 CR3: 000000010b0bd006 CR4: 0000000000370ea0 > [ 152.451641] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 > [ 152.452628] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 > [ 152.453588] Call Trace: > [ 152.454032] <TASK> > [ 152.454447] ? netlink_sendmsg+0x7a1/0xcb0 > [ 152.455109] ? sock_sendmsg+0xc5/0x190 > [ 152.455689] ? ____sys_sendmsg+0x535/0x6b0 > [ 152.456320] ? ___sys_sendmsg+0xeb/0x170 > [ 152.456916] ? do_syscall_64+0x3d/0x90 > [ 152.457529] ? entry_SYSCALL_64_after_hwframe+0x46/0xb0 > [ 152.458321] ? ___sys_sendmsg+0xeb/0x170 > [ 152.458958] ? __sys_sendmsg+0xb5/0x140 > [ 152.459564] ? do_syscall_64+0x3d/0x90 > [ 152.460122] ? entry_SYSCALL_64_after_hwframe+0x46/0xb0 > [ 152.460852] ? fl_dump_key_options.part.0+0xea0/0xea0 [cls_flower] > [ 152.461710] ? _raw_spin_lock+0x7a/0xd0 > [ 152.462299] ? _raw_read_lock_irq+0x30/0x30 > [ 152.462924] ? nla_put+0x15e/0x1c0 > [ 152.463480] fl_dump+0x228/0x650 [cls_flower] > [ 152.464112] ? fl_tmplt_dump+0x210/0x210 [cls_flower] > [ 152.464854] ? 
__kmem_cache_alloc_node+0x1a7/0x330 > [ 152.465592] ? nla_put+0x15e/0x1c0 > [ 152.466160] tcf_fill_node+0x515/0x9a0 > [ 152.466766] ? tc_setup_offload_action+0xf0/0xf0 > [ 152.467463] ? __alloc_skb+0x13c/0x2a0 > [ 152.468067] ? __build_skb_around+0x330/0x330 > [ 152.468814] ? fl_get+0x107/0x1a0 [cls_flower] > [ 152.469503] tc_del_tfilter+0x718/0x1330 > [ 152.470115] ? is_bpf_text_address+0xa/0x20 > [ 152.470765] ? tc_ctl_chain+0xee0/0xee0 > [ 152.471335] ? __kernel_text_address+0xe/0x30 > [ 152.471948] ? unwind_get_return_address+0x56/0xa0 > [ 152.472639] ? __thaw_task+0x150/0x150 > [ 152.473218] ? arch_stack_walk+0x98/0xf0 > [ 152.473839] ? __stack_depot_save+0x35/0x4c0 > [ 152.474501] ? stack_trace_save+0x91/0xc0 > [ 152.475119] ? security_capable+0x51/0x90 > [ 152.475741] rtnetlink_rcv_msg+0x2c1/0x9d0 > [ 152.476387] ? rtnl_calcit.isra.0+0x2b0/0x2b0 > [ 152.477042] ? __sys_sendmsg+0xb5/0x140 > [ 152.477664] ? do_syscall_64+0x3d/0x90 > [ 152.478255] ? entry_SYSCALL_64_after_hwframe+0x46/0xb0 > [ 152.479010] ? __stack_depot_save+0x35/0x4c0 > [ 152.479679] ? __stack_depot_save+0x35/0x4c0 > [ 152.480346] netlink_rcv_skb+0x12c/0x360 > [ 152.480929] ? rtnl_calcit.isra.0+0x2b0/0x2b0 > [ 152.481517] ? do_syscall_64+0x3d/0x90 > [ 152.482061] ? netlink_ack+0x1550/0x1550 > [ 152.482612] ? rhashtable_walk_peek+0x170/0x170 > [ 152.483262] ? kmem_cache_alloc_node+0x1af/0x390 > [ 152.483875] ? _copy_from_iter+0x3d6/0xc70 > [ 152.484528] netlink_unicast+0x553/0x790 > [ 152.485168] ? netlink_attachskb+0x6a0/0x6a0 > [ 152.485848] ? unwind_next_frame+0x11cc/0x1a10 > [ 152.486538] ? arch_stack_walk+0x61/0xf0 > [ 152.487169] netlink_sendmsg+0x7a1/0xcb0 > [ 152.487799] ? netlink_unicast+0x790/0x790 > [ 152.488355] ? iovec_from_user.part.0+0x4d/0x220 > [ 152.488990] ? _raw_spin_lock+0x7a/0xd0 > [ 152.489598] ? netlink_unicast+0x790/0x790 > [ 152.490236] sock_sendmsg+0xc5/0x190 > [ 152.490796] ____sys_sendmsg+0x535/0x6b0 > [ 152.491394] ? import_iovec+0x7/0x10 > [ 152.491964] ? kernel_sendmsg+0x30/0x30 > [ 152.492561] ? __copy_msghdr+0x3c0/0x3c0 > [ 152.493160] ? do_syscall_64+0x3d/0x90 > [ 152.493706] ___sys_sendmsg+0xeb/0x170 > [ 152.494283] ? may_open_dev+0xd0/0xd0 > [ 152.494858] ? copy_msghdr_from_user+0x110/0x110 > [ 152.495541] ? __handle_mm_fault+0x2678/0x4ad0 > [ 152.496205] ? copy_page_range+0x2360/0x2360 > [ 152.496862] ? __fget_light+0x57/0x520 > [ 152.497449] ? mas_find+0x1c0/0x1c0 > [ 152.498026] ? sockfd_lookup_light+0x1a/0x140 > [ 152.498703] __sys_sendmsg+0xb5/0x140 > [ 152.499306] ? __sys_sendmsg_sock+0x20/0x20 > [ 152.499951] ? 
do_user_addr_fault+0x369/0xd80 > [ 152.500595] do_syscall_64+0x3d/0x90 > [ 152.501185] entry_SYSCALL_64_after_hwframe+0x46/0xb0 > [ 152.501917] RIP: 0033:0x7f5eb294f887 > [ 152.502494] Code: 0a 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b9 0f 1f 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 b8 2e 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 51 c3 48 83 ec 28 89 54 24 1c 48 89 74 24 10 > [ 152.505008] RSP: 002b:00007ffd2c708f78 EFLAGS: 00000246 ORIG_RAX: 000000000000002e > [ 152.506152] RAX: ffffffffffffffda RBX: 00000000642d9472 RCX: 00007f5eb294f887 > [ 152.507134] RDX: 0000000000000000 RSI: 00007ffd2c708fe0 RDI: 0000000000000003 > [ 152.508113] RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000000 > [ 152.509119] R10: 00007f5eb2808708 R11: 0000000000000246 R12: 0000000000000001 > [ 152.510068] R13: 0000000000000000 R14: 00007ffd2c70d1b8 R15: 0000000000485400 > [ 152.511031] </TASK> > [ 152.511444] Modules linked in: cls_flower sch_ingress openvswitch nsh mlx5_vdpa vringh vhost_iotlb vdpa mlx5_ib mlx5_core rpcrdma rdma_ucm ib_iser libiscsi scsi_transport_iscsi ib_umad rdma_cm ib_ipoib iw_cm ib_cm ib_uverbs ib_core xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xt_addrtype iptable_nat nf_nat br_netfilter overlay zram zsmalloc fuse [last unloaded: mlx5_core] > [ 152.515720] ---[ end trace 0000000000000000 ]--- > > Fixes: 08a0063df3ae ("net/sched: flower: Move filter handle initialization earlier") > Signed-off-by: Vlad Buslov <vladbu@nvidia.com>| LGTM, Reviewed-by: Pedro Tammela <pctammela@mojatatu.com> > --- > net/sched/cls_flower.c | 6 +++--- > 1 file changed, 3 insertions(+), 3 deletions(-) > > diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c > index 475fe222a855..1844545bef37 100644 > --- a/net/sched/cls_flower.c > +++ b/net/sched/cls_flower.c > @@ -2210,10 +2210,10 @@ static int fl_change(struct net *net, struct sk_buff *in_skb, > spin_lock(&tp->lock); > if (!handle) { > handle = 1; > - err = idr_alloc_u32(&head->handle_idr, fnew, &handle, > + err = idr_alloc_u32(&head->handle_idr, NULL, &handle, > INT_MAX, GFP_ATOMIC); > } else { > - err = idr_alloc_u32(&head->handle_idr, fnew, &handle, > + err = idr_alloc_u32(&head->handle_idr, NULL, &handle, > handle, GFP_ATOMIC); > > /* Filter with specified handle was concurrently > @@ -2378,7 +2378,7 @@ static void fl_walk(struct tcf_proto *tp, struct tcf_walker *arg, > rcu_read_lock(); > idr_for_each_entry_continue_ul(&head->handle_idr, f, tmp, id) { > /* don't return filters that are being deleted */ > - if (!refcount_inc_not_zero(&f->refcnt)) > + if (!f || !refcount_inc_not_zero(&f->refcnt)) > continue; > rcu_read_unlock(); > ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH net 1/2] net/sched: flower: fix filter idr initialization 2023-04-26 12:14 ` [PATCH net 1/2] net/sched: flower: fix filter idr initialization Vlad Buslov 2023-04-26 14:25 ` Simon Horman 2023-04-26 14:27 ` Pedro Tammela @ 2023-04-27 5:53 ` Paul Blakey 2 siblings, 0 replies; 20+ messages in thread From: Paul Blakey @ 2023-04-27 5:53 UTC (permalink / raw) To: Vlad Buslov, davem, kuba Cc: netdev, jhs, xiyou.wangcong, jiri, marcelo.leitner, simon.horman On 26/04/2023 15:14, Vlad Buslov wrote: > The cited commit moved idr initialization too early in fl_change() which > allows concurrent users to access the filter that is still being > initialized and is in inconsistent state, which, in turn, can cause NULL > pointer dereference [0]. Since there is no obvious way to fix the ordering > without reverting the whole cited commit, alternative approach taken to > first insert NULL pointer into idr in order to allocate the handle but > still cause fl_get() to return NULL and prevent concurrent users from > seeing the filter while providing miss-to-action infrastructure with valid > handle id early in fl_change(). > > [ 152.434728] general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] SMP KASAN > [ 152.436163] KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007] > [ 152.437269] CPU: 4 PID: 3877 Comm: tc Not tainted 6.3.0-rc4+ #5 > [ 152.438110] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014 > [ 152.439644] RIP: 0010:fl_dump_key+0x8b/0x1d10 [cls_flower] > [ 152.440461] Code: 01 f2 02 f2 c7 40 08 04 f2 04 f2 c7 40 0c 04 f3 f3 f3 65 48 8b 04 25 28 00 00 00 48 89 84 24 00 01 00 00 48 89 c8 48 c1 e8 03 <0f> b6 04 10 84 c0 74 08 3c 03 0f 8e 98 19 00 00 8b 13 85 d2 74 57 > [ 152.442885] RSP: 0018:ffff88817a28f158 EFLAGS: 00010246 > [ 152.443851] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000 > [ 152.444826] RDX: dffffc0000000000 RSI: ffffffff8500ae80 RDI: ffff88810a987900 > [ 152.445791] RBP: ffff888179d88240 R08: ffff888179d8845c R09: ffff888179d88240 > [ 152.446780] R10: ffffed102f451e48 R11: 00000000fffffff2 R12: ffff88810a987900 > [ 152.447741] R13: ffffffff8500ae80 R14: ffff88810a987900 R15: ffff888149b3c738 > [ 152.448756] FS: 00007f5eb2a34800(0000) GS:ffff88881ec00000(0000) knlGS:0000000000000000 > [ 152.449888] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 > [ 152.450685] CR2: 000000000046ad19 CR3: 000000010b0bd006 CR4: 0000000000370ea0 > [ 152.451641] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 > [ 152.452628] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 > [ 152.453588] Call Trace: > [ 152.454032] <TASK> > [ 152.454447] ? netlink_sendmsg+0x7a1/0xcb0 > [ 152.455109] ? sock_sendmsg+0xc5/0x190 > [ 152.455689] ? ____sys_sendmsg+0x535/0x6b0 > [ 152.456320] ? ___sys_sendmsg+0xeb/0x170 > [ 152.456916] ? do_syscall_64+0x3d/0x90 > [ 152.457529] ? entry_SYSCALL_64_after_hwframe+0x46/0xb0 > [ 152.458321] ? ___sys_sendmsg+0xeb/0x170 > [ 152.458958] ? __sys_sendmsg+0xb5/0x140 > [ 152.459564] ? do_syscall_64+0x3d/0x90 > [ 152.460122] ? entry_SYSCALL_64_after_hwframe+0x46/0xb0 > [ 152.460852] ? fl_dump_key_options.part.0+0xea0/0xea0 [cls_flower] > [ 152.461710] ? _raw_spin_lock+0x7a/0xd0 > [ 152.462299] ? _raw_read_lock_irq+0x30/0x30 > [ 152.462924] ? nla_put+0x15e/0x1c0 > [ 152.463480] fl_dump+0x228/0x650 [cls_flower] > [ 152.464112] ? fl_tmplt_dump+0x210/0x210 [cls_flower] > [ 152.464854] ? 
__kmem_cache_alloc_node+0x1a7/0x330 > [ 152.465592] ? nla_put+0x15e/0x1c0 > [ 152.466160] tcf_fill_node+0x515/0x9a0 > [ 152.466766] ? tc_setup_offload_action+0xf0/0xf0 > [ 152.467463] ? __alloc_skb+0x13c/0x2a0 > [ 152.468067] ? __build_skb_around+0x330/0x330 > [ 152.468814] ? fl_get+0x107/0x1a0 [cls_flower] > [ 152.469503] tc_del_tfilter+0x718/0x1330 > [ 152.470115] ? is_bpf_text_address+0xa/0x20 > [ 152.470765] ? tc_ctl_chain+0xee0/0xee0 > [ 152.471335] ? __kernel_text_address+0xe/0x30 > [ 152.471948] ? unwind_get_return_address+0x56/0xa0 > [ 152.472639] ? __thaw_task+0x150/0x150 > [ 152.473218] ? arch_stack_walk+0x98/0xf0 > [ 152.473839] ? __stack_depot_save+0x35/0x4c0 > [ 152.474501] ? stack_trace_save+0x91/0xc0 > [ 152.475119] ? security_capable+0x51/0x90 > [ 152.475741] rtnetlink_rcv_msg+0x2c1/0x9d0 > [ 152.476387] ? rtnl_calcit.isra.0+0x2b0/0x2b0 > [ 152.477042] ? __sys_sendmsg+0xb5/0x140 > [ 152.477664] ? do_syscall_64+0x3d/0x90 > [ 152.478255] ? entry_SYSCALL_64_after_hwframe+0x46/0xb0 > [ 152.479010] ? __stack_depot_save+0x35/0x4c0 > [ 152.479679] ? __stack_depot_save+0x35/0x4c0 > [ 152.480346] netlink_rcv_skb+0x12c/0x360 > [ 152.480929] ? rtnl_calcit.isra.0+0x2b0/0x2b0 > [ 152.481517] ? do_syscall_64+0x3d/0x90 > [ 152.482061] ? netlink_ack+0x1550/0x1550 > [ 152.482612] ? rhashtable_walk_peek+0x170/0x170 > [ 152.483262] ? kmem_cache_alloc_node+0x1af/0x390 > [ 152.483875] ? _copy_from_iter+0x3d6/0xc70 > [ 152.484528] netlink_unicast+0x553/0x790 > [ 152.485168] ? netlink_attachskb+0x6a0/0x6a0 > [ 152.485848] ? unwind_next_frame+0x11cc/0x1a10 > [ 152.486538] ? arch_stack_walk+0x61/0xf0 > [ 152.487169] netlink_sendmsg+0x7a1/0xcb0 > [ 152.487799] ? netlink_unicast+0x790/0x790 > [ 152.488355] ? iovec_from_user.part.0+0x4d/0x220 > [ 152.488990] ? _raw_spin_lock+0x7a/0xd0 > [ 152.489598] ? netlink_unicast+0x790/0x790 > [ 152.490236] sock_sendmsg+0xc5/0x190 > [ 152.490796] ____sys_sendmsg+0x535/0x6b0 > [ 152.491394] ? import_iovec+0x7/0x10 > [ 152.491964] ? kernel_sendmsg+0x30/0x30 > [ 152.492561] ? __copy_msghdr+0x3c0/0x3c0 > [ 152.493160] ? do_syscall_64+0x3d/0x90 > [ 152.493706] ___sys_sendmsg+0xeb/0x170 > [ 152.494283] ? may_open_dev+0xd0/0xd0 > [ 152.494858] ? copy_msghdr_from_user+0x110/0x110 > [ 152.495541] ? __handle_mm_fault+0x2678/0x4ad0 > [ 152.496205] ? copy_page_range+0x2360/0x2360 > [ 152.496862] ? __fget_light+0x57/0x520 > [ 152.497449] ? mas_find+0x1c0/0x1c0 > [ 152.498026] ? sockfd_lookup_light+0x1a/0x140 > [ 152.498703] __sys_sendmsg+0xb5/0x140 > [ 152.499306] ? __sys_sendmsg_sock+0x20/0x20 > [ 152.499951] ? 
do_user_addr_fault+0x369/0xd80 > [ 152.500595] do_syscall_64+0x3d/0x90 > [ 152.501185] entry_SYSCALL_64_after_hwframe+0x46/0xb0 > [ 152.501917] RIP: 0033:0x7f5eb294f887 > [ 152.502494] Code: 0a 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b9 0f 1f 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 b8 2e 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 51 c3 48 83 ec 28 89 54 24 1c 48 89 74 24 10 > [ 152.505008] RSP: 002b:00007ffd2c708f78 EFLAGS: 00000246 ORIG_RAX: 000000000000002e > [ 152.506152] RAX: ffffffffffffffda RBX: 00000000642d9472 RCX: 00007f5eb294f887 > [ 152.507134] RDX: 0000000000000000 RSI: 00007ffd2c708fe0 RDI: 0000000000000003 > [ 152.508113] RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000000 > [ 152.509119] R10: 00007f5eb2808708 R11: 0000000000000246 R12: 0000000000000001 > [ 152.510068] R13: 0000000000000000 R14: 00007ffd2c70d1b8 R15: 0000000000485400 > [ 152.511031] </TASK> > [ 152.511444] Modules linked in: cls_flower sch_ingress openvswitch nsh mlx5_vdpa vringh vhost_iotlb vdpa mlx5_ib mlx5_core rpcrdma rdma_ucm ib_iser libiscsi scsi_transport_iscsi ib_umad rdma_cm ib_ipoib iw_cm ib_cm ib_uverbs ib_core xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xt_addrtype iptable_nat nf_nat br_netfilter overlay zram zsmalloc fuse [last unloaded: mlx5_core] > [ 152.515720] ---[ end trace 0000000000000000 ]--- > > Fixes: 08a0063df3ae ("net/sched: flower: Move filter handle initialization earlier") > Signed-off-by: Vlad Buslov <vladbu@nvidia.com> Reviewed-by: Paul Blakey <paulb@nvidia.com> ^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH net 2/2] net/sched: flower: fix error handler on replace
  2023-04-26 12:14 [PATCH net 0/2] Fixes for miss to tc action series Vlad Buslov
  2023-04-26 12:14 ` [PATCH net 1/2] net/sched: flower: fix filter idr initialization Vlad Buslov
@ 2023-04-26 12:14 ` Vlad Buslov
  2023-04-26 14:06   ` Pedro Tammela
  ` (2 more replies)
  1 sibling, 3 replies; 20+ messages in thread

From: Vlad Buslov @ 2023-04-26 12:14 UTC (permalink / raw)
  To: davem, kuba
  Cc: netdev, jhs, xiyou.wangcong, jiri, marcelo.leitner, paulb,
      simon.horman, Vlad Buslov

When replacing a filter (i.e. 'fold' pointer is not NULL) the insertion of
new filter to idr is postponed until later in code since handle is already
provided by the user. However, the error handling code in fl_change()
always assumes that the new filter had been inserted into idr. If error
handler is reached when replacing existing filter it may remove it from idr
therefore making it unreachable for delete or dump afterwards. Fix the
issue by verifying that 'fold' argument wasn't provided by caller before
calling idr_remove().

Fixes: 08a0063df3ae ("net/sched: flower: Move filter handle initialization earlier")
Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
---
 net/sched/cls_flower.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
index 1844545bef37..a1c4ee2e0be2 100644
--- a/net/sched/cls_flower.c
+++ b/net/sched/cls_flower.c
@@ -2339,7 +2339,8 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
 errout_mask:
 	fl_mask_put(head, fnew->mask);
 errout_idr:
-	idr_remove(&head->handle_idr, fnew->handle);
+	if (!fold)
+		idr_remove(&head->handle_idr, fnew->handle);
 	__fl_put(fnew);
 errout_tb:
 	kfree(tb);
-- 
2.39.2

^ permalink raw reply related	[flat|nested] 20+ messages in thread
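The rule being applied here is the usual one for error unwinding: only undo what this call path itself inserted. On the replace path the handle in the IDR still belongs to the old filter, so removing it on failure would orphan that filter for later delete or dump, exactly as the commit message describes. Below is a small self-contained sketch of that rule, with a plain pointer array standing in for head->handle_idr; the names are illustrative rather than taken from cls_flower.

/* Self-contained sketch of the error-path rule: the handle is dropped only
 * when this call created it (fold == NULL).  Without the check, a failed
 * replace would also drop the handle and orphan the existing filter.  A
 * plain pointer array stands in for head->handle_idr; nothing here is the
 * actual cls_flower code.
 */
#include <stdio.h>

#define MAX_HANDLES 8

static void *handle_idr[MAX_HANDLES];

struct filter { const char *name; };

static void errout_idr(unsigned int handle, struct filter *fold)
{
	if (!fold)
		handle_idr[handle] = NULL;	/* create path: undo our own allocation */
	/* replace path: the handle still maps the old filter, keep it reachable */
}

int main(void)
{
	struct filter old = { .name = "existing filter" };

	handle_idr[1] = &old;		/* a filter is already installed at handle 1 */
	errout_idr(1, &old);		/* a replace attempt failed somewhere late   */

	printf("handle 1 -> %s\n",
	       handle_idr[1] ? ((struct filter *)handle_idr[1])->name : "(lost)");
	return 0;
}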
* Re: [PATCH net 2/2] net/sched: flower: fix error handler on replace 2023-04-26 12:14 ` [PATCH net 2/2] net/sched: flower: fix error handler on replace Vlad Buslov @ 2023-04-26 14:06 ` Pedro Tammela 2023-04-26 14:22 ` Pedro Tammela 2023-04-27 5:52 ` Paul Blakey 2 siblings, 0 replies; 20+ messages in thread From: Pedro Tammela @ 2023-04-26 14:06 UTC (permalink / raw) To: Vlad Buslov, davem, kuba Cc: netdev, jhs, xiyou.wangcong, jiri, marcelo.leitner, paulb, simon.horman On 26/04/2023 09:14, Vlad Buslov wrote: > When replacing a filter (i.e. 'fold' pointer is not NULL) the insertion of > new filter to idr is postponed until later in code since handle is already > provided by the user. However, the error handling code in fl_change() > always assumes that the new filter had been inserted into idr. If error > handler is reached when replacing existing filter it may remove it from idr > therefore making it unreachable for delete or dump afterwards. Fix the > issue by verifying that 'fold' argument wasn't provided by caller before > calling idr_remove(). > > Fixes: 08a0063df3ae ("net/sched: flower: Move filter handle initialization earlier") > Signed-off-by: Vlad Buslov <vladbu@nvidia.com> LGTM Reviewed-by: Pedro Tammela <pctammela@mojatatu.com> > --- > net/sched/cls_flower.c | 3 ++- > 1 file changed, 2 insertions(+), 1 deletion(-) > > diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c > index 1844545bef37..a1c4ee2e0be2 100644 > --- a/net/sched/cls_flower.c > +++ b/net/sched/cls_flower.c > @@ -2339,7 +2339,8 @@ static int fl_change(struct net *net, struct sk_buff *in_skb, > errout_mask: > fl_mask_put(head, fnew->mask); > errout_idr: > - idr_remove(&head->handle_idr, fnew->handle); > + if (!fold) > + idr_remove(&head->handle_idr, fnew->handle); > __fl_put(fnew); > errout_tb: > kfree(tb); ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH net 2/2] net/sched: flower: fix error handler on replace 2023-04-26 12:14 ` [PATCH net 2/2] net/sched: flower: fix error handler on replace Vlad Buslov 2023-04-26 14:06 ` Pedro Tammela @ 2023-04-26 14:22 ` Pedro Tammela 2023-04-26 14:46 ` Vlad Buslov 2023-04-27 5:52 ` Paul Blakey 2 siblings, 1 reply; 20+ messages in thread From: Pedro Tammela @ 2023-04-26 14:22 UTC (permalink / raw) To: Vlad Buslov, davem, kuba Cc: netdev, jhs, xiyou.wangcong, jiri, marcelo.leitner, paulb, simon.horman On 26/04/2023 09:14, Vlad Buslov wrote: > When replacing a filter (i.e. 'fold' pointer is not NULL) the insertion of > new filter to idr is postponed until later in code since handle is already > provided by the user. However, the error handling code in fl_change() > always assumes that the new filter had been inserted into idr. If error > handler is reached when replacing existing filter it may remove it from idr > therefore making it unreachable for delete or dump afterwards. Fix the > issue by verifying that 'fold' argument wasn't provided by caller before > calling idr_remove(). > > Fixes: 08a0063df3ae ("net/sched: flower: Move filter handle initialization earlier") > Signed-off-by: Vlad Buslov <vladbu@nvidia.com> > --- > net/sched/cls_flower.c | 3 ++- > 1 file changed, 2 insertions(+), 1 deletion(-) > > diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c > index 1844545bef37..a1c4ee2e0be2 100644 > --- a/net/sched/cls_flower.c > +++ b/net/sched/cls_flower.c > @@ -2339,7 +2339,8 @@ static int fl_change(struct net *net, struct sk_buff *in_skb, > errout_mask: > fl_mask_put(head, fnew->mask); > errout_idr: > - idr_remove(&head->handle_idr, fnew->handle); > + if (!fold) > + idr_remove(&head->handle_idr, fnew->handle); > __fl_put(fnew); > errout_tb: > kfree(tb); Actually this seems to be fixing the same issue: https://lore.kernel.org/all/20230425140604.169881-1-ivecera@redhat.com/ ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH net 2/2] net/sched: flower: fix error handler on replace 2023-04-26 14:22 ` Pedro Tammela @ 2023-04-26 14:46 ` Vlad Buslov 2023-04-26 15:24 ` Pedro Tammela 2023-04-26 15:39 ` Ivan Vecera 0 siblings, 2 replies; 20+ messages in thread From: Vlad Buslov @ 2023-04-26 14:46 UTC (permalink / raw) To: Pedro Tammela, Ivan Vecera Cc: davem, kuba, netdev, jhs, xiyou.wangcong, jiri, marcelo.leitner, paulb, simon.horman On Wed 26 Apr 2023 at 11:22, Pedro Tammela <pctammela@mojatatu.com> wrote: > On 26/04/2023 09:14, Vlad Buslov wrote: >> When replacing a filter (i.e. 'fold' pointer is not NULL) the insertion of >> new filter to idr is postponed until later in code since handle is already >> provided by the user. However, the error handling code in fl_change() >> always assumes that the new filter had been inserted into idr. If error >> handler is reached when replacing existing filter it may remove it from idr >> therefore making it unreachable for delete or dump afterwards. Fix the >> issue by verifying that 'fold' argument wasn't provided by caller before >> calling idr_remove(). >> Fixes: 08a0063df3ae ("net/sched: flower: Move filter handle initialization >> earlier") >> Signed-off-by: Vlad Buslov <vladbu@nvidia.com> >> --- >> net/sched/cls_flower.c | 3 ++- >> 1 file changed, 2 insertions(+), 1 deletion(-) >> diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c >> index 1844545bef37..a1c4ee2e0be2 100644 >> --- a/net/sched/cls_flower.c >> +++ b/net/sched/cls_flower.c >> @@ -2339,7 +2339,8 @@ static int fl_change(struct net *net, struct sk_buff *in_skb, >> errout_mask: >> fl_mask_put(head, fnew->mask); >> errout_idr: >> - idr_remove(&head->handle_idr, fnew->handle); >> + if (!fold) >> + idr_remove(&head->handle_idr, fnew->handle); >> __fl_put(fnew); >> errout_tb: >> kfree(tb); > > Actually this seems to be fixing the same issue: > https://lore.kernel.org/all/20230425140604.169881-1-ivecera@redhat.com/ Indeed it does, I've missed that patch. However, it seems there is an issue with Ivan's approach. Consider what would happen when fold!=NULL && in_ht==false and rhashtable_insert_fast() fails here: if (fold) { /* Fold filter was deleted concurrently. Retry lookup. */ if (fold->deleted) { err = -EAGAIN; goto errout_hw; } fnew->handle = handle; // <-- fnew->handle is assigned if (!in_ht) { struct rhashtable_params params = fnew->mask->filter_ht_params; err = rhashtable_insert_fast(&fnew->mask->ht, &fnew->ht_node, params); if (err) goto errout_hw; /* <-- err is set, go to error handler here */ in_ht = true; } refcount_inc(&fnew->refcnt); rhashtable_remove_fast(&fold->mask->ht, &fold->ht_node, fold->mask->filter_ht_params); /* !!! we never get to insert fnew into idr here, if ht insertion fails */ idr_replace(&head->handle_idr, fnew, fnew->handle); list_replace_rcu(&fold->list, &fnew->list); fold->deleted = true; spin_unlock(&tp->lock); fl_mask_put(head, fold->mask); if (!tc_skip_hw(fold->flags)) fl_hw_destroy_filter(tp, fold, rtnl_held, NULL); tcf_unbind_filter(tp, &fold->res); /* Caller holds reference to fold, so refcnt is always > 0 * after this. */ refcount_dec(&fold->refcnt); __fl_put(fold); } ... errout_ht: spin_lock(&tp->lock); errout_hw: fnew->deleted = true; spin_unlock(&tp->lock); if (!tc_skip_hw(fnew->flags)) fl_hw_destroy_filter(tp, fnew, rtnl_held, NULL); if (in_ht) rhashtable_remove_fast(&fnew->mask->ht, &fnew->ht_node, fnew->mask->filter_ht_params); errout_mask: fl_mask_put(head, fnew->mask); errout_idr: /* !!! 
On next line we remove handle that we don't actually own */ idr_remove(&head->handle_idr, fnew->handle); __fl_put(fnew); errout_tb: kfree(tb); errout_mask_alloc: tcf_queue_work(&mask->rwork, fl_uninit_mask_free_work); errout_fold: if (fold) __fl_put(fold); return err; Also, if I understood the idea behind Ivan's fix correctly, it relies on the fact that calling idr_remove() with handle==0 is a noop. I prefer my approach slightly better as it is more explicit IMO. Thoughts? ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH net 2/2] net/sched: flower: fix error handler on replace 2023-04-26 14:46 ` Vlad Buslov @ 2023-04-26 15:24 ` Pedro Tammela 2023-04-26 15:39 ` Ivan Vecera 1 sibling, 0 replies; 20+ messages in thread From: Pedro Tammela @ 2023-04-26 15:24 UTC (permalink / raw) To: Vlad Buslov, Ivan Vecera Cc: davem, kuba, netdev, jhs, xiyou.wangcong, jiri, marcelo.leitner, paulb, simon.horman On 26/04/2023 11:46, Vlad Buslov wrote: > On Wed 26 Apr 2023 at 11:22, Pedro Tammela <pctammela@mojatatu.com> wrote: >> On 26/04/2023 09:14, Vlad Buslov wrote: >>> When replacing a filter (i.e. 'fold' pointer is not NULL) the insertion of >>> new filter to idr is postponed until later in code since handle is already >>> provided by the user. However, the error handling code in fl_change() >>> always assumes that the new filter had been inserted into idr. If error >>> handler is reached when replacing existing filter it may remove it from idr >>> therefore making it unreachable for delete or dump afterwards. Fix the >>> issue by verifying that 'fold' argument wasn't provided by caller before >>> calling idr_remove(). >>> Fixes: 08a0063df3ae ("net/sched: flower: Move filter handle initialization >>> earlier") >>> Signed-off-by: Vlad Buslov <vladbu@nvidia.com> >>> --- >>> net/sched/cls_flower.c | 3 ++- >>> 1 file changed, 2 insertions(+), 1 deletion(-) >>> diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c >>> index 1844545bef37..a1c4ee2e0be2 100644 >>> --- a/net/sched/cls_flower.c >>> +++ b/net/sched/cls_flower.c >>> @@ -2339,7 +2339,8 @@ static int fl_change(struct net *net, struct sk_buff *in_skb, >>> errout_mask: >>> fl_mask_put(head, fnew->mask); >>> errout_idr: >>> - idr_remove(&head->handle_idr, fnew->handle); >>> + if (!fold) >>> + idr_remove(&head->handle_idr, fnew->handle); >>> __fl_put(fnew); >>> errout_tb: >>> kfree(tb); >> >> Actually this seems to be fixing the same issue: >> https://lore.kernel.org/all/20230425140604.169881-1-ivecera@redhat.com/ > > Indeed it does, I've missed that patch. However, it seems there > is an issue with Ivan's approach. Consider what would happen when > fold!=NULL && in_ht==false and rhashtable_insert_fast() fails here: > > > if (fold) { > /* Fold filter was deleted concurrently. Retry lookup. */ > if (fold->deleted) { > err = -EAGAIN; > goto errout_hw; > } > > fnew->handle = handle; // <-- fnew->handle is assigned > > if (!in_ht) { > struct rhashtable_params params = > fnew->mask->filter_ht_params; > > err = rhashtable_insert_fast(&fnew->mask->ht, > &fnew->ht_node, > params); > if (err) > goto errout_hw; /* <-- err is set, go to > error handler here */ > in_ht = true; > } > > refcount_inc(&fnew->refcnt); > rhashtable_remove_fast(&fold->mask->ht, > &fold->ht_node, > fold->mask->filter_ht_params); > /* !!! we never get to insert fnew into idr here, if ht insertion fails */ > idr_replace(&head->handle_idr, fnew, fnew->handle); > list_replace_rcu(&fold->list, &fnew->list); > fold->deleted = true; > > spin_unlock(&tp->lock); > > fl_mask_put(head, fold->mask); > if (!tc_skip_hw(fold->flags)) > fl_hw_destroy_filter(tp, fold, rtnl_held, NULL); > tcf_unbind_filter(tp, &fold->res); > /* Caller holds reference to fold, so refcnt is always > 0 > * after this. > */ > refcount_dec(&fold->refcnt); > __fl_put(fold); > } > > ... 
> > errout_ht: > spin_lock(&tp->lock); > errout_hw: > fnew->deleted = true; > spin_unlock(&tp->lock); > if (!tc_skip_hw(fnew->flags)) > fl_hw_destroy_filter(tp, fnew, rtnl_held, NULL); > if (in_ht) > rhashtable_remove_fast(&fnew->mask->ht, &fnew->ht_node, > fnew->mask->filter_ht_params); > errout_mask: > fl_mask_put(head, fnew->mask); > errout_idr: > /* !!! On next line we remove handle that we don't actually own */ > idr_remove(&head->handle_idr, fnew->handle); > __fl_put(fnew); > errout_tb: > kfree(tb); > errout_mask_alloc: > tcf_queue_work(&mask->rwork, fl_uninit_mask_free_work); > errout_fold: > if (fold) > __fl_put(fold); > return err; > > > Also, if I understood the idea behind Ivan's fix correctly, it relies on > the fact that calling idr_remove() with handle==0 is a noop. I prefer my > approach slightly better as it is more explicit IMO. > > Thoughts? I agree with your analysis ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH net 2/2] net/sched: flower: fix error handler on replace 2023-04-26 14:46 ` Vlad Buslov 2023-04-26 15:24 ` Pedro Tammela @ 2023-04-26 15:39 ` Ivan Vecera 2023-04-28 7:11 ` Simon Horman 1 sibling, 1 reply; 20+ messages in thread From: Ivan Vecera @ 2023-04-26 15:39 UTC (permalink / raw) To: Vlad Buslov, Pedro Tammela Cc: davem, kuba, netdev, jhs, xiyou.wangcong, jiri, marcelo.leitner, paulb, simon.horman On 26. 04. 23 16:46, Vlad Buslov wrote: > On Wed 26 Apr 2023 at 11:22, Pedro Tammela <pctammela@mojatatu.com> wrote: >> On 26/04/2023 09:14, Vlad Buslov wrote: >>> When replacing a filter (i.e. 'fold' pointer is not NULL) the insertion of >>> new filter to idr is postponed until later in code since handle is already >>> provided by the user. However, the error handling code in fl_change() >>> always assumes that the new filter had been inserted into idr. If error >>> handler is reached when replacing existing filter it may remove it from idr >>> therefore making it unreachable for delete or dump afterwards. Fix the >>> issue by verifying that 'fold' argument wasn't provided by caller before >>> calling idr_remove(). >>> Fixes: 08a0063df3ae ("net/sched: flower: Move filter handle initialization >>> earlier") >>> Signed-off-by: Vlad Buslov <vladbu@nvidia.com> >>> --- >>> net/sched/cls_flower.c | 3 ++- >>> 1 file changed, 2 insertions(+), 1 deletion(-) >>> diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c >>> index 1844545bef37..a1c4ee2e0be2 100644 >>> --- a/net/sched/cls_flower.c >>> +++ b/net/sched/cls_flower.c >>> @@ -2339,7 +2339,8 @@ static int fl_change(struct net *net, struct sk_buff *in_skb, >>> errout_mask: >>> fl_mask_put(head, fnew->mask); >>> errout_idr: >>> - idr_remove(&head->handle_idr, fnew->handle); >>> + if (!fold) >>> + idr_remove(&head->handle_idr, fnew->handle); >>> __fl_put(fnew); >>> errout_tb: >>> kfree(tb); >> >> Actually this seems to be fixing the same issue: >> https://lore.kernel.org/all/20230425140604.169881-1-ivecera@redhat.com/ > > Indeed it does, I've missed that patch. However, it seems there > is an issue with Ivan's approach. Consider what would happen when > fold!=NULL && in_ht==false and rhashtable_insert_fast() fails here: > > > if (fold) { > /* Fold filter was deleted concurrently. Retry lookup. */ > if (fold->deleted) { > err = -EAGAIN; > goto errout_hw; > } > > fnew->handle = handle; // <-- fnew->handle is assigned > > if (!in_ht) { > struct rhashtable_params params = > fnew->mask->filter_ht_params; > > err = rhashtable_insert_fast(&fnew->mask->ht, > &fnew->ht_node, > params); > if (err) > goto errout_hw; /* <-- err is set, go to > error handler here */ > in_ht = true; > } > > refcount_inc(&fnew->refcnt); > rhashtable_remove_fast(&fold->mask->ht, > &fold->ht_node, > fold->mask->filter_ht_params); > /* !!! we never get to insert fnew into idr here, if ht insertion fails */ > idr_replace(&head->handle_idr, fnew, fnew->handle); > list_replace_rcu(&fold->list, &fnew->list); > fold->deleted = true; > > spin_unlock(&tp->lock); > > fl_mask_put(head, fold->mask); > if (!tc_skip_hw(fold->flags)) > fl_hw_destroy_filter(tp, fold, rtnl_held, NULL); > tcf_unbind_filter(tp, &fold->res); > /* Caller holds reference to fold, so refcnt is always > 0 > * after this. > */ > refcount_dec(&fold->refcnt); > __fl_put(fold); > } > > ... 
> > errout_ht: > spin_lock(&tp->lock); > errout_hw: > fnew->deleted = true; > spin_unlock(&tp->lock); > if (!tc_skip_hw(fnew->flags)) > fl_hw_destroy_filter(tp, fnew, rtnl_held, NULL); > if (in_ht) > rhashtable_remove_fast(&fnew->mask->ht, &fnew->ht_node, > fnew->mask->filter_ht_params); > errout_mask: > fl_mask_put(head, fnew->mask); > errout_idr: > /* !!! On next line we remove handle that we don't actually own */ > idr_remove(&head->handle_idr, fnew->handle); > __fl_put(fnew); > errout_tb: > kfree(tb); > errout_mask_alloc: > tcf_queue_work(&mask->rwork, fl_uninit_mask_free_work); > errout_fold: > if (fold) > __fl_put(fold); > return err; > > > Also, if I understood the idea behind Ivan's fix correctly, it relies on > the fact that calling idr_remove() with handle==0 is a noop. I prefer my > approach slightly better as it is more explicit IMO. > > Thoughts? Yes, your approach is better... Acked-by: Ivan Vecera <ivecera@redhat.com> ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH net 2/2] net/sched: flower: fix error handler on replace 2023-04-26 15:39 ` Ivan Vecera @ 2023-04-28 7:11 ` Simon Horman 2023-04-28 8:20 ` Ivan Vecera 0 siblings, 1 reply; 20+ messages in thread From: Simon Horman @ 2023-04-28 7:11 UTC (permalink / raw) To: Ivan Vecera Cc: Vlad Buslov, Pedro Tammela, davem, kuba, netdev, jhs, xiyou.wangcong, jiri, marcelo.leitner, paulb On Wed, Apr 26, 2023 at 05:39:09PM +0200, Ivan Vecera wrote: > > > On 26. 04. 23 16:46, Vlad Buslov wrote: > > On Wed 26 Apr 2023 at 11:22, Pedro Tammela <pctammela@mojatatu.com> wrote: > > > On 26/04/2023 09:14, Vlad Buslov wrote: > > > > When replacing a filter (i.e. 'fold' pointer is not NULL) the insertion of > > > > new filter to idr is postponed until later in code since handle is already > > > > provided by the user. However, the error handling code in fl_change() > > > > always assumes that the new filter had been inserted into idr. If error > > > > handler is reached when replacing existing filter it may remove it from idr > > > > therefore making it unreachable for delete or dump afterwards. Fix the > > > > issue by verifying that 'fold' argument wasn't provided by caller before > > > > calling idr_remove(). > > > > Fixes: 08a0063df3ae ("net/sched: flower: Move filter handle initialization > > > > earlier") > > > > Signed-off-by: Vlad Buslov <vladbu@nvidia.com> > > > > --- > > > > net/sched/cls_flower.c | 3 ++- > > > > 1 file changed, 2 insertions(+), 1 deletion(-) > > > > diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c > > > > index 1844545bef37..a1c4ee2e0be2 100644 > > > > --- a/net/sched/cls_flower.c > > > > +++ b/net/sched/cls_flower.c > > > > @@ -2339,7 +2339,8 @@ static int fl_change(struct net *net, struct sk_buff *in_skb, > > > > errout_mask: > > > > fl_mask_put(head, fnew->mask); > > > > errout_idr: > > > > - idr_remove(&head->handle_idr, fnew->handle); > > > > + if (!fold) > > > > + idr_remove(&head->handle_idr, fnew->handle); > > > > __fl_put(fnew); > > > > errout_tb: > > > > kfree(tb); > > > > > > Actually this seems to be fixing the same issue: > > > https://lore.kernel.org/all/20230425140604.169881-1-ivecera@redhat.com/ > > > > Indeed it does, I've missed that patch. However, it seems there > > is an issue with Ivan's approach. Consider what would happen when > > fold!=NULL && in_ht==false and rhashtable_insert_fast() fails here: > > > > > > if (fold) { > > /* Fold filter was deleted concurrently. Retry lookup. */ > > if (fold->deleted) { > > err = -EAGAIN; > > goto errout_hw; > > } > > > > fnew->handle = handle; // <-- fnew->handle is assigned > > > > if (!in_ht) { > > struct rhashtable_params params = > > fnew->mask->filter_ht_params; > > > > err = rhashtable_insert_fast(&fnew->mask->ht, > > &fnew->ht_node, > > params); > > if (err) > > goto errout_hw; /* <-- err is set, go to > > error handler here */ > > in_ht = true; > > } > > > > refcount_inc(&fnew->refcnt); > > rhashtable_remove_fast(&fold->mask->ht, > > &fold->ht_node, > > fold->mask->filter_ht_params); > > /* !!! we never get to insert fnew into idr here, if ht insertion fails */ > > idr_replace(&head->handle_idr, fnew, fnew->handle); > > list_replace_rcu(&fold->list, &fnew->list); > > fold->deleted = true; > > > > spin_unlock(&tp->lock); > > > > fl_mask_put(head, fold->mask); > > if (!tc_skip_hw(fold->flags)) > > fl_hw_destroy_filter(tp, fold, rtnl_held, NULL); > > tcf_unbind_filter(tp, &fold->res); > > /* Caller holds reference to fold, so refcnt is always > 0 > > * after this. 
> > */ > > refcount_dec(&fold->refcnt); > > __fl_put(fold); > > } > > > > ... > > > > errout_ht: > > spin_lock(&tp->lock); > > errout_hw: > > fnew->deleted = true; > > spin_unlock(&tp->lock); > > if (!tc_skip_hw(fnew->flags)) > > fl_hw_destroy_filter(tp, fnew, rtnl_held, NULL); > > if (in_ht) > > rhashtable_remove_fast(&fnew->mask->ht, &fnew->ht_node, > > fnew->mask->filter_ht_params); > > errout_mask: > > fl_mask_put(head, fnew->mask); > > errout_idr: > > /* !!! On next line we remove handle that we don't actually own */ > > idr_remove(&head->handle_idr, fnew->handle); > > __fl_put(fnew); > > errout_tb: > > kfree(tb); > > errout_mask_alloc: > > tcf_queue_work(&mask->rwork, fl_uninit_mask_free_work); > > errout_fold: > > if (fold) > > __fl_put(fold); > > return err; > > > > > > Also, if I understood the idea behind Ivan's fix correctly, it relies on > > the fact that calling idr_remove() with handle==0 is a noop. I prefer my > > approach slightly better as it is more explicit IMO. > > > > Thoughts? > > Yes, your approach is better... > > Acked-by: Ivan Vecera <ivecera@redhat.com> In the meantime it seems that Ivan's patch has been accepted into net. - [net] net/sched: flower: Fix wrong handle assignment during filter change https://git.kernel.org/netdev/net/c/32eff6bacec2 Is some adjustment to this patch required to take that into account? > ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH net 2/2] net/sched: flower: fix error handler on replace 2023-04-28 7:11 ` Simon Horman @ 2023-04-28 8:20 ` Ivan Vecera 2023-04-28 11:03 ` Vlad Buslov 0 siblings, 1 reply; 20+ messages in thread From: Ivan Vecera @ 2023-04-28 8:20 UTC (permalink / raw) To: Simon Horman Cc: Vlad Buslov, Pedro Tammela, davem, kuba, netdev, jhs, xiyou.wangcong, jiri, marcelo.leitner, paulb, Paolo Abeni On 28. 04. 23 9:11, Simon Horman wrote: > On Wed, Apr 26, 2023 at 05:39:09PM +0200, Ivan Vecera wrote: >> >> >> On 26. 04. 23 16:46, Vlad Buslov wrote: >>> On Wed 26 Apr 2023 at 11:22, Pedro Tammela <pctammela@mojatatu.com> wrote: >>>> On 26/04/2023 09:14, Vlad Buslov wrote: >>>>> When replacing a filter (i.e. 'fold' pointer is not NULL) the insertion of >>>>> new filter to idr is postponed until later in code since handle is already >>>>> provided by the user. However, the error handling code in fl_change() >>>>> always assumes that the new filter had been inserted into idr. If error >>>>> handler is reached when replacing existing filter it may remove it from idr >>>>> therefore making it unreachable for delete or dump afterwards. Fix the >>>>> issue by verifying that 'fold' argument wasn't provided by caller before >>>>> calling idr_remove(). >>>>> Fixes: 08a0063df3ae ("net/sched: flower: Move filter handle initialization >>>>> earlier") >>>>> Signed-off-by: Vlad Buslov <vladbu@nvidia.com> >>>>> --- >>>>> net/sched/cls_flower.c | 3 ++- >>>>> 1 file changed, 2 insertions(+), 1 deletion(-) >>>>> diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c >>>>> index 1844545bef37..a1c4ee2e0be2 100644 >>>>> --- a/net/sched/cls_flower.c >>>>> +++ b/net/sched/cls_flower.c >>>>> @@ -2339,7 +2339,8 @@ static int fl_change(struct net *net, struct sk_buff *in_skb, >>>>> errout_mask: >>>>> fl_mask_put(head, fnew->mask); >>>>> errout_idr: >>>>> - idr_remove(&head->handle_idr, fnew->handle); >>>>> + if (!fold) >>>>> + idr_remove(&head->handle_idr, fnew->handle); >>>>> __fl_put(fnew); >>>>> errout_tb: >>>>> kfree(tb); >>>> >>>> Actually this seems to be fixing the same issue: >>>> https://lore.kernel.org/all/20230425140604.169881-1-ivecera@redhat.com/ >>> >>> Indeed it does, I've missed that patch. However, it seems there >>> is an issue with Ivan's approach. Consider what would happen when >>> fold!=NULL && in_ht==false and rhashtable_insert_fast() fails here: >>> >>> >>> if (fold) { >>> /* Fold filter was deleted concurrently. Retry lookup. */ >>> if (fold->deleted) { >>> err = -EAGAIN; >>> goto errout_hw; >>> } >>> >>> fnew->handle = handle; // <-- fnew->handle is assigned >>> >>> if (!in_ht) { >>> struct rhashtable_params params = >>> fnew->mask->filter_ht_params; >>> >>> err = rhashtable_insert_fast(&fnew->mask->ht, >>> &fnew->ht_node, >>> params); >>> if (err) >>> goto errout_hw; /* <-- err is set, go to >>> error handler here */ >>> in_ht = true; >>> } >>> >>> refcount_inc(&fnew->refcnt); >>> rhashtable_remove_fast(&fold->mask->ht, >>> &fold->ht_node, >>> fold->mask->filter_ht_params); >>> /* !!! we never get to insert fnew into idr here, if ht insertion fails */ >>> idr_replace(&head->handle_idr, fnew, fnew->handle); >>> list_replace_rcu(&fold->list, &fnew->list); >>> fold->deleted = true; >>> >>> spin_unlock(&tp->lock); >>> >>> fl_mask_put(head, fold->mask); >>> if (!tc_skip_hw(fold->flags)) >>> fl_hw_destroy_filter(tp, fold, rtnl_held, NULL); >>> tcf_unbind_filter(tp, &fold->res); >>> /* Caller holds reference to fold, so refcnt is always > 0 >>> * after this. 
>>> */ >>> refcount_dec(&fold->refcnt); >>> __fl_put(fold); >>> } >>> >>> ... >>> >>> errout_ht: >>> spin_lock(&tp->lock); >>> errout_hw: >>> fnew->deleted = true; >>> spin_unlock(&tp->lock); >>> if (!tc_skip_hw(fnew->flags)) >>> fl_hw_destroy_filter(tp, fnew, rtnl_held, NULL); >>> if (in_ht) >>> rhashtable_remove_fast(&fnew->mask->ht, &fnew->ht_node, >>> fnew->mask->filter_ht_params); >>> errout_mask: >>> fl_mask_put(head, fnew->mask); >>> errout_idr: >>> /* !!! On next line we remove handle that we don't actually own */ >>> idr_remove(&head->handle_idr, fnew->handle); >>> __fl_put(fnew); >>> errout_tb: >>> kfree(tb); >>> errout_mask_alloc: >>> tcf_queue_work(&mask->rwork, fl_uninit_mask_free_work); >>> errout_fold: >>> if (fold) >>> __fl_put(fold); >>> return err; >>> >>> >>> Also, if I understood the idea behind Ivan's fix correctly, it relies on >>> the fact that calling idr_remove() with handle==0 is a noop. I prefer my >>> approach slightly better as it is more explicit IMO. >>> >>> Thoughts? >> >> Yes, your approach is better... >> >> Acked-by: Ivan Vecera <ivecera@redhat.com> > > In the meantime it seems that Ivan's patch has been accepted into net. > > - [net] net/sched: flower: Fix wrong handle assignment during filter change > https://git.kernel.org/netdev/net/c/32eff6bacec2 > > Is some adjustment to this patch required to take that into account? I think something like this is necessary to cover Vlad's findings: diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c index 6ab6aadc07b8da..ce937baefcf00e 100644 --- a/net/sched/cls_flower.c +++ b/net/sched/cls_flower.c @@ -2279,8 +2279,6 @@ static int fl_change(struct net *net, struct sk_buff *in_skb, goto errout_hw; } - fnew->handle = handle; - if (!in_ht) { struct rhashtable_params params = fnew->mask->filter_ht_params; @@ -2297,6 +2295,7 @@ static int fl_change(struct net *net, struct sk_buff *in_skb, rhashtable_remove_fast(&fold->mask->ht, &fold->ht_node, fold->mask->filter_ht_params); + fnew->handle = handle; idr_replace(&head->handle_idr, fnew, fnew->handle); list_replace_rcu(&fold->list, &fnew->list); fold->deleted = true; Just move fnew->handle assignment immediately prior idr_replace(). Thoughts? Ivan ^ permalink raw reply related [flat|nested] 20+ messages in thread
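Reading the merged commit together with the follow-up hunk above, the fold != NULL branch of fl_change() would end up ordered roughly as sketched below. This is a condensed reconstruction from the code quoted in this thread (locking and the hardware-offload steps are omitted), not the literal resulting source. The point of the ordering is that fnew->handle is only written after the last step that can still fail, so an early goto errout_hw leaves it at 0 and the errout_idr label cannot remove the handle still owned by fold.

	if (fold) {
		/* Fold filter was deleted concurrently. Retry lookup. */
		if (fold->deleted) {
			err = -EAGAIN;
			goto errout_hw;		/* fnew->handle is still 0 */
		}

		if (!in_ht) {
			struct rhashtable_params params =
				fnew->mask->filter_ht_params;

			err = rhashtable_insert_fast(&fnew->mask->ht,
						     &fnew->ht_node, params);
			if (err)
				goto errout_hw;	/* fnew->handle is still 0 */
			in_ht = true;
		}

		refcount_inc(&fnew->refcnt);
		rhashtable_remove_fast(&fold->mask->ht, &fold->ht_node,
				       fold->mask->filter_ht_params);
		fnew->handle = handle;	/* assigned only after the last
					 * fallible step of the replace */
		idr_replace(&head->handle_idr, fnew, fnew->handle);
		list_replace_rcu(&fold->list, &fnew->list);
		fold->deleted = true;
	}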
* Re: [PATCH net 2/2] net/sched: flower: fix error handler on replace 2023-04-28 8:20 ` Ivan Vecera @ 2023-04-28 11:03 ` Vlad Buslov 2023-05-03 2:44 ` Jakub Kicinski 0 siblings, 1 reply; 20+ messages in thread From: Vlad Buslov @ 2023-04-28 11:03 UTC (permalink / raw) To: Ivan Vecera Cc: Simon Horman, Pedro Tammela, davem, kuba, netdev, jhs, xiyou.wangcong, jiri, marcelo.leitner, paulb, Paolo Abeni On Fri 28 Apr 2023 at 10:20, Ivan Vecera <ivecera@redhat.com> wrote: > On 28. 04. 23 9:11, Simon Horman wrote: >> On Wed, Apr 26, 2023 at 05:39:09PM +0200, Ivan Vecera wrote: >>> >>> >>> On 26. 04. 23 16:46, Vlad Buslov wrote: >>>> On Wed 26 Apr 2023 at 11:22, Pedro Tammela <pctammela@mojatatu.com> wrote: >>>>> On 26/04/2023 09:14, Vlad Buslov wrote: >>>>>> When replacing a filter (i.e. 'fold' pointer is not NULL) the insertion of >>>>>> new filter to idr is postponed until later in code since handle is already >>>>>> provided by the user. However, the error handling code in fl_change() >>>>>> always assumes that the new filter had been inserted into idr. If error >>>>>> handler is reached when replacing existing filter it may remove it from idr >>>>>> therefore making it unreachable for delete or dump afterwards. Fix the >>>>>> issue by verifying that 'fold' argument wasn't provided by caller before >>>>>> calling idr_remove(). >>>>>> Fixes: 08a0063df3ae ("net/sched: flower: Move filter handle initialization >>>>>> earlier") >>>>>> Signed-off-by: Vlad Buslov <vladbu@nvidia.com> >>>>>> --- >>>>>> net/sched/cls_flower.c | 3 ++- >>>>>> 1 file changed, 2 insertions(+), 1 deletion(-) >>>>>> diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c >>>>>> index 1844545bef37..a1c4ee2e0be2 100644 >>>>>> --- a/net/sched/cls_flower.c >>>>>> +++ b/net/sched/cls_flower.c >>>>>> @@ -2339,7 +2339,8 @@ static int fl_change(struct net *net, struct sk_buff *in_skb, >>>>>> errout_mask: >>>>>> fl_mask_put(head, fnew->mask); >>>>>> errout_idr: >>>>>> - idr_remove(&head->handle_idr, fnew->handle); >>>>>> + if (!fold) >>>>>> + idr_remove(&head->handle_idr, fnew->handle); >>>>>> __fl_put(fnew); >>>>>> errout_tb: >>>>>> kfree(tb); >>>>> >>>>> Actually this seems to be fixing the same issue: >>>>> https://lore.kernel.org/all/20230425140604.169881-1-ivecera@redhat.com/ >>>> >>>> Indeed it does, I've missed that patch. However, it seems there >>>> is an issue with Ivan's approach. Consider what would happen when >>>> fold!=NULL && in_ht==false and rhashtable_insert_fast() fails here: >>>> >>>> >>>> if (fold) { >>>> /* Fold filter was deleted concurrently. Retry lookup. */ >>>> if (fold->deleted) { >>>> err = -EAGAIN; >>>> goto errout_hw; >>>> } >>>> >>>> fnew->handle = handle; // <-- fnew->handle is assigned >>>> >>>> if (!in_ht) { >>>> struct rhashtable_params params = >>>> fnew->mask->filter_ht_params; >>>> >>>> err = rhashtable_insert_fast(&fnew->mask->ht, >>>> &fnew->ht_node, >>>> params); >>>> if (err) >>>> goto errout_hw; /* <-- err is set, go to >>>> error handler here */ >>>> in_ht = true; >>>> } >>>> >>>> refcount_inc(&fnew->refcnt); >>>> rhashtable_remove_fast(&fold->mask->ht, >>>> &fold->ht_node, >>>> fold->mask->filter_ht_params); >>>> /* !!! 
we never get to insert fnew into idr here, if ht insertion fails */ >>>> idr_replace(&head->handle_idr, fnew, fnew->handle); >>>> list_replace_rcu(&fold->list, &fnew->list); >>>> fold->deleted = true; >>>> >>>> spin_unlock(&tp->lock); >>>> >>>> fl_mask_put(head, fold->mask); >>>> if (!tc_skip_hw(fold->flags)) >>>> fl_hw_destroy_filter(tp, fold, rtnl_held, NULL); >>>> tcf_unbind_filter(tp, &fold->res); >>>> /* Caller holds reference to fold, so refcnt is always > 0 >>>> * after this. >>>> */ >>>> refcount_dec(&fold->refcnt); >>>> __fl_put(fold); >>>> } >>>> >>>> ... >>>> >>>> errout_ht: >>>> spin_lock(&tp->lock); >>>> errout_hw: >>>> fnew->deleted = true; >>>> spin_unlock(&tp->lock); >>>> if (!tc_skip_hw(fnew->flags)) >>>> fl_hw_destroy_filter(tp, fnew, rtnl_held, NULL); >>>> if (in_ht) >>>> rhashtable_remove_fast(&fnew->mask->ht, &fnew->ht_node, >>>> fnew->mask->filter_ht_params); >>>> errout_mask: >>>> fl_mask_put(head, fnew->mask); >>>> errout_idr: >>>> /* !!! On next line we remove handle that we don't actually own */ >>>> idr_remove(&head->handle_idr, fnew->handle); >>>> __fl_put(fnew); >>>> errout_tb: >>>> kfree(tb); >>>> errout_mask_alloc: >>>> tcf_queue_work(&mask->rwork, fl_uninit_mask_free_work); >>>> errout_fold: >>>> if (fold) >>>> __fl_put(fold); >>>> return err; >>>> >>>> >>>> Also, if I understood the idea behind Ivan's fix correctly, it relies on >>>> the fact that calling idr_remove() with handle==0 is a noop. I prefer my >>>> approach slightly better as it is more explicit IMO. >>>> >>>> Thoughts? >>> >>> Yes, your approach is better... >>> >>> Acked-by: Ivan Vecera <ivecera@redhat.com> >> In the meantime it seems that Ivan's patch has been accepted into net. >> - [net] net/sched: flower: Fix wrong handle assignment during filter change >> https://git.kernel.org/netdev/net/c/32eff6bacec2 >> Is some adjustment to this patch required to take that into account? > > I think something like this is necessary to cover Vlad's findings: > > diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c > index 6ab6aadc07b8da..ce937baefcf00e 100644 > --- a/net/sched/cls_flower.c > +++ b/net/sched/cls_flower.c > @@ -2279,8 +2279,6 @@ static int fl_change(struct net *net, struct sk_buff > *in_skb, > goto errout_hw; > } > > - fnew->handle = handle; > - > if (!in_ht) { > struct rhashtable_params params = > fnew->mask->filter_ht_params; > @@ -2297,6 +2295,7 @@ static int fl_change(struct net *net, struct sk_buff > *in_skb, > rhashtable_remove_fast(&fold->mask->ht, > &fold->ht_node, > fold->mask->filter_ht_params); > + fnew->handle = handle; > idr_replace(&head->handle_idr, fnew, fnew->handle); > list_replace_rcu(&fold->list, &fnew->list); > fold->deleted = true; > > Just move fnew->handle assignment immediately prior idr_replace(). > > Thoughts? Note that with these changes (both accepted patch and preceding diff) you are exposing the filter to datapath access (datapath looks up filter via hash table, not idr) with its handle set to 0 initially and then re-set while already accessible. After taking a quick look at Paul's miss-to-action code it seems that the handle value used by the datapath is taken from struct tcf_exts_miss_cookie_node, not from the filter directly, so such an approach likely doesn't break anything existing, but I might have missed something. ^ permalink raw reply [flat|nested] 20+ messages in thread
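To make the window described above concrete, the relevant ordering on the replace path (assuming both the merged fix and the follow-up hunk, and again only as a sketch assembled from the quoted code) is:

	/* Replace path ordering, sketch only: */
	err = rhashtable_insert_fast(&fnew->mask->ht, &fnew->ht_node, params);
	/* From here fnew can be found by datapath classification through
	 * fnew->mask->ht while fnew->handle is still 0.
	 */
	refcount_inc(&fnew->refcnt);
	rhashtable_remove_fast(&fold->mask->ht, &fold->ht_node,
			       fold->mask->filter_ht_params);
	fnew->handle = handle;
	idr_replace(&head->handle_idr, fnew, fnew->handle);
	/* Only at this point does fnew carry its final handle; whether any
	 * datapath user reads the handle from the filter itself in that
	 * window is the open question raised above.
	 */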
* Re: [PATCH net 2/2] net/sched: flower: fix error handler on replace 2023-04-28 11:03 ` Vlad Buslov @ 2023-05-03 2:44 ` Jakub Kicinski 2023-05-04 13:40 ` Vlad Buslov 0 siblings, 1 reply; 20+ messages in thread From: Jakub Kicinski @ 2023-05-03 2:44 UTC (permalink / raw) To: Vlad Buslov Cc: Ivan Vecera, Simon Horman, Pedro Tammela, davem, netdev, jhs, xiyou.wangcong, jiri, marcelo.leitner, paulb, Paolo Abeni On Fri, 28 Apr 2023 14:03:19 +0300 Vlad Buslov wrote: > Note that with these changes (both accepted patch and preceding diff) > you are exposing filter to dapapath access (datapath looks up filter via > hash table, not idr) with its handle set to 0 initially and then resent > while already accessible. After taking a quick look at Paul's > miss-to-action code it seems that handle value used by datapath is taken > from struct tcf_exts_miss_cookie_node not from filter directly, so such > approach likely doesn't break anything existing, but I might have missed > something. Did we deadlock in this discussion, or the issue was otherwise fixed? ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH net 2/2] net/sched: flower: fix error handler on replace 2023-05-03 2:44 ` Jakub Kicinski @ 2023-05-04 13:40 ` Vlad Buslov 2023-05-04 14:24 ` Paolo Abeni 0 siblings, 1 reply; 20+ messages in thread From: Vlad Buslov @ 2023-05-04 13:40 UTC (permalink / raw) To: Jakub Kicinski Cc: Ivan Vecera, Simon Horman, Pedro Tammela, davem, netdev, jhs, xiyou.wangcong, jiri, marcelo.leitner, paulb, Paolo Abeni On Tue 02 May 2023 at 19:44, Jakub Kicinski <kuba@kernel.org> wrote: > On Fri, 28 Apr 2023 14:03:19 +0300 Vlad Buslov wrote: >> Note that with these changes (both accepted patch and preceding diff) >> you are exposing filter to dapapath access (datapath looks up filter via >> hash table, not idr) with its handle set to 0 initially and then resent >> while already accessible. After taking a quick look at Paul's >> miss-to-action code it seems that handle value used by datapath is taken >> from struct tcf_exts_miss_cookie_node not from filter directly, so such >> approach likely doesn't break anything existing, but I might have missed >> something. > > Did we deadlock in this discussion, or the issue was otherwise fixed? From my side I explained why in my opinion Ivan's fix doesn't cover all cases and my approach is better overall. Don't know what else to discuss since it seems that everyone agreed. ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH net 2/2] net/sched: flower: fix error handler on replace 2023-05-04 13:40 ` Vlad Buslov @ 2023-05-04 14:24 ` Paolo Abeni 2023-05-04 18:32 ` Vlad Buslov 0 siblings, 1 reply; 20+ messages in thread From: Paolo Abeni @ 2023-05-04 14:24 UTC (permalink / raw) To: Vlad Buslov, Jakub Kicinski Cc: Ivan Vecera, Simon Horman, Pedro Tammela, davem, netdev, jhs, xiyou.wangcong, jiri, marcelo.leitner, paulb On Thu, 2023-05-04 at 16:40 +0300, Vlad Buslov wrote: > On Tue 02 May 2023 at 19:44, Jakub Kicinski <kuba@kernel.org> wrote: > > On Fri, 28 Apr 2023 14:03:19 +0300 Vlad Buslov wrote: > > > Note that with these changes (both accepted patch and preceding diff) > > > you are exposing filter to dapapath access (datapath looks up filter via > > > hash table, not idr) with its handle set to 0 initially and then resent > > > while already accessible. After taking a quick look at Paul's > > > miss-to-action code it seems that handle value used by datapath is taken > > > from struct tcf_exts_miss_cookie_node not from filter directly, so such > > > approach likely doesn't break anything existing, but I might have missed > > > something. > > > > Did we deadlock in this discussion, or the issue was otherwise fixed? > > From my side I explained why in my opinion Ivan's fix doesn't cover all > cases and my approach is better overall. Don't know what else to discuss > since it seems that everyone agreed. Do I read correctly that we need a revert of Ivan's patch to safely apply this series? If so, could you please repost including such revert? Thanks. Paolo ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH net 2/2] net/sched: flower: fix error handler on replace 2023-05-04 14:24 ` Paolo Abeni @ 2023-05-04 18:32 ` Vlad Buslov 2023-05-05 13:25 ` Simon Horman 0 siblings, 1 reply; 20+ messages in thread From: Vlad Buslov @ 2023-05-04 18:32 UTC (permalink / raw) To: Paolo Abeni Cc: Jakub Kicinski, Ivan Vecera, Simon Horman, Pedro Tammela, davem, netdev, jhs, xiyou.wangcong, jiri, marcelo.leitner, paulb On Thu 04 May 2023 at 16:24, Paolo Abeni <pabeni@redhat.com> wrote: > On Thu, 2023-05-04 at 16:40 +0300, Vlad Buslov wrote: >> On Tue 02 May 2023 at 19:44, Jakub Kicinski <kuba@kernel.org> wrote: >> > On Fri, 28 Apr 2023 14:03:19 +0300 Vlad Buslov wrote: >> > > Note that with these changes (both accepted patch and preceding diff) >> > > you are exposing filter to dapapath access (datapath looks up filter via >> > > hash table, not idr) with its handle set to 0 initially and then resent >> > > while already accessible. After taking a quick look at Paul's >> > > miss-to-action code it seems that handle value used by datapath is taken >> > > from struct tcf_exts_miss_cookie_node not from filter directly, so such >> > > approach likely doesn't break anything existing, but I might have missed >> > > something. >> > >> > Did we deadlock in this discussion, or the issue was otherwise fixed? >> >> From my side I explained why in my opinion Ivan's fix doesn't cover all >> cases and my approach is better overall. Don't know what else to discuss >> since it seems that everyone agreed. > > Do I read correctly that we need a revert of Ivan's patch to safely > apply this series? If so, could you please repost including such > revert? I don't believe our fixes conflict, it is just that Ivan's should become redundant with mine applied. Anyway, I've just sent V2 with added revert. ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH net 2/2] net/sched: flower: fix error handler on replace 2023-05-04 18:32 ` Vlad Buslov @ 2023-05-05 13:25 ` Simon Horman 0 siblings, 0 replies; 20+ messages in thread From: Simon Horman @ 2023-05-05 13:25 UTC (permalink / raw) To: Vlad Buslov Cc: Paolo Abeni, Jakub Kicinski, Ivan Vecera, Pedro Tammela, davem, netdev, jhs, xiyou.wangcong, jiri, marcelo.leitner, paulb On Thu, May 04, 2023 at 09:32:40PM +0300, Vlad Buslov wrote: > > On Thu 04 May 2023 at 16:24, Paolo Abeni <pabeni@redhat.com> wrote: > > On Thu, 2023-05-04 at 16:40 +0300, Vlad Buslov wrote: > >> On Tue 02 May 2023 at 19:44, Jakub Kicinski <kuba@kernel.org> wrote: > >> > On Fri, 28 Apr 2023 14:03:19 +0300 Vlad Buslov wrote: > >> > > Note that with these changes (both accepted patch and preceding diff) > >> > > you are exposing filter to dapapath access (datapath looks up filter via > >> > > hash table, not idr) with its handle set to 0 initially and then resent > >> > > while already accessible. After taking a quick look at Paul's > >> > > miss-to-action code it seems that handle value used by datapath is taken > >> > > from struct tcf_exts_miss_cookie_node not from filter directly, so such > >> > > approach likely doesn't break anything existing, but I might have missed > >> > > something. > >> > > >> > Did we deadlock in this discussion, or the issue was otherwise fixed? > >> > >> From my side I explained why in my opinion Ivan's fix doesn't cover all > >> cases and my approach is better overall. Don't know what else to discuss > >> since it seems that everyone agreed. > > > > Do I read correctly that we need a revert of Ivan's patch to safely > > apply this series? If so, could you please repost including such > > revert? > > I don't believe our fixes conflict, it is just that Ivan's should become > redundant with mine applied. Anyway, I've just sent V2 with added > revert. Thanks. FWIIW, this matches my understanding of the situation. ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH net 2/2] net/sched: flower: fix error handler on replace 2023-04-26 12:14 ` [PATCH net 2/2] net/sched: flower: fix error handler on replace Vlad Buslov 2023-04-26 14:06 ` Pedro Tammela 2023-04-26 14:22 ` Pedro Tammela @ 2023-04-27 5:52 ` Paul Blakey 2 siblings, 0 replies; 20+ messages in thread From: Paul Blakey @ 2023-04-27 5:52 UTC (permalink / raw) To: Vlad Buslov, davem, kuba Cc: netdev, jhs, xiyou.wangcong, jiri, marcelo.leitner, simon.horman On 26/04/2023 15:14, Vlad Buslov wrote: > When replacing a filter (i.e. 'fold' pointer is not NULL) the insertion of > new filter to idr is postponed until later in code since handle is already > provided by the user. However, the error handling code in fl_change() > always assumes that the new filter had been inserted into idr. If error > handler is reached when replacing existing filter it may remove it from idr > therefore making it unreachable for delete or dump afterwards. Fix the > issue by verifying that 'fold' argument wasn't provided by caller before > calling idr_remove(). > > Fixes: 08a0063df3ae ("net/sched: flower: Move filter handle initialization earlier") > Signed-off-by: Vlad Buslov <vladbu@nvidia.com> Reviewed-by: Paul Blakey <paulb@nvidia.com> ^ permalink raw reply [flat|nested] 20+ messages in thread
end of thread, other threads: [~2023-05-05 13:25 UTC | newest]

Thread overview: 20+ messages
2023-04-26 12:14 [PATCH net 0/2] Fixes for miss to tc action series Vlad Buslov
2023-04-26 12:14 ` [PATCH net 1/2] net/sched: flower: fix filter idr initialization Vlad Buslov
2023-04-26 14:25 ` Simon Horman
2023-04-26 14:27 ` Pedro Tammela
2023-04-27 5:53 ` Paul Blakey
2023-04-26 12:14 ` [PATCH net 2/2] net/sched: flower: fix error handler on replace Vlad Buslov
2023-04-26 14:06 ` Pedro Tammela
2023-04-26 14:22 ` Pedro Tammela
2023-04-26 14:46 ` Vlad Buslov
2023-04-26 15:24 ` Pedro Tammela
2023-04-26 15:39 ` Ivan Vecera
2023-04-28 7:11 ` Simon Horman
2023-04-28 8:20 ` Ivan Vecera
2023-04-28 11:03 ` Vlad Buslov
2023-05-03 2:44 ` Jakub Kicinski
2023-05-04 13:40 ` Vlad Buslov
2023-05-04 14:24 ` Paolo Abeni
2023-05-04 18:32 ` Vlad Buslov
2023-05-05 13:25 ` Simon Horman
2023-04-27 5:52 ` Paul Blakey