* The call trace occurs during the VRF fault injection test
From: hanhuihui @ 2023-09-08 10:05 UTC
To: netdev@vger.kernel.org, davem@davemloft.net, dsahern@kernel.org,
pablo@netfilter.org
Cc: Yanan (Euler), Caowangbao, Fengtao (fengtao, Euler), liaichun
Hello, I found a problem in a VRF fault injection test scenario. When the size of the sent packet exceeds the MTU, a call trace is triggered. The test script and the detailed error output are as follows:
"ip link add name vrf-blue type vrf table 10
ip link set dev vrf-blue up
ip route add table 10 unreachable default
ip link set dev enp4s0 master vrf-blue
ip address add 192.168.255.250/16 dev enp4s0
tc qdisc add dev enp4s0 root netem delay 1000ms 500ms
tc qdisc add dev vrf-blue root netem delay 1000ms 500ms
ip vrf exec vrf-blue ping "192.168.162.184" -s 6000 -I "enp4s0" -c 3
tc qdisc del dev "enp4s0" root
tc qdisc del dev vrf-blue root
ip address del 192.168.255.250/16 dev enp4s0
ip link set dev enp4s0 nomaster"
"[ 284.613866] refcount_t: underflow; use-after-free.
[ 284.613906] WARNING: CPU: 0 PID: 0 at lib/refcount.c:28 refcount_warn_saturate+0xd1/0x120
[ 284.613920] Modules linked in: sch_netem vrf nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_tables ebtable_nat ebtable_broute ip6table_nat ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c iptable_mangle iptable_raw iptable_security rfkill ip_set nfnetlink ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter ip_tables sunrpc intel_rapl_msr intel_rapl_common isst_if_mbox_msr isst_if_common nfit libnvdimm rapl ipmi_ssif cirrus sg drm_shmem_helper acpi_ipmi joydev ipmi_si ipmi_devintf drm_kms_helper i2c_piix4 virtio_balloon pcspkr ipmi_msghandler drm fuse ext4 mbcache jbd2 sd_mod crct10dif_pclmul t10_pi crc32_pclmul crc64_rocksoft_generic crc32c_intel ata_generic crc64_rocksoft crc64 virtio_net ata_piix ghash_clmulni_intel net_failover virtio_console failover sha512_ssse3 libata serio_raw virtio_scsi dm_mirror dm_region_hash dm_log dm_mod
[ 284.614124] CPU: 0 PID: 0 Comm: swapper/0 Kdump: loaded Not tainted 6.5.0+ #2
[ 284.614130] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.1-0-ga5cab58-20220525_182517-szxrtosci10000 04/01/2014
[ 284.614134] RIP: 0010:refcount_warn_saturate+0xd1/0x120
[ 284.614140] Code: 79 e5 07 02 01 e8 8f 86 7c ff 0f 0b eb 95 80 3d 66 e5 07 02 00 75 8c 48 c7 c7 80 a0 a7 97 c6 05 56 e5 07 02 01 e8 6f 86 7c ff <0f> 0b e9 72 ff ff ff 80 3d 41 e5 07 02 00 0f 85 65 ff ff ff 48 c7
[ 284.614145] RSP: 0018:ffff888117609320 EFLAGS: 00010286
[ 284.614155] RAX: 0000000000000000 RBX: 0000000000000003 RCX: 000000000000083f
[ 284.614159] RDX: 0000000000000000 RSI: 00000000000000f6 RDI: 000000000000003f
[ 284.614162] RBP: ffff88811004d6d4 R08: 0000000000000001 R09: ffffed1022ec1229
[ 284.614165] R10: ffff88811760914f R11: 0000000000000001 R12: 0000000000000900
[ 284.614168] R13: ffff88811004d6d4 R14: ffff88811004d5e0 R15: ffff88811004d830
[ 284.614174] FS: 0000000000000000(0000) GS:ffff888117600000(0000) knlGS:0000000000000000
[ 284.614178] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 284.614182] CR2: 000055def0aa7460 CR3: 0000000102536001 CR4: 0000000000370ef0
[ 284.614186] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 284.614189] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 284.614192] Call Trace:
[ 284.614195] <IRQ>
[ 284.614198] ? __warn+0xa5/0x1b0
[ 284.614207] ? refcount_warn_saturate+0xd1/0x120
[ 284.614216] ? __report_bug+0x123/0x130
[ 284.614225] ? refcount_warn_saturate+0xd1/0x120
[ 284.614229] ? report_bug+0x43/0xa0
[ 284.614234] ? handle_bug+0x3c/0x70
[ 284.614241] ? exc_invalid_op+0x18/0x50
[ 284.614246] ? asm_exc_invalid_op+0x1a/0x20
[ 284.614257] ? refcount_warn_saturate+0xd1/0x120
[ 284.614262] sock_wfree+0x303/0x310
[ 284.614269] ? __pfx_sock_wfree+0x10/0x10
[ 284.614273] skb_orphan_partial+0x1f3/0x250
[ 284.614282] ? __pfx_skb_orphan_partial+0x10/0x10
[ 284.614288] ? dequeue_skb+0xe0/0x700
[ 284.614299] netem_enqueue+0xda/0x1160 [sch_netem]
[ 284.614310] ? __pfx___qdisc_run+0x10/0x10
[ 284.614315] ? _raw_spin_lock+0x85/0xe0
[ 284.614325] dev_qdisc_enqueue+0x30/0xe0
[ 284.614333] __dev_xmit_skb+0x410/0x8a0
[ 284.614338] ? __pfx___dev_xmit_skb+0x10/0x10
[ 284.614343] ? arp_process+0x4e9/0xd50
[ 284.614352] __dev_queue_xmit+0x620/0xde0
[ 284.614359] ? enqueue_timer+0xab/0x190
[ 284.614368] ? __pfx___dev_queue_xmit+0x10/0x10
[ 284.614373] ? _raw_spin_unlock_irqrestore+0xe/0x30
[ 284.614379] ? __mod_timer+0x42b/0x630
[ 284.614384] ? _raw_write_lock_bh+0x89/0xe0
[ 284.614389] ? __rcu_read_unlock+0x33/0x70
[ 284.614397] ? skb_push+0x4d/0x90
[ 284.614404] ? eth_header+0x81/0xf0
[ 284.614409] ? __pfx_eth_header+0x10/0x10
[ 284.614413] ? neigh_resolve_output.part.0+0x1b9/0x2a0
[ 284.614421] __neigh_update+0x2ef/0xf10
[ 284.614429] arp_process+0x4af/0xd50
[ 284.614435] ? __pfx_arp_process+0x10/0x10
[ 284.614440] ? __netif_receive_skb_core+0x3ef/0x1990
[ 284.614446] ? __pfx___alloc_pages+0x10/0x10
[ 284.614455] arp_rcv.part.0+0x1e6/0x2d0
[ 284.614460] ? __pfx_arp_rcv.part.0+0x10/0x10
[ 284.614466] ? __build_skb_around+0x129/0x190
[ 284.614472] ? __napi_build_skb+0x3a/0x50
[ 284.614477] ? __napi_alloc_skb+0xe3/0x390
[ 284.614482] ? __pfx___napi_alloc_skb+0x10/0x10
[ 284.614488] ? __pfx_arp_rcv+0x10/0x10
[ 284.614493] __netif_receive_skb_list_core+0x489/0x500
[ 284.614499] ? __pfx___netif_receive_skb_list_core+0x10/0x10
[ 284.614506] ? receive_mergeable+0x482/0x920 [virtio_net]
[ 284.614522] __netif_receive_skb_list+0x1cc/0x2d0
[ 284.614528] ? virtio_net_hdr_to_skb.constprop.0+0x2ec/0x720 [virtio_net]
[ 284.614541] ? __pfx___netif_receive_skb_list+0x10/0x10
[ 284.614547] ? __rcu_read_unlock+0x4c/0x70
[ 284.614552] ? dev_gro_receive+0xe1/0x780
[ 284.614557] ? kvm_clock_get_cycles+0x18/0x30
[ 284.614564] netif_receive_skb_list_internal+0x234/0x380
[ 284.614570] ? napi_gro_receive+0x159/0x3a0
[ 284.614574] ? __pfx_netif_receive_skb_list_internal+0x10/0x10
[ 284.614580] ? virtqueue_get_vring_size+0x1f/0x30
[ 284.614588] ? virtnet_receive+0x218/0x3d0 [virtio_net]
[ 284.614602] ? __pfx_virtnet_receive+0x10/0x10 [virtio_net]
[ 284.614616] napi_complete_done+0x128/0x390
[ 284.614621] ? __pfx_napi_complete_done+0x10/0x10
[ 284.614627] ? virtqueue_enable_cb_delayed+0x252/0x340
[ 284.614633] ? netif_tx_wake_queue+0x1e/0x50
[ 284.614640] virtnet_poll+0x1e3/0x340 [virtio_net]
[ 284.614653] ? scheduler_tick+0x1ac/0x3c0
[ 284.614662] ? __pfx_virtnet_poll+0x10/0x10 [virtio_net]
[ 284.614676] ? timerqueue_add+0x128/0x150
[ 284.614683] __napi_poll+0x59/0x2c0
[ 284.614689] net_rx_action+0x55a/0x6a0
[ 284.614694] ? __pfx_net_rx_action+0x10/0x10
[ 284.614698] ? _raw_write_lock_irq+0xe0/0xe0
[ 284.614704] ? kvm_sched_clock_read+0x11/0x20
[ 284.614712] __do_softirq+0xf5/0x38d
[ 284.614718] __irq_exit_rcu+0xdd/0x100
[ 284.614725] common_interrupt+0x81/0xa0
[ 284.614738] </IRQ>
[ 284.614742] <TASK>
[ 284.614745] asm_common_interrupt+0x26/0x40
[ 284.614751] RIP: 0010:default_idle+0xf/0x20
[ 284.614757] Code: 4c 01 c7 4c 29 c2 e9 72 ff ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 66 90 0f 00 2d c3 73 2c 00 fb f4 <fa> c3 cc cc cc cc 66 66 2e 0f 1f 84 00 00 00 00 00 90 90 90 90 90
[ 284.614761] RSP: 0018:ffffffff98207e38 EFLAGS: 00000256
[ 284.614766] RAX: 0000000000000000 RBX: 1ffffffff3040fc9 RCX: ffffffff97554543
[ 284.614770] RDX: ffffed1022ec7cb6 RSI: 0000000000000004 RDI: 000000000016e17c
[ 284.614773] RBP: 0000000000000000 R08: 0000000000000001 R09: ffffed1022ec7cb5
[ 284.614776] R10: ffff88811763e5ab R11: 0000000000000000 R12: 0000000000000000
[ 284.614782] R13: ffffffff982129c0 R14: 0000000000000000 R15: 0000000000093ff0
[ 284.614787] ? ct_kernel_exit.constprop.0+0x93/0xd0
[ 284.614792] default_idle_call+0x34/0x50
[ 284.614797] cpuidle_idle_call+0x199/0x1e0
[ 284.614805] ? __pfx_cpuidle_idle_call+0x10/0x10
[ 284.614810] ? kvm_sched_clock_read+0x11/0x20
[ 284.614815] ? sched_clock+0x10/0x30
[ 284.614823] ? sched_clock_cpu+0x15/0x130
[ 284.614831] ? tsc_verify_tsc_adjust+0x7a/0x160
[ 284.614837] ? rcu_nocb_flush_deferred_wakeup+0x2c/0xc0
[ 284.614842] do_idle+0xa7/0x120
[ 284.614848] cpu_startup_entry+0x1d/0x20
[ 284.614853] rest_init+0xf0/0xf0
[ 284.614858] arch_call_rest_init+0x13/0x40
[ 284.614866] start_kernel+0x311/0x3d0
[ 284.614871] x86_64_start_reservations+0x18/0x30
[ 284.614877] x86_64_start_kernel+0x97/0xa0
[ 284.614881] secondary_startup_64_no_verify+0x17d/0x18b
[ 284.614891] </TASK>
[ 284.614894] ---[ end trace 0000000000000000 ]---
[ 285.594335] ------------[ cut here ]------------
[ 285.594345] refcount_t: saturated; leaking memory.
[ 285.594396] WARNING: CPU: 3 PID: 8254 at lib/refcount.c:22 refcount_warn_saturate+0x71/0x120
[ 285.594414] Modules linked in: sch_netem vrf nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_tables ebtable_nat ebtable_broute ip6table_nat ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c iptable_mangle iptable_raw iptable_security rfkill ip_set nfnetlink ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter ip_tables sunrpc intel_rapl_msr intel_rapl_common isst_if_mbox_msr isst_if_common nfit libnvdimm rapl ipmi_ssif cirrus sg drm_shmem_helper acpi_ipmi joydev ipmi_si ipmi_devintf drm_kms_helper i2c_piix4 virtio_balloon pcspkr ipmi_msghandler drm fuse ext4 mbcache jbd2 sd_mod crct10dif_pclmul t10_pi crc32_pclmul crc64_rocksoft_generic crc32c_intel ata_generic crc64_rocksoft crc64 virtio_net ata_piix ghash_clmulni_intel net_failover virtio_console failover sha512_ssse3 libata serio_raw virtio_scsi dm_mirror dm_region_hash dm_log dm_mod
[ 285.594655] CPU: 3 PID: 8254 Comm: ping Kdump: loaded Tainted: G W 6.5.0+ #2
[ 285.594663] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.1-0-ga5cab58-20220525_182517-szxrtosci10000 04/01/2014
[ 285.594668] RIP: 0010:refcount_warn_saturate+0x71/0x120
[ 285.594677] Code: 00 00 00 5b 5d c3 cc cc cc cc 85 db 74 40 80 3d c8 e5 07 02 00 75 ec 48 c7 c7 80 9f a7 97 c6 05 b8 e5 07 02 01 e8 cf 86 7c ff <0f> 0b eb d5 80 3d a7 e5 07 02 00 75 cc 48 c7 c7 20 a0 a7 97 c6 05
[ 285.594684] RSP: 0018:ffff88810b27f890 EFLAGS: 00010286
[ 285.594692] RAX: 0000000000000000 RBX: 0000000000000001 RCX: 0000000000000027
[ 285.594697] RDX: 0000000000000027 RSI: 0000000000000004 RDI: ffff8881177b0648
[ 285.594701] RBP: ffff88811004d6d4 R08: ffffffff965d071e R09: ffffed1022ef60c9
[ 285.594706] R10: ffff8881177b064b R11: 0000000000000001 R12: 000000000000060b
[ 285.594711] R13: ffff888109e3b000 R14: 0000000000000000 R15: 0000000000001778
[ 285.594719] FS: 00007fbdee966b80(0000) GS:ffff888117780000(0000) knlGS:0000000000000000
[ 285.594724] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 285.594729] CR2: 0000560c2756aca0 CR3: 0000000102768002 CR4: 0000000000370ee0
[ 285.594743] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 285.594747] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 285.594751] Call Trace:
[ 285.594755] <TASK>
[ 285.594760] ? __warn+0xa5/0x1b0
[ 285.594773] ? refcount_warn_saturate+0x71/0x120
[ 285.594780] ? __report_bug+0x123/0x130
[ 285.594793] ? refcount_warn_saturate+0x71/0x120
[ 285.594800] ? report_bug+0x43/0xa0
[ 285.594809] ? handle_bug+0x3c/0x70
[ 285.594818] ? exc_invalid_op+0x18/0x50
[ 285.594825] ? asm_exc_invalid_op+0x1a/0x20
[ 285.594839] ? irq_work_claim+0x1e/0x40
[ 285.594849] ? refcount_warn_saturate+0x71/0x120
[ 285.594856] __ip_append_data+0x138c/0x1bc0
[ 285.594871] ? __pfx_raw_getfrag+0x10/0x10
[ 285.594881] ? find_exception+0x20/0x190
[ 285.594900] ? __pfx___ip_append_data+0x10/0x10
[ 285.594909] ? __rcu_read_unlock+0x4c/0x70
[ 285.594921] ? ipv4_mtu+0xf8/0x170
[ 285.594928] ? __pfx_raw_getfrag+0x10/0x10
[ 285.594937] ip_append_data+0x9b/0xf0
[ 285.594948] raw_sendmsg+0x5ff/0xb90
[ 285.594959] ? __pfx_raw_sendmsg+0x10/0x10
[ 285.594968] ? __pfx_avc_has_perm+0x10/0x10
[ 285.594978] ? ____sys_recvmsg+0x138/0x330
[ 285.594990] ? __pfx_selinux_socket_sendmsg+0x10/0x10
[ 285.595002] ? ____sys_sendmsg+0x28/0x530
[ 285.595010] ? __pfx_copy_msghdr_from_user+0x10/0x10
[ 285.595019] ? __mod_lruvec_page_state+0x107/0x1f0
[ 285.595031] ? inet_send_prepare+0x1f/0x110
[ 285.595042] ? __pfx_inet_sendmsg+0x10/0x10
[ 285.595117] sock_sendmsg+0xfe/0x140
[ 285.595126] __sys_sendto+0x194/0x240
[ 285.595136] ? __pfx___sys_sendto+0x10/0x10
[ 285.595145] ? __handle_mm_fault+0x4cc/0x8c0
[ 285.595210] ? __sys_recvmsg+0xc9/0x150
[ 285.595217] ? __pfx___sys_recvmsg+0x10/0x10
[ 285.595224] ? __rcu_read_unlock+0x4c/0x70
[ 285.595232] ? mm_account_fault+0xcc/0x120
[ 285.595242] ? __pfx_restore_fpregs_from_fpstate+0x10/0x10
[ 285.595319] ? __audit_syscall_entry+0x17c/0x200
[ 285.595333] __x64_sys_sendto+0x78/0x90
[ 285.595344] do_syscall_64+0x3c/0x90
[ 285.595351] entry_SYSCALL_64_after_hwframe+0x6e/0xd8
[ 285.595360] RIP: 0033:0x7fbdee70f16a
[ 285.595367] Code: d8 64 89 02 48 c7 c0 ff ff ff ff eb b8 0f 1f 00 f3 0f 1e fa 41 89 ca 64 8b 04 25 18 00 00 00 85 c0 75 15 b8 2c 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 7e c3 0f 1f 44 00 00 41 54 48 83 ec 30 44 89
[ 285.595374] RSP: 002b:00007ffde0514458 EFLAGS: 00000246 ORIG_RAX: 000000000000002c
[ 285.595381] RAX: ffffffffffffffda RBX: 000055aafaadf060 RCX: 00007fbdee70f16a
[ 285.595387] RDX: 0000000000001778 RSI: 000055aafbe2a4d0 RDI: 0000000000000003
[ 285.595391] RBP: 000055aafbe2a4d0 R08: 000055aafaae12e0 R09: 0000000000000010
[ 285.595396] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000001778
[ 285.595400] R13: 00007ffde0515b50 R14: 00007ffde0514460 R15: 0000001d00000001
[ 285.595409] </TASK>
[ 285.595413] ---[ end trace 0000000000000000 ]---"
* Re: The call trace occurs during the VRF fault injection test
From: David Ahern @ 2023-09-11 15:47 UTC
To: hanhuihui, netdev@vger.kernel.org, davem@davemloft.net,
pablo@netfilter.org
Cc: Yanan (Euler), Caowangbao, Fengtao (fengtao, Euler), liaichun
On 9/8/23 4:05 AM, hanhuihui wrote:
> Hello, I found a problem in the VRF fault injection test scenario. When the size of the sent data packet exceeds the MTU, the call trace is triggered. The test script and detailed error information are as follows:
> "ip link add name vrf-blue type vrf table 10
> ip link set dev vrf-blue up
> ip route add table 10 unreachable default
> ip link set dev enp4s0 master vrf-blue
> ip address add 192.168.255.250/16 dev enp4s0
> tc qdisc add dev enp4s0 root netem delay 1000ms 500ms
> tc qdisc add dev vrf-blue root netem delay 1000ms 500ms
> ip vrf exec vrf-blue ping "192.168.162.184" -s 6000 -I "enp4s0" -c 3
> tc qdisc del dev "enp4s0" root
> tc qdisc del dev vrf-blue root
> ip address del 192.168.255.250/16 dev enp4s0
> ip link set dev enp4s0 nomaster"
>
Thanks for the reproducer. I will take a look when I get some time.
* Re: The call trace occurs during the VRF fault injection test
From: Fengtao (fengtao, Euler) @ 2023-09-18 12:39 UTC
To: hanhuihui, netdev@vger.kernel.org, davem@davemloft.net,
dsahern@kernel.org, pablo@netfilter.org, stephen, jhs,
xiyou.wangcong, kuba
Cc: Yanan (Euler), Caowangbao, liaichun
Hi all,
I have been analyzing this issue and would be glad to get any replies.
The netem qdisc on the physical device does not matter; the problem is still hit after removing the following step:
"tc qdisc add dev enp4s0 root netem delay 1000ms 500ms"
When netem with a delay is configured on the vrf device, commit 5a308f40bfe27 ("netem: refine early skb orphaning")
does early skb orphaning and sets skb->destructor to sock_efree. But when the skb makes its second pass and comes
back to L3, it has to be fragmented again, and the fast path of ip_do_fragment() sets frag->destructor back to
sock_wfree (commit 2fdba6b085eb7) even though the sk_wmem_alloc charge was already released by the early orphaning.
In the end, the skb_orphan() call in virtio_net's start_xmit() runs sock_wfree() and leaves the socket with a negative sk_wmem_alloc counter.
I know little about netem, and I am confused by its early skb orphaning, which does not exist in other qdiscs.
It seems that if we do early skb orphaning and the skb later goes back to L3 and is fragmented again, something unexpected happens.
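To make the accounting mismatch concrete, here is a rough user-space sketch of what I think happens. It only models
the sk_wmem_alloc bookkeeping and is not kernel code; the structs and helpers below are simplified stand-ins for
their kernel namesakes:

#include <stdio.h>

/* simplified stand-ins for the kernel structures */
struct sock { long sk_wmem_alloc; };
struct sk_buff {
	struct sock *sk;
	long truesize;
	void (*destructor)(struct sk_buff *);
};

/* sock_wfree() releases the wmem charge; it assumes the skb still holds one */
static void sock_wfree(struct sk_buff *skb)
{
	skb->sk->sk_wmem_alloc -= skb->truesize;
}

/* sock_efree() only drops a sock reference; it does not touch sk_wmem_alloc */
static void sock_efree(struct sk_buff *skb)
{
	(void)skb;
}

int main(void)
{
	struct sock sk = { .sk_wmem_alloc = 0 };
	struct sk_buff skb = { .sk = &sk, .truesize = 8192, .destructor = NULL };

	sk.sk_wmem_alloc += skb.truesize;   /* skb charged to the socket on allocation */
	skb.destructor = sock_wfree;

	sock_wfree(&skb);                   /* netem's early orphaning drops the charge ... */
	skb.destructor = sock_efree;        /* ... and switches the destructor */

	skb.destructor = sock_wfree;        /* refragmentation fast path restores sock_wfree
	                                     * without re-adding the charge */

	skb.destructor(&skb);               /* final orphan at xmit time: second sock_wfree */

	printf("sk_wmem_alloc = %ld\n", sk.sk_wmem_alloc);   /* negative -> the underflow */
	return 0;
}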
After testing with the following change, everything seems fine.
---
net/sched/sch_netem.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
index cd5d821..04d5e22 100644
--- a/net/sched/sch_netem.c
+++ b/net/sched/sch_netem.c
@@ -578,6 +578,11 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		cb->time_to_send = now + delay;
 		++q->counter;
+		if (skb->sk) {
+			skb->destructor = sock_wfree;
+			skb_set_hash_from_sk(skb, skb->sk);
+			refcount_add(skb->truesize, &(skb->sk->sk_wmem_alloc));
+		}
 		tfifo_enqueue(skb, sch);
 	} else {
 		/*
--
Cheers,
t.feng
On 2023/9/8 18:05, hanhuihui wrote:
> Hello, I found a problem in the VRF fault injection test scenario. When the size of the sent data packet exceeds the MTU, the call trace is triggered. The test script and detailed error information are as follows:
> "ip link add name vrf-blue type vrf table 10
> ip link set dev vrf-blue up
> ip route add table 10 unreachable default
> ip link set dev enp4s0 master vrf-blue
> ip address add 192.168.255.250/16 dev enp4s0
> tc qdisc add dev enp4s0 root netem delay 1000ms 500ms
> tc qdisc add dev vrf-blue root netem delay 1000ms 500ms
> ip vrf exec vrf-blue ping "192.168.162.184" -s 6000 -I "enp4s0" -c 3
> tc qdisc del dev "enp4s0" root
> tc qdisc del dev vrf-blue root
> ip address del 192.168.255.250/16 dev enp4s0
> ip link set dev enp4s0 nomaster"
>
>
> "[ 284.613866] refcount_t: underflow; use-after-free.
> [ 284.613906] WARNING: CPU: 0 PID: 0 at lib/refcount.c:28 refcount_warn_saturate+0xd1/0x120
> [ 284.614192] Call Trace:
> [ 284.614195] <IRQ>
> [ 284.614207] ? refcount_warn_saturate+0xd1/0x120
> [ 284.614257] ? refcount_warn_saturate+0xd1/0x120
> [ 284.614262] sock_wfree+0x303/0x310
> [ 284.614273] skb_orphan_partial+0x1f3/0x250
> [ 284.614299] netem_enqueue+0xda/0x1160 [sch_netem]
> [ 284.614325] dev_qdisc_enqueue+0x30/0xe0
> [ 284.614333] __dev_xmit_skb+0x410/0x8a0
> [ 284.614352] __dev_queue_xmit+0x620/0xde0
> [ 284.614421] __neigh_update+0x2ef/0xf10
> [ 284.614429] arp_process+0x4af/0xd50
> [ 284.614455] arp_rcv.part.0+0x1e6/0x2d0
> [ 284.614493] __netif_receive_skb_list_core+0x489/0x500
> [ 284.614522] __netif_receive_skb_list+0x1cc/0x2d0
> [ 284.614564] netif_receive_skb_list_internal+0x234/0x380
> [ 284.614616] napi_complete_done+0x128/0x390
> [ 284.614640] virtnet_poll+0x1e3/0x340 [virtio_net]
> [ 284.614683] __napi_poll+0x59/0x2c0
> [ 284.614689] net_rx_action+0x55a/0x6a0
> [ 284.614712] __do_softirq+0xf5/0x38d
> [ 284.614718] __irq_exit_rcu+0xdd/0x100
> [ 284.614725] common_interrupt+0x81/0xa0