* kernel panic receiving flooded VXLAN traffic with OVS
@ 2014-11-07 1:58 Jay Vosburgh
2014-11-07 17:40 ` Pravin Shelar
2014-12-04 1:45 ` Jay Vosburgh
0 siblings, 2 replies; 13+ messages in thread
From: Jay Vosburgh @ 2014-11-07 1:58 UTC (permalink / raw)
To: netdev, discuss
I am able to reproduce a kernel panic on a system using
openvswitch when it receives VXLAN traffic under a very specific set of
circumstances. This occurs with a recent net-next as well as an Ubuntu
3.13 kernel. I'm not sure whether the error lies in OVS, GRO, or elsewhere.
In summary, when the system receives multiple VXLAN-encapsulated
TCP segments destined for a different system (not intended for local
reception), taken from the middle of an active connection (received due to
a switch flood) and tagged with a VLAN not configured on the local host,
the system panics in skb_segment when OVS calls __skb_gso_segment on the
GRO skb prior to performing an upcall to user space.
The panic occurs in skbuff.c:skb_segment(), at the BUG_ON around
line 3036:
struct sk_buff *skb_segment(struct sk_buff *head_skb,
                            netdev_features_t features)
{
[...]
        skb_shinfo(nskb)->tx_flags = skb_shinfo(head_skb)->tx_flags &
                                     SKBTX_SHARED_FRAG;

        while (pos < offset + len) {
                if (i >= nfrags) {
                        BUG_ON(skb_headlen(list_skb));

                        i = 0;
The BUG_ON triggers because the GRO-accumulated skbs are
partially or entirely linear, depending upon the receiving network device
(sky2 is partial, enic is entire). The receive buffers evidently end up
linear because the MTU is set to 9000, so __netdev_alloc_skb calls
__alloc_skb (and thus kmalloc) instead of __netdev_alloc_frag followed by
build_skb.
The foreign-VLAN VXLAN TCP segments are not processed as normal
VXLAN traffic, as there is no listener on the VLAN in question, so once
GRO processes them, they are sent directly to ovs_vport_receive. The
panic stack appears as follows:
[ 6558.812214] kernel BUG at net/core/skbuff.c:3025!
[ 6558.812214] invalid opcode: 0000 [#1] SMP
[ 6558.812214] Modules linked in: veth 8021q garp mrp bonding xt_tcpudp bridge stp llc iptable_filter ip_tables x_tables openvswitch vxlan ip6_udp_tunnel udp_tunnel gre libcrc32c i915 video drm_kms_helper coretemp drm kvm_intel kvm gpio_ich ppdev parport_pc lp lpc_ich serio_raw i2c_algo_bit parport mac_hid hid_generic usbhid hid psmouse r8169 mii sky2
[ 6558.812214] CPU: 0 PID: 3 Comm: ksoftirqd/0 Not tainted 3.17.0-rc7-testola+ #5
[ 6558.812214] Hardware name: LENOVO 0829F3U/To be filled by O.E.M., BIOS 90KT15AUS 07/21/2010
[ 6558.812214] task: ffff880139eb3200 ti: ffff880139ed0000 task.ti: ffff880139ed0000
[ 6558.812214] RIP: 0010:[<ffffffff81616bc2>] [<ffffffff81616bc2>] skb_segment+0x9d2/0xa00
[ 6558.812214] RSP: 0018:ffff880139ed3610 EFLAGS: 00010216
[ 6558.812214] RAX: 00000000000002dc RBX: ffff8800a3be5e00 RCX: ffff8800b10a26f0
[ 6558.812214] RDX: 0000000000000074 RSI: ffff8800b10a2600 RDI: ffff8800b10a2000
[ 6558.812214] RBP: ffff880139ed36e0 R08: 0000000000000022 R09: 0000000000000000
[ 6558.812214] R10: ffff8800b11e6000 R11: 00000000000005ca R12: ffff8800b10a20f0
[ 6558.812214] R13: 0000000000000000 R14: ffff8800b116cb00 R15: 0000000000000074
[ 6558.812214] FS: 0000000000000000(0000) GS:ffff88013fc00000(0000) knlGS:0000000000000000
[ 6558.812214] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 6558.812214] CR2: 00007fa906f4f148 CR3: 00000000b2a46000 CR4: 00000000000407f0
[ 6558.812214] Stack:
[ 6558.812214] 00000000000016a0 ffff880031353800 ffffffffffffffde ffff8800000005ca
[ 6558.812214] 0000000000000022 0000000000000040 ffff8800b11e6000 00000001000016a0
[ 6558.812214] 0000000000000000 0000000000000022 00000000000005a8 ffff8800a3be5e00
[ 6558.812214] Call Trace:
[ 6558.812214] [<ffffffff8168c97f>] udp4_ufo_fragment+0x10f/0x1a0
[ 6558.812214] [<ffffffff81695c51>] inet_gso_segment+0x141/0x370
[ 6558.812214] [<ffffffff810aa2c8>] ? __wake_up_common+0x58/0x90
[ 6558.812214] [<ffffffff81624f4f>] skb_mac_gso_segment+0x9f/0x100
[ 6558.812214] [<ffffffff81625016>] __skb_gso_segment+0x66/0xd0
[ 6558.812214] [<ffffffffa01d4c91>] queue_gso_packets+0x41/0x130 [openvswitch]
[ 6558.812214] [<ffffffff8121aa4d>] ? ep_poll_safewake+0x2d/0x30
[ 6558.812214] [<ffffffff8121b03d>] ? ep_poll_callback+0xcd/0x170
[ 6558.812214] [<ffffffff810aa2c8>] ? __wake_up_common+0x58/0x90
[ 6558.812214] [<ffffffff810aa860>] ? __wake_up_sync_key+0x50/0x60
[ 6558.812214] [<ffffffff8161c232>] ? __skb_flow_dissect+0x162/0x4c0
[ 6558.812214] [<ffffffff8172001f>] ? __slab_free+0xfe/0x2c3
[ 6558.812214] [<ffffffff816107af>] ? kfree_skbmem+0x3f/0xa0
[ 6558.812214] [<ffffffff8161c5ba>] ? __skb_get_hash+0x2a/0x160
[ 6558.812214] [<ffffffffa01d609e>] ovs_dp_upcall+0x2e/0x70 [openvswitch]
[ 6558.812214] [<ffffffffa01d6193>] ovs_dp_process_packet+0xb3/0xd0 [openvswitch]
[ 6558.812214] [<ffffffffa01dc860>] ovs_vport_receive+0x60/0x80 [openvswitch]
[ 6558.812214] [<ffffffff811828f1>] ? zone_statistics+0x81/0xa0
[ 6558.812214] [<ffffffff81617819>] ? skb_gro_receive+0x559/0x5f0
[ 6558.812214] [<ffffffff81695ada>] ? inet_gro_receive+0x1da/0x210
[ 6558.812214] [<ffffffffa01dd10a>] netdev_frame_hook+0xca/0x130 [openvswitch]
[ 6558.812214] [<ffffffff816233aa>] __netif_receive_skb_core+0x1ba/0x7a0
[ 6558.812214] [<ffffffff816239a8>] __netif_receive_skb+0x18/0x60
[ 6558.812214] [<ffffffff81623a13>] netif_receive_skb_internal+0x23/0x90
[ 6558.812214] [<ffffffff8168cefa>] ? udp4_gro_complete+0x6a/0x70
[ 6558.812214] [<ffffffff81623b94>] napi_gro_complete+0xa4/0xe0
[ 6558.812214] [<ffffffff81623c3d>] napi_gro_flush+0x6d/0x90
[ 6558.812214] [<ffffffff81623c7e>] napi_complete+0x1e/0x50
[ 6558.812214] [<ffffffffa0006538>] sky2_poll+0xa38/0xd80 [sky2]
[ 6558.812214] [<ffffffff81623e02>] net_rx_action+0x152/0x250
[ 6558.812214] [<ffffffff81070aa5>] __do_softirq+0xf5/0x2e0
[ 6558.812214] [<ffffffff81070cc0>] run_ksoftirqd+0x30/0x50
[ 6558.812214] [<ffffffff8108e0ff>] smpboot_thread_fn+0xff/0x1b0
[ 6558.812214] [<ffffffff8108e000>] ? SyS_setgroups+0x1a0/0x1a0
[ 6558.812214] [<ffffffff8108a5a2>] kthread+0xd2/0xf0
[ 6558.812214] [<ffffffff8108a4d0>] ? kthread_create_on_node+0x180/0x180
[ 6558.812214] [<ffffffff81729e3c>] ret_from_fork+0x7c/0xb0
[ 6558.812214] [<ffffffff8108a4d0>] ? kthread_create_on_node+0x180/0x180
[ 6558.812214] Code: 8b 44 24 70 44 8b 4c 24 30 44 8b 5c 24 18 8b 54 24 08 48 8b 0c 24 0f 85 0f fd ff ff e9 06 fd ff ff 0f 1f 84 00 00 00 00 00 0f 0b <0f> 0b 0f 0b c6 44 24 3b 01 e9 28 f7 ff ff e8 76 db 10 00 0f 0b
[ 6558.812214] RIP [<ffffffff81616bc2>] skb_segment+0x9d2/0xa00
[ 6558.812214] RSP <ffff880139ed3610>
I'm not sure if this is an error on the part of the RX / GRO
processing in assembling the GRO skb, or in how OVS calls skb_segment.
-J
---
-Jay Vosburgh, jay.vosburgh@canonical.com
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: kernel panic receiving flooded VXLAN traffic with OVS
2014-11-07 1:58 kernel panic receiving flooded VXLAN traffic with OVS Jay Vosburgh
@ 2014-11-07 17:40 ` Pravin Shelar
2014-11-07 18:34 ` Jesse Gross
2014-12-04 1:45 ` Jay Vosburgh
1 sibling, 1 reply; 13+ messages in thread
From: Pravin Shelar @ 2014-11-07 17:40 UTC (permalink / raw)
To: Jay Vosburgh; +Cc: netdev, discuss@openvswitch.org
On Thu, Nov 6, 2014 at 5:58 PM, Jay Vosburgh <jay.vosburgh@canonical.com> wrote:
>
> [...]
>
> I'm not sure if this is an error on the part of the RX / GRO
> processing in assembling the GRO skb, or in how OVS calls skb_segment.
>
I think this is related to an skb_segment() issue where it is not able to
handle this type of skb geometry. We need to fix skb segmentation. I
will investigate it more.
> -J
>
> ---
> -Jay Vosburgh, jay.vosburgh@canonical.com
* Re: kernel panic receiving flooded VXLAN traffic with OVS
2014-11-07 17:40 ` Pravin Shelar
@ 2014-11-07 18:34 ` Jesse Gross
2014-11-07 20:27 ` Jesse Gross
0 siblings, 1 reply; 13+ messages in thread
From: Jesse Gross @ 2014-11-07 18:34 UTC (permalink / raw)
To: Pravin Shelar; +Cc: Jay Vosburgh, netdev, discuss@openvswitch.org
On Fri, Nov 7, 2014 at 9:40 AM, Pravin Shelar <pshelar@nicira.com> wrote:
> On Thu, Nov 6, 2014 at 5:58 PM, Jay Vosburgh <jay.vosburgh@canonical.com> wrote:
>>
>> [...]
>>
>> I'm not sure if this is an error on the part of the RX / GRO
>> processing in assembling the GRO skb, or in how OVS calls skb_segment.
>>
>
> I think this is related to an skb_segment() issue where it is not able to
> handle this type of skb geometry. We need to fix skb segmentation. I
> will investigate it more.
One problem that I see is that vxlan_gro_complete() doesn't add
SKB_GSO_UDP_TUNNEL to gso_type. This causes us to attempt
fragmentation as UDP rather than continuing down to do TCP
segmentation. That probably screws up the skb geometry.
* Re: kernel panic receiving flooded VXLAN traffic with OVS
2014-11-07 18:34 ` Jesse Gross
@ 2014-11-07 20:27 ` Jesse Gross
2014-11-07 21:13 ` Jay Vosburgh
0 siblings, 1 reply; 13+ messages in thread
From: Jesse Gross @ 2014-11-07 20:27 UTC (permalink / raw)
To: Pravin Shelar; +Cc: Jay Vosburgh, netdev, discuss@openvswitch.org
On Fri, Nov 7, 2014 at 10:34 AM, Jesse Gross <jesse@nicira.com> wrote:
> On Fri, Nov 7, 2014 at 9:40 AM, Pravin Shelar <pshelar@nicira.com> wrote:
>> On Thu, Nov 6, 2014 at 5:58 PM, Jay Vosburgh <jay.vosburgh@canonical.com> wrote:
>>>
>>> [...]
>>>
>>> I'm not sure if this is an error on the part of the RX / GRO
>>> processing in assembling the GRO skb, or in how OVS calls skb_segment.
>>>
>>
>> I think this is related to an skb_segment() issue where it is not able to
>> handle this type of skb geometry. We need to fix skb segmentation. I
>> will investigate it more.
>
> One problem that I see is that vxlan_gro_complete() doesn't add
> SKB_GSO_UDP_TUNNEL to gso_type. This causes us to attempt
> fragmentation as UDP rather than continuing down to do TCP
> segmentation. That probably screws up the skb geometry.
I sent out a patch to fix this issue. I'm pretty sure that it is the
root cause of the originally reported case, but I don't have a good way
to reproduce it, so it would be great if you could test it, Jay.
* Re: kernel panic receiving flooded VXLAN traffic with OVS
2014-11-07 20:27 ` Jesse Gross
@ 2014-11-07 21:13 ` Jay Vosburgh
2014-11-07 22:29 ` Jesse Gross
0 siblings, 1 reply; 13+ messages in thread
From: Jay Vosburgh @ 2014-11-07 21:13 UTC (permalink / raw)
To: Jesse Gross; +Cc: Pravin Shelar, netdev, discuss@openvswitch.org
Jesse Gross <jesse@nicira.com> wrote:
>On Fri, Nov 7, 2014 at 10:34 AM, Jesse Gross <jesse@nicira.com> wrote:
>> On Fri, Nov 7, 2014 at 9:40 AM, Pravin Shelar <pshelar@nicira.com> wrote:
>>> On Thu, Nov 6, 2014 at 5:58 PM, Jay Vosburgh <jay.vosburgh@canonical.com> wrote:
[...]
>>>> I'm not sure if this is an error on the part of the RX / GRO
>>>> processing in assembling the GRO skb, or in how OVS calls skb_segment.
>>>>
>>>
>>> I think this is related to an skb_segment() issue where it is not able to
>>> handle this type of skb geometry. We need to fix skb segmentation. I
>>> will investigate it more.
>>
>> One problem that I see is that vxlan_gro_complete() doesn't add
>> SKB_GSO_UDP_TUNNEL to gso_type. This causes us to attempt
>> fragmentation as UDP rather than continuing down to do TCP
>> segmentation. That probably screws up the skb geometry.
>
>I sent out a patch to fix this issue. I'm pretty sure that it is the
>root cause of the originally reported case but I don't have a good way
>to reproduce it so it would be great if you could test it Jay.
I'm having an issue there; when I set up my reproduction on
current net-next (3.18-rc2) without your new patch, I get the following
oops when my OVS script runs "ovs-vsctl --if-exists del-br br-ex":
[ 18.580812] BUG: unable to handle kernel paging request at 0000000022835df6
[ 18.585532] IP: [<ffffffffa01cc5ec>] ovs_flow_tbl_insert+0xdc/0x1f0 [openvswitch]
[ 18.585532] PGD b016e067 PUD afdf2067 PMD 0
[ 18.585532] Oops: 0002 [#1] SMP
[ 18.585532] Modules linked in: i915 openvswitch libcrc32c video drm_kms_helper drm gpio_ich lpc_ich i2c_algo_bit ppdev lp serio_raw coretemp kvm_intel kvm parport_pc parport mac_hid hid_generic usbhid hid psmouse r8169 sky2 mii
[ 18.608578] sky2 0000:05:00.0 eth0: Link is up at 1000 Mbps, full duplex, flow control rx
[ 18.585532] CPU: 0 PID: 843 Comm: ovs-vswitchd Not tainted 3.18.0-rc2+ #7
[ 18.585532] Hardware name: LENOVO 0829F3U/To be filled by O.E.M., BIOS 90KT15AUS 07/21/2010
[ 18.585532] task: ffff880134af3200 ti: ffff8800b0cc4000 task.ti: ffff8800b0cc4000
[ 18.585532] RIP: 0010:[<ffffffffa01cc5ec>] [<ffffffffa01cc5ec>] ovs_flow_tbl_insert+0xdc/0x1f0 [openvswitch]
[ 18.585532] RSP: 0018:ffff8800b0cc77a8 EFLAGS: 00010212
[ 18.585532] RAX: 00000000432e9568 RBX: ffff880134cb2120 RCX: 0000000001d3d19d
[ 18.585532] RDX: 00000000f4372b69 RSI: 000000006d3fa049 RDI: ffff8800b017c19c
[ 18.585532] RBP: ffff8800b0cc77f8 R08: 0000000022835dc6 R09: 000000000974849a
[ 18.585532] R10: ffffffffa01cc696 R11: 0000000000000004 R12: ffff880134cb2128
[ 18.585532] R13: ffff8800b0cc7850 R14: ffff880134cb2128 R15: ffff8800b2706400
[ 18.585532] FS: 00007f0497d3a980(0000) GS:ffff88013fc00000(0000) knlGS:0000000000000000
[ 18.585532] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 18.585532] CR2: 0000000022835df6 CR3: 00000000b060e000 CR4: 00000000000407f0
[ 18.585532] Stack:
[ 18.585532] ffff8800b017c000 ffff8800b017c000 ffff8800b0cc7a70 ffff8800b017c1c0
[ 18.585532] ffff8800b076b400 ffff8800b017c000 ffff8800b0cc7a70 0000000000000000
[ 18.585532] ffff8800b076b400 ffff880134cb2120 ffff8800b0cc7a38 ffffffffa01c3ed5
[ 18.585532] Call Trace:
[ 18.585532] [<ffffffffa01c3ed5>] ovs_flow_cmd_new+0x175/0x3a0 [openvswitch]
[ 18.585532] [<ffffffff81208688>] ? bh_lru_install+0x178/0x1b0
[ 18.585532] [<ffffffff8137ed83>] ? radix_tree_lookup_slot+0x13/0x30
[ 18.585532] [<ffffffff8165f445>] genl_family_rcv_msg+0x1a5/0x3c0
[ 18.585532] [<ffffffff8165f660>] ? genl_family_rcv_msg+0x3c0/0x3c0
[ 18.585532] [<ffffffff8165f6f1>] genl_rcv_msg+0x91/0xd0
[ 18.585532] [<ffffffff8165d761>] netlink_rcv_skb+0xc1/0xe0
[ 18.585532] [<ffffffff8165dc8c>] genl_rcv+0x2c/0x40
[ 18.585532] [<ffffffff8165ccf6>] netlink_unicast+0xf6/0x200
[ 18.585532] [<ffffffff8165d11d>] netlink_sendmsg+0x31d/0x780
[ 18.585532] [<ffffffff81614173>] sock_sendmsg+0x93/0xd0
[ 18.585532] [<ffffffff8101c375>] ? native_sched_clock+0x35/0x90
[ 18.585532] [<ffffffff8101c3d9>] ? sched_clock+0x9/0x10
[ 18.585532] [<ffffffff810966f5>] ? sched_clock_local+0x25/0x90
[ 18.585532] [<ffffffff81622427>] ? verify_iovec+0x47/0xd0
[ 18.585532] [<ffffffff81614989>] ___sys_sendmsg+0x399/0x3b0
[ 18.585532] [<ffffffff81096cb5>] ? fetch_task_cputime+0x95/0x100
[ 18.585532] [<ffffffff811de4c8>] ? pipe_read+0x1c8/0x2f0
[ 18.585532] [<ffffffff8101c375>] ? native_sched_clock+0x35/0x90
[ 18.585532] [<ffffffff8101c375>] ? native_sched_clock+0x35/0x90
[ 18.585532] [<ffffffff8101c3d9>] ? sched_clock+0x9/0x10
[ 18.585532] [<ffffffff8111cf1c>] ? acct_account_cputime+0x1c/0x20
[ 18.585532] [<ffffffff81096dab>] ? account_user_time+0x8b/0xa0
[ 18.585532] [<ffffffff811f30e5>] ? __fget_light+0x25/0x70
[ 18.585532] [<ffffffff81615082>] __sys_sendmsg+0x42/0x80
[ 18.585532] [<ffffffff816150d2>] SyS_sendmsg+0x12/0x20
[ 18.585532] [<ffffffff817365e4>] tracesys_phase2+0xd8/0xdd
[ 18.585532] Code: 24 e8 4c 8b 45 b0 31 d2 4d 89 b8 48 03 00 00 41 0f b7 4f 28 41 0f b7 77 2a 0f b7 c1 29 ce 49 8d 7c 00 38 c1 fe 02 e8 d4 af 1d e1 <41> 89 40 30 4c 8b 2b 4c 89 c6 4c 89 ef e8 a2 f5 ff ff 8b 43 20
[ 18.585532] RIP [<ffffffffa01cc5ec>] ovs_flow_tbl_insert+0xdc/0x1f0 [openvswitch]
[ 18.585532] RSP <ffff8800b0cc77a8>
[ 18.585532] CR2: 0000000022835df6
[ 18.969812] ---[ end trace fdb3743001087166 ]---
I'll go back to 3.17 to test your patch in the meantime.
-J
---
-Jay Vosburgh, jay.vosburgh@canonical.com
* Re: kernel panic receiving flooded VXLAN traffic with OVS
2014-11-07 21:13 ` Jay Vosburgh
@ 2014-11-07 22:29 ` Jesse Gross
2014-11-07 23:06 ` Pravin Shelar
0 siblings, 1 reply; 13+ messages in thread
From: Jesse Gross @ 2014-11-07 22:29 UTC (permalink / raw)
To: Jay Vosburgh; +Cc: Pravin Shelar, netdev, discuss@openvswitch.org
On Fri, Nov 7, 2014 at 1:13 PM, Jay Vosburgh <jay.vosburgh@canonical.com> wrote:
> Jesse Gross <jesse@nicira.com> wrote:
>
>>On Fri, Nov 7, 2014 at 10:34 AM, Jesse Gross <jesse@nicira.com> wrote:
>>> On Fri, Nov 7, 2014 at 9:40 AM, Pravin Shelar <pshelar@nicira.com> wrote:
>>>> On Thu, Nov 6, 2014 at 5:58 PM, Jay Vosburgh <jay.vosburgh@canonical.com> wrote:
> [...]
>>>>> I'm not sure if this is an error on the part of the RX / GRO
>>>>> processing in assembling the GRO skb, or in how OVS calls skb_segment.
>>>>>
>>>>
>>>> I think this is related to an skb_segment() issue where it is not able to
>>>> handle this type of skb geometry. We need to fix skb segmentation. I
>>>> will investigate it more.
>>>
>>> One problem that I see is that vxlan_gro_complete() doesn't add
>>> SKB_GSO_UDP_TUNNEL to gso_type. This causes us to attempt
>>> fragmentation as UDP rather than continuing down to do TCP
>>> segmentation. That probably screws up the skb geometry.
>>
>>I sent out a patch to fix this issue. I'm pretty sure that it is the
>>root cause of the originally reported case, but I don't have a good way
>>to reproduce it, so it would be great if you could test it, Jay.
>
> I'm having an issue there; when I set up my recreation on
> current net-next (3.18-rc2) without your new patch, I get the following
> oops when my ovs script does "ovs-vsctl --if-exists del-br br-ex":
Hmm, that looks like a totally different problem. Pravin - any ideas?
* Re: kernel panic receiving flooded VXLAN traffic with OVS
2014-11-07 22:29 ` Jesse Gross
@ 2014-11-07 23:06 ` Pravin Shelar
[not found] ` <CALnjE+q1t0dbC6-EvxsQvvNafKsk2HNKXBjDrALA9S-gon68PQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
0 siblings, 1 reply; 13+ messages in thread
From: Pravin Shelar @ 2014-11-07 23:06 UTC (permalink / raw)
To: Jesse Gross; +Cc: Jay Vosburgh, netdev, discuss@openvswitch.org
On Fri, Nov 7, 2014 at 2:29 PM, Jesse Gross <jesse@nicira.com> wrote:
> On Fri, Nov 7, 2014 at 1:13 PM, Jay Vosburgh <jay.vosburgh@canonical.com> wrote:
>> Jesse Gross <jesse@nicira.com> wrote:
>>
>>>On Fri, Nov 7, 2014 at 10:34 AM, Jesse Gross <jesse@nicira.com> wrote:
>>>> On Fri, Nov 7, 2014 at 9:40 AM, Pravin Shelar <pshelar@nicira.com> wrote:
>>>>> On Thu, Nov 6, 2014 at 5:58 PM, Jay Vosburgh <jay.vosburgh@canonical.com> wrote:
>> [...]
>>>>>> I'm not sure if this is an error on the part of the RX / GRO
>>>>>> processing in assembling the GRO skb, or in how OVS calls skb_segment.
>>>>>>
>>>>>
>>>>> I think this is related to an skb_segment() issue where it is not able to
>>>>> handle this type of skb geometry. We need to fix skb segmentation. I
>>>>> will investigate it more.
>>>>
>>>> One problem that I see is that vxlan_gro_complete() doesn't add
>>>> SKB_GSO_UDP_TUNNEL to gso_type. This causes us to attempt
>>>> fragmentation as UDP rather than continuing down to do TCP
>>>> segmentation. That probably screws up the skb geometry.
>>>
>>>I sent out a patch to fix this issue. I'm pretty sure that it is the
>>>root cause of the originally reported case, but I don't have a good way
>>>to reproduce it, so it would be great if you could test it, Jay.
>>
>> I'm having an issue there; when I set up my recreation on
>> current net-next (3.18-rc2) without your new patch, I get the following
>> oops when my ovs script does "ovs-vsctl --if-exists del-br br-ex":
>
> Hmm, that looks like a totally different problem. Pravin - any ideas?
I am not able to reproduce this with the above command. Jay, can you send me
the steps to reproduce this issue?
Thanks,
Pravin.
* Re: [ovs-discuss] kernel panic receiving flooded VXLAN traffic with OVS
[not found] ` <CALnjE+q1t0dbC6-EvxsQvvNafKsk2HNKXBjDrALA9S-gon68PQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2014-11-08 5:13 ` Jay Vosburgh
0 siblings, 0 replies; 13+ messages in thread
From: Jay Vosburgh @ 2014-11-08 5:13 UTC (permalink / raw)
To: Pravin Shelar; +Cc: netdev, discuss-yBygre7rU0TnMu66kgdUjQ@public.gmane.org
Pravin Shelar <pshelar@nicira.com> wrote:
>On Fri, Nov 7, 2014 at 2:29 PM, Jesse Gross <jesse@nicira.com> wrote:
>> On Fri, Nov 7, 2014 at 1:13 PM, Jay Vosburgh <jay.vosburgh@canonical.com> wrote:
>>> Jesse Gross <jesse@nicira.com> wrote:
>>>
>>>>On Fri, Nov 7, 2014 at 10:34 AM, Jesse Gross <jesse@nicira.com> wrote:
>>>>> On Fri, Nov 7, 2014 at 9:40 AM, Pravin Shelar <pshelar@nicira.com> wrote:
>>>>>> On Thu, Nov 6, 2014 at 5:58 PM, Jay Vosburgh <jay.vosburgh@canonical.com> wrote:
>>> [...]
>>>>>>> I'm not sure if this is an error on the part of the RX / GRO
>>>>>>> processing in assembling the GRO skb, or in how OVS calls skb_segment.
>>>>>>>
>>>>>>
>>>>>> I think this is related to an skb_segment() issue where it is not able to
>>>>>> handle this type of skb geometry. We need to fix skb segmentation. I
>>>>>> will investigate it more.
>>>>>
>>>>> One problem that I see is that vxlan_gro_complete() doesn't add
>>>>> SKB_GSO_UDP_TUNNEL to gso_type. This causes us to attempt
>>>>> fragmentation as UDP rather than continuing down to do TCP
>>>>> segmentation. That probably screws up the skb geometry.
>>>>
>>>>I sent out a patch to fix this issue. I'm pretty sure that it is the
>>>>root cause of the originally reported case, but I don't have a good way
>>>>to reproduce it, so it would be great if you could test it, Jay.
>>>
>>> I'm having an issue there; when I set up my recreation on
>>> current net-next (3.18-rc2) without your new patch, I get the following
>>> oops when my ovs script does "ovs-vsctl --if-exists del-br br-ex":
>>
>> Hmm, that looks like a totally different problem. Pravin - any ideas?
>
>I am not able to reproduce with above command. Jay, Can you send me
>steps to reproduce this issue?
Well, at the moment a 3.18.0-rc2 kernel panics in
ovs_flow_tbl_insert as soon as ovs-vswitchd starts up. Booting to an
earlier kernel (3.17, for example) with no other changes doesn't panic.
I moved /etc/openvswitch/conf.db away and rebooted the system
(which I think will eliminate the stored configuration), and the kernel
still hits this oops when ovs-vswitchd starts up.
A bit of poking with the crash tool shows:
[ 22.180002] RIP: 0010:[<ffffffffa01a55ec>] [<ffffffffa01a55ec>] ovs_flow_tbl_insert+0xdc/0x1f0 [openvswitch]
[ 22.180002] RSP: 0018:ffff8801391a77a8 EFLAGS: 00010203
[ 22.180002] RAX: 00000000076cc6f1 RBX: ffff8800b35c2020 RCX: 00000000fb994f19
[ 22.180002] RDX: 000000009cc907e5 RSI: 00000000a490ff19 RDI: ffff8800b0c1c19c
[ 22.180002] RBP: ffff8801391a77f8 R08: 000000006867223a R09: 00000000819d92a7
[ 22.180002] R10: ffffffffa01a5696 R11: 0000000000000004 R12: ffff8800b35c2028
[ 22.180002] R13: ffff8801391a7850 R14: ffff8800b35c2028 R15: ffff880134827800
[ 22.180002] FS: 00007f0ef491a980(0000) GS:ffff88013fc80000(0000) knlGS:0000000000000000
0xffffffffa01a55e7 <ovs_flow_tbl_insert+0xd7>: callq 0xffffffff813a75c0 <__jhash2>
0xffffffffa01a55ec <ovs_flow_tbl_insert+0xdc>: mov %eax,0x30(%r8)
0xffffffffa01a55f0 <ovs_flow_tbl_insert+0xe0>: mov (%rbx),%r13
0xffffffffa01a55f3 <ovs_flow_tbl_insert+0xe3>: mov %r8,%rsi
0xffffffffa01a55f6 <ovs_flow_tbl_insert+0xe6>: mov %r13,%rdi
0xffffffffa01a55f9 <ovs_flow_tbl_insert+0xe9>: callq 0xffffffffa01a4ba0 <table_instance_insert>
So it panics on return from __jhash2 because %r8 is invalid;
this is presumably
int ovs_flow_tbl_insert(struct flow_table *table, struct sw_flow *flow,
struct sw_flow_mask *mask)
{
[...]
flow->hash = flow_hash(&flow->key, flow->mask->range.start,
flow->mask->range.end);
ti = ovsl_dereference(table->ti);
table_instance_insert(ti, flow);
Looking at __jhash2, I see that it uses %r8 internally, but %r8 doesn't
appear to be saved and restored either around the call to, or within,
__jhash2. In ovs_flow_tbl_insert, %r8 appears to hold the "flow"
variable, so this panic might have nothing to do with OVS itself.
I'll have to look at this further on Monday.
-J
---
-Jay Vosburgh, jay.vosburgh@canonical.com
_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss
* Re: kernel panic receiving flooded VXLAN traffic with OVS
2014-11-07 1:58 kernel panic receiving flooded VXLAN traffic with OVS Jay Vosburgh
2014-11-07 17:40 ` Pravin Shelar
@ 2014-12-04 1:45 ` Jay Vosburgh
2014-12-06 2:51 ` Jesse Gross
1 sibling, 1 reply; 13+ messages in thread
From: Jay Vosburgh @ 2014-12-04 1:45 UTC (permalink / raw)
To: netdev, discuss; +Cc: Pravin Shelar, Jesse Gross
Jay Vosburgh <jay.vosburgh@canonical.com> wrote:
> I am able to reproduce a kernel panic on a system using
>openvswitch when receiving VXLAN traffic under a very specific set of
>circumstances. This occurs with a recent net-next as well as an Ubuntu
>3.13 kernel. I'm not sure if the error lies in OVS, GRO, or elsewhere.
>
> In summary, when the system receives multiple VXLAN encapsulated
>TCP segments for a different system (not intended for local reception)
>that are from the middle of an active connection (received due to a switch
>flood), and are tagged to a VLAN not configured on the local host, then
>the system panics in skb_segment when OVS calls __skb_gso_segment on the
>GRO skb prior to performing an upcall to user space.
>
> The panic occurs in skbuff.c:skb_segment(), at the BUG_ON around
>line 3036:
>
>struct sk_buff *skb_segment(struct sk_buff *head_skb,
> netdev_features_t features)
>{
>[...]
> skb_shinfo(nskb)->tx_flags = skb_shinfo(head_skb)->tx_flags &
> SKBTX_SHARED_FRAG;
>
> while (pos < offset + len) {
> if (i >= nfrags) {
> BUG_ON(skb_headlen(list_skb));
>
> i = 0;
>
>
> The BUG_ON triggers because the skbs that have been GRO
>accumulated are partially or entirely linear, depending upon the receiving
>network device (sky2 is partial, enic is entire). The receive buffers end
>up being linear evidently because the mtu is set to 9000, and
>__netdev_alloc_skb calls __alloc_skb (and thus kmalloc) instead of
>__netdev_alloc_frag followed by build_skb.
>
> The foreign-VLAN VXLAN TCP segments are not processed as normal
>VXLAN traffic, as there is no listener on the VLAN in question, so once
>GRO processes them, they are sent directly to ovs_vport_receive. The
>panic stack appears as follows:
I've worked out some more details on the cause.
There seems to be a mismatch between GRO and the packet receive
processing. GRO only looks at the receiving port number in order to
trigger VXLAN GRO accumulation (which will in turn perform TCP
accumulation on the encapsulated segment). For the panicking case, the
packet receive processing doesn't deliver the GRO skb to VXLAN because
there is no VXLAN listener on the foreign VLAN.
The GRO skb is not processed through iptunnel_pull_header by
vxlan_udp_encap_recv, so the GRO skb is left with the skb header
pointing to the UDP header, not the inner TCP header. Note that second
and later skbs within the GRO skb have their headers pointing to the
inner TCP header.
Then, when ovs_dp_upcall later ends up in inet_gso_segment, it
passes the GRO skb to udp4_ufo_fragment, not tcp_gso_segment.
GRO and the skb_segment call from ovs_dp_upcall appear to work
fine on TCP-in-VXLAN segments that do pass through the VXLAN receive
processing.
I'm not sure how best to resolve this; adding a check to the GRO
processing that an skb destined for the VXLAN port would actually be
received by VXLAN sounds like a possible solution, but that doesn't seem
to be simple to implement (because the skb->dev at the time GRO runs may
not match what it becomes later if the VXLAN runs on a VLAN).
-J
---
-Jay Vosburgh, jay.vosburgh@canonical.com
* Re: kernel panic receiving flooded VXLAN traffic with OVS
2014-12-04 1:45 ` Jay Vosburgh
@ 2014-12-06 2:51 ` Jesse Gross
[not found] ` <CAEP_g=_mfSCH1250ezx_h8_yM_4FzsYcsS6EnGi99AFoWO_MKw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
0 siblings, 1 reply; 13+ messages in thread
From: Jesse Gross @ 2014-12-06 2:51 UTC (permalink / raw)
To: Jay Vosburgh; +Cc: netdev, discuss@openvswitch.org, Pravin Shelar
On Wed, Dec 3, 2014 at 5:45 PM, Jay Vosburgh <jay.vosburgh@canonical.com> wrote:
>
> Jay Vosburgh <jay.vosburgh@canonical.com> wrote:
>
>> I am able to reproduce a kernel panic on a system using
>>openvswitch when receiving VXLAN traffic under a very specific set of
>>circumstances. This occurs with a recent net-next as well as an Ubuntu
>>3.13 kernel. I'm not sure if the error lies in OVS, GRO, or elsewhere.
>>
>> In summary, when the system receives multiple VXLAN encapsulated
>>TCP segments for a different system (not intended for local reception)
>>that are from the middle of an active connection (received due to a switch
>>flood), and are tagged to a VLAN not configured on the local host, then
>>the system panics in skb_segment when OVS calls __skb_gso_segment on the
>>GRO skb prior to performing an upcall to user space.
>>
>> The panic occurs in skbuff.c:skb_segment(), at the BUG_ON around
>>line 3036:
>>
>>struct sk_buff *skb_segment(struct sk_buff *head_skb,
>> netdev_features_t features)
>>{
>>[...]
>> skb_shinfo(nskb)->tx_flags = skb_shinfo(head_skb)->tx_flags &
>> SKBTX_SHARED_FRAG;
>>
>> while (pos < offset + len) {
>> if (i >= nfrags) {
>> BUG_ON(skb_headlen(list_skb));
>>
>> i = 0;
>>
>>
>> The BUG_ON triggers because the skbs that have been GRO
>>accumulated are partially or entirely linear, depending upon the receiving
>>network device (sky2 is partial, enic is entire). The receive buffers end
>>up being linear evidently because the mtu is set to 9000, and
>>__netdev_alloc_skb calls __alloc_skb (and thus kmalloc) instead of
>>__netdev_alloc_frag followed by build_skb.
>>
>> The foreign-VLAN VXLAN TCP segments are not processed as normal
>>VXLAN traffic, as there is no listener on the VLAN in question, so once
>>GRO processes them, they are sent directly to ovs_vport_receive. The
>>panic stack appears as follows:
>
> I've worked out some more details on the cause.
>
> There seems to be a mismatch between GRO and the packet receive
> processing. GRO only looks at the receiving port number in order to
> trigger VXLAN GRO accumulation (which will in turn perform TCP
> accumulation on the encapsulated segment). For the panicking case, the
> packet receive processing doesn't deliver the GRO skb to VXLAN because
> there is no VXLAN listener on the foreign VLAN.
>
> The GRO skb is not processed through iptunnel_pull_header by
> vxlan_udp_encap_recv, so the GRO skb is left with the skb header
> pointing to the UDP header, not the inner TCP header. Note that second
> and later skbs within the GRO skb have their headers pointing to the
> inner TCP header.
>
> Then, when ovs_dp_upcall later ends up in inet_gso_segment, it
> passes the GRO skb to udp4_ufo_fragment, not tcp_gso_segment.
>
> GRO and the skb_segment call from ovs_dp_upcall appear to work
> fine on TCP-in-VXLAN segments that do pass through the VXLAN receive
> processing.
>
> I'm not sure how best to resolve this; adding a check to the GRO
> processing that an skb destined for the VXLAN port would actually be
> received by VXLAN sounds like a possible solution, but that doesn't seem
> to be simple to implement (because the skb->dev at the time GRO runs may
> not match what it becomes later if the VXLAN runs on a VLAN).
I don't think there is anything inherently wrong with aggregating TCP
segments in VXLAN that are not destined for the local host. This is
conceptually the same as doing aggregation for TCP packets where we
only perform L2 bridging; in theory we shouldn't look at the upper
layers, but it is fine as long as we faithfully reconstruct the packet
on the way out.
A VXLAN packet that has been properly GRO-ed should result in a call
to tcp_gso_segment() even without the header being pulled off, since
that's what would happen for locally generated VXLAN packets on
egress. That's what I thought I was fixing with my previous patch to
the VXLAN GRO code, although perhaps there is another issue.
* Re: [ovs-discuss] kernel panic receiving flooded VXLAN traffic with OVS
[not found] ` <CAEP_g=_mfSCH1250ezx_h8_yM_4FzsYcsS6EnGi99AFoWO_MKw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2014-12-06 22:47 ` Nicholas Bastin
2014-12-08 17:33 ` Jesse Gross
0 siblings, 1 reply; 13+ messages in thread
From: Nicholas Bastin @ 2014-12-06 22:47 UTC (permalink / raw)
To: Jesse Gross; +Cc: discuss-yBygre7rU0TnMu66kgdUjQ@public.gmane.org, netdev
On Fri, Dec 5, 2014 at 4:51 PM, Jesse Gross <jesse-l0M0P4e3n4LQT0dZR+AlfA@public.gmane.org> wrote:
> I don't think there is anything inherently wrong with aggregating TCP
> segments in VXLAN that are not destined for the local host. This is
> conceptually the same as doing aggregation for TCP packets where we
> only perform L2 bridging - in theory we shouldn't look at the upper
> layers but it is fine as long as we faithfully reconstruct it on the
> way out.
>
But you don't faithfully reconstruct what the user originally sent -
in-path reassembly is always wrong, which is why hardware switches don't do
it (by default, anyhow). If you configure a middlebox to do some kind of
assembly/translation/whatever work for you, that's fine, but something that
advertises itself as a "switch" or "router" should definitely not do this
by default.
If you reassemble frames you completely defeat any kind of PMTU-D or
configured MTU that your user is using, and this breaks a lot of paths. We
completely disable all GRO/TSO/etc., but if you are able to determine that
a packet is not destined for the local host you should definitely not
mutate it.
--
Nick
* Re: [ovs-discuss] kernel panic receiving flooded VXLAN traffic with OVS
2014-12-06 22:47 ` [ovs-discuss] " Nicholas Bastin
@ 2014-12-08 17:33 ` Jesse Gross
[not found] ` <CAEP_g=86QKL_Oxxj0mo8CZs8+fyBZuYw2fQTMGow_bSJbk+8AQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
0 siblings, 1 reply; 13+ messages in thread
From: Jesse Gross @ 2014-12-08 17:33 UTC (permalink / raw)
To: Nicholas Bastin; +Cc: Jay Vosburgh, netdev, discuss@openvswitch.org
On Sat, Dec 6, 2014 at 2:47 PM, Nicholas Bastin <nick.bastin@gmail.com> wrote:
> On Fri, Dec 5, 2014 at 4:51 PM, Jesse Gross <jesse@nicira.com> wrote:
>>
>> I don't think there is anything inherently wrong with aggregating TCP
>> segments in VXLAN that are not destined for the local host. This is
>> conceptually the same as doing aggregation for TCP packets where we
>> only perform L2 bridging - in theory we shouldn't look at the upper
>> layers but it is fine as long as we faithfully reconstruct it on the
>> way out.
>
>
> But you don't faithfully reconstruct what the user originally sent - in-path
> reassembly is always wrong, which is why hardware switches don't do it (by
> default, anyhow). If you configure a middlebox to do some kind of
> assembly/translation/whatever work for you, that's fine, but something that
> advertises itself as a "switch" or "router" should definitely not do this by
> default.
>
> If you reassemble frames you completely defeat any kind of PMTU-D or
> configured MTU that your user is using, and this breaks a lot of paths. We
> completely disable all GRO/TSO/etc., but if you are able to determine that a
> packet is not destined for the local host you should definitely not mutate
> it.
If you look at the implementation of GRO/TSO, I think you will see
that it does in fact faithfully reconstruct the original message, and
that path MTU discovery is preserved. On Linux systems, GRO is enabled
by default for all workloads, including those, such as bridging, that
do not result in local termination.
* Re: [ovs-discuss] kernel panic receiving flooded VXLAN traffic with OVS
[not found] ` <CAEP_g=86QKL_Oxxj0mo8CZs8+fyBZuYw2fQTMGow_bSJbk+8AQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2014-12-08 21:14 ` Nicholas Bastin
0 siblings, 0 replies; 13+ messages in thread
From: Nicholas Bastin @ 2014-12-08 21:14 UTC (permalink / raw)
To: Jesse Gross; +Cc: discuss-yBygre7rU0TnMu66kgdUjQ@public.gmane.org, netdev
On Mon, Dec 8, 2014 at 7:33 AM, Jesse Gross <jesse-l0M0P4e3n4LQT0dZR+AlfA@public.gmane.org> wrote:
> If you look at the implementation of GRO/TSO, I think you will see
> that it does in fact faithfully reconstruct the original message and
> path MTU discovery is preserved. On Linux systems, GRO is enabled by
> default for all workloads - including those that do not result in
> local termination such as bridging.
I will go back and test this again (it's been a while; we just run with
all offloads turned off by default now). When we were having problems with
this, we would find segmented TCP flows getting reassembled along the path
and then output with the local egress MTU (which was considerably larger
than that at the end stations), resulting in performance-crushing IP
fragmentation later in the path.
--
Nick
end of thread, other threads:[~2014-12-08 21:14 UTC | newest]
Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-11-07 1:58 kernel panic receiving flooded VXLAN traffic with OVS Jay Vosburgh
2014-11-07 17:40 ` Pravin Shelar
2014-11-07 18:34 ` Jesse Gross
2014-11-07 20:27 ` Jesse Gross
2014-11-07 21:13 ` Jay Vosburgh
2014-11-07 22:29 ` Jesse Gross
2014-11-07 23:06 ` Pravin Shelar
[not found] ` <CALnjE+q1t0dbC6-EvxsQvvNafKsk2HNKXBjDrALA9S-gon68PQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2014-11-08 5:13 ` [ovs-discuss] " Jay Vosburgh
2014-12-04 1:45 ` Jay Vosburgh
2014-12-06 2:51 ` Jesse Gross
[not found] ` <CAEP_g=_mfSCH1250ezx_h8_yM_4FzsYcsS6EnGi99AFoWO_MKw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2014-12-06 22:47 ` [ovs-discuss] " Nicholas Bastin
2014-12-08 17:33 ` Jesse Gross
[not found] ` <CAEP_g=86QKL_Oxxj0mo8CZs8+fyBZuYw2fQTMGow_bSJbk+8AQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2014-12-08 21:14 ` Nicholas Bastin
This is a public inbox; see the mirroring instructions
for how to clone and mirror all data and code used for this inbox,
as well as URLs for its NNTP newsgroup(s).