* vxlan gro problem ?
@ 2014-10-08 8:46 yinpeijun
2014-10-12 19:50 ` Or Gerlitz
0 siblings, 1 reply; 4+ messages in thread
From: yinpeijun @ 2014-10-08 8:46 UTC (permalink / raw)
To: netdev, linux-kernel, ogerlitz; +Cc: lichunhe, wangfakai
Hi all,
Linux 3.14 was recently released, and I see that the networking stack has gained UDP GRO and VXLAN GRO support. I am testing this on Red Hat 7.0 (which also carries this feature): I use the kernel vxlan module to create a VXLAN device and attach it to an OVS bridge (a sketch of the creation commands follows the output below). The configuration is as follows:
root@25:~$ ip link
15: vxlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UNKNOWN mode DEFAULT
link/ether be:e1:ae:3d:8b:f2 brd ff:ff:ff:ff:ff:ff
16: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc mq master ovs-system state UNKNOWN mode DEFAULT qlen 5000
root@25:~$ ovs-vsctl show
aa1294f3-9952-4393-b2b5-54e9a6eb76ee
    Bridge ovs-vx
        Port ovs-vx
            Interface ovs-vx
                type: internal
        Port "vnet0"
            Interface "vnet0"
        Port "vxlan0"
            Interface "vxlan0"
    ovs_version: "2.0.2"
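For completeness, the sketch mentioned above: the devices were created with commands along these lines (the VNI, multicast group, and underlay device name here are illustrative, not my exact values):

    # create a VXLAN device on top of the underlay NIC
    ip link add vxlan0 type vxlan id 42 group 239.1.1.1 dev eth4 dstport 4789
    ip link set vxlan0 up

    # attach it to the OVS bridge as a plain port, next to the VM port
    ovs-vsctl add-br ovs-vx
    ovs-vsctl add-port ovs-vx vxlan0
    ovs-vsctl add-port ovs-vx vnet0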
vnet0 is the VM's backend device, and the remote end has the same configuration. I then use netperf inside the VM to measure throughput (netperf -H **** -t TCP_STREAM -l 10 -- -m 1460). The result is 3-4 Gbit/s, so the improvement is not obvious, and I am also confused that no aggregated packets (length > MTU) arrive in the receiving VM. What is wrong, or how should I test this feature?
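For reference, this is how I look for aggregation on the receive side (interface names here are illustrative):

    # inside the receiving VM: watch arriving TCP segment sizes; with
    # GRO working end to end I would expect lengths above the 1400 MTU
    tcpdump -i eth0 -nn tcp -c 20

    # on the receiving host: confirm GRO is enabled on the underlay NIC
    ethtool -k eth4 | grep generic-receive-offload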
* Re: vxlan gro problem ?
2014-10-08 8:46 vxlan gro problem ? yinpeijun
@ 2014-10-12 19:50 ` Or Gerlitz
2014-10-13 9:14 ` yinpeijun
0 siblings, 1 reply; 4+ messages in thread
From: Or Gerlitz @ 2014-10-12 19:50 UTC (permalink / raw)
To: yinpeijun; +Cc: netdev, linux-kernel, lichunhe, wangfakai
On 10/8/2014 10:46 AM, yinpeijun wrote:
> Hi all,
> Linux 3.14 was recently released, and I see that the networking stack has gained UDP GRO and VXLAN GRO support. I am testing this on Red Hat 7.0 (which also carries this feature): I use the kernel vxlan module to create a VXLAN device and attach it to an OVS bridge. The configuration is as follows:
>
> [configuration snipped]
>
> vnet0 is the VM's backend device, and the remote end has the same configuration. I then use netperf inside the VM to measure throughput (netperf -H **** -t TCP_STREAM -l 10 -- -m 1460). The result is 3-4 Gbit/s, so the improvement is not obvious, and I am also confused that no aggregated packets (length > MTU) arrive in the receiving VM. What is wrong, or how should I test this feature?
>
As things stand in 3.14, and AFAIK also in RHEL 7.0, for VXLAN GRO to
come into play you need to run over a NIC that also supports RX
checksum offload. Is that the case here?
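You can check both on the underlay NIC with something like this (device name illustrative):

    # both need to be on for the VXLAN GRO path to be taken
    ethtool -k eth4 | grep -E 'rx-checksumming|generic-receive-offload'

    # if either shows "off", try enabling it
    ethtool -K eth4 rx on gro on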
Also, the configuration you run with isn't the typical way to play
VXLAN with OVS... I didn't try it out, and I am out at LPC this week.
Did you try the usual track of running an OVS VXLAN port, e.g. as
explained in the Example section of [1]?
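That track lets OVS create and manage the tunnel itself instead of attaching an external vxlan netdev as a plain port, roughly like this (bridge, port names, and remote IP illustrative; see the Example section of [1] for the complete flow):

    ovs-vsctl add-port ovs-vx vxlan1 -- set interface vxlan1 \
        type=vxlan options:remote_ip=192.168.1.2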
Or.

[1] http://community.mellanox.com/docs/DOC-1446
* Re: vxlan gro problem ?
2014-10-12 19:50 ` Or Gerlitz
@ 2014-10-13 9:14 ` yinpeijun
2014-10-13 20:56 ` Or Gerlitz
0 siblings, 1 reply; 4+ messages in thread
From: yinpeijun @ 2014-10-13 9:14 UTC (permalink / raw)
To: Or Gerlitz, qinchuanyu
Cc: netdev, linux-kernel, lichunhe, wangfakai, liuyongan
On 2014/10/13 3:50, Or Gerlitz wrote:
> On 10/8/2014 10:46 AM, yinpeijun wrote:
>> [original report snipped]
>
> As things stand in 3.14, and AFAIK also in RHEL 7.0, for VXLAN GRO to come into play you need to run over a NIC that also supports RX checksum offload. Is that the case here?
>
> Also, the configuration you run with isn't the typical way to play VXLAN with OVS... I didn't try it out, and I am out at LPC this week.
>
> Did you try the usual track of running an OVS VXLAN port, e.g. as explained in the Example section of [1]?
>
> Or.
>
> [1] http://community.mellanox.com/docs/DOC-1446
>
Thank you for your reply, Gerlitz.
My test environment uses a Mellanox ConnectX-3 Pro NIC, which as far as I know supports RX checksum offload, but I am not sure whether some special configuration is needed, or whether the NIC driver or firmware needs an update (a check I ran is sketched after the output below). I have also tested the Red Hat 7.0 OVS VXLAN port with a configuration similar to the one above, but there is likewise no improvement.
The NIC information:
04:00.0 Ethernet controller: Mellanox Technologies MT27520 Family [ConnectX-3 Pro]
root@localhost:~# ethtool -i eth4
driver: mlx4_en
version: 2.0(Dec 2011)
firmware-version: 2.31.5050
bus-info: 0000:04:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: yes
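The check mentioned above, for reference: I look at the offload flags on the port like this (I have not changed these from the driver defaults):

    # offload state on the underlay port; a ConnectX-3 Pro with recent
    # firmware should also report tx-udp_tnl-segmentation for VXLAN
    ethtool -k eth4 | grep -E 'rx-checksumming|generic-receive-offload|udp_tnl'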
* Re: vxlan gro problem ?
2014-10-13 9:14 ` yinpeijun
@ 2014-10-13 20:56 ` Or Gerlitz
0 siblings, 0 replies; 4+ messages in thread
From: Or Gerlitz @ 2014-10-13 20:56 UTC (permalink / raw)
To: yinpeijun
Cc: Or Gerlitz, qinchuanyu, Linux Netdev List, Linux Kernel, lichunhe,
wangfakai, liuyongan
On Mon, Oct 13, 2014 at 11:14 AM, yinpeijun <yinpeijun@huawei.com> wrote:
> On 2014/10/13 3:50, Or Gerlitz wrote:
> My test environment uses a Mellanox ConnectX-3 Pro NIC, which as far as I know supports RX checksum offload, but I am not sure whether some special configuration is needed, or whether the NIC driver or firmware needs an update. I have also tested the Red Hat 7.0 OVS VXLAN port with a configuration similar to the one above, but there is likewise no improvement.
The NIC (HW model and firmware) looks just fine. As it seems now, this
boils down to getting the RHEL7 inbox mlx4 driver to work properly on
your setup, something that goes a bit beyond the interest of the
upstream mailing lists...
Or.