* vswitches performance comparison
From: Jun Xiao @ 2015-07-21 18:00 UTC
To: discuss, dev
After the CloudNetEngine vswitch technical preview was launched, we received quite
a few queries about vswitch performance comparisons, but we cannot simply publish a
single test result from our platform, because performance varies across hardware and
workloads. That is why we encourage you to try the evaluation package and get real
data on your own setup.

Still, we would like to share a little more performance data from our hardware: a
comparison of native kernel OVS, OVS-DPDK, and the CNE vswitch under the most common
workload, concurrent bi-directional TCP traffic across hosts. We hope it gives you a
rough idea.
http://www.cloudnetengine.com/weblog/2015/07/22/vswitches-performance-comparison/
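For anyone who wants to reproduce the shape of that workload rather than our exact
numbers, below is a minimal sketch of how the concurrent bi-directional TCP traffic
could be driven with iperf3. The peer address, ports, stream count and duration are
placeholders, and it assumes iperf3 servers are already listening on both ports on
the other host:

    #!/usr/bin/env python3
    # Minimal sketch: concurrent bi-directional TCP streams between two hosts.
    # Assumes iperf3 is installed and servers ("iperf3 -s -p 5201" and
    # "iperf3 -s -p 5202") are already running on the peer host.
    import subprocess

    PEER = "192.168.1.2"   # placeholder address of the other host
    STREAMS = 4            # parallel TCP streams per direction
    DURATION = 60          # seconds

    # One client sends, the other receives (-R), so both directions carry
    # traffic at the same time.
    clients = [
        subprocess.Popen(["iperf3", "-c", PEER, "-p", "5201",
                          "-P", str(STREAMS), "-t", str(DURATION)]),
        subprocess.Popen(["iperf3", "-c", PEER, "-p", "5202", "-R",
                          "-P", str(STREAMS), "-t", str(DURATION)]),
    ]
    for c in clients:
        c.wait()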
Thanks,
Jun
* Re: [ovs-discuss] vswitches performance comparison
From: Gray, Mark D @ 2015-07-21 18:14 UTC
To: Jun Xiao, discuss, dev
> -----Original Message-----
> From: discuss [mailto:discuss-bounces@openvswitch.org] On Behalf Of Jun
> Xiao
> Sent: Tuesday, July 21, 2015 7:01 PM
> To: discuss; dev
> Subject: [ovs-discuss] vswitches performance comparison
>
> After the CloudNetEngine vswitch technical preview was launched, we received quite
> a few queries about vswitch performance comparisons, but we cannot simply publish a
> single test result from our platform, because performance varies across hardware and
> workloads. That is why we encourage you to try the evaluation package and get real
> data on your own setup.
>
> Still, we would like to share a little more performance data from our hardware: a
> comparison of native kernel OVS, OVS-DPDK, and the CNE vswitch under the most common
> workload, concurrent bi-directional TCP traffic across hosts. We hope it gives you a
> rough idea.
> http://www.cloudnetengine.com/weblog/2015/07/22/vswitches-performance-comparison/
I think there is an issue with your methodology. OVS-DPDK performance should be
significantly higher than kernel OVS.
>
> Thanks,
> Jun
* Re: [ovs-discuss] vswitches performance comparison
From: Jun Xiao @ 2015-07-21 18:28 UTC
To: Gray, Mark D; +Cc: dev, discuss
I'd like to hope it's a problem with my methodology, but I just followed the installation guide without any customization.
Hi Mark, do you have any performance data to share with us? Maybe we are using different types of workloads: as I mentioned, I am using a typical data center workload, while I guess you are talking about an NFV type of workload?
Thanks,
Jun
Sent from my iPhone
> On Jul 22, 2015, at 2:14 AM, Gray, Mark D <mark.d.gray@intel.com> wrote:
>
>
>
>> -----Original Message-----
>> From: discuss [mailto:discuss-bounces@openvswitch.org] On Behalf Of Jun
>> Xiao
>> Sent: Tuesday, July 21, 2015 7:01 PM
>> To: discuss; dev
>> Subject: [ovs-discuss] vswitches performance comparison
>>
>> After the CloudNetEngine vswitch technical preview was launched, we received quite
>> a few queries about vswitch performance comparisons, but we cannot simply publish a
>> single test result from our platform, because performance varies across hardware and
>> workloads. That is why we encourage you to try the evaluation package and get real
>> data on your own setup.
>>
>> Still, we would like to share a little more performance data from our hardware: a
>> comparison of native kernel OVS, OVS-DPDK, and the CNE vswitch under the most common
>> workload, concurrent bi-directional TCP traffic across hosts. We hope it gives you a
>> rough idea.
>> http://www.cloudnetengine.com/weblog/2015/07/22/vswitches-performance-comparison/
>
> I think there is an issue with your methodology. OVS-DPDK performance should be
> significantly higher than kernel OVS.
>
>>
>> Thanks,
>> Jun
* Re: [ovs-discuss] vswitches performance comparison
From: Gray, Mark D @ 2015-07-21 18:36 UTC
To: Jun Xiao; +Cc: dev, discuss
>
> I'd like to hope it's a problem with my methodology, but I just followed the
> installation guide without any customization.
>
> Hi Mark, do you have any performance data to share with us? Maybe we are using
> different types of workloads: as I mentioned, I am using a typical data center
> workload, while I guess you are talking about an NFV type of workload?
The number being floated around on the mailing list recently is 16.5 Mpps
for phy-phy. However, I don't think we have any iperf data off-hand for your
use case. When we test throughput into the VM, we usually generate the traffic externally
and send it NIC->OVS->VM->OVS->NIC, which is a little different from your setup.
I do know, however, that ovs-dpdk typically has much higher throughput than
the kernel datapath.
Have you seen this? https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
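For context, that phy-VM-phy path is roughly the following OVS-DPDK wiring. This is
only an illustrative sketch with placeholder bridge/port names and flow numbers, not
the exact configuration behind those numbers; a real setup follows the OVS
INSTALL.DPDK guide (hugepages, binding the NIC to a DPDK driver, QEMU vhost-user
arguments, and so on):

    #!/usr/bin/env python3
    # Illustrative sketch of a NIC -> OVS-DPDK -> VM -> OVS-DPDK -> NIC wiring.
    # Bridge/port names and OpenFlow port numbers are placeholders.
    import subprocess

    def sh(*args):
        subprocess.run(args, check=True)

    # Userspace (netdev) bridge so the DPDK datapath is used.
    sh("ovs-vsctl", "add-br", "br0",
       "--", "set", "bridge", "br0", "datapath_type=netdev")
    # Physical DPDK port (NIC already bound to a DPDK-compatible driver).
    sh("ovs-vsctl", "add-port", "br0", "dpdk0",
       "--", "set", "Interface", "dpdk0", "type=dpdk")
    # vhost-user port that the VM attaches to through QEMU.
    sh("ovs-vsctl", "add-port", "br0", "vhost-user0",
       "--", "set", "Interface", "vhost-user0", "type=dpdkvhostuser")
    # Steer traffic NIC <-> VM (OpenFlow port numbers assumed to be 1 and 2).
    sh("ovs-ofctl", "add-flow", "br0", "in_port=1,actions=output:2")
    sh("ovs-ofctl", "add-flow", "br0", "in_port=2,actions=output:1")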
>
> Thanks,
> Jun
* Re: [ovs-discuss] vswitches performance comparison
From: Jun Xiao @ 2015-07-21 18:48 UTC
To: Gray, Mark D; +Cc: dev, discuss
> On Jul 22, 2015, at 2:36 AM, Gray, Mark D <mark.d.gray@intel.com> wrote:
>
>
>>
>> I'd like to hope it's a problem with my methodology, but I just followed the
>> installation guide without any customization.
>>
>> Hi Mark, do you have any performance data to share with us? Maybe we are using
>> different types of workloads: as I mentioned, I am using a typical data center
>> workload, while I guess you are talking about an NFV type of workload?
>
> The number being floated around on the mailing list recently is 16.5 Mpps
> for phy-phy. However, I don't think we have any iperf data off-hand for your
> use case. When we test throughput into the VM, we usually generate the traffic externally
> and send it NIC->OVS->VM->OVS->NIC, which is a little different from your setup.
>
I guess a PMD driver is used inside the VM in that case, right?
> I do know, however, that ovs-dpdk typically has much higher throughput than
> the kernel datapath.
>
I'd say it depends on the workload: for small/medium packet sizes that's definitely true, but for TSO-sized workloads the gap is not that obvious (or can even be reversed), because the per-packet datapath overhead is amortized and hardware offloads can be leveraged.
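One quick way to see that effect on any setup is to rerun the same TCP test with
segmentation offloads toggled; a rough sketch below (the interface name is a
placeholder, and it assumes ethtool is available and the NIC/vNIC supports these
offloads):

    #!/usr/bin/env python3
    # Rough sketch: toggle TSO/GSO so the same TCP test runs once with MTU-sized
    # packets and once with TSO-sized buffers through the datapath.
    # The interface name is a placeholder; ethtool must be available.
    import subprocess

    IFACE = "eth0"  # placeholder

    def set_offloads(enabled: bool):
        state = "on" if enabled else "off"
        subprocess.run(["ethtool", "-K", IFACE, "tso", state, "gso", state],
                       check=True)

    set_offloads(False)   # per-packet path: every MTU-sized segment hits the vswitch
    # ... run the TCP benchmark here ...
    set_offloads(True)    # offloaded path: large TSO buffers, overhead amortized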
> Have you seen this? https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
>
Thanks for the pointer, I'll try it later.
>>
>> Thanks,
>> Jun
* Re: vswitches performance comparison
From: Stephen Hemminger @ 2015-07-21 21:02 UTC
To: Jun Xiao; +Cc: dev, discuss
On Wed, 22 Jul 2015 02:00:42 +0800
"Jun Xiao" <jun.xiao@cloudnetengine.com> wrote:
> After the CloudNetEngine vswitch technical preview was launched, we received quite
> a few queries about vswitch performance comparisons, but we cannot simply publish a
> single test result from our platform, because performance varies across hardware and
> workloads. That is why we encourage you to try the evaluation package and get real
> data on your own setup.
>
> Still, we would like to share a little more performance data from our hardware: a
> comparison of native kernel OVS, OVS-DPDK, and the CNE vswitch under the most common
> workload, concurrent bi-directional TCP traffic across hosts. We hope it gives you a
> rough idea.
> http://www.cloudnetengine.com/weblog/2015/07/22/vswitches-performance-comparison/
>
> Thanks,
> Jun
Since the real bottleneck in most vswitches is per-packet overhead, I would
recommend running RFC 2544 tests for better data.
You probably need to use something like pktgen to generate enough packets per second.
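For reference, a minimal sketch of driving the in-kernel pktgen for that kind of
packets-per-second test (interface name, addresses and counts are placeholders; the
control files and commands follow Documentation/networking/pktgen.txt):

    #!/usr/bin/env python3
    # Rough sketch of an RFC 2544-style packet rate test with the Linux kernel
    # pktgen module. Interface, MAC/IP addresses and packet counts are placeholders.
    import subprocess

    def pg_write(path, cmd):
        # pktgen is controlled by writing one command at a time to its /proc files.
        with open(path, "w") as f:
            f.write(cmd + "\n")

    subprocess.run(["modprobe", "pktgen"], check=True)

    thread = "/proc/net/pktgen/kpktgend_0"      # pktgen thread bound to CPU 0
    pg_write(thread, "rem_device_all")
    pg_write(thread, "add_device eth0")         # placeholder TX interface

    dev = "/proc/net/pktgen/eth0"
    pg_write(dev, "count 10000000")             # packets to send per run
    pg_write(dev, "pkt_size 60")                # 60 + 4-byte CRC = 64-byte frames
    pg_write(dev, "delay 0")                    # transmit as fast as possible
    pg_write(dev, "dst 10.0.0.2")               # placeholder destination IP
    pg_write(dev, "dst_mac 00:11:22:33:44:55")  # placeholder destination MAC

    pg_write("/proc/net/pktgen/pgctrl", "start")  # blocks until the run completes
    print(open(dev).read())                       # per-device result/statistics

RFC 2544 then asks you to binary-search the offered rate against the measured loss
to find the zero-loss throughput for each frame size.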
* Re: vswitches performance comparison
From: Gray, Mark D @ 2015-07-22 8:07 UTC
To: Stephen Hemminger, Jun Xiao; +Cc: dev, discuss
> "Jun Xiao" <jun.xiao@cloudnetengine.com> wrote:
>
> > After the CloudNetEngine vswitch technical preview was launched, we
> > received quite a few queries about vswitch performance comparisons, but
> > we cannot simply publish a single test result from our platform, because
> > performance varies across hardware and workloads. That is why we
> > encourage you to try the evaluation package and get real data on your
> > own setup.
> >
> > Still, we would like to share a little more performance data from our
> > hardware: a comparison of native kernel OVS, OVS-DPDK, and the CNE
> > vswitch under the most common workload, concurrent bi-directional TCP
> > traffic across hosts. We hope it gives you a rough idea.
> > http://www.cloudnetengine.com/weblog/2015/07/22/vswitches-performance-comparison/
> >
> > Thanks,
> > Jun
>
> Since the real bottleneck in most vswitches is per-packet overhead, I would
> recommend running RFC 2544 tests for better data.
>
> You probably need to use something like pktgen to generate enough packets
> per second.
Yeah, this is the methodology that we use as well.