* Re: [Bridge] [PATCH] macvlan: add tap device backend
2009-08-06 21:50 Arnd Bergmann
@ 2009-08-07 17:35 ` Daniel Robbins
0 siblings, 0 replies; 11+ messages in thread
From: Daniel Robbins @ 2009-08-07 17:35 UTC (permalink / raw)
To: Arnd Bergmann
Cc: netdev, Herbert Xu, Michael S. Tsirkin, Fischer, Anna, bridge,
linux-kernel, David S. Miller, Or Gerlitz,
Edge Virtual Bridging
On Thu, Aug 6, 2009 at 3:50 PM, Arnd Bergmann <arnd@arndb.de> wrote:
> This is a first prototype of a new interface into the network
> stack, to eventually replace tun/tap and the bridge driver
> in certain virtual machine setups.
I have some general questions about the intended use and benefits of
VEPA, from an IT perspective:
In which virtual machine setups and technologies do you foresee this
interface being used?
Is this new interface to be used within a virtual machine or
container, on the master node, or both?
What interface(s) would need to be configured for a single virtual
machine to use VEPA to access the network?
What are the current flexibility, security or performance limitations
of tun/tap and bridge that make this new interface necessary or
beneficial?
Is this new interface useful at all for VPN solutions or is it
*specifically* targeted for connecting virtual machines to the
network?
Is this essentially a bridge with layer-2 isolation for the virtual
machine interfaces built-in? If isolation is provided, what mechanism
is used to accomplish this, and how secure is it?
Does VEPA look like a regular ethernet interface (eth0) on the virtual
machine side?
Are there any associated user-space tools required for configuring a VEPA?
Do you have any HOWTO-style documentation that would demonstrate how
this interface would be used in production? Or a FAQ?
This seems like a very interesting effort but I don't quite have a
good grasp of VEPA's benefits and limitations -- I imagine that others
are in the same boat too.
Best Regards,
Daniel
* RE: [Bridge] [PATCH] macvlan: add tap device backend
[not found] <0199E0D51A61344794750DC57738F58E6D6A6CD7F6@GVW1118EXC.americas.hpqcorp.net>
@ 2009-08-07 19:10 ` Paul Congdon (UC Davis)
2009-08-07 19:35 ` Stephen Hemminger
2009-08-07 22:05 ` Arnd Bergmann
0 siblings, 2 replies; 11+ messages in thread
From: Paul Congdon (UC Davis) @ 2009-08-07 19:10 UTC (permalink / raw)
To: drobbins
Cc: 'Paul Congdon (UC Davis)', 'Fischer, Anna',
'Arnd Bergmann', herbert, mst, netdev, bridge,
linux-kernel, ogerlitz, evb, davem
Responding to Daniel's questions...
> I have some general questions about the intended use and benefits of
> VEPA, from an IT perspective:
>
> In which virtual machine setups and technologies do you foresee this
> interface being used?
The benefit of VEPA is the coordination and unification with the external network switch. So, in environments where you need or want your feature-rich, wire-speed external network device (firewall/switch/IPS/content-filter) to provide consistent policy enforcement, and you want your VMs' traffic to be subject to that enforcement, you will want their traffic directed externally. Perhaps you have some VMs that are on a DMZ, or that are clustering an application, or that implement a multi-tier application where you would normally place a firewall in between the tiers.
> Is this new interface to be used within a virtual machine or
> container, on the master node, or both?
It is really an interface to a new type of virtual switch. When you create a virtual network, I would imagine it being a new mode of operation (bridge, NAT, VEPA, etc.).
> What interface(s) would need to be configured for a single virtual
> machine to use VEPA to access the network?
It would be the same as if that machine were configured to use a bridge to access the network, but the bridge mode would be different.
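(For reference, and purely as an illustration with made-up interface names, the conventional bridge/tap setup this is being compared against looks roughly like this on the host:

  # create a tap device for the guest and bridge it with the physical uplink
  tunctl -t tap0        # or, with newer iproute2: ip tuntap add dev tap0 mode tap
  brctl addbr br0
  brctl addif br0 eth0
  brctl addif br0 tap0
  ip link set eth0 up
  ip link set tap0 up
  ip link set br0 up

A VEPA setup would keep the same guest-side view and only change the mode of the host-side switching element.)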
> What are the current flexibility, security or performance limitations
> of tun/tap and bridge that make this new interface necessary or
> beneficial?
If you have VMs that will be communicating with one another on the same physical machine, and you want their traffic to be exposed to an in-line network device such as an application firewall/IPS/content-filter, then (without this feature) you will have to have that device co-located within the same physical server. This will use up CPU cycles that you presumably purchased to run applications, it will require a lot of consistent configuration on all physical machines, and it could potentially involve a lot of software licensing, additional cost, etc. Everything would need to be replicated on each physical machine. With the VEPA capability, you can leverage all this functionality in an external network device and have it managed and configured in one place. The external implementation is likely a higher-performance, silicon-based implementation. It should also make it easier to migrate machines from one physical server to another while maintaining the same network policy enforcement.
> Is this new interface useful at all for VPN solutions or is it
> *specifically* targeted for connecting virtual machines to the
> network?
I'm not sure I see the benefit for VPN solutions, but I'd have to understand the deployment scenario better. Certainly this is targeted at connecting VMs to the adjacent physical LAN.
> Is this essentially a bridge with layer-2 isolation for the virtual
> machine interfaces built-in? If isolation is provided, what mechanism
> is used to accomplish this, and how secure is it?
That might be an oversimplification, but you can achieve layer-2 isolation if you connect to a standard external switch. If that switch has 'hairpin' forwarding, then the VMs can still talk at L2, but their traffic is forced through the external bridge. If that bridge is a security device (e.g. a firewall), then their traffic is exposed to it.
The isolation in the outbound direction is created by the way frames are forwarded. They are simply dropped on the wire, so no VMs can talk directly to one another without their traffic first going external. In the inbound direction, the isolation is created using the forwarding table.
> Does VEPA look like a regular ethernet interface (eth0) on the virtual
> machine side?
Yes
> Are there any associated user-space tools required for configuring a
> VEPA?
>
The standard brctl utility has been augmented to enable/disable the capability.
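(The exact brctl syntax added by that patch isn't quoted in this thread. As a rough illustration, mainline's per-port hairpin attribute for the bridge is exposed through sysfs and can be toggled like this on a Linux box acting as the adjacent bridge, with made-up bridge/port names:

  # reflect frames back out the port they arrived on, so VEPA-attached
  # VMs behind that port can still reach each other through this bridge
  echo 1 > /sys/class/net/br0/brif/eth0/hairpin_mode
)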
> Do you have any HOWTO-style documentation that would demonstrate how
> this interface would be used in production? Or a FAQ?
>
None yet.
> This seems like a very interesting effort but I don't quite have a
> good grasp of VEPA's benefits and limitations -- I imagine that others
> are in the same boat too.
>
There are some seminar slides available on the IEEE 802.1 website and elsewhere. The patch had a reference to a seminar, but here is another one you might find helpful:
http://www.internet2.edu/presentations/jt2009jul/20090719-congdon.pdf
I'm happy to try to explain further...
Paul
* Re: [Bridge] [PATCH] macvlan: add tap device backend
2009-08-07 19:10 ` [Bridge] [PATCH] macvlan: add tap device backend Paul Congdon (UC Davis)
@ 2009-08-07 19:35 ` Stephen Hemminger
2009-08-07 19:44 ` Fischer, Anna
` (2 more replies)
2009-08-07 22:05 ` Arnd Bergmann
1 sibling, 3 replies; 11+ messages in thread
From: Stephen Hemminger @ 2009-08-07 19:35 UTC (permalink / raw)
To: Paul Congdon (UC Davis)
Cc: drobbins, 'Paul Congdon (UC Davis)',
'Fischer, Anna', 'Arnd Bergmann', herbert, mst,
netdev, bridge, linux-kernel, ogerlitz, evb, davem
On Fri, 7 Aug 2009 12:10:07 -0700
"Paul Congdon \(UC Davis\)" <ptcongdon@ucdavis.edu> wrote:
> Responding to Daniel's questions...
>
> > I have some general questions about the intended use and benefits of
> > VEPA, from an IT perspective:
> >
> > In which virtual machine setups and technologies do you foresee this
> > interface being used?
>
> The benefit of VEPA is the coordination and unification with the external network switch. So, in environments where you are needing/wanting your feature rich, wire speed, external network device (firewall/switch/IPS/content-filter) to provide consistent policy enforcement, and you want your VMs traffic to be subject to that enforcement, you will want their traffic directed externally. Perhaps you have some VMs that are on a DMZ or clustering an application or implementing a multi-tier application where you would normally place a firewall in-between the tiers.
I do have to raise the point that Linux is perfectly capable of keeping up without
the need for an external switch. Whether you want policy external or internal is
an architecture decision that should not be driven by misinformation about performance.
* RE: [Bridge] [PATCH] macvlan: add tap device backend
2009-08-07 19:35 ` Stephen Hemminger
@ 2009-08-07 19:44 ` Fischer, Anna
2009-08-07 20:17 ` david
2009-08-07 19:47 ` Paul Congdon (UC Davis)
2009-08-07 21:38 ` Arnd Bergmann
2 siblings, 1 reply; 11+ messages in thread
From: Fischer, Anna @ 2009-08-07 19:44 UTC (permalink / raw)
To: Stephen Hemminger, Paul Congdon (UC Davis)
Cc: drobbins@funtoo.org, 'Paul Congdon (UC Davis)',
'Arnd Bergmann', herbert@gondor.apana.org.au,
mst@redhat.com, netdev@vger.kernel.org,
bridge@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
ogerlitz@voltaire.com, evb@yahoogroups.com, davem@davemloft.net
> Subject: Re: [Bridge] [PATCH] macvlan: add tap device backend
>
> On Fri, 7 Aug 2009 12:10:07 -0700
> "Paul Congdon \(UC Davis\)" <ptcongdon@ucdavis.edu> wrote:
>
> > Responding to Daniel's questions...
> >
> > > I have some general questions about the intended use and benefits
> of
> > > VEPA, from an IT perspective:
> > >
> > > In which virtual machine setups and technologies do you foresee this
> > > interface being used?
> >
> > The benefit of VEPA is the coordination and unification with the
> external network switch. So, in environments where you are
> needing/wanting your feature rich, wire speed, external network device
> (firewall/switch/IPS/content-filter) to provide consistent policy
> enforcement, and you want your VMs traffic to be subject to that
> enforcement, you will want their traffic directed externally. Perhaps
> you have some VMs that are on a DMZ or clustering an application or
> implementing a multi-tier application where you would normally place a
> firewall in-between the tiers.
>
> I do have to raise the point that Linux is perfectly capable of keeping
> up without
> the need of an external switch. Whether you want policy external or
> internal is
> a architecture decision that should not be driven by mis-information
> about performance.
VEPA is not only about enabling faster packet processing (firewall/switch/IPS/content-filter etc.) by doing it on the external switch.
Due to the rather low performance of software-based I/O virtualization, a lot of effort has recently gone into hardware-based implementations of virtual network interfaces, such as those provided by SR-IOV NICs. Without VEPA, such a NIC would have to implement sophisticated virtual switching capabilities. VEPA, however, is very simple and therefore well suited to a hardware-based implementation. So in the future it will give you direct-I/O-like performance plus all the capabilities your adjacent switch provides.
Anna
* RE: [Bridge] [PATCH] macvlan: add tap device backend
2009-08-07 19:35 ` Stephen Hemminger
2009-08-07 19:44 ` Fischer, Anna
@ 2009-08-07 19:47 ` Paul Congdon (UC Davis)
2009-08-07 21:38 ` Arnd Bergmann
2 siblings, 0 replies; 11+ messages in thread
From: Paul Congdon (UC Davis) @ 2009-08-07 19:47 UTC (permalink / raw)
To: 'Stephen Hemminger'
Cc: drobbins, 'Fischer, Anna', 'Arnd Bergmann',
herbert, mst, netdev, bridge, linux-kernel, ogerlitz, evb, davem
>
> I do have to raise the point that Linux is perfectly capable of keeping
> up without
> the need of an external switch. Whether you want policy external or
> internal is
> a architecture decision that should not be driven by mis-information
> about performance.
No argument here. I agree that you can do a lot in Linux. It is, as you
say, an architecture decision, one that can be enabled with this additional
mode of operation. Without a mode that forces traffic external, however, you
would always need to put this function internal, or play games with
overlapping VLANs to get traffic to forward the way you want it.
Paul
* RE: [Bridge] [PATCH] macvlan: add tap device backend
2009-08-07 19:44 ` Fischer, Anna
@ 2009-08-07 20:17 ` david
0 siblings, 0 replies; 11+ messages in thread
From: david @ 2009-08-07 20:17 UTC (permalink / raw)
To: Fischer, Anna
Cc: Stephen Hemminger, Paul Congdon (UC Davis), drobbins@funtoo.org,
'Arnd Bergmann', herbert@gondor.apana.org.au,
mst@redhat.com, netdev@vger.kernel.org,
bridge@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
ogerlitz@voltaire.com, evb@yahoogroups.com, davem@davemloft.net
On Fri, 7 Aug 2009, Fischer, Anna wrote:
> Subject: RE: [Bridge] [PATCH] macvlan: add tap device backend
>
>> Subject: Re: [Bridge] [PATCH] macvlan: add tap device backend
>>
>> On Fri, 7 Aug 2009 12:10:07 -0700
>> "Paul Congdon \(UC Davis\)" <ptcongdon@ucdavis.edu> wrote:
>>
>>> Responding to Daniel's questions...
>>>
>>>> I have some general questions about the intended use and benefits
>> of
>>>> VEPA, from an IT perspective:
>>>>
>>>> In which virtual machine setups and technologies do you foresee this
>>>> interface being used?
>>>
>>> The benefit of VEPA is the coordination and unification with the
>> external network switch. So, in environments where you are
>> needing/wanting your feature rich, wire speed, external network device
>> (firewall/switch/IPS/content-filter) to provide consistent policy
>> enforcement, and you want your VMs traffic to be subject to that
>> enforcement, you will want their traffic directed externally. Perhaps
>> you have some VMs that are on a DMZ or clustering an application or
>> implementing a multi-tier application where you would normally place a
>> firewall in-between the tiers.
>>
>> I do have to raise the point that Linux is perfectly capable of keeping
>> up without
>> the need of an external switch. Whether you want policy external or
>> internal is
>> a architecture decision that should not be driven by mis-information
>> about performance.
>
> VEPA is not only about enabling faster packet processing (like firewall/switch/IPS/content-filter etc) by doing this on the external switch.
>
> Due to rather low performance of software-based I/O virtualization approaches a lot of effort has recently been going into hardware-based implementations of virtual network interfaces like SRIOV NICs provide. Without VEPA, such a NIC would have to implement sophisticated virtual switching capabilities. VEPA however is very simple and therefore perfectly suited for a hardware-based implementation. So in the future, it will give you direct I/O like performance and all the capabilities your adjacent switch provides.
>
the performance overhead isn't from switching the packets, it's from
running the firewall/IDS/etc. software on the same system.
with VEPA, the communication from one VM to another VM running on the same
host is forced to go out the interface to the datacenter switching
fabric. The overall performance of the network link will be slightly
lower, but it allows other devices to be inserted into the path.
this is something that I would want available if I were to start using VMs
for things. I don't want to have to duplicate my IDS/firewalling functions
within each host system as well as having them as part of the switching
fabric.
David Lang
* Re: [Bridge] [PATCH] macvlan: add tap device backend
2009-08-07 19:35 ` Stephen Hemminger
2009-08-07 19:44 ` Fischer, Anna
2009-08-07 19:47 ` Paul Congdon (UC Davis)
@ 2009-08-07 21:38 ` Arnd Bergmann
2 siblings, 0 replies; 11+ messages in thread
From: Arnd Bergmann @ 2009-08-07 21:38 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Paul Congdon (UC Davis), drobbins, 'Fischer, Anna',
herbert, mst, netdev, bridge, linux-kernel, ogerlitz, evb, davem
On Friday 07 August 2009, Stephen Hemminger wrote:
> On Fri, 7 Aug 2009 12:10:07 -0700
> "Paul Congdon \(UC Davis\)" <ptcongdon@ucdavis.edu> wrote:
>
> > Responding to Daniel's questions...
> >
> > > I have some general questions about the intended use and benefits of
> > > VEPA, from an IT perspective:
> > >
> > > In which virtual machine setups and technologies do you foresee this
> > > interface being used?
> >
> > The benefit of VEPA is the coordination and unification with the
> > external network switch. So, in environments where you are
> > needing/wanting your feature rich, wire speed, external network
> > device (firewall/switch/IPS/content-filter) to provide consistent
> > policy enforcement, and you want your VMs traffic to be subject to
> > that enforcement, you will want their traffic directed externally.
> > Perhaps you have some VMs that are on a DMZ or clustering an
> > application or implementing a multi-tier application where you
> > would normally place a firewall in-between the tiers.
>
> I do have to raise the point that Linux is perfectly capable of keeping up without
> the need of an external switch. Whether you want policy external or internal is
> a architecture decision that should not be driven by mis-information about performance.
In general, I agree that Linux on a decent virtual machine host will be
able to handle forwarding of network data fast enough, often faster than
the external connectivity allows, given that going external means
transmitting every frame twice.
However, there is a tradeoff between CPU cycles and I/O bandwidth. If your
application needs lots of CPU but you have spare capacity on the PCI bus, the
network wire and the external switch, VEPA can also be a win on the performance
side. As always, performance depends on the application, even if it's not the
main driving factor here.
Arnd <><
* Re: [Bridge] [PATCH] macvlan: add tap device backend
2009-08-07 19:10 ` [Bridge] [PATCH] macvlan: add tap device backend Paul Congdon (UC Davis)
2009-08-07 19:35 ` Stephen Hemminger
@ 2009-08-07 22:05 ` Arnd Bergmann
2009-08-10 12:40 ` Fischer, Anna
1 sibling, 1 reply; 11+ messages in thread
From: Arnd Bergmann @ 2009-08-07 22:05 UTC (permalink / raw)
To: Paul Congdon (UC Davis)
Cc: drobbins, 'Fischer, Anna', herbert, mst, netdev, bridge,
linux-kernel, ogerlitz, evb, davem
On Friday 07 August 2009, Paul Congdon (UC Davis) wrote:
> Responding to Daniel's questions...
Thanks for the detailed responses. I'll add some more about the
specifics of the macvlan implementation that differ from the
bridge-based VEPA implementation.
> > Is this new interface to be used within a virtual machine or
> > container, on the master node, or both?
>
> It is really an interface to a new type of virtual switch. When
> you create virtual network, I would imagine it being a new mode
> of operation (bridge, NAT, VEPA, etc).
I think the question was whether the patch needs to be applied in the
host or the guest. Both the implementation that you and Anna did
and the one that I posted apply only to the *host* (master node);
the virtual machine does not need to know about it.
> > What interface(s) would need to be configured for a single virtual
> > machine to use VEPA to access the network?
>
> It would be the same as if that machine were configure to use a
> bridge to access the network, but the bridge mode would be different.
Right, with the bridge-based VEPA, you would set up a KVM guest
or a container with the regular tools, then use the sysfs interface
to put the bridge device into VEPA mode.
With the macvlan-based mode, you use 'ip link' to add a new tap
device on top of an external network interface and do not use a bridge at
all. Then you configure KVM to use that tap device instead of the
regular bridge/tap setup.
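(The exact commands for this prototype aren't spelled out in the thread; with the macvtap driver and the iproute2 support that eventually shipped, the host-side setup looks roughly like this, with illustrative device names:

  # create a VEPA-mode macvtap device on top of the physical uplink
  ip link add link eth0 name macvtap0 type macvtap mode vepa
  ip link set macvtap0 up
  # macvtap exposes a character device /dev/tapN, where N is the ifindex
  # of the new link; that device is what gets handed to KVM/QEMU in place
  # of the usual bridge/tap pair
  ls -l /dev/tap$(cat /sys/class/net/macvtap0/ifindex)
)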
> > What are the current flexibility, security or performance limitations
> > of tun/tap and bridge that make this new interface necessary or
> > beneficial?
>
> If you have VMs that will be communicating with one another on
> the same physical machine, and you want their traffic to be
> exposed to an in-line network device such as a application
> firewall/IPS/content-filter (without this feature) you will have
> to have this device co-located within the same physical server.
> This will use up CPU cycles that you presumable purchased to run
> applications, it will require a lot of consistent configuration
> on all physical machines, it could invoke potentially a lot of
> software licensing, additional cost, etc.. Everything would
> need to be replicated on each physical machine. With the VEPA
> capability, you can leverage all this functionality in an
> external network device and have it managed and configured in
> one place. The external implementation is likely a higher
> performance, silicon based implementation. It should make it
> easier to migrate machines from one physical server to another
> and maintain the same network policy enforcement.
It's worth noting that, depending on your network connectivity,
performance is likely to go down significantly with VEPA compared to
the existing bridge/tap setup, because every frame has to cross
an external wire of limited capacity twice, so you may
lose inter-guest bandwidth and see more latency in many cases, while
you free up CPU cycles. With the bridge-based VEPA, you might not
even gain many cycles, because much of the overhead is still there.
On the cost side, external switches can also get quite expensive
compared to x86 servers.
IMHO the real win of VEPA is on the management side, where you can
use a single set of tools for managing the network, rather than
having your network admins deal with both the external switches
and the setup of Linux netfilter rules etc.
The macvlan-based VEPA has the same features as the bridge-based
VEPA, but much simpler code, which allows a number of shortcuts
to save CPU cycles.
> The isolation in the outbound direction is created by the way frames
> are forwarded. They are simply dropped on the wire, so no VMs can
> talk directly to one another without their traffic first going
> external. In the inbound direction, the isolation is created using
> the forwarding table.
Right. Note that in the macvlan case, the filtering on inbound data is an inherent
part of the macvlan setup; it does not use the dynamic forwarding table of the
bridge driver.
> > Are there any associated user-space tools required for configuring a
> > VEPA?
> >
>
> The standard brctl utility has been augmented to enable/disable the capability.
That is for the bridge-based VEPA, while my patch uses the 'ip link'
command that ships with most distros. It does not need any modifications
right now, but might need them if we add other features, like support for
multiple MAC addresses in a single guest.
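(As a small usage note, assuming the illustrative macvtap0 device from above: newer iproute2 can already display the macvlan/macvtap mode of an existing link, e.g.

  # -d (details) prints the link type specifics, including "mode vepa"
  ip -d link show macvtap0
)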
Arnd <><
* RE: [Bridge] [PATCH] macvlan: add tap device backend
2009-08-07 22:05 ` Arnd Bergmann
@ 2009-08-10 12:40 ` Fischer, Anna
2009-08-10 19:04 ` Arnd Bergmann
0 siblings, 1 reply; 11+ messages in thread
From: Fischer, Anna @ 2009-08-10 12:40 UTC (permalink / raw)
To: Arnd Bergmann
Cc: Paul Congdon (UC Davis), drobbins@funtoo.org,
herbert@gondor.apana.org.au, mst@redhat.com,
netdev@vger.kernel.org, bridge@lists.linux-foundation.org,
linux-kernel@vger.kernel.org, ogerlitz@voltaire.com,
evb@yahoogroups.com, davem@davemloft.net
> Subject: Re: [Bridge] [PATCH] macvlan: add tap device backend
>
> On Friday 07 August 2009, Paul Congdon (UC Davis) wrote:
> > Responding to Daniel's questions...
>
> Thanks for the detailed responses. I'll add some more about the
> specifics of the macvlan implementation that differs from the
> bridge based VEPA implementation.
>
> > > Is this new interface to be used within a virtual machine or
> > > container, on the master node, or both?
> >
> > It is really an interface to a new type of virtual switch. When
> > you create virtual network, I would imagine it being a new mode
> > of operation (bridge, NAT, VEPA, etc).
>
> I think the question was whether the patch needs to applied in the
> host or the guest. Both the implementation that you and Anna did
> and the one that I posted only apply to the *host* (master node),
> the virtual machine does not need to know about it.
>
> > > What interface(s) would need to be configured for a single virtual
> > > machine to use VEPA to access the network?
> >
> > It would be the same as if that machine were configure to use a
> > bridge to access the network, but the bridge mode would be different.
>
> Right, with the bridge based VEPA, you would set up a kvm guest
> or a container with the regular tools, then use the sysfs interface
> to put the bridge device into VEPA mode.
>
> With the macvlan based mode, you use 'ip link' to add a new tap
> device to an external network interface and not use a bridge at
> all. Then you configure KVM to use that tap device instead of the
> regular bridge/tap setup.
>
> > > What are the current flexibility, security or performance
> limitations
> > > of tun/tap and bridge that make this new interface necessary or
> > > beneficial?
> >
> > If you have VMs that will be communicating with one another on
> > the same physical machine, and you want their traffic to be
> > exposed to an in-line network device such as a application
> > firewall/IPS/content-filter (without this feature) you will have
> > to have this device co-located within the same physical server.
> > This will use up CPU cycles that you presumable purchased to run
> > applications, it will require a lot of consistent configuration
> > on all physical machines, it could invoke potentially a lot of
> > software licensing, additional cost, etc.. Everything would
> > need to be replicated on each physical machine. With the VEPA
> > capability, you can leverage all this functionality in an
> > external network device and have it managed and configured in
> > one place. The external implementation is likely a higher
> > performance, silicon based implementation. It should make it
> > easier to migrate machines from one physical server to another
> > and maintain the same network policy enforcement.
>
> It's worth noting that depending on your network connectivity,
> performance is likely to go down significantly with VEPA over the
> existing bridge/tap setup, because all frames have to be sent
> twice through an external wire that has a limited capacity, so you may
> lose inter-guest bandwidth and get more latency in many cases, while
> you free up CPU cycles. With the bridge based VEPA, you might not
> even gain many cycles because much of the overhead is still there.
> On the cost side, external switches can also get quite expensive
> compared to x86 servers.
>
> IMHO the real win of VEPA is on the management side, where you can
> use a single set of tool for managing the network, rather than
> having your network admins deal with both the external switches
> and the setup of linux netfilter rules etc.
>
> The macvlan based VEPA has the same features as the bridge based
> VEPA, but much simpler code, which allows a number of shortcuts
> to save CPU cycles.
I am not yet convinced that the macvlan-based VEPA would be significantly
better from a performance point of view. Really, once you have
implemented all the missing bits and pieces to make the macvlan
driver a VEPA-compatible device, the code path for packet processing
will be very similar. Also, I think you have to keep in mind that,
ultimately, if a user is seriously concerned about high performance,
then they would go for a hardware-based solution, e.g. an SR-IOV NIC
with VEPA capabilities. Once you have made the decision for a software-
based approach, small performance differences should not have such a
big impact, and so I don't think this should have too much influence
on the design decision of where VEPA capabilities should be placed in
the kernel.
If you compare macvtap with the traditional QEMU networking interfaces that
are typically used in current bridged setups, then yes, performance will be
different. However, I think this is not necessarily a fair
comparison; the performance difference does not come from the
bridge being slow, but simply from the fact that you have implemented a
better way to connect a virtual interface to a backend device that
can be assigned to a VM. There is no reason why you could not do this
for a bridge port as well.
Anna
* Re: [Bridge] [PATCH] macvlan: add tap device backend
2009-08-10 12:40 ` Fischer, Anna
@ 2009-08-10 19:04 ` Arnd Bergmann
2009-08-10 19:32 ` Michael S. Tsirkin
0 siblings, 1 reply; 11+ messages in thread
From: Arnd Bergmann @ 2009-08-10 19:04 UTC (permalink / raw)
To: Fischer, Anna
Cc: Paul Congdon (UC Davis), drobbins@funtoo.org,
herbert@gondor.apana.org.au, mst@redhat.com,
netdev@vger.kernel.org, bridge@lists.linux-foundation.org,
linux-kernel@vger.kernel.org, ogerlitz@voltaire.com,
evb@yahoogroups.com, davem@davemloft.net
On Monday 10 August 2009, Fischer, Anna wrote:
> If you compare macvtap with traditional QEMU networking interfaces that
> are typically used in current bridged setups, then yes, performance will be
> different. However, I think that this is not necessarily a fair
> comparison, and the performance difference does not come from the
> bridge being slow, but simply because you have implemented a better
> solution to connect a virtual interface to a backend device that
> can be assigned to a VM. There is no reason why you could not do this
> for a bridge port as well.
It's not necessarily the bridge itself being slow (though some people
claim it is), but more the bridge preventing optimizations or making
them hard.
You already mentioned hardware filtering by unicast and multicast
MAC addresses, which macvlan already does (for unicast), but which would be
relatively complex with a bridge because of the way it does MAC address
learning.
If we want to do zero-copy receives, the hardware will, on top of
this, have to choose the receive buffer based on the MAC address,
with the buffer provided by the guest. I think this is not easy
with macvlan but doable, while I have no idea where you would even
start with the bridge code.
Arnd <><
* Re: [Bridge] [PATCH] macvlan: add tap device backend
2009-08-10 19:04 ` Arnd Bergmann
@ 2009-08-10 19:32 ` Michael S. Tsirkin
0 siblings, 0 replies; 11+ messages in thread
From: Michael S. Tsirkin @ 2009-08-10 19:32 UTC (permalink / raw)
To: Arnd Bergmann
Cc: Fischer, Anna, Paul Congdon (UC Davis), drobbins@funtoo.org,
herbert@gondor.apana.org.au, netdev@vger.kernel.org,
bridge@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
ogerlitz@voltaire.com, evb@yahoogroups.com, davem@davemloft.net
On Mon, Aug 10, 2009 at 09:04:54PM +0200, Arnd Bergmann wrote:
> On Monday 10 August 2009, Fischer, Anna wrote:
> > If you compare macvtap with traditional QEMU networking interfaces that
> > are typically used in current bridged setups, then yes, performance will be
> > different. However, I think that this is not necessarily a fair
> > comparison, and the performance difference does not come from the
> > bridge being slow, but simply because you have implemented a better
> > solution to connect a virtual interface to a backend device that
> > can be assigned to a VM. There is no reason why you could not do this
> > for a bridge port as well.
>
> It's not necessarily the bridge itself being slow (though some people
> claim it is) but more the bridge preventing optimizations or making
> them hard.
>
> You already mentioned hardware filtering by unicast and multicast
> mac addresses, which macvlan already does (for unicast) but which would be
> relatively complex with a bridge due to the way it does MAC address
> learning.
>
> If we want to do zero copy receives, the hardware will on top of
> this have to choose the receive buffer based on the mac address,
> with the buffer provided by the guest. I think this is not easy
> with macvlan but doable, while I have no idea where you would start
> using the bridge code.
>
> Arnd <><
The same applies to zero-copy sends: you need to know when
the buffers have been consumed in order to notify userspace,
and this is very hard with a generic bridge in the middle.
--
MST