netdev.vger.kernel.org archive mirror
* Question about vlans, bonding, etc.
@ 2010-05-04  0:06 George B.
  2010-05-04  4:48 ` Eric Dumazet
  0 siblings, 1 reply; 6+ messages in thread
From: George B. @ 2010-05-04  0:06 UTC (permalink / raw)
  To: netdev

Watching the "Receive issues with bonding and vlans" thread brought a
question to mind.  In what order should things be done for best
performance?

For example, say I have a pair of ethernet interfaces.  Do I slave the
ethernet interfaces to the bond device and then make the vlans on the
bond devices?
Or do I make the vlans on the ethernet devices and then bond the vlan
interfaces?

In the first case I would have:



bond0.3--|     |------eth0
             bond0
bond0.5--|     |------eth1

The second case would be:

      |------------------eth0.5-----|
      |          |-------eth0.3---eth0
bond0  bond1
      |          |-------eth1.3---eth1
      |------------------eth1.5-----|

I am using the first method currently, as it seemed more intuitive to me
at the time to bond the ethernets and then put the vlans on the bonds,
but it seems life might be easier for the vlan driver if it were bound
directly to the hardware.  I am using Intel NICs (igb driver) with 4
queues per NIC.

Would a performance difference be expected between the two
configurations?  In the first configuration, can the vlan driver "see
through" the bond interface to the hardware and take advantage of
multiple queues if the hardware supports them?
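
For reference, this is roughly how I set up the first configuration
(bonding options elided, names as in the diagram above):

    modprobe bonding
    ifenslave bond0 eth0 eth1
    vconfig add bond0 3    # creates bond0.3
    vconfig add bond0 5    # creates bond0.5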

George Bonser


* Re: Question about vlans, bonding, etc.
  2010-05-04  0:06 Question about vlans, bonding, etc George B.
@ 2010-05-04  4:48 ` Eric Dumazet
  2010-05-14  1:10   ` George B.
  0 siblings, 1 reply; 6+ messages in thread
From: Eric Dumazet @ 2010-05-04  4:48 UTC (permalink / raw)
  To: George B.; +Cc: netdev

On Monday, May 3, 2010 at 17:06 -0700, George B. wrote:
> Watching the "Receive issues with bonding and vlans" thread brought a
> question to mind.  In what order should things be done for best
> performance?
> 
> For example, say I have a pair of ethernet interfaces.  Do I slave the
> ethernet interfaces to the bond device and then make the vlans on the
> bond devices?
> Or do I make the vlans on the ethernet devices and then bond the vlan
> interfaces?
> 
> In the first case I would have:
> 
> 
> 
> bond0.3--|     |------eth0
>              bond0
> bond0.5--|     |------eth1
> 
> The second case would be:
> 
>       |------------------eth0.5-----|
>       |          |-------eth0.3---eth0
> bond0  bond1
>       |          |-------eth1.3---eth1
>       |------------------eth1.5-----|
> 
> I am using the first method currently, as it seemed more intuitive to me
> at the time to bond the ethernets and then put the vlans on the bonds,
> but it seems life might be easier for the vlan driver if it were bound
> directly to the hardware.  I am using Intel NICs (igb driver) with 4
> queues per NIC.
>
> Would a performance difference be expected between the two
> configurations?  In the first configuration, can the vlan driver "see
> through" the bond interface to the hardware and take advantage of
> multiple queues if the hardware supports them?

Unfortunately, the first combination is not multiqueue-aware yet.

You'll need to patch the bonding driver like this if your NICs have 4
queues:

diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 85e813c..98cc3c0 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -4915,8 +4915,8 @@ int bond_create(struct net *net, const char *name)
 
        rtnl_lock();
 
-       bond_dev = alloc_netdev(sizeof(struct bonding), name ? name : "",
-                               bond_setup);
+       bond_dev = alloc_netdev_mq(sizeof(struct bonding), name ? name : "",
+                               bond_setup, 4);
        if (!bond_dev) {
                pr_err("%s: eek! can't alloc netdev!\n", name);
                rtnl_unlock();
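
If you carry this as a local patch, rebuilding only the bonding module
should be enough, something like (paths and file name illustrative):

    cd /path/to/linux-2.6
    patch -p1 < bonding-mq.diff
    make M=drivers/net/bonding modules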




* Re: Question about vlans, bonding, etc.
  2010-05-04  4:48 ` Eric Dumazet
@ 2010-05-14  1:10   ` George B.
  2010-05-14  1:12     ` Stephen Hemminger
  0 siblings, 1 reply; 6+ messages in thread
From: George B. @ 2010-05-14  1:10 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: netdev

On Mon, May 3, 2010 at 9:48 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> On Monday, May 3, 2010 at 17:06 -0700, George B. wrote:
>> Watching the "Receive issues with bonding and vlans" thread brought a
>> question to mind.  In what order should things be done for best
>> performance?
>>
>> For example, say I have a pair of ethernet interfaces.  Do I slave the
>> ethernet interfaces to the bond device and then make the vlans on the
>> bond devices?
>> Or do I make the vlans on the ethernet devices and then bond the vlan
>> interfaces?
>>
>> In the first case I would have:
>>
>>
>>
>> bond0.3--|     |------eth0
>>              bond0
>> bond0.5--|     |------eth1
>>
>> The second case would be:
>>
>>       |------------------eth0.5-----|
>>       |          |-------eth0.3---eth0
>> bond0  bond1
>>       |          |-------eth1.3---eth1
>>       |------------------eth1.5-----|
>>
>> I am using the first method currently, as it seemed more intuitive to me
>> at the time to bond the ethernets and then put the vlans on the bonds,
>> but it seems life might be easier for the vlan driver if it were bound
>> directly to the hardware.  I am using Intel NICs (igb driver) with 4
>> queues per NIC.
>>
>> Would a performance difference be expected between the two
>> configurations?  In the first configuration, can the vlan driver "see
>> through" the bond interface to the hardware and take advantage of
>> multiple queues if the hardware supports them?
>
> Unfortunately, the first combination is not multiqueue-aware yet.
>
> You'll need to patch the bonding driver like this if your NICs have 4
> queues:
>
> diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
> index 85e813c..98cc3c0 100644
> --- a/drivers/net/bonding/bond_main.c
> +++ b/drivers/net/bonding/bond_main.c
> @@ -4915,8 +4915,8 @@ int bond_create(struct net *net, const char *name)
>
>        rtnl_lock();
>
> -       bond_dev = alloc_netdev(sizeof(struct bonding), name ? name : "",
> -                               bond_setup);
> +       bond_dev = alloc_netdev_mq(sizeof(struct bonding), name ? name : "",
> +                               bond_setup, 4);
>        if (!bond_dev) {
>                pr_err("%s: eek! can't alloc netdev!\n", name);
>                rtnl_unlock();
>
>
>

I just got around to fooling with this some.  It seems to me that
I should be able to get better performance if I could create the vlans
on the ethernet interfaces and then bond them together.  For example,
it seems intuitive that I should be able to create vlans eth0.5 and
eth1.5 and then enslave them.  The problem is that when I try to create
vlan5 on the second interface, vconfig balks that it already exists.
Yes, I know it exists, but I want vlan5 on two interfaces, and I want
to use ifenslave to bond them together into a bond interface.  So if I
had 10 vlans, I would have 10 vlans on each ethernet interface and 10
bond interfaces.  The way I seem to be forced to do it now is to bond
the two NICs together and add all the vlans to the single bond
interface.  The bond interface would then seem to become a
bottleneck for all the vlans.
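
Roughly what I am trying to do, sketched for a single vlan (names
illustrative; the second vconfig is where it balks for me):

    vconfig add eth0 5               # creates eth0.5
    vconfig add eth1 5               # balks: "already exists"
    ifenslave bond5 eth0.5 eth1.5    # bond5 carrying vlan 5 only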

Is there some fundamental reason why it is not possible to create the
same vlan on multiple interfaces, as long as the naming convention
keeps them distinct?


* Re: Question about vlans, bonding, etc.
  2010-05-14  1:10   ` George B.
@ 2010-05-14  1:12     ` Stephen Hemminger
  2010-05-14  7:28       ` George B.
  2010-05-14  7:53       ` Benny Amorsen
  0 siblings, 2 replies; 6+ messages in thread
From: Stephen Hemminger @ 2010-05-14  1:12 UTC (permalink / raw)
  To: George B.; +Cc: Eric Dumazet, netdev

On Thu, 13 May 2010 18:10:33 -0700
"George B." <georgeb@gmail.com> wrote:

> vlan5 on the second interface, vconfig balks that it already exists.

vconfig is stupid. use 'ip link'
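
Something like (untested):

    ip link add link eth0 name eth0.5 type vlan id 5
    ip link add link eth1 name eth1.5 type vlan id 5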


-- 


* Re: Question about vlans, bonding, etc.
  2010-05-14  1:12     ` Stephen Hemminger
@ 2010-05-14  7:28       ` George B.
  2010-05-14  7:53       ` Benny Amorsen
  1 sibling, 0 replies; 6+ messages in thread
From: George B. @ 2010-05-14  7:28 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: Eric Dumazet, netdev

On Thu, May 13, 2010 at 6:12 PM, Stephen Hemminger
<shemminger@vyatta.com> wrote:
> On Thu, 13 May 2010 18:10:33 -0700
> "George B." <georgeb@gmail.com> wrote:
>
>> vlan5 on the second interface, vconfig balks that it already exists.
>
> vconfig is stupid. use 'ip link'
>
>
> --
>

It still didn't work, but I was using 2.6.32.  It works with 2.6.34-rc7.


* Re: Question about vlans, bonding, etc.
  2010-05-14  1:12     ` Stephen Hemminger
  2010-05-14  7:28       ` George B.
@ 2010-05-14  7:53       ` Benny Amorsen
  1 sibling, 0 replies; 6+ messages in thread
From: Benny Amorsen @ 2010-05-14  7:53 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: George B., Eric Dumazet, netdev

Stephen Hemminger <shemminger@vyatta.com> writes:

> vconfig is stupid. use 'ip link'

Yay, another undocumented use of the ip command. I swear, ip was
invented to make man-pages useless.


/Benny


