netdev.vger.kernel.org archive mirror
From: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
To: "Jiri Pirko" <jiri@resnulli.us>, "Bjørnar Ness" <bjornar.ness@gmail.com>
Cc: netdev <netdev@vger.kernel.org>
Subject: Re: bonding (IEEE 802.3ad) not working with qemu/virtio
Date: Thu, 28 Jan 2016 14:53:41 +0100	[thread overview]
Message-ID: <56AA1D65.2000509@cumulusnetworks.com> (raw)
In-Reply-To: <20160128133350.GA2231@nanopsycho.orion>

On 01/28/2016 02:33 PM, Jiri Pirko wrote:
> Mon, Jan 25, 2016 at 05:24:48PM CET, bjornar.ness@gmail.com wrote:
>> As subject says, 802.3ad bonding is not working with virtio network model.
>>
>> The only errors I see is:
>>
>> No 802.3ad response from the link partner for any adapters in the bond.
>>
>> Dumping the network traffic shows that no LACP packets are sent from the
>> host running the virtio driver; changing to, for example, e1000 solves the
>> problem with no configuration changes.
>>
>> Is this a known problem?
Can you show your bond's /proc/net/bonding/bond<X>? Also, to get a better
picture of what's going on, I'd suggest enabling the pr_debug() calls in the
3ad code:
echo 'file bond_3ad.c +p' > /sys/kernel/debug/dynamic_debug/control
(assuming you have debugfs mounted at /sys/kernel/debug)
Then you can follow the kernel log to see what the state machines are doing.
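Spelled out, the enable/follow/disable sequence is roughly this (a sketch:
it assumes CONFIG_DYNAMIC_DEBUG is built into your kernel, debugfs is
mounted at /sys/kernel/debug, and you are root):

```shell
# Enable the pr_debug() statements in drivers/net/bonding/bond_3ad.c
echo 'file bond_3ad.c +p' > /sys/kernel/debug/dynamic_debug/control

# Follow the kernel log to watch the 3ad state machine messages
dmesg --follow | grep -i bond

# Turn the debug output back off when you're done
echo 'file bond_3ad.c -p' > /sys/kernel/debug/dynamic_debug/control
```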
I can clearly see LACP packets sent over virtio net devices:
14:53:05.323490 52:54:00:51:25:3c > 01:80:c2:00:00:02, ethertype Slow Protocols (0x8809), length 124: LACPv1, length 110
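For reference, a capture like the one above can be taken by filtering on the
Slow Protocols ethertype (0x8809); the interface name here is an assumption:

```shell
# Capture LACP (Slow Protocols, ethertype 0x8809) frames on eth1;
# -e prints the link-level header so the src/dst MACs are visible
tcpdump -e -n -i eth1 ether proto 0x8809
```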

> 
> I believe the problem is virtio_net for obvious reasons does not report
> speed and duplex. Bonding 3ad mode makes that uncomfortable :)
root@dev:~# ethtool -i eth1
driver: virtio_net
root@dev:~# ethtool eth1
Settings for eth1:
	Supported ports: [ ]
	Supported link modes:   Not reported
	Supported pause frame use: No
	Supports auto-negotiation: No
	Advertised link modes:  Not reported
	Advertised pause frame use: No
	Advertised auto-negotiation: No
	*Speed: 10Mb/s*
	*Duplex: Full*
	Port: Twisted Pair
	PHYAD: 0
	Transceiver: internal
	Auto-negotiation: off
	MDI-X: Unknown
	Link detected: yes
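The same values can be read straight from sysfs, which is handy for
scripting; a small sketch (`get_link` is a hypothetical helper name — for
drivers that don't report link settings the reads fail, hence the fallback):

```shell
#!/bin/sh
# Print the speed/duplex a netdev reports via sysfs. Drivers that don't
# implement ethtool link settings (or a downed link) make these reads
# fail, so fall back to "unknown" instead of erroring out.
get_link() {
    iface=$1
    speed=$(cat "/sys/class/net/$iface/speed" 2>/dev/null)
    duplex=$(cat "/sys/class/net/$iface/duplex" 2>/dev/null)
    echo "$iface: speed=${speed:-unknown} duplex=${duplex:-unknown}"
}

get_link lo
```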

The bonding driver picks that up correctly:
[54569.138572] bond0: Adding slave eth1
[54569.139686] bond0: Port 1 Received status full duplex update from adapter
[54569.139690] bond0: Port 1 Received link speed 2 update from adapter

The debug messages above come from the 3ad mode pr_debug() calls.
I added 2 virtio_net adapters and they successfully joined a single LAG
once LACP was enabled on the other side.
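For reference, a minimal 802.3ad bond over two slaves can be set up with
iproute2 along these lines (interface names are assumptions matching the
output below, and the commands need root):

```shell
# Create an 802.3ad bond with 100ms MII monitoring
ip link add bond0 type bond mode 802.3ad miimon 100
# Slaves must be down before they can be enslaved
ip link set eth1 down
ip link set eth2 down
ip link set eth1 master bond0
ip link set eth2 master bond0
ip link set bond0 up
```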
root@dev:~# cat /proc/net/bonding/bond0 
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 52:54:00:51:25:3c
Active Aggregator Info:
	Aggregator ID: 1
	Number of ports: 2
	Actor Key: 5
	Partner Key: 0
	Partner Mac Address: 52:54:00:2f:30:f7

Slave Interface: eth1
MII Status: up
Speed: 10 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 52:54:00:51:25:3c
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: monitoring
Partner Churn State: monitoring
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: 52:54:00:51:25:3c
    port key: 5
    port priority: 255
    port number: 1
    port state: 61
details partner lacp pdu:
    system priority: 65535
    system mac address: 52:54:00:2f:30:f7
    oper key: 0
    port priority: 255
    port number: 1
    port state: 73

Slave Interface: eth2
MII Status: up
Speed: 10 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 52:54:00:bf:57:16
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: monitoring
Partner Churn State: monitoring
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: 52:54:00:51:25:3c
    port key: 5
    port priority: 255
    port number: 2
    port state: 61
details partner lacp pdu:
    system priority: 65535
    system mac address: 52:54:00:2f:30:f7
    oper key: 0
    port priority: 255
    port number: 1
    port state: 73
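A quick sanity check on output like the above is that every slave reports
the same Aggregator ID. A small sketch (`bond_agg_count` is a hypothetical
helper; it parses the /proc format shown above, where the active
aggregator's own "Aggregator ID" line is tab-indented and the per-slave
ones start at column 0):

```shell
#!/bin/sh
# Count distinct aggregator IDs among the slaves in a bond's proc file;
# a single LAG means exactly one distinct ID. The ^ anchor skips the
# tab-indented "Aggregator ID" line in the Active Aggregator Info section.
bond_agg_count() {
    awk -F': ' '/^Aggregator ID/ { ids[$2] = 1 }
                END { n = 0; for (i in ids) n++; print n }' "$1"
}
```

On the proc output above both eth1 and eth2 show Aggregator ID 1, so
`bond_agg_count /proc/net/bonding/bond0` would print 1.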

> 
> Use team ;)


Thread overview: 14+ messages
2016-01-25 16:24 bonding (IEEE 802.3ad) not working with qemu/virtio Bjørnar Ness
2016-01-28 13:33 ` Jiri Pirko
2016-01-28 13:53   ` Nikolay Aleksandrov [this message]
2016-01-28 14:10     ` Nikolay Aleksandrov
2016-01-29 21:31 ` Nikolay Aleksandrov
2016-01-29 21:45   ` Jay Vosburgh
2016-01-29 21:48     ` Nikolay Aleksandrov
2016-01-30  6:59       ` David Miller
2016-01-30 11:34         ` Jiri Pirko
2016-01-30 11:41         ` Nikolay Aleksandrov
2016-01-31 14:50           ` Michael S. Tsirkin
2016-02-01 18:49         ` Rick Jones
2016-01-31 14:58       ` Michael S. Tsirkin
2016-01-31 14:35     ` Michael S. Tsirkin
