From: "Daniel P. Berrangé" <berrange@redhat.com>
To: "manish.mishra" <manish.mishra@nutanix.com>
Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
Het Gala <het.gala@nutanix.com>,
qemu-devel@nongnu.org, quintela@redhat.com, pbonzini@redhat.com,
armbru@redhat.com, eblake@redhat.com
Subject: Re: [PATCH 0/4] Multiple interface support on top of Multi-FD
Date: Thu, 16 Jun 2022 18:32:15 +0100
Message-ID: <YqtpH/Rh0t8dm0Kd@redhat.com>
In-Reply-To: <4f19d641-8064-2eec-8b3f-035d4133fe46@nutanix.com>
On Thu, Jun 16, 2022 at 03:44:09PM +0530, manish.mishra wrote:
>
> On 16/06/22 1:46 pm, Daniel P. Berrangé wrote:
> > On Wed, Jun 15, 2022 at 08:14:26PM +0100, Dr. David Alan Gilbert wrote:
> > > * Daniel P. Berrangé (berrange@redhat.com) wrote:
> > > > On Fri, Jun 10, 2022 at 05:58:31PM +0530, manish.mishra wrote:
> > > > > On 09/06/22 9:17 pm, Daniel P. Berrangé wrote:
> > > > > > On Thu, Jun 09, 2022 at 07:33:01AM +0000, Het Gala wrote:
> > > > > > > As of now, the multi-FD feature supports connection over the default network
> > > > > > > only. This patchset is a QEMU-side implementation of multiple-interface
> > > > > > > support for multi-FD. This enables us to fully utilize dedicated or
> > > > > > > multiple NICs in cases where bonding of NICs is not possible.
> > > > > > >
> > > > > > > Introduction
> > > > > > > ------------
> > > > > > > Multi-FD QEMU implementation currently supports connection only on the default
> > > > > > > network. This denies us advantages like:
> > > > > > > - Separating VM live migration traffic from the default network.
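
As context: multifd today is configured entirely through QMP against a
single migration URI. A minimal Python sketch speaking raw QMP over a
unix socket -- the socket path, channel count and destination address
below are hypothetical:

  import json, socket

  def qmp_session(path, commands):
      s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
      s.connect(path)
      f = s.makefile("rw")
      f.readline()                      # server greeting
      for cmd in [{"execute": "qmp_capabilities"}] + commands:
          f.write(json.dumps(cmd) + "\n")
          f.flush()
          # NB: a real client would also filter out async events here
          print(f.readline().strip())   # {"return": ...}

  qmp_session("/tmp/qmp-src.sock", [
      {"execute": "migrate-set-capabilities",
       "arguments": {"capabilities": [{"capability": "multifd",
                                       "state": True}]}},
      {"execute": "migrate-set-parameters",
       "arguments": {"multifd-channels": 4}},
      {"execute": "migrate",
       "arguments": {"uri": "tcp:192.168.1.2:4444"}},
  ])

Note there is one URI: every multifd channel connects to the same
destination IP and port.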
> > > > > Hi Daniel,
> > > > >
> > > > > I totally understand your concern around this approach increasing complexity inside
> > > > > qemu, when similar things can be done with NIC teaming. But we thought this approach
> > > > > provides much more flexibility to the user in a few cases, like:
> > > > >
> > > > > 1. We checked our customer data: almost all of the hosts had multiple NICs, but LACP
> > > > > support in their setups was very rare. So for those cases this approach can help
> > > > > utilise multiple NICs, as teaming is not possible there.
> > > > AFAIK, LACP is not required in order to do link aggregation with Linux.
> > > > Traditional Linux bonding has no special NIC hardware or switch requirements,
> > > > so LACP is merely a "nice to have" in order to simplify some aspects.
> > > >
> > > > IOW, migration with traffic spread across multiple NICs is already
> > > > possible AFAICT.
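
For illustration, a traditional round-robin bond needs nothing beyond
iproute2 on each host. A minimal sketch, driving the usual ip commands
from Python (run as root; the slave names and address are hypothetical):

  import subprocess

  def ip(args):
      subprocess.run(["ip"] + args.split(), check=True)

  ip("link add bond0 type bond mode balance-rr")
  for slave in ("eth0", "eth1"):
      ip(f"link set {slave} down")          # slaves must be down to enslave
      ip(f"link set {slave} master bond0")
  ip("link set bond0 up")
  ip("addr add 10.0.0.1/24 dev bond0")

Migration is then pointed at the bond0 address and needs no QEMU-side
knowledge of the individual NICs.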
> > > Are we sure that works with multifd? I've seen a lot of bonding NIC
> > > setups which spread based on a hash of source/destination IP and port
> > > numbers; given that we use the same dest port and IP at the moment what
> > > happens in reality? That hashing can be quite delicate for high
> > > bandwidth single streams.
> > The simplest Linux bonding mode does per-packet round-robin across
> > NICs, so traffic from the collection of multifd connections should
> > fill up all the NICs in the bond. There are of course other modes
> > which may be sub-optimal for the reasons you describe. Which mode
> > to pick depends on the traffic patterns of the service you're
> > aiming to balance.
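
To make that concrete, here is a toy model of the hash policies -- this
is deliberately not the kernel's exact formula, just an illustration of
why the policy choice matters for multifd. Every channel shares the IP
pair and the destination port; only the ephemeral source port differs.
So a layer2+3-style hash pins all channels to one slave, while a
layer3+4-style hash can spread them:

  def layer2_3(src_ip, dst_ip, n_slaves):
      # Ignores ports: identical result for every multifd channel.
      return hash((src_ip, dst_ip)) % n_slaves

  def layer3_4(src_ip, sport, dst_ip, dport, n_slaves):
      # Includes ports: differing source ports can spread channels.
      return hash((src_ip, sport, dst_ip, dport)) % n_slaves

  SRC, DST, DPORT, SLAVES = "10.0.0.1", "10.0.0.2", 4444, 2
  for sport in (49152, 49153, 49154, 49155):   # one per multifd channel
      print("l2+3 ->", layer2_3(SRC, DST, SLAVES),
            " l3+4 ->", layer3_4(SRC, sport, DST, DPORT, SLAVES))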
>
> My understanding of networking is not good enough, so apologies in advance if something
> does not make sense. As per my understanding, it is easy to do load balancing on the sender
> side, because we have full control over where to send each packet, but it is complicated on
> the receive side if we do not have LACP-like support. I see there are some teaming techniques
> which do load balancing of incoming traffic, possibly by advertising different slaves' MAC
> addresses in ARP replies, but that does not work for our use case and may require a
> complicated setup for proper usage. Our use case is something like this: both source and
> destination have two 10Gbps NICs each, and we want a throughput of 20Gbps for live migration.
I believe you are right. The Linux bonding will give us the full 20 Gbps
throughput on the transmit side, without any hardware dependencies.
On the receive side, however, there is a dependency on the network
switch being able to balance the traffic it forwards to the target.
This is fairly common in switches, but the typical policies based on
hashing the MAC/IP address will not be sufficient in this case.
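
A quick way to check what a given setup actually does is to watch the
per-slave receive counters on the target while a test migration runs;
a small sketch, again with hypothetical device names:

  # Run before and after a test migration to see how traffic spread.
  for dev in ("eth0", "eth1"):
      with open(f"/sys/class/net/{dev}/statistics/rx_bytes") as f:
          print(dev, "rx_bytes =", f.read().strip())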
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|