From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: "manish.mishra" <manish.mishra@nutanix.com>,
Het Gala <het.gala@nutanix.com>,
qemu-devel@nongnu.org, quintela@redhat.com, pbonzini@redhat.com,
armbru@redhat.com, eblake@redhat.com
Subject: Re: [PATCH 0/4] Multiple interface support on top of Multi-FD
Date: Thu, 16 Jun 2022 16:50:58 +0100 [thread overview]
Message-ID: <YqtRYs1wGEQR4wfo@work-vm> (raw)
In-Reply-To: <YqrpVRJazpbMz/HE@redhat.com>
* Daniel P. Berrangé (berrange@redhat.com) wrote:
> On Wed, Jun 15, 2022 at 05:43:28PM +0100, Daniel P. Berrangé wrote:
> > On Fri, Jun 10, 2022 at 05:58:31PM +0530, manish.mishra wrote:
> > >
> > > On 09/06/22 9:17 pm, Daniel P. Berrangé wrote:
> > > > On Thu, Jun 09, 2022 at 07:33:01AM +0000, Het Gala wrote:
> > > > > As of now, the multi-FD feature supports connections over the default network
> > > > > only. This patchset is a QEMU-side implementation of multiple-interface support
> > > > > for multi-FD. This enables us to fully utilize dedicated or multiple NICs in
> > > > > cases where bonding of NICs is not possible.
> > > > >
> > > > >
> > > > > Introduction
> > > > > -------------
> > > > > The current multi-FD QEMU implementation supports connections only on the default
> > > > > network. This denies us advantages such as:
> > > > > - Separating VM live migration traffic from the default network.
> > >
> > > Hi Daniel,
> > >
> > > I totally understand your concern about this approach increasing complexity inside
> > > QEMU when similar things can be done with NIC teaming. But we thought this approach
> > > provides much more flexibility to the user in a few cases, such as:
> > >
> > > 1. We checked our customer data: almost all of the hosts had multiple NICs, but LACP
> > > support in their setups was very rare. So for those cases this approach can help
> > > utilise multiple NICs, as teaming is not possible there.
> >
> > AFAIK, LACP is not required in order to do link aggregation with Linux.
> > Traditional Linux bonding has no special NIC hardware or switch requirements,
> > so LACP is merely a "nice to have" in order to simplify some aspects.
> >
> > IOW, migration with traffic spread across multiple NICs is already
> > possible AFAICT.
> >
> > I can understand that some people may not have actually configured
> > bonding on their hosts, but it is not unreasonable to request that
> > they do so, if they want to take advantage of aggregated bandwidth.
> >
> > It has the further benefit that it will be fault tolerant. With
> > this proposal, if any single NIC has a problem, the whole migration
> > will get stuck. With kernel-level bonding, if any single NIC has
> > a problem, it'll get offlined by the kernel and migration will
> > continue to work across the remaining active NICs.
> >
> > > 2. We have recently seen requests to separate storage, VM network and migration
> > > traffic over different vswitches, each backed by one or more NICs, as this gives
> > > better predictability and assurance. So hosts with multiple IPs/vswitches can be a
> > > very common environment. In this kind of environment this approach gives per-VM or
> > > per-migration flexibility: for a critical VM we can still use bandwidth from all
> > > available vswitches/interfaces, but for normal VMs live migration can be kept on
> > > dedicated NICs only, without changing the complete host network topology.
> > >
> > > Ultimately we want it to be something like [<ip-pair>, <multiFD-channels>, <bandwidth_control>]
> > > to provide bandwidth_control per interface.
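
(Purely to make the shape of that tuple concrete: the sketch below is not the
QMP or C API proposed in this series, and every name in it is hypothetical,
but one could picture each entry as roughly:

  #include <stdint.h>

  /* Hypothetical sketch only, not the structure used by these patches. */
  typedef struct MigrationIfacePair {
      char     *src_ip;           /* local address to bind; selects the NIC */
      char     *dst_ip;           /* destination address on the target host */
      uint16_t  multifd_channels; /* multi-FD channels carried by this pair */
      uint64_t  max_bandwidth;    /* per-interface bandwidth cap, bytes/sec */
  } MigrationIfacePair;

i.e. one source/destination address pair, a multi-FD channel count, and a
bandwidth cap per interface.)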
> >
> > Again, it is already possible to separate migration traffic from storage
> > traffic and from other network traffic. The target IP given will influence
> > which NIC is used based on the routing table, and I know this is already
> > done widely with OpenStack deployments.
>
> Actually I should clarify this is only practical if the two NICs are
> using different IP subnets, otherwise routing rules are not viable.
> So setting the source IP would be needed to select between a pair
> of NICs on the same IP subnet.
Yeh so I think that's one reason that the idea in this series is OK
(together with the idea for the NUMA stuff) and I suspect there are
other cases as well.
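
For anyone following along, a minimal POSIX-sockets sketch of that difference
is below; it has nothing to do with the actual multifd code, and the port
number is arbitrary. Without a bind(), the kernel routing table picks the
egress NIC from the destination address alone; binding a source address pins
the connection to a specific NIC even when both NICs sit on the same subnet:

  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <unistd.h>

  /* Connect to dst_ip, optionally binding the local (source) address first. */
  static int migrate_connect(const char *dst_ip, const char *src_ip)
  {
      struct sockaddr_in dst = { .sin_family = AF_INET,
                                 .sin_port = htons(49152) };
      int fd;

      if (inet_pton(AF_INET, dst_ip, &dst.sin_addr) != 1) {
          return -1;
      }
      fd = socket(AF_INET, SOCK_STREAM, 0);
      if (fd < 0) {
          return -1;
      }

      if (src_ip) {
          /* Explicit source address: traffic leaves via that NIC, even if
           * several NICs share the same IP subnet. */
          struct sockaddr_in src = { .sin_family = AF_INET };

          inet_pton(AF_INET, src_ip, &src.sin_addr);
          if (bind(fd, (struct sockaddr *)&src, sizeof(src)) < 0) {
              close(fd);
              return -1;
          }
      }

      /* Without the bind(), the routing table alone decides which NIC is
       * used, based on the destination address. */
      if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) < 0) {
          close(fd);
          return -1;
      }
      return fd;
  }

In effect the series is about letting the user make that source/destination
choice (plus a channel count) per multi-FD connection from QMP.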
Dave
> Previous usage I've seen has always setup fully distinct IP subnets
> for generic vs storage vs migration network traffic.
>
> With regards,
> Daniel
> --
> |: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org -o- https://fstop138.berrange.com :|
> |: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK