From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: "manish.mishra" <manish.mishra@nutanix.com>,
	Het Gala <het.gala@nutanix.com>,
	qemu-devel@nongnu.org, quintela@redhat.com, pbonzini@redhat.com,
	armbru@redhat.com, eblake@redhat.com
Subject: Re: [PATCH 0/4] Multiple interface support on top of Multi-FD
Date: Wed, 15 Jun 2022 20:14:26 +0100	[thread overview]
Message-ID: <Yqovkrm37mUdggws@work-vm> (raw)
In-Reply-To: <YqoMMCbF3PBnYSn/@redhat.com>

* Daniel P. Berrangé (berrange@redhat.com) wrote:
> On Fri, Jun 10, 2022 at 05:58:31PM +0530, manish.mishra wrote:
> > 
> > On 09/06/22 9:17 pm, Daniel P. Berrangé wrote:
> > > On Thu, Jun 09, 2022 at 07:33:01AM +0000, Het Gala wrote:
> > > > As of now, the multi-FD feature supports connections over the default network
> > > > only. This patchset is a QEMU-side implementation of multiple-interface
> > > > support for multi-FD. It enables us to fully utilize dedicated or
> > > > multiple NICs when bonding of NICs is not possible.
> > > > 
> > > > 
> > > > Introduction
> > > > -------------
> > > > The multi-FD QEMU implementation currently supports connections only on the default
> > > > network. This denies us advantages like:
> > > > - Separating VM live migration traffic from the default network.
> > 
> > Hi Daniel,
> > 
> > I totally understand your concern that this approach increases complexity inside QEMU
> > when similar things can be done with NIC teaming. But we think this approach provides
> > much more flexibility to the user in a few cases:
> > 
> > 1. We checked our customer data: almost all of the hosts had multiple NICs, but LACP support
> >     in their setups was very rare. So for those cases this approach can help utilize multiple
> >     NICs, as teaming is not possible there.
> 
> AFAIK, LACP is not required in order to do link aggregation with Linux.
> Traditional Linux bonding has no special NIC hardware or switch requirements,
> so LACP is merely a "nice to have" in order to simplify some aspects.
> 
> IOW, migration with traffic spread across multiple NICs is already
> possible AFAICT.

Are we sure that works with multifd?  I've seen a lot of bonded NIC
setups which spread traffic based on a hash of source/destination IP and
port numbers; given that we use the same destination port and IP at the
moment, what happens in reality?  That hashing can be quite delicate for
high bandwidth single streams.
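
Just to make that concrete, here's a rough sketch (deliberately simplified;
it is not the kernel's actual xmit_hash_policy code) of the two common
hashing styles: with a layer2+3-style policy every multifd stream between
the same pair of hosts lands on the same slave NIC, while with layer3+4 the
spread depends entirely on which ephemeral source ports the connections
happen to get.

import ipaddress
import random

def l23_hash(src_ip, dst_ip, n_slaves):
    # layer2+3 style: only the IP pair matters, so every multifd stream
    # between the same two hosts picks the same slave NIC.
    return (int(ipaddress.ip_address(src_ip)) ^
            int(ipaddress.ip_address(dst_ip))) % n_slaves

def l34_hash(src_ip, dst_ip, src_port, dst_port, n_slaves):
    # layer3+4 style: ports are mixed in, so the spread depends on the
    # ephemeral source ports the kernel happens to assign.
    ips = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    return ((src_port ^ dst_port) ^ (ips & 0xffff)) % n_slaves

src, dst, dport, slaves = "10.0.0.1", "10.0.0.2", 4446, 2
print("layer2+3:", [l23_hash(src, dst, slaves) for _ in range(4)])
print("layer3+4:", [l34_hash(src, dst, random.randint(49152, 65535), dport, slaves)
                    for _ in range(4)])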

> I can understand that some people may not have actually configured
> bonding on their hosts, but it is not unreasonable to request that
> they do so, if they want to take advantage of aggregated bandwidth.
> 
> It has the further benefit that it will be fault tolerant. With
> this proposal, if any single NIC has a problem, the whole migration
> will get stuck. With kernel level bonding, if any single NIC has
> a problem, it'll get offlined by the kernel and migration will
> continue to work across the remaining active NICs.
> 
> > 2. We have seen requests recently to separate out the traffic of storage, VM network and migration
> >     over different vswitches, which can be backed by one or more NICs, as this gives better
> >     predictability and assurance. So hosts with multiple IPs/vswitches can be a very common
> >     environment. In this kind of environment this approach gives per-VM or per-migration
> >     flexibility: for a critical VM we can still use bandwidth from all available vswitches/interfaces,
> >     but for a normal VM we can keep live migration only on dedicated NICs, without changing
> >     the complete host network topology.
> > 
> >     Ultimately we want it to be something like [<ip-pair>, <multiFD-channels>, <bandwidth_control>]
> >     to provide bandwidth_control per interface.
> 
> Again, it is already possible to separate migration traffic from storage
> traffic and from other network traffic. The target IP given will influence
> which NIC is used based on the routing table, and I know this is already
> done widely with OpenStack deployments.
> 
> > 3. We mentioned a dedicated NIC as a use case; agreed, it can be done without this
> >     approach too.
> 
> 
> > > > Multi-interface with Multi-FD
> > > > -----------------------------
> > > > Multiple-interface support over basic multi-FD has been implemented in the
> > > > patches. Advantages of this implementation are:
> > > > - Able to separate live migration traffic from the default network interface by
> > > >    creating multi-FD channels on the IP addresses of multiple non-default interfaces.
> > > > - Can optimize the number of multi-FD channels on a particular interface
> > > >    depending upon the network bandwidth limit of that interface.
> > > Manually assigning individual channels to different NICs is a pretty
> > > inefficient way to optimize traffic. It feels like you could easily get
> > > into a situation where one NIC ends up idle while the other is busy,
> > > especially if the traffic patterns are different. For example, with
> > > post-copy there's an extra channel for OOB async page requests, and
> > > it's far from clear that manually picking NICs per channel upfront is
> > > going to work for that.  The kernel can continually and dynamically balance
> > > load on the fly and so do much better than any static mapping QEMU
> > > tries to apply, especially if there are multiple distinct QEMUs
> > > competing for bandwidth.
> > > 
> > Yes Daniel, the current solution is only for pre-copy; multiFD is not yet
> > supported with postcopy, but in future we can extend it for postcopy

I had been thinking about explicit selection of network device for NUMA
use though; ideally I'd like to be able to associate a set of multifd
threads to each NUMA node, and then associate a NIC with that set of
threads; so that the migration happens down the NIC that's on the node
the RAM is on.  On a really good day you'd have one NIC per top level
NUMA node.
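
As a rough illustration of the topology information that's already there to
drive that (just a sketch of the idea, not anything QEMU does today), the
NUMA node each PCI NIC hangs off can be read straight out of sysfs on Linux:

import glob
import os
from collections import defaultdict

def nics_by_numa_node():
    # Group network interfaces by the NUMA node of their underlying PCI device.
    nodes = defaultdict(list)
    for dev in glob.glob("/sys/class/net/*"):
        numa_path = os.path.join(dev, "device", "numa_node")
        if os.path.exists(numa_path):   # virtual devices (lo, bridges) have no PCI parent
            with open(numa_path) as f:
                node = int(f.read().strip())   # -1 means no NUMA affinity reported
            nodes[node].append(os.path.basename(dev))
    return dict(nodes)

print(nics_by_numa_node())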

> > channels too.
> > 
> > > > Implementation
> > > > --------------
> > > > 
> > > > Earlier the 'migrate' qmp command:
> > > > { "execute": "migrate", "arguments": { "uri": "tcp:0:4446" } }
> > > > 
> > > > Modified qmp command:
> > > > { "execute": "migrate",
> > > >               "arguments": { "uri": "tcp:0:4446", "multi-fd-uri-list": [ {
> > > >               "source-uri": "tcp::6900", "destination-uri": "tcp:0:4480",
> > > >               "multifd-channels": 4}, { "source-uri": "tcp:10.0.0.0: ",
> > > >               "destination-uri": "tcp:11.0.0.0:7789",
> > > >               "multifd-channels": 5} ] } }
> > > > ------------------------------------------------------------------------------
> > > > 
> > > > Earlier the 'migrate-incoming' qmp command:
> > > > { "execute": "migrate-incoming", "arguments": { "uri": "tcp::4446" } }
> > > > 
> > > > Modified 'migrate-incoming' qmp command:
> > > > { "execute": "migrate-incoming",
> > > >              "arguments": {"uri": "tcp::6789",
> > > >              "multi-fd-uri-list" : [ {"destination-uri" : "tcp::6900",
> > > >              "multifd-channels": 4}, {"destination-uri" : "tcp:11.0.0.0:7789",
> > > >              "multifd-channels": 5} ] } }
> > > > ------------------------------------------------------------------------------
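
(As an aside, here's a rough sketch of how the proposed command could be
driven from a script over QMP. It assumes a QEMU started with a QMP unix
socket, e.g. "-qmp unix:/tmp/qmp.sock,server=on,wait=off", and with this
patchset's 'multi-fd-uri-list' argument applied; upstream QEMU does not
accept that argument today.)

import json
import socket

def qmp_command(sock_path, cmd):
    # Minimal QMP client: read the greeting, negotiate capabilities,
    # then send one command and return its reply.
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(sock_path)
    f = s.makefile("rw")
    json.loads(f.readline())                      # server greeting
    f.write(json.dumps({"execute": "qmp_capabilities"}) + "\n")
    f.flush()
    json.loads(f.readline())                      # capabilities reply
    f.write(json.dumps(cmd) + "\n")
    f.flush()
    return json.loads(f.readline())

migrate = {
    "execute": "migrate",
    "arguments": {
        "uri": "tcp:0:4446",
        "multi-fd-uri-list": [
            {"source-uri": "tcp::6900", "destination-uri": "tcp:0:4480",
             "multifd-channels": 4},
            {"source-uri": "tcp:10.0.0.0:", "destination-uri": "tcp:11.0.0.0:7789",
             "multifd-channels": 5},
        ],
    },
}
print(qmp_command("/tmp/qmp.sock", migrate))
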
> > > These examples pretty nicely illustrate my concern with this
> > > proposal. It is making QEMU's configuration of migration
> > > massively more complicated, while duplicating functionality
> > > the kernel can provide via NIC teaming, but without having the
> > > ability to balance it on the fly as the kernel would.
> > 
> > Yes, agreed Daniel, this raises complexity, but we will make sure that it does not
> > change/impact anything existing, and we will provide the new options as optional.
> 
> The added code is certainly going to impact ongoing maintenance of the QEMU I/O
> layer, and of migration in particular. I'm not convinced this complexity
> is compelling enough, compared to leveraging kernel-native bonding,
> to justify the maintenance burden it will impose.

Dave

> With regards,
> Daniel
> -- 
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


