From: "Daniel P. Berrangé" <berrange@redhat.com>
To: Juan Quintela <quintela@redhat.com>
Cc: Laurent Vivier <lvivier@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	Markus Armbruster <armbru@redhat.com>,
	qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>
Subject: Re: [PATCH v2 01/10] migration: Increase default number of multifd channels to 16
Date: Tue, 7 Jan 2020 12:49:34 +0000
Message-ID: <20200107124934.GK3368802@redhat.com>
In-Reply-To: <87mub4xurf.fsf@trasno.org>

On Fri, Jan 03, 2020 at 07:25:08PM +0100, Juan Quintela wrote:
> Daniel P. Berrangé <berrange@redhat.com> wrote:
> > On Wed, Dec 18, 2019 at 03:01:10AM +0100, Juan Quintela wrote:
> >> We can scale much better with 16 channels, so we can reach higher throughput.
> >
> > What was the test scenario showing such scaling ?
> 
> On my test hardware, with 2 channels we can saturate around 8 Gigabit max;
> beyond that, the migration thread is not fast enough to fill the
> network bandwidth.
> 
> With 8 channels, that is enough to fill whatever link we can find.
> We used to have a bug where we ran into trouble with more channels
> than cores.  That was the initial reason why the default was so low.
> 
> So, the pros/cons are:
> - have a low value (2).  We are backwards compatible, but we are not
>   using all the bandwidth.  Notice that we will detect the error before
>   5.0 is out and print a good error message.
> 
> - have a high value (I tested 8 and 16).  I found no performance loss
>   when moving to lower bandwidth limits, and we were clearly able to
>   saturate the higher speeds (I tested on localhost, so I had big
>   enough bandwidth).
> 
> 
> > In the real world I'm sceptical that virt hosts will have
> > 16 otherwise idle CPU cores available that are permissible
> > to use for migration, or indeed whether they'll have network
> > bandwidth available to allow 16 cores to saturate the link.
> 
> The problem here is that if you have such a host, and you want to have
> high speed migration, you need to configure it.  My measurements are
> that a high number of channels doesn't affect performance with low
> bandwidth, but a low number of channels does affect performance at
> high bandwidth speeds.

I'm not concerned about the impact on performance of migration on a
low bandwidth link; rather I'm concerned about the impact on performance
of other guests on the host. It will cause migration to contend with
other guests' vCPUs and network traffic.

> So, if we want to have something that works "automatically" everywhere,
> we need to set it to at least 8.  Or we can trust that the management
> app will do the right thing.

Aren't we still setting the bandwidth limit to 32MB out of the box,
so we already require the mgmt app to change settings to use more
bandwidth?
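
(For reference, that override looks something like this over QMP; this
is only a sketch, and the 16 channel / 1 GiB/sec values are purely
illustrative, not recommendations:

    {"execute": "migrate-set-capabilities",
     "arguments": {"capabilities": [
       {"capability": "multifd", "state": true}]}}
    {"execute": "migrate-set-parameters",
     "arguments": {"multifd-channels": 16,
                   "max-bandwidth": 1073741824}}

The multifd settings would need applying on the destination as well
before the migration starts.)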

> If you are using a low value of bandwidth, the only difference with 16
> channels is that you are using a bit more memory (just the space for the
> stacks) and that you are having less contention for the locks (but with
> low bandwidth you are not having contention anyways).
> 
> So, I think that the question is:
> - What does libvirt prefer

Libvirt doesn't really have an opinion in this case. I believe we'll
always set the number of channels on both src & dst, so we don't
see the defaults.

> - What does ovirt/openstack prefer

Libvirt should insulate them from any change in defaults in QEMU
in this case, by always explicitly setting channels on src & dst
to match.
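
(FWIW on the libvirt side that is something like the following; the
guest name, destination host and connection count are purely
illustrative:

    virsh migrate --live --parallel --parallel-connections 8 \
        guest1 qemu+ssh://dst-host/system

The --parallel / --parallel-connections flags map onto the multifd
capability and channel count underneath, so mgmt apps already have a
knob that is independent of QEMU's default.)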

> - Do we really want the user to "have" to configure that value?

Right, so this is the key question - for a user not using libvirt
or a libvirt-based mgmt app, what do we want out-of-the-box
migration to be tuned for?

If we want to maximise migration performance, at the cost of anything
else, then we can change the migration channels count, but we probably
also ought to remove the 32MB bandwidth cap, as no useful guest with
active apps will succeed in migrating under a 32MB cap.
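
(To put rough numbers on that: at 32MB/sec, a single pass over even an
idle guest's RAM of, say, 8GB takes 8192/32 = 256 seconds, and any
guest dirtying memory faster than 32MB/sec will never converge at all;
the 8GB figure is just an illustration.)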

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



Thread overview:
2019-12-18  2:01 [PATCH v2 00/10] Multifd Migration Compression Juan Quintela
2019-12-18  2:01 ` [PATCH v2 01/10] migration: Increase default number of multifd channels to 16 Juan Quintela
2020-01-03 16:51   ` Dr. David Alan Gilbert
2020-01-03 16:58   ` Daniel P. Berrangé
2020-01-03 17:01     ` Dr. David Alan Gilbert
2020-01-03 17:12       ` Daniel P. Berrangé
2020-01-03 17:32         ` Dr. David Alan Gilbert
2020-01-03 18:25     ` Juan Quintela
2020-01-07 12:49       ` Daniel P. Berrangé [this message]
2020-01-07 13:32         ` Juan Quintela
2020-01-07 13:42           ` Daniel P. Berrangé
2020-01-03 17:49   ` Daniel P. Berrangé
2019-12-18  2:01 ` [PATCH v2 02/10] migration-test: Add migration multifd test Juan Quintela
2019-12-18  2:01 ` [PATCH v2 03/10] migration-test: introduce functions to handle string parameters Juan Quintela
2020-01-03 16:57   ` Dr. David Alan Gilbert
2019-12-18  2:01 ` [PATCH v2 04/10] migration: Make multifd_save_setup() get an Error parameter Juan Quintela
2020-01-03 16:46   ` Dr. David Alan Gilbert
2020-01-07 12:35     ` Juan Quintela
2019-12-18  2:01 ` [PATCH v2 05/10] migration: Make multifd_load_setup() " Juan Quintela
2020-01-03 17:22   ` Dr. David Alan Gilbert
2020-01-07 13:00     ` Juan Quintela
2019-12-18  2:01 ` [PATCH v2 06/10] migration: Add multifd-compress parameter Juan Quintela
2019-12-19  7:41   ` Markus Armbruster
2020-01-03 17:57   ` Dr. David Alan Gilbert
2020-01-07 13:03     ` Juan Quintela
2019-12-18  2:01 ` [PATCH v2 07/10] migration: Make no compression operations into its own structure Juan Quintela
2020-01-03 18:20   ` Dr. David Alan Gilbert
2020-01-07 13:08     ` Juan Quintela
2019-12-18  2:01 ` [PATCH v2 08/10] migration: Add zlib compression multifd support Juan Quintela
2019-12-18  2:01 ` [PATCH v2 09/10] configure: Enable test and libs for zstd Juan Quintela
2019-12-18  2:01 ` [PATCH v2 10/10] migration: Add zstd compression multifd support Juan Quintela
