From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Lin Ma <LMa@suse.com>
Cc: "qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Subject: Re: [Qemu-devel] Re: migrate_set_speed has no effect if the guest is using hugepages.
Date: Fri, 12 Jul 2019 13:34:00 +0100 [thread overview]
Message-ID: <20190712123400.GJ2730@work-vm> (raw)
In-Reply-To: <BY5PR18MB331347C441DA068E32BFDE53C5F20@BY5PR18MB3313.namprd18.prod.outlook.com>
* Lin Ma (LMa@suse.com) wrote:
>
>
> > -----Original Message-----
> > From: Dr. David Alan Gilbert <dgilbert@redhat.com>
> > Sent: 11 July 2019 18:24
> > To: Lin Ma <LMa@suse.com>
> > Cc: qemu-devel@nongnu.org
> > Subject: Re: [Qemu-devel] migrate_set_speed has no effect if the guest is using
> > hugepages.
> >
> > * Lin Ma (LMa@suse.com) wrote:
> > > Hi all,
> >
> > Hi Lin,
>
> Hi Dave,
> >
> > > When I live migrate a qemu/kvm guest, if the guest is using huge
> > > pages, I found that the migrate_set_speed command had no effect during
> > stage 2.
> >
> > Can you explain what you mean by 'stage 2'?
> We know that live migration consists of 3 stages:
> Stage 1: Mark all of RAM dirty.
> Stage 2: Keep sending the RAM pages dirtied since the last iteration.
> Stage 3: Stop the guest, transfer the remaining dirty RAM and device state.
> (Please refer to https://developers.redhat.com/blog/2015/03/24/live-migrating-qemu-kvm-virtual-machines/#live-migration for further details)
OK, yeah, the numbering is pretty arbitrary, so it's not something I
normally think about like that.
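For reference, the three-stage flow described above can be sketched as a toy loop. This is purely illustrative Python, not QEMU's actual code; the `THRESHOLD` constant and the shape of the inputs are made up for the sketch:

```python
THRESHOLD = 4  # toy convergence limit: stop iterating once this few pages remain

def live_migrate(ram_pages, dirtied_per_pass):
    """Toy model of pre-copy live migration.

    Returns the number of pages sent in each pass, so the shrinking
    dirty set across stage-2 iterations (and the final stage-3 pass)
    is visible.
    """
    sent_per_pass = []
    dirty = set(ram_pages)             # Stage 1: mark all of RAM dirty
    passes = iter(dirtied_per_pass)
    while len(dirty) > THRESHOLD:      # Stage 2: keep sending dirty pages
        sent_per_pass.append(len(dirty))
        dirty = set(next(passes, []))  # pages the guest re-dirtied meanwhile
    sent_per_pass.append(len(dirty))   # Stage 3: guest stopped, send remainder
    return sent_per_pass

# 10 pages, then 6 re-dirtied, then 2: three shrinking passes.
print(live_migrate(range(10), [[1, 2, 3, 4, 5, 6], [1, 2]]))
```

The bandwidth limit discussed in this thread applies to the sends inside stage 2, which is why Lin measured throughput there.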
>
> > > It was caused by commit 4c011c3 ("postcopy: Send whole huge pages")
> > >
> > > I'm wondering that is it by design or is it a bug waiting for fix?
> >
> > This is the first report I've seen for it. How did you conclude that
> > 4c011c3 caused it? While I can see it might have some effect on the
> > bandwidth management, I'm surprised it has this much effect.
>
> While digging into the bandwidth issue, git bisect showed that this commit was the first bad commit.
OK.
> > What size huge pages are you using - 2MB or 1GB?
>
> When I hit this issue I was using 1GB huge page size.
> I tested this issue with 2MB page size today on a Gigabit LAN. Although the bandwidth control looks
> a little better than with 1GB pages, it is not much better. Please refer to the test results below.
OK, I can certainly see why this might happen with 1GB huge pages; I
need to have a think about a fix.
> > I can imagine we might have a problem that since we only do the sleep between
> > the hugepages, if we were using 1GB hugepages then we'd see <big chunk of
> > data>[sleep]<big chunk of data>[sleep] which isn't as smooth as it used to be.
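The burst/sleep pattern described above can be put into rough numbers. The sketch below is a hypothetical model, not the real limiter: it assumes a rate limiter with a fixed budget interval (100 ms here) that can only sleep between whole pages:

```python
GiB = 1 << 30

def budgets_per_page(page_size, limit_bytes_per_s, interval_s=0.1):
    """If the limiter can only sleep between whole pages, each page goes
    out as a single burst; return how many per-interval budgets that one
    burst consumes.  Values > 1 mean the burst overshoots the limit."""
    budget = limit_bytes_per_s * interval_s  # bytes allowed per interval
    return page_size / budget

# A 4 KiB page fits comfortably inside one budget even at a 5 MB/s limit...
print(budgets_per_page(4096, 5_000_000))   # well under 1 budget
# ...but a single 1 GiB huge page is worth roughly two thousand budgets
# sent back to back, followed by one very long catch-up sleep.
print(budgets_per_page(GiB, 5_000_000))
```

Under this model the average rate can still be wrong whenever the measurement window is shorter than one burst/sleep cycle, which fits the low-bandwidth results below.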
> >
> > Can you give me some more details of your test?
>
> Live migration bandwidth management testing with 2MB hugepage size:
> sles12sp4_i440fx is a qemu/kvm guest with 6GB memory size.
> Note: the throughput values are approximate.
>
> Terminal 1:
> virsh migrate-setspeed sles12sp4_i440fx $bandwidth && virsh migrate --live sles12sp4_i440fx qemu+tcp://5810f/system
>
> Terminal 2:
> virsh qemu-monitor-command sles12sp4_i440fx --hmp "info migrate"
>
> bandwidth=5
> throughput: 160 mbps
>
> bandwidth=10
> throughput: 167 mbps
>
> bandwidth=15
> throughput: 168 mbps
>
> bandwidth=20
> throughput: 168 mbps
>
> bandwidth=21
> throughput: 336 mbps
>
> bandwidth=22
> throughput: 336 mbps
>
> bandwidth=25
> throughput: 335.87 mbps
>
> bandwidth=30
> throughput: 335 mbps
>
> bandwidth=35
> throughput: 335 mbps
>
> bandwidth=40
> throughput: 335 mbps
>
> bandwidth=45
> throughput: 504.00 mbps
>
> bandwidth=50
> throughput: 500.00 mbps
>
> bandwidth=55
> throughput: 500.00 mbps
>
> bandwidth=60
> throughput: 500.00 mbps
>
> bandwidth=65
> throughput: 650.00 mbps
>
> bandwidth=70
> throughput: 660.00 mbps
OK, so migrate-setspeed takes a bandwidth in MBytes/sec and I guess
your throughput is in MBit/sec - so at the higher end it's about
right, and at the lower end it's way off.
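The unit mismatch is a simple factor of 8; a trivial helper, shown only to make the comparison with the figures above explicit:

```python
def mbytes_to_mbit(mbytes_per_s):
    """migrate-setspeed takes MBytes/s; 'info migrate' reports Mbit/s."""
    return mbytes_per_s * 8

# migrate-setspeed 70 should cap throughput near 560 Mbit/s, roughly
# matching the observed 660 Mbit/s; but migrate-setspeed 5 should cap
# it near 40 Mbit/s, far below the observed 160 Mbit/s.
print(mbytes_to_mbit(70))
print(mbytes_to_mbit(5))
```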
Let me think about a fix for this.
What are you using to measure throughput?
Dave
>
> Thanks,
> Lin
>
>
> > Dave
> >
> > >
> > > Thanks,
> > > Lin
> > --
> > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK