qemu-devel.nongnu.org archive mirror
From: David Hildenbrand <david@redhat.com>
To: Peter Xu <peterx@redhat.com>
Cc: "Wang, Wei W" <wei.w.wang@intel.com>,
	"mst@redhat.com" <mst@redhat.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"dgilbert@redhat.com" <dgilbert@redhat.com>,
	"quintela@redhat.com" <quintela@redhat.com>
Subject: Re: [PATCH v3] migration: clear the memory region dirty bitmap when skipping free pages
Date: Fri, 23 Jul 2021 14:52:48 +0200
Message-ID: <30889234-668c-7867-ea6a-b411d5b2a3e5@redhat.com>
In-Reply-To: <YPq7Txt3SnIpdNKD@t490s>

On 23.07.21 14:51, Peter Xu wrote:
> On Fri, Jul 23, 2021 at 09:50:18AM +0200, David Hildenbrand wrote:
>> On 22.07.21 19:41, Peter Xu wrote:
>>> On Thu, Jul 22, 2021 at 04:51:48PM +0200, David Hildenbrand wrote:
>>>> I'll give it a whirl.
>>>
>>> Thanks, David.
>>>
>>
>> Migration of an 8 GiB VM (driven roughly as sketched below)
>> * within the same host
>> * after Linux is up and idle
>> * free page hinting enabled
>> * after dirtying most VM memory using memhog
>> * keeping bandwidth set to QEMU defaults
>> * On my 16 GiB notebook with other stuff running
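>>
>> A sketch of how such a run can be driven; the exact command lines,
>> ports, and omitted flags here are illustrative assumptions, not the
>> literal invocation:
>>
>>   # Source VM: 8 GiB guest with free page hinting via virtio-balloon
>>   qemu-system-x86_64 -m 8G \
>>       -device virtio-balloon,free-page-hint=on \
>>       ...
>>
>>   # Inside the guest, dirty most memory before migrating:
>>   memhog 7g
>>
>>   # HMP on the source: migrate to a second QEMU listening on the same
>>   # host, then read the statistics shown below:
>>   (qemu) migrate -d tcp:127.0.0.1:4444
>>   (qemu) info migrate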
>>
>>
>> Current upstream with 63268c4970a, without this patch:
>>
>> total time: 28606 ms
>> downtime: 33 ms
>> setup: 3 ms
>> transferred ram: 3722913 kbytes
>> throughput: 1066.37 mbps
>> remaining ram: 0 kbytes
>> total ram: 8389384 kbytes
>> duplicate: 21674 pages
>> skipped: 0 pages
>> normal: 928866 pages
>> normal bytes: 3715464 kbytes
>> dirty sync count: 5
>> pages-per-second: 32710
>>
>> Current upstream without 63268c4970a, without this patch:
>>
>> total time: 28530 ms
>> downtime: 277 ms
>> setup: 4 ms
>> transferred ram: 3726266 kbytes
>> throughput: 1070.21 mbps
>> remaining ram: 0 kbytes
>> total ram: 8389384 kbytes
>> duplicate: 21890 pages
>> skipped: 0 pages
>> normal: 929702 pages
>> normal bytes: 3718808 kbytes
>> dirty sync count: 5
>> pages-per-second: 32710
>>
>>
>> Current upstream without 63268c4970a, with this patch:
>>
>> total time: 5115 ms
>> downtime: 37 ms
>> setup: 5 ms
>> transferred ram: 659532 kbytes
>> throughput: 1057.94 mbps
>> remaining ram: 0 kbytes
>> total ram: 8389384 kbytes
>> duplicate: 20748 pages
>> skipped: 0 pages
>> normal: 164516 pages
>> normal bytes: 658064 kbytes
>> dirty sync count: 4
>> pages-per-second: 32710
>>
>>
>> Current upstream with 63268c4970a, with this patch:
>>
>> total time: 5205 ms
>> downtime: 45 ms
>> setup: 3 ms
>> transferred ram: 659636 kbytes
>> throughput: 1039.39 mbps
>> remaining ram: 0 kbytes
>> total ram: 8389384 kbytes
>> duplicate: 20264 pages
>> skipped: 0 pages
>> normal: 164543 pages
>> normal bytes: 658172 kbytes
>> dirty sync count: 4
>> pages-per-second: 32710
>>
>>
>>
>> I repeated the last two measurements twice and took the "better"
>> results.
>>
>> Looks like this patch does its job, and 63268c4970a doesn't seem to
>> degrade migration significantly in this combination/setup (if at all;
>> we would have to do more measurements to be sure).
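>>
>> (Concretely: with the patch, transferred ram drops roughly 5.6x, from
>> 3722913 to 659532 kbytes, and total time drops by about the same
>> factor, from ~28.6 s to ~5.1 s.)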
> 
> Thanks again for helping!
> 
> Just to double check: the loop in qemu_guest_free_page_hint() won't run for a
> lot of iterations, right?  Looks like that only happens when crossing
> RAMBlock boundaries.  Otherwise we may also want to move that mutex out of
> the loop at some point, because atomics do look expensive on huge hosts.

I'd expect it never ever happens.
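
For reference, the loop in question has roughly this shape (a simplified
sketch of migration/ram.c as discussed here, not the exact upstream code;
helper names and details are abbreviated):

  void qemu_guest_free_page_hint(void *addr, size_t len)
  {
      RAMBlock *block;
      ram_addr_t offset;
      size_t used_len, start, npages;

      /* One iteration per RAMBlock chunk covered by the hinted range. */
      for (; len > 0; len -= used_len, addr += used_len) {
          block = qemu_ram_block_from_host(addr, false, &offset);
          if (unlikely(!block || offset >= block->used_length)) {
              return;
          }
          /* Clamp this iteration to the end of the current RAMBlock. */
          used_len = MIN(len, block->used_length - offset);

          start = offset >> TARGET_PAGE_BITS;
          npages = used_len >> TARGET_PAGE_BITS;

          /*
           * The mutex is taken inside the loop, so it would only be
           * contended repeatedly if a single hint spanned many
           * RAMBlocks -- which, per the above, should not happen.
           */
          qemu_mutex_lock(&ram_state->bitmap_mutex);
          ram_state->migration_dirty_pages -=
              bitmap_count_one_with_offset(block->bmap, start, npages);
          bitmap_clear(block->bmap, start, npages);
          qemu_mutex_unlock(&ram_state->bitmap_mutex);
      }
  }

A single free-page hint practically always falls within one RAMBlock, so
the body runs once and the per-iteration locking cost doesn't matter.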

-- 
Thanks,

David / dhildenb




Thread overview: 13+ messages
2021-07-22  8:30 [PATCH v3] migration: clear the memory region dirty bitmap when skipping free pages Wei Wang
2021-07-22  9:47 ` David Hildenbrand
2021-07-22  9:57   ` Wang, Wei W
2021-07-22 14:51     ` Peter Xu
2021-07-22 14:51       ` David Hildenbrand
2021-07-22 17:41         ` Peter Xu
2021-07-23  7:50           ` David Hildenbrand
2021-07-23  8:14             ` Wang, Wei W
2021-07-23  8:16               ` David Hildenbrand
2021-07-23  8:32                 ` Wang, Wei W
2021-07-23 12:51             ` Peter Xu
2021-07-23 12:52               ` David Hildenbrand [this message]
2021-07-23  8:03       ` David Hildenbrand
