qemu-devel.nongnu.org archive mirror
From: Andrey Gruzdev <andrey.gruzdev@virtuozzo.com>
To: David Hildenbrand <david@redhat.com>, qemu-devel@nongnu.org
Cc: Den Lunev <den@openvz.org>, Eric Blake <eblake@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
	Markus Armbruster <armbru@redhat.com>,
	Peter Xu <peterx@redhat.com>
Subject: Re: [PATCH 3/3] migration: Pre-fault memory before starting background snapshot
Date: Fri, 19 Mar 2021 14:05:47 +0300	[thread overview]
Message-ID: <a40d14b2-b10c-83bf-bdd5-48a465dac67d@virtuozzo.com> (raw)
In-Reply-To: <a5c70d97-0560-0f7e-309e-a07f60a2e1a3@redhat.com>

On 19.03.2021 12:28, David Hildenbrand wrote:
>> +/*
>> + * ram_block_populate_pages: populate memory in the RAM block by reading
>> + *   an integer from the beginning of each page.
>> + *
>> + * Since it's solely used for userfault_fd WP feature, here we just
>> + *   hardcode page size to TARGET_PAGE_SIZE.
>> + *
>> + * @bs: RAM block to populate
>> + */
>> +volatile int ram_block_populate_pages__tmp;
>> +static void ram_block_populate_pages(RAMBlock *bs)
>> +{
>> +    ram_addr_t offset = 0;
>> +    int tmp = 0;
>> +
>> +    for (char *ptr = (char *) bs->host; offset < bs->used_length;
>> +            ptr += TARGET_PAGE_SIZE, offset += TARGET_PAGE_SIZE) {
>
> You'll want qemu_real_host_page_size instead of TARGET_PAGE_SIZE
>
Ok.
>> +        /* Try to do it without memory writes */
>> +        tmp += *(volatile int *) ptr;
>> +    }
>
>
> The following is slightly simpler and doesn't rely on volatile semantics [1].
> Should work on any arch I guess.
>
> static void ram_block_populate_pages(RAMBlock *bs)
> {
>     char *ptr = (char *) bs->host;
>     ram_addr_t offset;
>
>     for (offset = 0; offset < bs->used_length;
>          offset += qemu_real_host_page_size) {
>         char tmp = *(volatile char *)(ptr + offset);
>
>         /* Don't optimize the read out. */
>         asm volatile ("" : "+r" (tmp));
>     }
> }
>
Thanks, good option, I'll change the code.

> Compiles to
>
>     for (offset = 0; offset < bs->used_length;
>     316d:       48 8b 4b 30             mov    0x30(%rbx),%rcx
>     char *ptr = (char *) bs->host;
>     3171:       48 8b 73 18             mov    0x18(%rbx),%rsi
>     for (offset = 0; offset < bs->used_length;
>     3175:       48 85 c9                test   %rcx,%rcx
>     3178:       74 ce                   je     3148 <ram_write_tracking_prepare+0x58>
>          offset += qemu_real_host_page_size) {
>     317a:       48 8b 05 00 00 00 00    mov    0x0(%rip),%rax        # 3181 <ram_write_tracking_prepare+0x91>
>     3181:       48 8b 38                mov    (%rax),%rdi
>     3184:       0f 1f 40 00             nopl   0x0(%rax)
>         char tmp = *(volatile char *)(ptr + offset);
>     3188:       48 8d 04 16             lea    (%rsi,%rdx,1),%rax
>     318c:       0f b6 00                movzbl (%rax),%eax
>          offset += qemu_real_host_page_size) {
>     318f:       48 01 fa                add    %rdi,%rdx
>     for (offset = 0; offset < bs->used_length;
>     3192:       48 39 ca                cmp    %rcx,%rdx
>     3195:       72 f1                   jb     3188 <ram_write_tracking_prepare+0x98>
>
>
> [1] https://programfan.github.io/blog/2015/04/27/prevent-gcc-optimize-away-code/
>
>
> I'll send patches soon to take care of virtio-mem via RamDiscardManager -
> to skip populating the parts that are supposed to remain discarded and 
> not migrated.
> Unfortunately, the RamDiscardManager patches are still stuck waiting for
> acks ... and now we're in soft-freeze.
>
The RamDiscardManager patches - do they also modify the migration code?
I mean, which part is responsible for not migrating the discarded ranges?

-- 
Andrey Gruzdev, Principal Engineer
Virtuozzo GmbH  +7-903-247-6397
                 virtuozzo.com



Thread overview: 14+ messages
2021-03-18 17:46 [PATCH 0/3] migration: Fixes to the 'background-snapshot' code Andrey Gruzdev
2021-03-18 17:46 ` [PATCH 1/3] migration: Fix missing qemu_fflush() on buffer file in bg_migration_thread Andrey Gruzdev
2021-03-19 12:39   ` David Hildenbrand
2021-03-19 13:13     ` Andrey Gruzdev
2021-03-18 17:46 ` [PATCH 2/3] migration: Inhibit virtio-balloon for the duration of background snapshot Andrey Gruzdev
2021-03-18 18:16   ` David Hildenbrand
2021-03-19  8:27     ` Andrey Gruzdev
2021-03-18 17:46 ` [PATCH 3/3] migration: Pre-fault memory before starting background snapshot Andrey Gruzdev
2021-03-19  9:28   ` David Hildenbrand
2021-03-19  9:32     ` David Hildenbrand
2021-03-19 11:09       ` Andrey Gruzdev
2021-03-19 11:05     ` Andrey Gruzdev [this message]
2021-03-19 11:27       ` David Hildenbrand
2021-03-19 12:37         ` Andrey Gruzdev
