qemu-devel.nongnu.org archive mirror
From: Juan Quintela <quintela@redhat.com>
To: Stefan Hajnoczi <stefanha@gmail.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>,
	qemu-devel@nongnu.org, Xiao Guangrong <xiaoguangrong@tencent.com>
Subject: Re: [Qemu-devel] NVDIMM live migration broken?
Date: Tue, 27 Jun 2017 20:12:04 +0200	[thread overview]
Message-ID: <874lv18fm3.fsf@secure.mitica> (raw)
In-Reply-To: <87bmp98j17.fsf@secure.mitica> (Juan Quintela's message of "Tue, 27 Jun 2017 18:58:12 +0200")

Juan Quintela <quintela@redhat.com> wrote:
> Haozhong Zhang <haozhong.zhang@intel.com> wrote:
>
> ....
>
> Hi
>
> I am trying to see what is going on.
>
>>> 
>>
>> I managed to reproduce this bug. After bisect between good v2.8.0 and
>> bad edf8bc984, it looks a regression introduced by
>>     6b6712efccd "ram: Split dirty bitmap by RAMBlock"
>> This commit may result in guest crash after migration if any host
>> memory backend is used.
>>
>> Could you test whether the attached draft patch fixes this bug? If yes,
>> I will make a formal patch later.
>>
>> Thanks,
>> Haozhong
>>
>> diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
>> index 73d1bea8b6..2ae4ff3965 100644
>> --- a/include/exec/ram_addr.h
>> +++ b/include/exec/ram_addr.h
>> @@ -377,7 +377,9 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
>>                                                 uint64_t *real_dirty_pages)
>>  {
>>      ram_addr_t addr;
>> +    ram_addr_t offset = rb->offset;
>>      unsigned long page = BIT_WORD(start >> TARGET_PAGE_BITS);
>> +    unsigned long dirty_page = BIT_WORD((start + offset) >> TARGET_PAGE_BITS);
>>      uint64_t num_dirty = 0;
>>      unsigned long *dest = rb->bmap;
>>  
>
>
> If this is the case, I can't understand how it ever worked :-(
>
> Investigating.

Further investigation shows:
- pc.ram is, by default, at slot 0
- so its offset == 0
- the rest of the devices are not RAM-backed

So it worked well.

Only RAM ends up using that function, so we didn't care.

When we use an nvdimm device (don't know if any other does this), pc.ram
gets pushed out of RAMBlock offset 0, and then the offset becomes important.

# No NVDIMM

(qemu) info ramblock 
              Block Name    PSize              Offset               Used              Total
                  pc.ram    4 KiB  0x0000000000000000 0x0000000040000000 0x0000000040000000
                vga.vram    4 KiB  0x0000000040060000 0x0000000000400000 0x0000000000400000

# with NVDIMM

(qemu) info ramblock 
              Block Name    PSize              Offset               Used              Total
           /objects/mem1    4 KiB  0x0000000000000000 0x0000000040000000 0x0000000040000000
                  pc.ram    4 KiB  0x0000000040000000 0x0000000040000000 0x0000000040000000
                vga.vram    4 KiB  0x0000000080060000 0x0000000000400000 0x0000000000400000


I am still amused/confused/intrigued how we haven't discovered the
problem before.

The patch fixes the problem described in this thread.


Later, Juan.

>
> Later, Juan.
>
>> @@ -386,8 +388,9 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
>>          int k;
>>          int nr = BITS_TO_LONGS(length >> TARGET_PAGE_BITS);
>>          unsigned long * const *src;
>> -        unsigned long idx = (page * BITS_PER_LONG) / DIRTY_MEMORY_BLOCK_SIZE;
>> -        unsigned long offset = BIT_WORD((page * BITS_PER_LONG) %
>> +        unsigned long idx = (dirty_page * BITS_PER_LONG) /
>> +                            DIRTY_MEMORY_BLOCK_SIZE;
>> +        unsigned long offset = BIT_WORD((dirty_page * BITS_PER_LONG) %
>>                                          DIRTY_MEMORY_BLOCK_SIZE);
>>  
>>          rcu_read_lock();
>> @@ -416,7 +419,7 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
>>      } else {
>>          for (addr = 0; addr < length; addr += TARGET_PAGE_SIZE) {
>>              if (cpu_physical_memory_test_and_clear_dirty(
>> -                        start + addr,
>> +                        start + addr + offset,
>>                          TARGET_PAGE_SIZE,
>>                          DIRTY_MEMORY_MIGRATION)) {
>>                  *real_dirty_pages += 1;


Thread overview: 9+ messages
2017-06-22 14:08 [Qemu-devel] NVDIMM live migration broken? Stefan Hajnoczi
2017-06-23  0:13 ` haozhong.zhang
2017-06-23  9:55   ` Stefan Hajnoczi
2017-06-26  2:05     ` Haozhong Zhang
2017-06-26 12:56       ` Stefan Hajnoczi
2017-06-27 14:30         ` Haozhong Zhang
2017-06-27 16:58           ` Juan Quintela
2017-06-27 18:12             ` Juan Quintela [this message]
2017-06-28 10:05           ` Stefan Hajnoczi
