qemu-devel.nongnu.org archive mirror
From: Paolo Bonzini <pbonzini@redhat.com>
To: Alexey Kardashevskiy <aik@ozlabs.ru>, qemu-devel@nongnu.org
Cc: Orit Wasserman <owasserm@redhat.com>,
	Juan Quintela <quintela@redhat.com>
Subject: Re: [Qemu-devel] [PATCH] Revert "memory: syncronize kvm bitmap using bitmaps operations"
Date: Wed, 29 Jan 2014 11:03:36 +0100	[thread overview]
Message-ID: <52E8D1F8.3020404@redhat.com> (raw)
In-Reply-To: <52E8B80B.5060904@ozlabs.ru>

On 29/01/2014 09:12, Alexey Kardashevskiy wrote:
> On 01/29/2014 06:30 PM, Paolo Bonzini wrote:
>>> On 29/01/2014 06:50, Alexey Kardashevskiy wrote:
>>> Since 64K system page size is quite popular configuration on PPC64,
>>> the original patch breaks migration.
>>>
>>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
>>> ---
>>>  include/exec/ram_addr.h | 54
>>> +++++++++++++++++--------------------------------
>>>  1 file changed, 18 insertions(+), 36 deletions(-)
>>>
>>> diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
>>> index 33c8acc..c6736ed 100644
>>> --- a/include/exec/ram_addr.h
>>> +++ b/include/exec/ram_addr.h
>>> @@ -83,47 +83,29 @@ static inline void
>>> cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
>>>                                                            ram_addr_t start,
>>>                                                            ram_addr_t pages)
>>>  {
>>> -    unsigned long i, j;
>>> +    unsigned int i, j;
>>>      unsigned long page_number, c;
>>>      hwaddr addr;
>>>      ram_addr_t ram_addr;
>>> -    unsigned long len = (pages + HOST_LONG_BITS - 1) / HOST_LONG_BITS;
>>> +    unsigned int len = (pages + HOST_LONG_BITS - 1) / HOST_LONG_BITS;
>>>      unsigned long hpratio = getpagesize() / TARGET_PAGE_SIZE;
>>> -    unsigned long page = BIT_WORD(start >> TARGET_PAGE_BITS);
>>>
>>> -    /* start address is aligned at the start of a word? */
>>> -    if (((page * BITS_PER_LONG) << TARGET_PAGE_BITS) == start) {
>>
>> Why not just add " && hpratio == 1" here?
>
> Or fix the dirty map to use 1 bit per system page size (maybe that fix is
> coming, who knows, but I am just not ready to do it now). Or do tricks
> with bits and support hpratio != 1. I could not choose and decided to revert
> it for now :)

Can you post the patch that adds " && hpratio == 1"?

> Do we really gain a lot here?

Yes, because this is the only part of migration that runs with the 
iothread lock taken.  Without Juan's patches you can see hiccups in 
large guests that last a few seconds.

Paolo

Thread overview: 6+ messages
2014-01-29  5:50 [Qemu-devel] [PATCH] Revert "memory: syncronize kvm bitmap using bitmaps operations" Alexey Kardashevskiy
2014-01-29  7:30 ` Paolo Bonzini
2014-01-29  8:12   ` Alexey Kardashevskiy
2014-01-29 10:03     ` Paolo Bonzini [this message]
2014-01-30 12:07       ` Alexey Kardashevskiy
2014-01-29 10:39     ` Juan Quintela
