* Re: [Qemu-devel] [RFC PATCH v4 3/5] separate migration bitmap
@ 2011-08-19 12:51 Paolo Bonzini
  2011-08-21  1:41 ` Umesh Deshpande
  0 siblings, 1 reply; 4+ messages in thread
From: Paolo Bonzini @ 2011-08-19 12:51 UTC (permalink / raw)
  To: Umesh Deshpande, qemu-devel

On 08/16/2011 08:56 PM, Umesh Deshpande wrote:
> @@ -2128,8 +2132,61 @@ void cpu_physical_memory_reset_dirty(ram_addr_t start, ram_addr_t end,
>                                        start1, length);
>          }
>      }
> +
>  }
>
> +void migration_bitmap_reset_dirty(ram_addr_t start, ram_addr_t end,
> +                                  int dirty_flags)
> +{
> +    unsigned long length, start1;
> +
> +    start &= TARGET_PAGE_MASK;
> +    end = TARGET_PAGE_ALIGN(end);
> +
> +    length = end - start;
> +    if (length == 0) {
> +        return;
> +    }
> +
> +    migration_bitmap_mask_dirty_range(start, length, dirty_flags);
> +
> +    /* we modify the TLB cache so that the dirty bit will be set again
> +       when accessing the range */

The comment does not apply here, and the code below can also be safely 
deleted.

> +    start1 = (unsigned long)qemu_safe_ram_ptr(start);
> +    /* Check that we don't span multiple blocks - this breaks the
> +       address comparisons below.  */
> +    if ((unsigned long)qemu_safe_ram_ptr(end - 1) - start1
> +            != (end - 1) - start) {
> +        abort();
> +    }
> +}
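
With the comment and the block-span check gone, the function reduces to just
masking the bitmap, e.g. (untested):

    void migration_bitmap_reset_dirty(ram_addr_t start, ram_addr_t end,
                                      int dirty_flags)
    {
        start &= TARGET_PAGE_MASK;
        end = TARGET_PAGE_ALIGN(end);
        if (end == start) {
            return;
        }
        migration_bitmap_mask_dirty_range(start, end - start, dirty_flags);
    }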
> +
> +void sync_migration_bitmap(ram_addr_t start, ram_addr_t end)
> +{
> +    unsigned long length, len, i;
> +    ram_addr_t addr;
> +    start &= TARGET_PAGE_MASK;
> +    end = TARGET_PAGE_ALIGN(end);
> +
> +    length = end - start;
> +    if (length == 0) {
> +        return;
> +    }
> +
> +    len = length >> TARGET_PAGE_BITS;
> +    for (i = 0; i < len; i++) {
> +        addr = i << TARGET_PAGE_BITS;
> +        if (cpu_physical_memory_get_dirty(addr, MIGRATION_DIRTY_FLAG)) {
> +            migration_bitmap_set_dirty(addr);
> +            cpu_physical_memory_reset_dirty(addr, addr + TARGET_PAGE_SIZE,
> +                                            MIGRATION_DIRTY_FLAG);

This should be run under the iothread lock.  Pay attention to avoiding 
lock inversion: the I/O thread always takes the iothread lock outside 
and the ramlist lock within, so the migration thread must do the same.
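
For example, in the migration thread, something along these lines (just a
sketch, untested; qemu_mutex_lock_ramlist() stands in for whatever name the
ramlist-lock patch actually uses):

    /* same order as the I/O thread: iothread lock outside, ramlist lock inside */
    qemu_mutex_lock_iothread();
    qemu_mutex_lock_ramlist();
    sync_migration_bitmap(start, end);
    qemu_mutex_unlock_ramlist();
    qemu_mutex_unlock_iothread();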

BTW, I think this code in the migration thread patch also needs the 
iothread lock:

>     if (stage < 0) {
>         cpu_physical_memory_set_dirty_tracking(0);
>         return 0;
>     }
>
>     if (cpu_physical_sync_dirty_bitmap(0, TARGET_PHYS_ADDR_MAX) != 0) {
>         qemu_file_set_error(f);
>         return 0;
>     }
>
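
i.e. it would have to run with the iothread lock held, something like this
(sketch; the early-return paths need the unlock as well):

    qemu_mutex_lock_iothread();
    /* stage < 0 handling and cpu_physical_sync_dirty_bitmap() as above */
    qemu_mutex_unlock_iothread();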

Finally, here:

>         /* Make sure all dirty bits are set */
>         QLIST_FOREACH(block, &ram_list.blocks, next) {
>             for (addr = block->offset; addr < block->offset + block->length;
>                  addr += TARGET_PAGE_SIZE) {
>                 if (!migration_bitmap_get_dirty(addr,
>                                                    MIGRATION_DIRTY_FLAG)) {
>                     migration_bitmap_set_dirty(addr);
>                 }
>             }
>         }
>

... you can skip the get_dirty check entirely, since the migration-specific 
bitmap tracks no flag other than the migration flag.
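
That is, the loop body can set the bit unconditionally, e.g. (untested):

        QLIST_FOREACH(block, &ram_list.blocks, next) {
            for (addr = block->offset; addr < block->offset + block->length;
                 addr += TARGET_PAGE_SIZE) {
                migration_bitmap_set_dirty(addr);
            }
        }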

Paolo

* [Qemu-devel] [RFC PATCH v4 0/5] Separate thread for VM migration
@ 2011-08-17  3:56 Umesh Deshpande
  2011-08-17  3:56 ` [Qemu-devel] [RFC PATCH v4 3/5] separate migration bitmap Umesh Deshpande
  0 siblings, 1 reply; 4+ messages in thread
From: Umesh Deshpande @ 2011-08-17  3:56 UTC (permalink / raw)
  To: kvm, qemu-devel; +Cc: pbonzini, mtosatti, Umesh Deshpande, quintela

The following patch series deals with VCPU and iothread starvation during the
migration of a guest. Currently the iothread is responsible for performing the
migration: it holds qemu_mutex for the duration, which prevents VCPUs from
entering QEMU mode and delays their return to the guest. The migration,
executed as an iohandler, also delays the execution of other iohandlers.

In this patch series, the migration has been moved to a separate thread to
reduce qemu_mutex contention and iohandler starvation.
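
At its core, the series hands the migration work off to a dedicated thread,
roughly along these lines (illustrative sketch only; names are hypothetical,
the real code is in patch 4/5):

    static void *migration_thread(void *opaque)
    {
        MigrationState *s = opaque;
        /* run the RAM transfer loop here, taking the iothread lock
           only around dirty-bitmap synchronization */
        return NULL;
    }

    qemu_thread_create(&s->thread, migration_thread, s);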

Umesh Deshpande (5):
  MRU ram list
  ramlist lock
  separate migration bitmap
  separate thread for VM migration
  synchronous migrate_cancel

 arch_init.c         |   26 +++++++++---
 buffered_file.c     |  104 ++++++++++++++++++++++++++++++++++-----------------
 buffered_file.h     |    3 +
 cpu-all.h           |   41 ++++++++++++++++++++
 exec.c              |  100 ++++++++++++++++++++++++++++++++++++++++++++++--
 hw/hw.h             |    5 ++-
 migration.c         |   78 ++++++++++++++++++++------------------
 migration.h         |    1 +
 qemu-common.h       |    2 +
 qemu-thread-posix.c |   10 +++++
 qemu-thread.h       |    1 +
 savevm.c            |   30 +++++++++------
 12 files changed, 304 insertions(+), 97 deletions(-)

-- 
1.7.4.1
