From: Paolo Bonzini <pbonzini@redhat.com>
To: quintela@redhat.com
Cc: chegu_vinod@hp.com, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [RFC v2] Migration thread
Date: Fri, 07 Sep 2012 19:08:47 +0200
Message-ID: <504A2A1F.8090703@redhat.com>
In-Reply-To: <87a9x1ls3b.fsf@elfo.mitica>
On 07/09/2012 18:23, Juan Quintela wrote:
>
> Hi
>
> here is v2 of the migration thread series. There are still some issues
> with locking in the error paths (the series is at 54 patches now).
>
> Changes from v1:
> - migration stats series are included
> - migration bitmap sync trace-events to know how long it takes
> - file->last_error use almost removed: functions reworked to return
>   real error codes and to work with them. Some more work is still
>   needed here.
> - new savevm live migration method, "pending". See the last commit
>   for details.
Can you start factoring out any cleanup that can be applied independently?
Paolo
> Please test and comment.
>
> Later, Juan.
>
> The following changes since commit 6e4c0d1f03d6ab407509c32fab7cb4b8230f57ff:
>
> hw/pl110: Fix spelling of 'palette' (2012-09-06 17:04:33 +0200)
>
> are available in the git repository at:
>
> http://repo.or.cz/r/qemu/quintela.git migration-thread-v2
>
> for you to fetch changes up to 688feac0fbc287920dff537ed13fb8483c064f7f:
>
> savevm: Add calculating a new save_live migration method: pending (2012-09-07 14:00:35 +0200)
>
> ----------------------------------------------------------------
> Juan Quintela (49):
> buffered_file: g_realloc() can't fail
> fix migration sync
> migration: store end_time in a local variable
> migration: print total downtime for final phase of migration
> migration: rename expected_time to expected_downtime
> migration: export migrate_get_current()
> migration: print expected downtime in info migrate
> savevm: Factorize ram globals reset in its own function
> ram: introduce migration_bitmap_set_dirty()
> ram: Introduce migration_bitmap_test_and_reset_dirty()
> ram: Export last_ram_offset()
> ram: introduce migration_bitmap_sync()
> ram: create trace event for migration sync bitmap
> Separate migration bitmap
> migration: Add dirty_pages_rate to query migrate output
> buffered_file: rename opaque to migration_state
> buffered_file: opaque is MigrationState
> buffered_file: unfold migrate_fd_put_buffer
> buffered_file: unfold migrate_fd_put_ready
> buffered_file: unfold migrate_fd_put_buffer
> buffered_file: unfold migrate_fd_put_buffer
> buffered_file: We can access directly to bandwidth_limit
> buffered_file: callers of buffered_flush() already check for errors
> buffered_file: make buffered_flush return the error code
> migration: make migrate_fd_wait_for_unfreeze() return errors
> savevm: unexport qemu_fflush
> virtio-net: use qemu_get_buffer() in a temp buffer
> savevm: Remove qemu_fseek()
> savevm: make qemu_fflush() return an error code
> savevm: unfold qemu_fclose_internal()
> savevm: unexport qemu_ftell()
> savevm: make qemu_fill_buffer() be consistent
> savevm: Only qemu_fflush() can generate errors
> buffered_file: buffered_put_buffer() don't need to set last_error
> block-migration: make flush_blks() return errors
> block-migration: Switch meaning of return value
> block-migration: handle errors with the return codes correctly
> savevm: un-export qemu_file_set_error()
> savevm: make qemu_file_put_notify() return errors
> buffered_file: Move from using a timer to use a thread
> migration: make qemu_fopen_ops_buffered() return void
> migration: stop all cpus correctly
> migration: make writes blocking
> migration: remove unfreeze logic
> migration: take finer locking
> buffered_file: Unfold the trick to restart generating migration data
> buffered_file: don't flush on put buffer
> buffered_file: unfold buffered_append in buffered_put_buffer
> savevm: Add calculating a new save_live migration method: pending
>
> Paolo Bonzini (2):
> split MRU ram list
> BufferedFile: append, then flush
>
> Umesh Deshpande (2):
> add a version number to ram_list
> protect the ramlist with a separate mutex
>
> arch_init.c | 174 ++++++++++++++++++++++++++++++++------------
> block-migration.c | 100 +++++++++----------------
> buffered_file.c | 213 +++++++++++++++++++++---------------------------------
> buffered_file.h | 12 +--
> cpu-all.h | 17 ++++-
> exec-obsolete.h | 10 ---
> exec.c | 45 ++++++++++--
> hmp.c | 12 +++
> hw/virtio-net.c | 4 +-
> migration-exec.c | 2 -
> migration-fd.c | 6 --
> migration-tcp.c | 2 +-
> migration-unix.c | 2 -
> migration.c | 151 +++++++++++++++++++-------------------
> migration.h | 10 +++
> qapi-schema.json | 18 ++++-
> qemu-file.h | 11 ---
> qmp-commands.hx | 9 +++
> savevm.c | 144 ++++++++++++++++++------------------
> sysemu.h | 1 +
> trace-events | 4 +
> vmstate.h | 1 +
> 22 files changed, 498 insertions(+), 450 deletions(-)
>
>
Thread overview: 4+ messages
2012-09-07 16:23 [Qemu-devel] [RFC v2] Migration thread Juan Quintela
2012-09-07 17:08 ` Paolo Bonzini [this message]
2012-09-07 19:52 ` Juan Quintela
2012-09-07 20:29 ` Paolo Bonzini