* [Qemu-devel] [PATCH 0/2] add speed limit for multifd migration

From: Ivan Ren @ 2019-07-29 2:32 UTC
To: quintela, dgilbert
Cc: qemu-devel

Currently multifd migration is not rate-limited, so it will consume the
whole bandwidth of the NIC. These two patches add a speed limit to it.

Ivan Ren (2):
  migration: add qemu_file_update_rate_transfer interface
  migration: add speed limit for multifd migration

 migration/qemu-file.c |  5 +++++
 migration/qemu-file.h |  1 +
 migration/ram.c       | 22 ++++++++++++----------
 3 files changed, 18 insertions(+), 10 deletions(-)

-- 
2.17.2 (Apple Git-113)
* [Qemu-devel] [PATCH 1/2] migration: add qemu_file_update_rate_transfer interface

From: Ivan Ren @ 2019-07-29 2:32 UTC
To: quintela, dgilbert
Cc: qemu-devel

Add qemu_file_update_rate_transfer() to update only bytes_xfer, for
speed limiting. This will be used by further migration features such
as multifd migration.

Signed-off-by: Ivan Ren <ivanren@tencent.com>
---
 migration/qemu-file.c | 5 +++++
 migration/qemu-file.h | 1 +
 2 files changed, 6 insertions(+)

diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index 0431585502..13e7f03f9b 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -615,6 +615,11 @@ void qemu_file_reset_rate_limit(QEMUFile *f)
     f->bytes_xfer = 0;
 }
 
+void qemu_file_update_rate_transfer(QEMUFile *f, int64_t len)
+{
+    f->bytes_xfer += len;
+}
+
 void qemu_put_be16(QEMUFile *f, unsigned int v)
 {
     qemu_put_byte(f, v >> 8);
diff --git a/migration/qemu-file.h b/migration/qemu-file.h
index 13baf896bd..6145d10aca 100644
--- a/migration/qemu-file.h
+++ b/migration/qemu-file.h
@@ -147,6 +147,7 @@ int qemu_peek_byte(QEMUFile *f, int offset);
 void qemu_file_skip(QEMUFile *f, int size);
 void qemu_update_position(QEMUFile *f, size_t size);
 void qemu_file_reset_rate_limit(QEMUFile *f);
+void qemu_file_update_rate_transfer(QEMUFile *f, int64_t len);
 void qemu_file_set_rate_limit(QEMUFile *f, int64_t new_rate);
 int64_t qemu_file_get_rate_limit(QEMUFile *f);
 void qemu_file_set_error(QEMUFile *f, int ret);
-- 
2.17.2 (Apple Git-113)
* Re: [Qemu-devel] [PATCH 1/2] migration: add qemu_file_update_rate_transfer interface

From: Wei Yang @ 2019-07-29 6:38 UTC
To: Ivan Ren
Cc: qemu-devel, dgilbert, quintela

On Mon, Jul 29, 2019 at 10:32:52AM +0800, Ivan Ren wrote:
>Add qemu_file_update_rate_transfer() to update only bytes_xfer, for
>speed limiting. This will be used by further migration features such
>as multifd migration.
>
>Signed-off-by: Ivan Ren <ivanren@tencent.com>
>---
> migration/qemu-file.c | 5 +++++
> migration/qemu-file.h | 1 +
> 2 files changed, 6 insertions(+)
>
>diff --git a/migration/qemu-file.c b/migration/qemu-file.c
>index 0431585502..13e7f03f9b 100644
>--- a/migration/qemu-file.c
>+++ b/migration/qemu-file.c
>@@ -615,6 +615,11 @@ void qemu_file_reset_rate_limit(QEMUFile *f)
>     f->bytes_xfer = 0;
> }
>
>+void qemu_file_update_rate_transfer(QEMUFile *f, int64_t len)

Looks good, except for the function name. Not good at naming :-)

Reviewed-by: Wei Yang <richardw.yang@linux.intel.com>

>+{
>+    f->bytes_xfer += len;
>+}
>+
> void qemu_put_be16(QEMUFile *f, unsigned int v)
> {
>     qemu_put_byte(f, v >> 8);
>diff --git a/migration/qemu-file.h b/migration/qemu-file.h
>index 13baf896bd..6145d10aca 100644
>--- a/migration/qemu-file.h
>+++ b/migration/qemu-file.h
>@@ -147,6 +147,7 @@ int qemu_peek_byte(QEMUFile *f, int offset);
> void qemu_file_skip(QEMUFile *f, int size);
> void qemu_update_position(QEMUFile *f, size_t size);
> void qemu_file_reset_rate_limit(QEMUFile *f);
>+void qemu_file_update_rate_transfer(QEMUFile *f, int64_t len);
> void qemu_file_set_rate_limit(QEMUFile *f, int64_t new_rate);
> int64_t qemu_file_get_rate_limit(QEMUFile *f);
> void qemu_file_set_error(QEMUFile *f, int ret);
>-- 
>2.17.2 (Apple Git-113)
>

-- 
Wei Yang
Help you, Help me
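[Editorial sketch] The accumulator reviewed above only makes sense together with the rest of QEMU's rate-limit cycle: a check before sending, and a periodic reset. The following standalone sketch uses illustrative names only — RateFile, rate_update_transfer(), rate_limit_exceeded(), and rate_reset() stand in for QEMUFile, qemu_file_update_rate_transfer(), qemu_file_rate_limit(), and qemu_file_reset_rate_limit(); the real structure carries far more state than this:

```c
#include <assert.h>
#include <stdint.h>

/* Minimal model of the bytes_xfer-based rate limiter (hypothetical names). */
typedef struct {
    int64_t bytes_xfer;  /* bytes accounted in the current interval */
    int64_t xfer_limit;  /* per-interval quota; 0 means unlimited */
} RateFile;

/* Counterpart of the patch's qemu_file_update_rate_transfer():
 * account bytes sent outside the normal buffered qemu-file path,
 * e.g. by multifd channels writing to their own sockets. */
static void rate_update_transfer(RateFile *f, int64_t len)
{
    f->bytes_xfer += len;
}

/* Shape of a rate-limit check: nonzero once the interval's quota is
 * used up, telling the sender to back off. */
static int rate_limit_exceeded(const RateFile *f)
{
    return f->xfer_limit > 0 && f->bytes_xfer > f->xfer_limit;
}

/* Shape of the periodic reset: a timer opens a fresh interval by
 * zeroing the counter. */
static void rate_reset(RateFile *f)
{
    f->bytes_xfer = 0;
}
```

A sender loop would consult rate_limit_exceeded() before queueing each page and yield until the timer resets the counter; this is the back-pressure the multifd series taps into by feeding bytes_xfer.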
* [Qemu-devel] [PATCH 2/2] migration: add speed limit for multifd migration

From: Ivan Ren @ 2019-07-29 2:32 UTC
To: quintela, dgilbert
Cc: qemu-devel

Limit the speed of multifd migration through the common speed-limiting
QEMU file.

Signed-off-by: Ivan Ren <ivanren@tencent.com>
---
 migration/ram.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 889148dd84..e3fde16776 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -922,7 +922,7 @@ struct {
  * false.
  */
 
-static int multifd_send_pages(void)
+static int multifd_send_pages(RAMState *rs)
 {
     int i;
     static int next_channel;
@@ -954,6 +954,7 @@ static int multifd_send_pages(void)
     multifd_send_state->pages = p->pages;
     p->pages = pages;
     transferred = ((uint64_t) pages->used) * TARGET_PAGE_SIZE + p->packet_len;
+    qemu_file_update_rate_transfer(rs->f, transferred);
     ram_counters.multifd_bytes += transferred;
     ram_counters.transferred += transferred;;
     qemu_mutex_unlock(&p->mutex);
@@ -962,7 +963,7 @@ static int multifd_send_pages(void)
     return 1;
 }
 
-static int multifd_queue_page(RAMBlock *block, ram_addr_t offset)
+static int multifd_queue_page(RAMState *rs, RAMBlock *block, ram_addr_t offset)
 {
     MultiFDPages_t *pages = multifd_send_state->pages;
 
@@ -981,12 +982,12 @@ static int multifd_queue_page(RAMBlock *block, ram_addr_t offset)
         }
     }
 
-    if (multifd_send_pages() < 0) {
+    if (multifd_send_pages(rs) < 0) {
         return -1;
     }
 
     if (pages->block != block) {
-        return multifd_queue_page(block, offset);
+        return multifd_queue_page(rs, block, offset);
     }
 
     return 1;
@@ -1054,7 +1055,7 @@ void multifd_save_cleanup(void)
     multifd_send_state = NULL;
 }
 
-static void multifd_send_sync_main(void)
+static void multifd_send_sync_main(RAMState *rs)
 {
     int i;
 
@@ -1062,7 +1063,7 @@ static void multifd_send_sync_main(void)
         return;
     }
     if (multifd_send_state->pages->used) {
-        if (multifd_send_pages() < 0) {
+        if (multifd_send_pages(rs) < 0) {
             error_report("%s: multifd_send_pages fail", __func__);
             return;
         }
@@ -1083,6 +1084,7 @@ static void multifd_send_sync_main(void)
         p->packet_num = multifd_send_state->packet_num++;
         p->flags |= MULTIFD_FLAG_SYNC;
         p->pending_job++;
+        qemu_file_update_rate_transfer(rs->f, p->packet_len);
         qemu_mutex_unlock(&p->mutex);
         qemu_sem_post(&p->sem);
     }
@@ -2079,7 +2081,7 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
 static int ram_save_multifd_page(RAMState *rs, RAMBlock *block,
                                  ram_addr_t offset)
 {
-    if (multifd_queue_page(block, offset) < 0) {
+    if (multifd_queue_page(rs, block, offset) < 0) {
         return -1;
     }
     ram_counters.normal++;
@@ -3482,7 +3484,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
     ram_control_before_iterate(f, RAM_CONTROL_SETUP);
     ram_control_after_iterate(f, RAM_CONTROL_SETUP);
 
-    multifd_send_sync_main();
+    multifd_send_sync_main(*rsp);
     qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
     qemu_fflush(f);
 
@@ -3570,7 +3572,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
     ram_control_after_iterate(f, RAM_CONTROL_ROUND);
 
 out:
-    multifd_send_sync_main();
+    multifd_send_sync_main(rs);
     qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
     qemu_fflush(f);
     ram_counters.transferred += 8;
@@ -3629,7 +3631,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
 
     rcu_read_unlock();
 
-    multifd_send_sync_main();
+    multifd_send_sync_main(rs);
     qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
     qemu_fflush(f);
 
-- 
2.17.2 (Apple Git-113)
* Re: [Qemu-devel] [PATCH 2/2] migration: add speed limit for multifd migration

From: Wei Yang @ 2019-07-29 6:40 UTC
To: Ivan Ren
Cc: qemu-devel, dgilbert, quintela

On Mon, Jul 29, 2019 at 10:32:53AM +0800, Ivan Ren wrote:
>Limit the speed of multifd migration through the common speed-limiting
>QEMU file.
>
>Signed-off-by: Ivan Ren <ivanren@tencent.com>
>---
> migration/ram.c | 22 ++++++++++++----------
> 1 file changed, 12 insertions(+), 10 deletions(-)
>
>diff --git a/migration/ram.c b/migration/ram.c
>index 889148dd84..e3fde16776 100644
>--- a/migration/ram.c
>+++ b/migration/ram.c
>@@ -922,7 +922,7 @@ struct {
>  * false.
>  */
>
>-static int multifd_send_pages(void)
>+static int multifd_send_pages(RAMState *rs)
> {
>     int i;
>     static int next_channel;
>@@ -954,6 +954,7 @@ static int multifd_send_pages(void)
>     multifd_send_state->pages = p->pages;
>     p->pages = pages;
>     transferred = ((uint64_t) pages->used) * TARGET_PAGE_SIZE + p->packet_len;
>+    qemu_file_update_rate_transfer(rs->f, transferred);
>     ram_counters.multifd_bytes += transferred;
>     ram_counters.transferred += transferred;;
>     qemu_mutex_unlock(&p->mutex);
>@@ -962,7 +963,7 @@ static int multifd_send_pages(void)
>     return 1;
> }
>
>-static int multifd_queue_page(RAMBlock *block, ram_addr_t offset)
>+static int multifd_queue_page(RAMState *rs, RAMBlock *block, ram_addr_t offset)
> {
>     MultiFDPages_t *pages = multifd_send_state->pages;
>
>@@ -981,12 +982,12 @@ static int multifd_queue_page(RAMBlock *block, ram_addr_t offset)
>         }
>     }
>
>-    if (multifd_send_pages() < 0) {
>+    if (multifd_send_pages(rs) < 0) {
>         return -1;
>     }
>
>     if (pages->block != block) {
>-        return multifd_queue_page(block, offset);
>+        return multifd_queue_page(rs, block, offset);
>     }
>
>     return 1;
>@@ -1054,7 +1055,7 @@ void multifd_save_cleanup(void)
>     multifd_send_state = NULL;
> }
>
>-static void multifd_send_sync_main(void)
>+static void multifd_send_sync_main(RAMState *rs)
> {
>     int i;
>
>@@ -1062,7 +1063,7 @@ static void multifd_send_sync_main(void)
>         return;
>     }
>     if (multifd_send_state->pages->used) {
>-        if (multifd_send_pages() < 0) {
>+        if (multifd_send_pages(rs) < 0) {
>             error_report("%s: multifd_send_pages fail", __func__);
>             return;
>         }
>@@ -1083,6 +1084,7 @@ static void multifd_send_sync_main(void)
>         p->packet_num = multifd_send_state->packet_num++;
>         p->flags |= MULTIFD_FLAG_SYNC;
>         p->pending_job++;
>+        qemu_file_update_rate_transfer(rs->f, p->packet_len);

The original code seems to forget to update

    ram_counters.multifd_bytes
    ram_counters.transferred

Sounds like we need to update these counters here too.

>         qemu_mutex_unlock(&p->mutex);
>         qemu_sem_post(&p->sem);
>     }
>@@ -2079,7 +2081,7 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
> static int ram_save_multifd_page(RAMState *rs, RAMBlock *block,
>                                  ram_addr_t offset)
> {
>-    if (multifd_queue_page(block, offset) < 0) {
>+    if (multifd_queue_page(rs, block, offset) < 0) {
>         return -1;
>     }
>     ram_counters.normal++;
>@@ -3482,7 +3484,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
>     ram_control_before_iterate(f, RAM_CONTROL_SETUP);
>     ram_control_after_iterate(f, RAM_CONTROL_SETUP);
>
>-    multifd_send_sync_main();
>+    multifd_send_sync_main(*rsp);
>     qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
>     qemu_fflush(f);
>
>@@ -3570,7 +3572,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>     ram_control_after_iterate(f, RAM_CONTROL_ROUND);
>
> out:
>-    multifd_send_sync_main();
>+    multifd_send_sync_main(rs);
>     qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
>     qemu_fflush(f);
>     ram_counters.transferred += 8;
>@@ -3629,7 +3631,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
>
>     rcu_read_unlock();
>
>-    multifd_send_sync_main();
>+    multifd_send_sync_main(rs);
>     qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
>     qemu_fflush(f);
>
>-- 
>2.17.2 (Apple Git-113)
>

-- 
Wei Yang
Help you, Help me
* Re: [Qemu-devel] [PATCH 2/2] migration: add speed limit for multifd migration

From: Ivan Ren @ 2019-07-29 6:52 UTC
To: Wei Yang
Cc: qemu-devel, dgilbert, quintela

>>     if (multifd_send_state->pages->used) {
>>-        if (multifd_send_pages() < 0) {
>>+        if (multifd_send_pages(rs) < 0) {
>>             error_report("%s: multifd_send_pages fail", __func__);
>>             return;
>>         }
>>@@ -1083,6 +1084,7 @@ static void multifd_send_sync_main(void)
>>         p->packet_num = multifd_send_state->packet_num++;
>>         p->flags |= MULTIFD_FLAG_SYNC;
>>         p->pending_job++;
>>+        qemu_file_update_rate_transfer(rs->f, p->packet_len);
>
>The original code seems to forget to update
>
>    ram_counters.multifd_bytes
>    ram_counters.transferred
>
>Sounds like we need to update these counters here too.

Yes, thanks for the review. I'll send a new version with an additional
patch to fix it.
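[Editorial sketch] The reviewer's point above is that the sync packet's bytes were fed only to the rate limiter (bytes_xfer), not to the two statistics counters. One way to make that omission hard to repeat is to fold all three updates into a single helper. The following is a hypothetical standalone sketch of that shape (MockFile, MockCounters, and account_multifd_bytes() are invented names; per the author's reply, the actual fix landed as a separate patch in the next revision of the series):

```c
#include <stdint.h>
#include <assert.h>

/* Hypothetical stand-ins for the rate-limited file and the global
 * migration statistics (QEMUFile and ram_counters in the real code). */
typedef struct { int64_t bytes_xfer; } MockFile;
typedef struct { uint64_t multifd_bytes; uint64_t transferred; } MockCounters;

/* Account len bytes in every place at once, so that a new send path
 * (such as the SYNC packet) cannot update one counter and forget
 * the others. */
static void account_multifd_bytes(MockFile *f, MockCounters *c, int64_t len)
{
    f->bytes_xfer    += len;  /* feeds the rate limiter */
    c->multifd_bytes += len;  /* multifd-specific statistics */
    c->transferred   += len;  /* total migration traffic */
}
```

Every call site then performs exactly one accounting step, and the three counters stay in lockstep by construction.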
End of thread, other threads: [~2019-07-29 6:52 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)

2019-07-29 2:32 [Qemu-devel] [PATCH 0/2] add speed limit for multifd migration Ivan Ren
2019-07-29 2:32 ` [Qemu-devel] [PATCH 1/2] migration: add qemu_file_update_rate_transfer interface Ivan Ren
2019-07-29 6:38   ` Wei Yang
2019-07-29 2:32 ` [Qemu-devel] [PATCH 2/2] migration: add speed limit for multifd migration Ivan Ren
2019-07-29 6:40   ` Wei Yang
2019-07-29 6:52     ` Ivan Ren