From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Juan Quintela <quintela@redhat.com>
Cc: qemu-devel@nongnu.org, amit.shah@redhat.com
Subject: Re: [Qemu-devel] [PATCH 14/17] migration: Create thread infrastructure for multifd recv side
Date: Fri, 3 Feb 2017 11:24:02 +0000
Message-ID: <20170203112402.GD3208@work-vm>
In-Reply-To: <1485207141-1941-15-git-send-email-quintela@redhat.com>
* Juan Quintela (quintela@redhat.com) wrote:
> We make the locking and the transfer of information explicit, even
> though we are still receiving everything through the main thread.
>
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
> migration/ram.c | 77 +++++++++++++++++++++++++++++++++++++++++++++++++--------
> 1 file changed, 67 insertions(+), 10 deletions(-)
>
> diff --git a/migration/ram.c b/migration/ram.c
> index ca94704..4e530ea 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -523,7 +523,7 @@ void migrate_multifd_send_threads_create(void)
> }
> }
>
> -static int multifd_send_page(uint8_t *address)
> +static uint16_t multifd_send_page(uint8_t *address, bool last_page)
> {
> int i, j, thread_count;
> bool found = false;
> @@ -538,8 +538,10 @@ static int multifd_send_page(uint8_t *address)
> pages.address[pages.num] = address;
> pages.num++;
>
> - if (pages.num < (pages.size - 1)) {
> - return UINT16_MAX;
> + if (!last_page) {
> + if (pages.num < (pages.size - 1)) {
> + return UINT16_MAX;
> + }
> }
This should be in the previous patch?
(and the place that adds the last_page parameter below)?
>
> thread_count = migrate_multifd_threads();
> @@ -570,17 +572,25 @@ static int multifd_send_page(uint8_t *address)
> }
>
> struct MultiFDRecvParams {
> + /* not changed */
> QemuThread thread;
> QIOChannel *c;
> QemuCond cond;
> QemuMutex mutex;
> + /* protected by param mutex */
> bool quit;
> bool started;
> + multifd_pages_t pages;
> + /* protected by multifd mutex */
> + bool done;
> };
> typedef struct MultiFDRecvParams MultiFDRecvParams;
>
> static MultiFDRecvParams *multifd_recv;
>
> +QemuMutex multifd_recv_mutex;
> +QemuCond multifd_recv_cond;
> +
> static void *multifd_recv_thread(void *opaque)
> {
> MultiFDRecvParams *params = opaque;
> @@ -594,7 +604,17 @@ static void *multifd_recv_thread(void *opaque)
>
> qemu_mutex_lock(&params->mutex);
> while (!params->quit){
> - qemu_cond_wait(&params->cond, &params->mutex);
> + if (params->pages.num) {
> + params->pages.num = 0;
> + qemu_mutex_unlock(&params->mutex);
> + qemu_mutex_lock(&multifd_recv_mutex);
> + params->done = true;
> + qemu_cond_signal(&multifd_recv_cond);
> + qemu_mutex_unlock(&multifd_recv_mutex);
> + qemu_mutex_lock(&params->mutex);
> + } else {
> + qemu_cond_wait(&params->cond, &params->mutex);
> + }
> }
> qemu_mutex_unlock(&params->mutex);
>
> @@ -647,8 +667,9 @@ void migrate_multifd_recv_threads_create(void)
> qemu_cond_init(&multifd_recv[i].cond);
> multifd_recv[i].quit = false;
> multifd_recv[i].started = false;
> + multifd_recv[i].done = true;
> + multifd_init_group(&multifd_recv[i].pages);
> multifd_recv[i].c = socket_recv_channel_create();
> -
> if(!multifd_recv[i].c) {
> error_report("Error creating a recv channel");
> exit(0);
> @@ -664,6 +685,45 @@ void migrate_multifd_recv_threads_create(void)
> }
> }
>
> +static void multifd_recv_page(uint8_t *address, uint16_t fd_num)
> +{
> + int i, thread_count;
> + MultiFDRecvParams *params;
> + static multifd_pages_t pages;
> + static bool once = false;
> +
> + if (!once) {
> + multifd_init_group(&pages);
> + once = true;
> + }
> +
> + pages.address[pages.num] = address;
> + pages.num++;
> +
> + if (fd_num == UINT16_MAX) {
> + return;
> + }
> +
> + thread_count = migrate_multifd_threads();
> + assert(fd_num < thread_count);
> + params = &multifd_recv[fd_num];
> +
> + qemu_mutex_lock(&multifd_recv_mutex);
> + while (!params->done) {
> + qemu_cond_wait(&multifd_recv_cond, &multifd_recv_mutex);
> + }
> + params->done = false;
> + qemu_mutex_unlock(&multifd_recv_mutex);
> + qemu_mutex_lock(&params->mutex);
> + for(i = 0; i < pages.num; i++) {
> + params->pages.address[i] = pages.address[i];
> + }
> + params->pages.num = pages.num;
> + pages.num = 0;
> + qemu_cond_signal(&params->cond);
> + qemu_mutex_unlock(&params->mutex);
> +}
> +
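As an aside for anyone following the locking: what the two functions
above build is a done-flag plus condition-variable handshake between the
main thread (multifd_recv_page) and each worker (multifd_recv_thread),
with params->mutex protecting the per-thread work and multifd_recv_mutex
protecting 'done'. A minimal standalone sketch of the same pattern (raw
pthreads and made-up names, not the qemu_* wrappers):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t param_mutex = PTHREAD_MUTEX_INITIALIZER; /* params->mutex */
static pthread_cond_t  param_cond  = PTHREAD_COND_INITIALIZER;  /* params->cond */
static pthread_mutex_t done_mutex  = PTHREAD_MUTEX_INITIALIZER; /* multifd_recv_mutex */
static pthread_cond_t  done_cond   = PTHREAD_COND_INITIALIZER;  /* multifd_recv_cond */
static bool quit;
static bool done = true;                 /* worker starts out idle */
static int  num;                         /* stands in for params->pages.num */

static void *recv_worker(void *opaque)
{
    (void)opaque;
    pthread_mutex_lock(&param_mutex);
    while (!quit) {
        if (num) {
            num = 0;                     /* consume the batch */
            pthread_mutex_unlock(&param_mutex);
            /* (real code would read the pages off the channel here) */
            pthread_mutex_lock(&done_mutex);
            done = true;                 /* advertise: idle again */
            pthread_cond_signal(&done_cond);
            pthread_mutex_unlock(&done_mutex);
            pthread_mutex_lock(&param_mutex);
        } else {
            pthread_cond_wait(&param_cond, &param_mutex);
        }
    }
    pthread_mutex_unlock(&param_mutex);
    return NULL;
}

static void submit_batch(int n)          /* main-thread side */
{
    pthread_mutex_lock(&done_mutex);
    while (!done) {                      /* wait for the worker to go idle */
        pthread_cond_wait(&done_cond, &done_mutex);
    }
    done = false;
    pthread_mutex_unlock(&done_mutex);

    pthread_mutex_lock(&param_mutex);
    num = n;                             /* hand the batch over */
    pthread_cond_signal(&param_cond);
    pthread_mutex_unlock(&param_mutex);
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, recv_worker, NULL);
    submit_batch(3);
    submit_batch(5);

    /* drain: wait for the worker to finish the last batch */
    pthread_mutex_lock(&done_mutex);
    while (!done) {
        pthread_cond_wait(&done_cond, &done_mutex);
    }
    pthread_mutex_unlock(&done_mutex);

    pthread_mutex_lock(&param_mutex);
    quit = true;                         /* then ask it to exit */
    pthread_cond_signal(&param_cond);
    pthread_mutex_unlock(&param_mutex);
    pthread_join(t, NULL);
    return 0;
}

Note the ordering: the worker drops param_mutex before taking done_mutex
(as the patch does), so the two locks are never held at the same time and
there is no lock-ordering problem between the producer and consumer paths.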
> /**
> * save_page_header: Write page header to wire
> *
> @@ -1097,7 +1157,7 @@ static int ram_multifd_page(QEMUFile *f, PageSearchStatus *pss,
> if (pages == -1) {
> *bytes_transferred +=
> save_page_header(f, block, offset | RAM_SAVE_FLAG_MULTIFD_PAGE);
> - fd_num = multifd_send_page(p);
> + fd_num = multifd_send_page(p, migration_dirty_pages == 1);
> qemu_put_be16(f, fd_num);
> *bytes_transferred += 2; /* size of fd_num */
> qemu_put_buffer(f, p, TARGET_PAGE_SIZE);
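For reference, the per-page wire format this produces, as I read the
patch:

    page header   (offset | RAM_SAVE_FLAG_MULTIFD_PAGE, block name when needed)
    fd_num        (be16; UINT16_MAX means "queued into the current group,
                   no thread dispatched yet")
    page data     (TARGET_PAGE_SIZE bytes)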
> @@ -2920,10 +2980,7 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
>
> case RAM_SAVE_FLAG_MULTIFD_PAGE:
> fd_num = qemu_get_be16(f);
> - if (fd_num != 0) {
> - /* this is yet an unused variable, changed later */
> - fd_num = fd_num;
> - }
> + multifd_recv_page(host, fd_num);
This is going to be quite tricky to fit into ram_load_postcopy
in this form; somehow it's going to have to find addresses to use for
the place-page operation, and with any host page size != target page
size it gets messy.
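To make the size mismatch concrete, here's a minimal sketch of the
gathering that placement needs when host pages are bigger than target
pages (hypothetical sizes and names, not the real postcopy API; it also
assumes the fragments of one host page all arrive before the next one
starts):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical: 64K host pages (e.g. a POWER host) vs 4K target pages. */
#define HOST_PAGE_SIZE   (64 * 1024)
#define TARGET_PAGE_SIZE (4 * 1024)
#define FRAGS_PER_HOST   (HOST_PAGE_SIZE / TARGET_PAGE_SIZE)   /* 16 */

static uint8_t staging[HOST_PAGE_SIZE];  /* temp buffer; the guest mapping
                                          * must not be touched until the
                                          * whole host page is present */
static int frags_seen;

/* One incoming target page: 'host' is the guest address it belongs at. */
static void recv_target_page(uintptr_t host, const uint8_t *data)
{
    size_t slot = host & (HOST_PAGE_SIZE - 1);      /* offset in host page */
    memcpy(staging + slot, data, TARGET_PAGE_SIZE);
    if (++frags_seen == FRAGS_PER_HOST) {
        /* only now can the host page be placed atomically (in real
         * postcopy this is a single UFFDIO_COPY of HOST_PAGE_SIZE) */
        printf("place host page at %#lx\n",
               (unsigned long)(host & ~(uintptr_t)(HOST_PAGE_SIZE - 1)));
        frags_seen = 0;
    }
}

int main(void)
{
    uint8_t page[TARGET_PAGE_SIZE] = {0};
    for (int i = 0; i < FRAGS_PER_HOST; i++) {
        recv_target_page(0x40000000UL + i * TARGET_PAGE_SIZE, page);
    }
    return 0;
}

With multifd those 16 fragments can arrive on different channels and
threads, so somebody has to own the staging buffer and know when it is
complete before the page can be placed; that's the messy part.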
Dave
> qemu_get_buffer(f, host, TARGET_PAGE_SIZE);
> break;
>
> --
> 2.9.3
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK