From: peterx@redhat.com
To: Peter Maydell <peter.maydell@linaro.org>, qemu-devel@nongnu.org
Cc: peterx@redhat.com, Fabiano Rosas <farosas@suse.de>
Subject: [PULL 24/34] migration/multifd: Optimize sender side to be lockless
Date: Thu,  8 Feb 2024 11:05:18 +0800
Message-ID: <20240208030528.368214-25-peterx@redhat.com>
In-Reply-To: <20240208030528.368214-1-peterx@redhat.com>

From: Peter Xu <peterx@redhat.com>

When reviewing my attempt to refactor send_prepare(), Fabiano suggested we
try dropping the mutex in the multifd code [1].

I thought about that before but I never tried to change the code.  Now
maybe it's time to give it a stab.  This only optimizes the sender side.

The trick here is that multifd has a clear producer/consumer model: the
migration main thread publishes requests (either pending_job or
pending_sync), while the multifd sender threads are the consumers.  There
is not much complicated data sharing here, so the jobs can logically be
submitted locklessly.

Arm the code with atomic weapons.  Two things worth mentioning:

  - For multifd_send_pages(): we could use qatomic_load_acquire() when
  trying to find a free channel, but that is expensive if we pay one
  ACQUIRE per channel.  Instead, keep the plain qatomic_read() on the
  pending_job flag as we already do, and use a single smp_mb_acquire()
  after the loop to guarantee the memory ordering.

  - For pending_sync: it has no extra data attached to it, since p->flags
  is never touched anymore, so it should be safe to not use a memory
  barrier.  That's different from pending_job.

Provide rich comments for all the lockless operations to state how they are
paired.  With that, we can remove the mutex.
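
As a side note, the pairing can be illustrated outside of QEMU with plain
C11 atomics: qatomic_load_acquire(), qatomic_store_release() and
smp_mb_acquire() have the same semantics as C11 acquire loads, release
stores and an acquire fence.  The standalone sketch below is illustrative
only -- post_job(), sender_thread() and payload are made-up names modelling
a single channel, and the real code waits on semaphores instead of
spinning:

  /* handoff_sketch.c -- illustrative only, not QEMU code */
  #include <assert.h>
  #include <pthread.h>
  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdio.h>

  /* One "channel": a flag plus the data it publishes (stand-in for p->pages) */
  static atomic_bool pending_job;
  static atomic_bool quit;
  static int payload;

  /* Producer, like multifd_send_pages(): find the channel free, post a job */
  static void post_job(int value)
  {
      /* plain (relaxed) read while scanning for a free channel */
      while (atomic_load_explicit(&pending_job, memory_order_relaxed)) {
          /* busy -- the real code moves on to the next channel instead */
      }
      /*
       * One acquire fence after the scan, instead of one acquire load per
       * channel.  Pairs with the release store of pending_job=false below,
       * so the consumer's reset of payload is guaranteed visible here.
       */
      atomic_thread_fence(memory_order_acquire);
      assert(payload == 0);
      payload = value;                       /* set up the job data */
      /* publish the data before the flag; pairs with the acquire load below */
      atomic_store_explicit(&pending_job, true, memory_order_release);
  }

  /* Consumer, like multifd_send_thread(): wait for a job and process it */
  static void *sender_thread(void *arg)
  {
      (void)arg;
      while (!atomic_load_explicit(&quit, memory_order_relaxed)) {
          /* acquire load: seeing pending_job==true implies seeing payload */
          if (atomic_load_explicit(&pending_job, memory_order_acquire)) {
              printf("sent %d\n", payload);
              payload = 0;                   /* like multifd_pages_reset() */
              /* release store: publish the reset before declaring us free */
              atomic_store_explicit(&pending_job, false, memory_order_release);
          }
      }
      return NULL;
  }

  int main(void)
  {
      pthread_t t;
      pthread_create(&t, NULL, sender_thread, NULL);
      for (int i = 1; i <= 3; i++) {
          post_job(i);
      }
      /* drain the last job before asking the consumer to stop */
      while (atomic_load_explicit(&pending_job, memory_order_acquire)) {
          /* spin */
      }
      atomic_store_explicit(&quit, true, memory_order_relaxed);
      pthread_join(t, NULL);
      return 0;
  }

The pending_sync flag needs none of this extra care because, unlike
pending_job, it has no payload that must be published alongside it.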

[1] https://lore.kernel.org/r/87o7d1jlu5.fsf@suse.de

Suggested-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Link: https://lore.kernel.org/r/20240202102857.110210-24-peterx@redhat.com
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/multifd.h |  2 --
 migration/multifd.c | 51 +++++++++++++++++++++++----------------------
 2 files changed, 26 insertions(+), 27 deletions(-)

diff --git a/migration/multifd.h b/migration/multifd.h
index 98876ff94a..78a2317263 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -91,8 +91,6 @@ typedef struct {
     /* syncs main thread and channels */
     QemuSemaphore sem_sync;
 
-    /* this mutex protects the following parameters */
-    QemuMutex mutex;
     /* is this channel thread running */
     bool running;
     /* multifd flags for each packet */
diff --git a/migration/multifd.c b/migration/multifd.c
index b317d57d61..fbdb129088 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -501,19 +501,19 @@ static bool multifd_send_pages(void)
         }
     }
 
-    qemu_mutex_lock(&p->mutex);
-    assert(!p->pages->num);
-    assert(!p->pages->block);
     /*
-     * Double check on pending_job==false with the lock.  In the future if
-     * we can have >1 requester thread, we can replace this with a "goto
-     * retry", but that is for later.
+     * Make sure we read p->pending_job before all the rest.  Pairs with
+     * qatomic_store_release() in multifd_send_thread().
      */
-    assert(qatomic_read(&p->pending_job) == false);
-    qatomic_set(&p->pending_job, true);
+    smp_mb_acquire();
+    assert(!p->pages->num);
     multifd_send_state->pages = p->pages;
     p->pages = pages;
-    qemu_mutex_unlock(&p->mutex);
+    /*
+     * Making sure p->pages is set up before marking pending_job=true. Pairs
+     * with the qatomic_load_acquire() in multifd_send_thread().
+     */
+    qatomic_store_release(&p->pending_job, true);
     qemu_sem_post(&p->sem);
 
     return true;
@@ -648,7 +648,6 @@ static bool multifd_send_cleanup_channel(MultiFDSendParams *p, Error **errp)
     }
     multifd_send_channel_destroy(p->c);
     p->c = NULL;
-    qemu_mutex_destroy(&p->mutex);
     qemu_sem_destroy(&p->sem);
     qemu_sem_destroy(&p->sem_sync);
     g_free(p->name);
@@ -742,14 +741,12 @@ int multifd_send_sync_main(void)
 
         trace_multifd_send_sync_main_signal(p->id);
 
-        qemu_mutex_lock(&p->mutex);
         /*
          * We should be the only user so far, so not possible to be set by
          * others concurrently.
          */
         assert(qatomic_read(&p->pending_sync) == false);
         qatomic_set(&p->pending_sync, true);
-        qemu_mutex_unlock(&p->mutex);
         qemu_sem_post(&p->sem);
     }
     for (i = 0; i < migrate_multifd_channels(); i++) {
@@ -796,9 +793,12 @@ static void *multifd_send_thread(void *opaque)
         if (multifd_send_should_exit()) {
             break;
         }
-        qemu_mutex_lock(&p->mutex);
 
-        if (qatomic_read(&p->pending_job)) {
+        /*
+         * Read pending_job flag before p->pages.  Pairs with the
+         * qatomic_store_release() in multifd_send_pages().
+         */
+        if (qatomic_load_acquire(&p->pending_job)) {
             MultiFDPages_t *pages = p->pages;
 
             p->iovs_num = 0;
@@ -806,14 +806,12 @@ static void *multifd_send_thread(void *opaque)
 
             ret = multifd_send_state->ops->send_prepare(p, &local_err);
             if (ret != 0) {
-                qemu_mutex_unlock(&p->mutex);
                 break;
             }
 
             ret = qio_channel_writev_full_all(p->c, p->iov, p->iovs_num, NULL,
                                               0, p->write_flags, &local_err);
             if (ret != 0) {
-                qemu_mutex_unlock(&p->mutex);
                 break;
             }
 
@@ -822,24 +820,31 @@ static void *multifd_send_thread(void *opaque)
 
             multifd_pages_reset(p->pages);
             p->next_packet_size = 0;
-            qatomic_set(&p->pending_job, false);
-            qemu_mutex_unlock(&p->mutex);
+
+            /*
+             * Making sure p->pages is published before saying "we're
+             * free".  Pairs with the smp_mb_acquire() in
+             * multifd_send_pages().
+             */
+            qatomic_store_release(&p->pending_job, false);
         } else {
-            /* If not a normal job, must be a sync request */
+            /*
+             * If not a normal job, must be a sync request.  Note that
+             * pending_sync is a standalone flag (unlike pending_job), so
+             * it doesn't require explicit memory barriers.
+             */
             assert(qatomic_read(&p->pending_sync));
             p->flags = MULTIFD_FLAG_SYNC;
             multifd_send_fill_packet(p);
             ret = qio_channel_write_all(p->c, (void *)p->packet,
                                         p->packet_len, &local_err);
             if (ret != 0) {
-                qemu_mutex_unlock(&p->mutex);
                 break;
             }
             /* p->next_packet_size will always be zero for a SYNC packet */
             stat64_add(&mig_stats.multifd_bytes, p->packet_len);
             p->flags = 0;
             qatomic_set(&p->pending_sync, false);
-            qemu_mutex_unlock(&p->mutex);
             qemu_sem_post(&p->sem_sync);
         }
     }
@@ -853,10 +858,7 @@ out:
         error_free(local_err);
     }
 
-    qemu_mutex_lock(&p->mutex);
     p->running = false;
-    qemu_mutex_unlock(&p->mutex);
-
     rcu_unregister_thread();
     migration_threads_remove(thread);
     trace_multifd_send_thread_end(p->id, p->packets_sent, p->total_normal_pages);
@@ -998,7 +1000,6 @@ int multifd_send_setup(Error **errp)
     for (i = 0; i < thread_count; i++) {
         MultiFDSendParams *p = &multifd_send_state->params[i];
 
-        qemu_mutex_init(&p->mutex);
         qemu_sem_init(&p->sem, 0);
         qemu_sem_init(&p->sem_sync, 0);
         p->id = i;
-- 
2.43.0


