From: Peter Xu <peterx@redhat.com>
To: qemu-devel@nongnu.org
Cc: Alexey Perevalov <a.perevalov@samsung.com>,
	Juraj Marcin <jmarcin@redhat.com>,
	"Dr . David Alan Gilbert" <dave@treblig.org>,
	peterx@redhat.com, Fabiano Rosas <farosas@suse.de>
Subject: [PATCH v2 03/13] migration/postcopy: Drop all atomic ops in blocktime feature
Date: Mon,  9 Jun 2025 15:12:49 -0400
Message-ID: <20250609191259.9053-4-peterx@redhat.com>
In-Reply-To: <20250609191259.9053-1-peterx@redhat.com>

Now that the blocktime fields are protected by the page request mutex, the
atomic operations are no longer needed; drop them all.

Reviewed-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
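A minimal, self-contained sketch of the pattern this relies on (plain C with
pthreads; the BlocktimeCtx struct, its field names and the fault_begin()/
fault_end() helpers below are hypothetical stand-ins, not the real
PostcopyBlocktimeContext or QEMU's page request mutex): once every reader and
writer of the shared fields takes the same lock, plain loads and stores are
already ordered and the qatomic_*() wrappers are no longer needed.

    #include <pthread.h>
    #include <stdint.h>

    /* Hypothetical stand-in for PostcopyBlocktimeContext. */
    typedef struct {
        pthread_mutex_t lock;   /* plays the role of the page request mutex;
                                 * initialized elsewhere with pthread_mutex_init() */
        uint32_t smp_cpus_down;
        uint32_t last_begin;
    } BlocktimeCtx;

    /* Fault-begin path: all stores happen under the lock. */
    static void fault_begin(BlocktimeCtx *ctx, uint32_t now)
    {
        pthread_mutex_lock(&ctx->lock);
        ctx->smp_cpus_down++;     /* was qatomic_inc() */
        ctx->last_begin = now;    /* was qatomic_xchg() */
        pthread_mutex_unlock(&ctx->lock);
    }

    /* Fault-end path: same lock, so plain loads see a consistent snapshot. */
    static uint32_t fault_end(BlocktimeCtx *ctx, uint32_t now)
    {
        uint32_t blocked;

        pthread_mutex_lock(&ctx->lock);
        blocked = now - ctx->last_begin;   /* was qatomic_fetch_add(.., 0) */
        ctx->smp_cpus_down--;              /* was qatomic_sub() */
        pthread_mutex_unlock(&ctx->lock);
        return blocked;
    }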
 migration/postcopy-ram.c | 23 ++++++++++-------------
 1 file changed, 10 insertions(+), 13 deletions(-)

diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index 32fa06dabd..81925532de 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -849,12 +849,12 @@ void mark_postcopy_blocktime_begin(uintptr_t addr, uint32_t ptid,
 
     low_time_offset = get_low_time_offset(dc);
     if (dc->vcpu_addr[cpu] == 0) {
-        qatomic_inc(&dc->smp_cpus_down);
+        dc->smp_cpus_down++;
     }
 
-    qatomic_xchg(&dc->last_begin, low_time_offset);
-    qatomic_xchg(&dc->page_fault_vcpu_time[cpu], low_time_offset);
-    qatomic_xchg(&dc->vcpu_addr[cpu], addr);
+    dc->last_begin = low_time_offset;
+    dc->page_fault_vcpu_time[cpu] = low_time_offset;
+    dc->vcpu_addr[cpu] = addr;
 
     /*
      * The caller should only inject a blocktime entry when the page is
@@ -915,29 +915,26 @@ static void mark_postcopy_blocktime_end(uintptr_t addr)
     for (i = 0; i < smp_cpus; i++) {
         uint32_t vcpu_blocktime = 0;
 
-        read_vcpu_time = qatomic_fetch_add(&dc->page_fault_vcpu_time[i], 0);
-        if (qatomic_fetch_add(&dc->vcpu_addr[i], 0) != addr ||
-            read_vcpu_time == 0) {
+        read_vcpu_time = dc->page_fault_vcpu_time[i];
+        if (dc->vcpu_addr[i] != addr || read_vcpu_time == 0) {
             continue;
         }
-        qatomic_xchg(&dc->vcpu_addr[i], 0);
+        dc->vcpu_addr[i] = 0;
         vcpu_blocktime = low_time_offset - read_vcpu_time;
         affected_cpu += 1;
         /* we need to know is that mark_postcopy_end was due to
          * faulted page, another possible case it's prefetched
          * page and in that case we shouldn't be here */
-        if (!vcpu_total_blocktime &&
-            qatomic_fetch_add(&dc->smp_cpus_down, 0) == smp_cpus) {
+        if (!vcpu_total_blocktime && dc->smp_cpus_down == smp_cpus) {
             vcpu_total_blocktime = true;
         }
         /* continue cycle, due to one page could affect several vCPUs */
         dc->vcpu_blocktime[i] += vcpu_blocktime;
     }
 
-    qatomic_sub(&dc->smp_cpus_down, affected_cpu);
+    dc->smp_cpus_down -= affected_cpu;
     if (vcpu_total_blocktime) {
-        dc->total_blocktime += low_time_offset - qatomic_fetch_add(
-                &dc->last_begin, 0);
+        dc->total_blocktime += low_time_offset - dc->last_begin;
     }
     trace_mark_postcopy_blocktime_end(addr, dc, dc->total_blocktime,
                                       affected_cpu);
-- 
2.49.0



Thread overview: 20+ messages
2025-06-09 19:12 [PATCH v2 00/13] migration/postcopy: Blocktime tracking overhaul Peter Xu
2025-06-09 19:12 ` [PATCH v2 01/13] migration: Add option to set postcopy-blocktime Peter Xu
2025-06-09 19:12 ` [PATCH v2 02/13] migration/postcopy: Push blocktime start/end into page req mutex Peter Xu
2025-06-09 19:12 ` Peter Xu [this message]
2025-06-09 19:12 ` [PATCH v2 04/13] migration/postcopy: Make all blocktime vars 64bits Peter Xu
2025-06-09 19:12 ` [PATCH v2 05/13] migration/postcopy: Drop PostcopyBlocktimeContext.start_time Peter Xu
2025-06-09 19:12 ` [PATCH v2 06/13] migration/postcopy: Bring blocktime layer to us level Peter Xu
2025-06-09 19:12 ` [PATCH v2 07/13] migration/postcopy: Add blocktime fault counts per-vcpu Peter Xu
2025-06-09 19:12 ` [PATCH v2 08/13] migration/postcopy: Report fault latencies in blocktime Peter Xu
2025-06-09 22:05   ` Peter Xu
2025-06-09 22:25     ` Peter Xu
2025-06-10  0:08   ` Dr. David Alan Gilbert
2025-06-10 13:39     ` Peter Xu
2025-06-10 13:53       ` Dr. David Alan Gilbert
2025-06-10 14:08         ` Peter Xu
2025-06-09 19:12 ` [PATCH v2 09/13] migration/postcopy: Initialize blocktime context only until listen Peter Xu
2025-06-09 19:12 ` [PATCH v2 10/13] migration/postcopy: Cache the tid->vcpu mapping for blocktime Peter Xu
2025-06-09 19:12 ` [PATCH v2 11/13] migration/postcopy: Cleanup the total blocktime accounting Peter Xu
2025-06-09 19:12 ` [PATCH v2 12/13] migration/postcopy: Optimize blocktime fault tracking with hashtable Peter Xu
2025-06-09 19:12 ` [PATCH v2 13/13] migration/postcopy: blocktime allows track / report non-vCPU faults Peter Xu
