From: Juan Quintela <quintela@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Marcel Apfelbaum" <marcel.apfelbaum@gmail.com>,
	"Philippe Mathieu-Daudé" <f4bug@amsat.org>,
	"Juan Quintela" <quintela@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	"Eduardo Habkost" <eduardo@habkost.net>,
	"Yanan Wang" <wangyanan55@huawei.com>
Subject: [PATCH v2 4/6] migration: Make find_dirty_block() return a single parameter
Date: Thu, 28 Jul 2022 13:59:55 +0200	[thread overview]
Message-ID: <20220728115957.5554-5-quintela@redhat.com> (raw)
In-Reply-To: <20220728115957.5554-1-quintela@redhat.com>

find_dirty_block() used to return two booleans (its return value and the
*again out-parameter); make it return a single int instead, with the
following meaning:

old return   again   new return
false        false   PAGE_ALL_CLEAN
false        true    PAGE_TRY_AGAIN
true         true    PAGE_DIRTY_FOUND  /* again is not consulted in this case */
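
For illustration, the caller in ram_find_and_save_block() (both excerpts
are taken from the last hunk below) goes from:

    bool again = true;
    if (!find_dirty_block(rs, &pss, &again)) {
        if (!again) {
            break;
        }
    }

to:

    int res = find_dirty_block(rs, &pss);
    if (res != PAGE_DIRTY_FOUND) {
        if (res == PAGE_ALL_CLEAN) {
            break;
        } else if (res == PAGE_TRY_AGAIN) {
            continue;
        }
    }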

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 37 ++++++++++++++++++++++---------------
 1 file changed, 22 insertions(+), 15 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index c2c939ee03..1507ba1991 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1532,17 +1532,23 @@ retry:
     return pages;
 }
 
+#define PAGE_ALL_CLEAN 0
+#define PAGE_TRY_AGAIN 1
+#define PAGE_DIRTY_FOUND 2
 /**
  * find_dirty_block: find the next dirty page and update any state
  * associated with the search process.
  *
- * Returns true if a page is found
+ * Returns:
+ *         PAGE_ALL_CLEAN: no dirty page found, give up
+ *         PAGE_TRY_AGAIN: no dirty page found, retry for next block
+ *         PAGE_DIRTY_FOUND: dirty page found
  *
  * @rs: current RAM state
  * @pss: data about the state of the current dirty page scan
  * @again: set to false if the search has scanned the whole of RAM
  */
-static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss, bool *again)
+static int find_dirty_block(RAMState *rs, PageSearchStatus *pss)
 {
     /*
      * This is not a postcopy requested page, mark it "not urgent", and use
@@ -1558,8 +1564,7 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss, bool *again)
          * We've been once around the RAM and haven't found anything.
          * Give up.
          */
-        *again = false;
-        return false;
+        return PAGE_ALL_CLEAN;
     }
     if (!offset_in_ramblock(pss->block,
                             ((ram_addr_t)pss->page) << TARGET_PAGE_BITS)) {
@@ -1588,13 +1593,10 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss, bool *again)
             }
         }
         /* Didn't find anything this time, but try again on the new block */
-        *again = true;
-        return false;
+        return PAGE_TRY_AGAIN;
     } else {
-        /* Can go around again, but... */
-        *again = true;
-        /* We've found something so probably don't need to */
-        return true;
+        /* We've found something */
+        return PAGE_DIRTY_FOUND;
     }
 }
 
@@ -2538,7 +2540,7 @@ static int ram_find_and_save_block(RAMState *rs)
         pss.block = QLIST_FIRST_RCU(&ram_list.blocks);
     }
 
-    do {
+    while (true) {
         if (!get_queued_page(rs, &pss)) {
             /*
              * Recover previous precopy ramblock/offset if postcopy has
@@ -2548,16 +2550,21 @@ static int ram_find_and_save_block(RAMState *rs)
                 postcopy_preempt_restore(rs, &pss, false);
             } else {
                 /* priority queue empty, so just search for something dirty */
-                bool again = true;
-                if (!find_dirty_block(rs, &pss, &again)) {
-                    if (!again) {
+                int res = find_dirty_block(rs, &pss);
+                if (res != PAGE_DIRTY_FOUND) {
+                    if (res == PAGE_ALL_CLEAN) {
                         break;
+                    } else if (res == PAGE_TRY_AGAIN) {
+                        continue;
                     }
                 }
             }
         }
         pages = ram_save_host_page(rs, &pss);
-    } while (!pages);
+        if (pages) {
+            break;
+        }
+    }
 
     rs->last_seen_block = pss.block;
     rs->last_page = pss.page;
-- 
2.37.1



