From: Juan Quintela <quintela@redhat.com>
To: qemu-devel@nongnu.org
Cc: Fabiano Rosas <farosas@suse.de>,
Leonardo Bras <leobras@redhat.com>, Peter Xu <peterx@redhat.com>,
Juan Quintela <quintela@redhat.com>,
Lukas Straub <lukasstraub2@web.de>
Subject: [PATCH v2 07/11] migration: Create compress_update_rates()
Date: Thu, 19 Oct 2023 13:07:20 +0200
Message-ID: <20231019110724.15324-8-quintela@redhat.com>
In-Reply-To: <20231019110724.15324-1-quintela@redhat.com>
So we can move more of the compression_counters handling to
ram-compress.c.  Make compression_counters a file-local struct and fold
into it the period-tracking fields that previously lived in RAMState.
Reviewed-by: Lukas Straub <lukasstraub2@web.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram-compress.h | 1 +
migration/ram.h | 1 -
migration/ram-compress.c | 42 +++++++++++++++++++++++++++++++++++++++-
migration/ram.c | 29 +--------------------------
4 files changed, 43 insertions(+), 30 deletions(-)
diff --git a/migration/ram-compress.h b/migration/ram-compress.h
index b228640092..76dacd3ec7 100644
--- a/migration/ram-compress.h
+++ b/migration/ram-compress.h
@@ -71,5 +71,6 @@ void decompress_data_with_multi_threads(QEMUFile *f, void *host, int len);
void populate_compress(MigrationInfo *info);
uint64_t ram_compressed_pages(void);
void update_compress_thread_counts(const CompressParam *param, int bytes_xmit);
+void compress_update_rates(uint64_t page_count);
#endif
diff --git a/migration/ram.h b/migration/ram.h
index 145c915ca7..9cf8f58e97 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -34,7 +34,6 @@
#include "io/channel.h"
extern XBZRLECacheStats xbzrle_counters;
-extern CompressionStats compression_counters;
/* Should be holding either ram_list.mutex, or the RCU lock. */
#define RAMBLOCK_FOREACH_NOT_IGNORED(block) \
diff --git a/migration/ram-compress.c b/migration/ram-compress.c
index f56e1f8e69..af42cab0fe 100644
--- a/migration/ram-compress.c
+++ b/migration/ram-compress.c
@@ -41,7 +41,20 @@
#include "ram.h"
#include "migration-stats.h"
-CompressionStats compression_counters;
+static struct {
+ int64_t pages;
+ int64_t busy;
+ double busy_rate;
+ int64_t compressed_size;
+ double compression_rate;
+ /* compression statistics since the beginning of the period */
+ /* number of times no free thread was available to compress data */
+ uint64_t compress_thread_busy_prev;
+ /* number of bytes after compression */
+ uint64_t compressed_size_prev;
+ /* number of compressed pages */
+ uint64_t compress_pages_prev;
+} compression_counters;
static CompressParam *comp_param;
static QemuThread *compress_threads;
@@ -518,3 +531,30 @@ void update_compress_thread_counts(const CompressParam *param, int bytes_xmit)
compression_counters.pages++;
}
+void compress_update_rates(uint64_t page_count)
+{
+ if (!migrate_compress()) {
+ return;
+ }
+ compression_counters.busy_rate = (double)(compression_counters.busy -
+ compression_counters.compress_thread_busy_prev) / page_count;
+ compression_counters.compress_thread_busy_prev =
+ compression_counters.busy;
+
+ double compressed_size = compression_counters.compressed_size -
+ compression_counters.compressed_size_prev;
+ if (compressed_size) {
+ double uncompressed_size = (compression_counters.pages -
+ compression_counters.compress_pages_prev) *
+ qemu_target_page_size();
+
+ /* Compression-Ratio = Uncompressed-size / Compressed-size */
+ compression_counters.compression_rate =
+ uncompressed_size / compressed_size;
+
+ compression_counters.compress_pages_prev =
+ compression_counters.pages;
+ compression_counters.compressed_size_prev =
+ compression_counters.compressed_size;
+ }
+}
diff --git a/migration/ram.c b/migration/ram.c
index a3c5fcc549..bfb2f02351 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -369,13 +369,6 @@ struct RAMState {
bool xbzrle_started;
/* Are we on the last stage of migration */
bool last_stage;
- /* compression statistics since the beginning of the period */
- /* amount of count that no free thread to compress data */
- uint64_t compress_thread_busy_prev;
- /* amount bytes after compression */
- uint64_t compressed_size_prev;
- /* amount of compressed pages */
- uint64_t compress_pages_prev;
/* total handled target pages at the beginning of period */
uint64_t target_page_count_prev;
@@ -945,7 +938,6 @@ uint64_t ram_get_total_transferred_pages(void)
static void migration_update_rates(RAMState *rs, int64_t end_time)
{
uint64_t page_count = rs->target_page_count - rs->target_page_count_prev;
- double compressed_size;
/* calculate period counters */
stat64_set(&mig_stats.dirty_pages_rate,
@@ -973,26 +965,7 @@ static void migration_update_rates(RAMState *rs, int64_t end_time)
rs->xbzrle_pages_prev = xbzrle_counters.pages;
rs->xbzrle_bytes_prev = xbzrle_counters.bytes;
}
-
- if (migrate_compress()) {
- compression_counters.busy_rate = (double)(compression_counters.busy -
- rs->compress_thread_busy_prev) / page_count;
- rs->compress_thread_busy_prev = compression_counters.busy;
-
- compressed_size = compression_counters.compressed_size -
- rs->compressed_size_prev;
- if (compressed_size) {
- double uncompressed_size = (compression_counters.pages -
- rs->compress_pages_prev) * TARGET_PAGE_SIZE;
-
- /* Compression-Ratio = Uncompressed-size / Compressed-size */
- compression_counters.compression_rate =
- uncompressed_size / compressed_size;
-
- rs->compress_pages_prev = compression_counters.pages;
- rs->compressed_size_prev = compression_counters.compressed_size;
- }
- }
+ compress_update_rates(page_count);
}
/*
--
2.41.0