From: "Daniel P. Berrangé" <berrange@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Daniel P. Berrangé" <berrange@redhat.com>,
"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
qemu-block@nongnu.org, "Stefan Hajnoczi" <stefanha@redhat.com>,
"Hailiang Zhang" <zhang.zhanghailiang@huawei.com>,
"Juan Quintela" <quintela@redhat.com>,
"Fam Zheng" <fam@euphon.net>
Subject: [PATCH 04/20] migration: rename rate limiting fields in QEMUFile
Date: Tue, 24 May 2022 12:02:19 +0100 [thread overview]
Message-ID: <20220524110235.145079-5-berrange@redhat.com> (raw)
In-Reply-To: <20220524110235.145079-1-berrange@redhat.com>
This renames the following QEMUFile fields:

 * bytes_xfer -> rate_limit_used
 * xfer_limit -> rate_limit_max

The intent is to make it clear that 'bytes_xfer' specifically relates
to rate limiting, and that it counts data queued for transfer, which
need not have been sent on the wire yet if a flush has not taken place.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
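As an illustration (not part of the patch): a minimal sketch of how the
renamed fields are driven through the accessors touched in the diff
below. The window size, the dummy payload and the helper name are
assumptions made up for the example; only the qemu-file accessor names
come from the code being changed.

#include "qemu/osdep.h"
#include "qemu-file.h"   /* assuming this sits alongside migration/qemu-file.c */

/* Illustrative only: queue data until the current rate limit window
 * is exhausted. */
static void queue_one_window(QEMUFile *f, int64_t bytes_per_window)
{
    uint8_t page[4096] = { 0 };   /* placeholder payload */

    /* rate_limit_max: cap on bytes that may be queued per window */
    qemu_file_set_rate_limit(f, bytes_per_window);

    /* rate_limit_used restarts from zero at each window boundary */
    qemu_file_reset_rate_limit(f);

    /* qemu_file_rate_limit() returns non-zero once rate_limit_used
     * exceeds rate_limit_max, or if the file hit an error */
    while (!qemu_file_rate_limit(f)) {
        /* qemu_put_buffer() bumps rate_limit_used as it queues data;
         * the bytes may still be sitting in the internal buffers, i.e.
         * they are counted when queued, not when flushed to the wire */
        qemu_put_buffer(f, page, sizeof(page));
    }
}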
migration/qemu-file.c | 30 +++++++++++++++++++-----------
1 file changed, 19 insertions(+), 11 deletions(-)
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index 1479cddad9..03f0b13a55 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -39,8 +39,16 @@ struct QEMUFile {
const QEMUFileHooks *hooks;
void *opaque;
- int64_t bytes_xfer;
- int64_t xfer_limit;
+ /*
+ * Maximum amount of data in bytes to transfer during one
+ * rate limiting time window
+ */
+ int64_t rate_limit_max;
+ /*
+ * Total amount of data in bytes queued for transfer
+ * during this rate limiting time window
+ */
+ int64_t rate_limit_used;
int64_t pos; /* start of buffer when writing, end of buffer
when reading */
@@ -304,7 +312,7 @@ size_t ram_control_save_page(QEMUFile *f, ram_addr_t block_offset,
int ret = f->hooks->save_page(f, f->opaque, block_offset,
offset, size, bytes_sent);
if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
- f->bytes_xfer += size;
+ f->rate_limit_used += size;
}
if (ret != RAM_SAVE_CONTROL_DELAYED &&
@@ -457,7 +465,7 @@ void qemu_put_buffer_async(QEMUFile *f, const uint8_t *buf, size_t size,
return;
}
- f->bytes_xfer += size;
+ f->rate_limit_used += size;
add_to_iovec(f, buf, size, may_free);
}
@@ -475,7 +483,7 @@ void qemu_put_buffer(QEMUFile *f, const uint8_t *buf, size_t size)
l = size;
}
memcpy(f->buf + f->buf_index, buf, l);
- f->bytes_xfer += l;
+ f->rate_limit_used += l;
add_buf_to_iovec(f, l);
if (qemu_file_get_error(f)) {
break;
@@ -492,7 +500,7 @@ void qemu_put_byte(QEMUFile *f, int v)
}
f->buf[f->buf_index] = v;
- f->bytes_xfer++;
+ f->rate_limit_used++;
add_buf_to_iovec(f, 1);
}
@@ -674,7 +682,7 @@ int qemu_file_rate_limit(QEMUFile *f)
if (qemu_file_get_error(f)) {
return 1;
}
- if (f->xfer_limit > 0 && f->bytes_xfer > f->xfer_limit) {
+ if (f->rate_limit_max > 0 && f->rate_limit_used > f->rate_limit_max) {
return 1;
}
return 0;
@@ -682,22 +690,22 @@ int qemu_file_rate_limit(QEMUFile *f)
int64_t qemu_file_get_rate_limit(QEMUFile *f)
{
- return f->xfer_limit;
+ return f->rate_limit_max;
}
void qemu_file_set_rate_limit(QEMUFile *f, int64_t limit)
{
- f->xfer_limit = limit;
+ f->rate_limit_max = limit;
}
void qemu_file_reset_rate_limit(QEMUFile *f)
{
- f->bytes_xfer = 0;
+ f->rate_limit_used = 0;
}
void qemu_file_update_transfer(QEMUFile *f, int64_t len)
{
- f->bytes_xfer += len;
+ f->rate_limit_used += len;
}
void qemu_put_be16(QEMUFile *f, unsigned int v)
--
2.36.1