* [PATCH v6 01/19] migration/multifd: Reduce access to p->pages
2024-08-27 17:45 [PATCH v6 00/19] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
@ 2024-08-27 17:45 ` Fabiano Rosas
2024-08-27 17:45 ` [PATCH v6 02/19] migration/multifd: Inline page_size and page_count Fabiano Rosas
` (17 subsequent siblings)
18 siblings, 0 replies; 34+ messages in thread
From: Fabiano Rosas @ 2024-08-27 17:45 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero, Philippe Mathieu-Daudé
I'm about to replace the p->pages pointer with an opaque pointer, so
do a cleanup now to reduce direct accesses to p->pages, which makes the
next diffs cleaner.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
migration/multifd-qpl.c | 8 +++++---
migration/multifd-uadk.c | 9 +++++----
migration/multifd-zlib.c | 2 +-
migration/multifd-zstd.c | 2 +-
migration/multifd.c | 13 +++++++------
5 files changed, 19 insertions(+), 15 deletions(-)
diff --git a/migration/multifd-qpl.c b/migration/multifd-qpl.c
index 9265098ee7..f8c84c52cf 100644
--- a/migration/multifd-qpl.c
+++ b/migration/multifd-qpl.c
@@ -404,13 +404,14 @@ retry:
static void multifd_qpl_compress_pages_slow_path(MultiFDSendParams *p)
{
QplData *qpl = p->compress_data;
+ MultiFDPages_t *pages = p->pages;
uint32_t size = p->page_size;
qpl_job *job = qpl->sw_job;
uint8_t *zbuf = qpl->zbuf;
uint8_t *buf;
- for (int i = 0; i < p->pages->normal_num; i++) {
- buf = p->pages->block->host + p->pages->offset[i];
+ for (int i = 0; i < pages->normal_num; i++) {
+ buf = pages->block->host + pages->offset[i];
multifd_qpl_prepare_comp_job(job, buf, zbuf, size);
if (qpl_execute_job(job) == QPL_STS_OK) {
multifd_qpl_fill_packet(i, p, zbuf, job->total_out);
@@ -498,6 +499,7 @@ static void multifd_qpl_compress_pages(MultiFDSendParams *p)
static int multifd_qpl_send_prepare(MultiFDSendParams *p, Error **errp)
{
QplData *qpl = p->compress_data;
+ MultiFDPages_t *pages = p->pages;
uint32_t len = 0;
if (!multifd_send_prepare_common(p)) {
@@ -505,7 +507,7 @@ static int multifd_qpl_send_prepare(MultiFDSendParams *p, Error **errp)
}
/* The first IOV is used to store the compressed page lengths */
- len = p->pages->normal_num * sizeof(uint32_t);
+ len = pages->normal_num * sizeof(uint32_t);
multifd_qpl_fill_iov(p, (uint8_t *) qpl->zlen, len);
if (qpl->hw_avail) {
multifd_qpl_compress_pages(p);
diff --git a/migration/multifd-uadk.c b/migration/multifd-uadk.c
index d12353fb21..b8ba3cd9c1 100644
--- a/migration/multifd-uadk.c
+++ b/migration/multifd-uadk.c
@@ -174,19 +174,20 @@ static int multifd_uadk_send_prepare(MultiFDSendParams *p, Error **errp)
uint32_t hdr_size;
uint8_t *buf = uadk_data->buf;
int ret = 0;
+ MultiFDPages_t *pages = p->pages;
if (!multifd_send_prepare_common(p)) {
goto out;
}
- hdr_size = p->pages->normal_num * sizeof(uint32_t);
+ hdr_size = pages->normal_num * sizeof(uint32_t);
/* prepare the header that stores the lengths of all compressed data */
prepare_next_iov(p, uadk_data->buf_hdr, hdr_size);
- for (int i = 0; i < p->pages->normal_num; i++) {
+ for (int i = 0; i < pages->normal_num; i++) {
struct wd_comp_req creq = {
.op_type = WD_DIR_COMPRESS,
- .src = p->pages->block->host + p->pages->offset[i],
+ .src = pages->block->host + pages->offset[i],
.src_len = p->page_size,
.dst = buf,
/* Set dst_len to double the src in case compressed out >= page_size */
@@ -214,7 +215,7 @@ static int multifd_uadk_send_prepare(MultiFDSendParams *p, Error **errp)
*/
if (!uadk_data->handle || creq.dst_len >= p->page_size) {
uadk_data->buf_hdr[i] = cpu_to_be32(p->page_size);
- prepare_next_iov(p, p->pages->block->host + p->pages->offset[i],
+ prepare_next_iov(p, pages->block->host + pages->offset[i],
p->page_size);
buf += p->page_size;
}
diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index 2ced69487e..65f8aba5c8 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -147,7 +147,7 @@ static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
* with compression. zlib does not guarantee that this is safe,
* therefore copy the page before calling deflate().
*/
- memcpy(z->buf, p->pages->block->host + pages->offset[i], p->page_size);
+ memcpy(z->buf, pages->block->host + pages->offset[i], p->page_size);
zs->avail_in = p->page_size;
zs->next_in = z->buf;
diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index ca17b7e310..cb6075a9a5 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -138,7 +138,7 @@ static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
if (i == pages->normal_num - 1) {
flush = ZSTD_e_flush;
}
- z->in.src = p->pages->block->host + pages->offset[i];
+ z->in.src = pages->block->host + pages->offset[i];
z->in.size = p->page_size;
z->in.pos = 0;
diff --git a/migration/multifd.c b/migration/multifd.c
index a6db05502a..0bd9c2253e 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -114,11 +114,11 @@ static void multifd_set_file_bitmap(MultiFDSendParams *p)
assert(pages->block);
- for (int i = 0; i < p->pages->normal_num; i++) {
+ for (int i = 0; i < pages->normal_num; i++) {
ramblock_set_file_bmap_atomic(pages->block, pages->offset[i], true);
}
- for (int i = p->pages->normal_num; i < p->pages->num; i++) {
+ for (int i = pages->normal_num; i < pages->num; i++) {
ramblock_set_file_bmap_atomic(pages->block, pages->offset[i], false);
}
}
@@ -417,7 +417,7 @@ void multifd_send_fill_packet(MultiFDSendParams *p)
int i;
packet->flags = cpu_to_be32(p->flags);
- packet->pages_alloc = cpu_to_be32(p->pages->allocated);
+ packet->pages_alloc = cpu_to_be32(pages->allocated);
packet->normal_pages = cpu_to_be32(pages->normal_num);
packet->zero_pages = cpu_to_be32(zero_num);
packet->next_packet_size = cpu_to_be32(p->next_packet_size);
@@ -953,7 +953,7 @@ static void *multifd_send_thread(void *opaque)
if (migrate_mapped_ram()) {
ret = file_write_ramblock_iov(p->c, p->iov, p->iovs_num,
- p->pages->block, &local_err);
+ pages->block, &local_err);
} else {
ret = qio_channel_writev_full_all(p->c, p->iov, p->iovs_num,
NULL, 0, p->write_flags,
@@ -969,7 +969,7 @@ static void *multifd_send_thread(void *opaque)
stat64_add(&mig_stats.normal_pages, pages->normal_num);
stat64_add(&mig_stats.zero_pages, pages->num - pages->normal_num);
- multifd_pages_reset(p->pages);
+ multifd_pages_reset(pages);
p->next_packet_size = 0;
/*
@@ -1690,9 +1690,10 @@ void multifd_recv_new_channel(QIOChannel *ioc, Error **errp)
bool multifd_send_prepare_common(MultiFDSendParams *p)
{
+ MultiFDPages_t *pages = p->pages;
multifd_send_zero_page_detect(p);
- if (!p->pages->normal_num) {
+ if (!pages->normal_num) {
p->next_packet_size = 0;
return false;
}
--
2.35.3
* [PATCH v6 02/19] migration/multifd: Inline page_size and page_count
From: Fabiano Rosas @ 2024-08-27 17:45 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero, Philippe Mathieu-Daudé
The MultiFD*Params structures are for per-channel data. Constant
values should not be there because that needlessly wastes cycles and
storage. The page_size and page_count fall into this category, so move
them into inline helpers in multifd.h.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
migration/multifd-qpl.c | 10 +++++++---
migration/multifd-uadk.c | 36 ++++++++++++++++++++---------------
migration/multifd-zero-page.c | 4 ++--
migration/multifd-zlib.c | 14 ++++++++------
migration/multifd-zstd.c | 11 ++++++-----
migration/multifd.c | 33 ++++++++++++++++----------------
migration/multifd.h | 18 ++++++++++--------
7 files changed, 71 insertions(+), 55 deletions(-)
diff --git a/migration/multifd-qpl.c b/migration/multifd-qpl.c
index f8c84c52cf..db60c05795 100644
--- a/migration/multifd-qpl.c
+++ b/migration/multifd-qpl.c
@@ -233,8 +233,10 @@ static void multifd_qpl_deinit(QplData *qpl)
static int multifd_qpl_send_setup(MultiFDSendParams *p, Error **errp)
{
QplData *qpl;
+ uint32_t page_size = multifd_ram_page_size();
+ uint32_t page_count = multifd_ram_page_count();
- qpl = multifd_qpl_init(p->page_count, p->page_size, errp);
+ qpl = multifd_qpl_init(page_count, page_size, errp);
if (!qpl) {
return -1;
}
@@ -245,7 +247,7 @@ static int multifd_qpl_send_setup(MultiFDSendParams *p, Error **errp)
* additional two IOVs are used to store packet header and compressed data
* length
*/
- p->iov = g_new0(struct iovec, p->page_count + 2);
+ p->iov = g_new0(struct iovec, page_count + 2);
return 0;
}
@@ -534,8 +536,10 @@ out:
static int multifd_qpl_recv_setup(MultiFDRecvParams *p, Error **errp)
{
QplData *qpl;
+ uint32_t page_size = multifd_ram_page_size();
+ uint32_t page_count = multifd_ram_page_count();
- qpl = multifd_qpl_init(p->page_count, p->page_size, errp);
+ qpl = multifd_qpl_init(page_count, page_size, errp);
if (!qpl) {
return -1;
}
diff --git a/migration/multifd-uadk.c b/migration/multifd-uadk.c
index b8ba3cd9c1..1ed1c6afe6 100644
--- a/migration/multifd-uadk.c
+++ b/migration/multifd-uadk.c
@@ -114,8 +114,10 @@ static void multifd_uadk_uninit_sess(struct wd_data *wd)
static int multifd_uadk_send_setup(MultiFDSendParams *p, Error **errp)
{
struct wd_data *wd;
+ uint32_t page_size = multifd_ram_page_size();
+ uint32_t page_count = multifd_ram_page_count();
- wd = multifd_uadk_init_sess(p->page_count, p->page_size, true, errp);
+ wd = multifd_uadk_init_sess(page_count, page_size, true, errp);
if (!wd) {
return -1;
}
@@ -128,7 +130,7 @@ static int multifd_uadk_send_setup(MultiFDSendParams *p, Error **errp)
* length
*/
- p->iov = g_new0(struct iovec, p->page_count + 2);
+ p->iov = g_new0(struct iovec, page_count + 2);
return 0;
}
@@ -172,6 +174,7 @@ static int multifd_uadk_send_prepare(MultiFDSendParams *p, Error **errp)
{
struct wd_data *uadk_data = p->compress_data;
uint32_t hdr_size;
+ uint32_t page_size = multifd_ram_page_size();
uint8_t *buf = uadk_data->buf;
int ret = 0;
MultiFDPages_t *pages = p->pages;
@@ -188,7 +191,7 @@ static int multifd_uadk_send_prepare(MultiFDSendParams *p, Error **errp)
struct wd_comp_req creq = {
.op_type = WD_DIR_COMPRESS,
.src = pages->block->host + pages->offset[i],
- .src_len = p->page_size,
+ .src_len = page_size,
.dst = buf,
/* Set dst_len to double the src in case compressed out >= page_size */
.dst_len = p->page_size * 2,
@@ -201,7 +204,7 @@ static int multifd_uadk_send_prepare(MultiFDSendParams *p, Error **errp)
p->id, ret, creq.status);
return -1;
}
- if (creq.dst_len < p->page_size) {
+ if (creq.dst_len < page_size) {
uadk_data->buf_hdr[i] = cpu_to_be32(creq.dst_len);
prepare_next_iov(p, buf, creq.dst_len);
buf += creq.dst_len;
@@ -213,11 +216,11 @@ static int multifd_uadk_send_prepare(MultiFDSendParams *p, Error **errp)
* than page_size as well because at the receive end we can skip the
* decompression. But it is tricky to find the right number here.
*/
- if (!uadk_data->handle || creq.dst_len >= p->page_size) {
- uadk_data->buf_hdr[i] = cpu_to_be32(p->page_size);
+ if (!uadk_data->handle || creq.dst_len >= page_size) {
+ uadk_data->buf_hdr[i] = cpu_to_be32(page_size);
prepare_next_iov(p, pages->block->host + pages->offset[i],
- p->page_size);
- buf += p->page_size;
+ page_size);
+ buf += page_size;
}
}
out:
@@ -239,8 +242,10 @@ out:
static int multifd_uadk_recv_setup(MultiFDRecvParams *p, Error **errp)
{
struct wd_data *wd;
+ uint32_t page_size = multifd_ram_page_size();
+ uint32_t page_count = multifd_ram_page_count();
- wd = multifd_uadk_init_sess(p->page_count, p->page_size, false, errp);
+ wd = multifd_uadk_init_sess(page_count, page_size, false, errp);
if (!wd) {
return -1;
}
@@ -281,6 +286,7 @@ static int multifd_uadk_recv(MultiFDRecvParams *p, Error **errp)
uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
uint32_t hdr_len = p->normal_num * sizeof(uint32_t);
uint32_t data_len = 0;
+ uint32_t page_size = multifd_ram_page_size();
uint8_t *buf = uadk_data->buf;
int ret = 0;
@@ -307,7 +313,7 @@ static int multifd_uadk_recv(MultiFDRecvParams *p, Error **errp)
for (int i = 0; i < p->normal_num; i++) {
uadk_data->buf_hdr[i] = be32_to_cpu(uadk_data->buf_hdr[i]);
data_len += uadk_data->buf_hdr[i];
- assert(uadk_data->buf_hdr[i] <= p->page_size);
+ assert(uadk_data->buf_hdr[i] <= page_size);
}
/* read compressed data */
@@ -323,12 +329,12 @@ static int multifd_uadk_recv(MultiFDRecvParams *p, Error **errp)
.src = buf,
.src_len = uadk_data->buf_hdr[i],
.dst = p->host + p->normal[i],
- .dst_len = p->page_size,
+ .dst_len = page_size,
};
- if (uadk_data->buf_hdr[i] == p->page_size) {
- memcpy(p->host + p->normal[i], buf, p->page_size);
- buf += p->page_size;
+ if (uadk_data->buf_hdr[i] == page_size) {
+ memcpy(p->host + p->normal[i], buf, page_size);
+ buf += page_size;
continue;
}
@@ -344,7 +350,7 @@ static int multifd_uadk_recv(MultiFDRecvParams *p, Error **errp)
p->id, ret, creq.status);
return -1;
}
- if (creq.dst_len != p->page_size) {
+ if (creq.dst_len != page_size) {
error_setg(errp, "multifd %u: decompressed length error", p->id);
return -1;
}
diff --git a/migration/multifd-zero-page.c b/migration/multifd-zero-page.c
index e1b8370f88..cc624e36b3 100644
--- a/migration/multifd-zero-page.c
+++ b/migration/multifd-zero-page.c
@@ -63,7 +63,7 @@ void multifd_send_zero_page_detect(MultiFDSendParams *p)
while (i <= j) {
uint64_t offset = pages->offset[i];
- if (!buffer_is_zero(rb->host + offset, p->page_size)) {
+ if (!buffer_is_zero(rb->host + offset, multifd_ram_page_size())) {
i++;
continue;
}
@@ -81,7 +81,7 @@ void multifd_recv_zero_page_process(MultiFDRecvParams *p)
for (int i = 0; i < p->zero_num; i++) {
void *page = p->host + p->zero[i];
if (ramblock_recv_bitmap_test_byte_offset(p->block, p->zero[i])) {
- memset(page, 0, p->page_size);
+ memset(page, 0, multifd_ram_page_size());
} else {
ramblock_recv_bitmap_set_offset(p->block, p->zero[i]);
}
diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index 65f8aba5c8..e47d7f70dc 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -127,6 +127,7 @@ static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
struct zlib_data *z = p->compress_data;
z_stream *zs = &z->zs;
uint32_t out_size = 0;
+ uint32_t page_size = multifd_ram_page_size();
int ret;
uint32_t i;
@@ -147,8 +148,8 @@ static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
* with compression. zlib does not guarantee that this is safe,
* therefore copy the page before calling deflate().
*/
- memcpy(z->buf, pages->block->host + pages->offset[i], p->page_size);
- zs->avail_in = p->page_size;
+ memcpy(z->buf, pages->block->host + pages->offset[i], page_size);
+ zs->avail_in = page_size;
zs->next_in = z->buf;
zs->avail_out = available;
@@ -260,7 +261,8 @@ static int zlib_recv(MultiFDRecvParams *p, Error **errp)
uint32_t in_size = p->next_packet_size;
/* we measure the change of total_out */
uint32_t out_size = zs->total_out;
- uint32_t expected_size = p->normal_num * p->page_size;
+ uint32_t page_size = multifd_ram_page_size();
+ uint32_t expected_size = p->normal_num * page_size;
uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
int ret;
int i;
@@ -296,7 +298,7 @@ static int zlib_recv(MultiFDRecvParams *p, Error **errp)
flush = Z_SYNC_FLUSH;
}
- zs->avail_out = p->page_size;
+ zs->avail_out = page_size;
zs->next_out = p->host + p->normal[i];
/*
@@ -310,8 +312,8 @@ static int zlib_recv(MultiFDRecvParams *p, Error **errp)
do {
ret = inflate(zs, flush);
} while (ret == Z_OK && zs->avail_in
- && (zs->total_out - start) < p->page_size);
- if (ret == Z_OK && (zs->total_out - start) < p->page_size) {
+ && (zs->total_out - start) < page_size);
+ if (ret == Z_OK && (zs->total_out - start) < page_size) {
error_setg(errp, "multifd %u: inflate generated too few output",
p->id);
return -1;
diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index cb6075a9a5..1812fd1b48 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -139,7 +139,7 @@ static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
flush = ZSTD_e_flush;
}
z->in.src = pages->block->host + pages->offset[i];
- z->in.size = p->page_size;
+ z->in.size = multifd_ram_page_size();
z->in.pos = 0;
/*
@@ -254,7 +254,8 @@ static int zstd_recv(MultiFDRecvParams *p, Error **errp)
{
uint32_t in_size = p->next_packet_size;
uint32_t out_size = 0;
- uint32_t expected_size = p->normal_num * p->page_size;
+ uint32_t page_size = multifd_ram_page_size();
+ uint32_t expected_size = p->normal_num * page_size;
uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
struct zstd_data *z = p->compress_data;
int ret;
@@ -286,7 +287,7 @@ static int zstd_recv(MultiFDRecvParams *p, Error **errp)
for (i = 0; i < p->normal_num; i++) {
ramblock_recv_bitmap_set_offset(p->block, p->normal[i]);
z->out.dst = p->host + p->normal[i];
- z->out.size = p->page_size;
+ z->out.size = page_size;
z->out.pos = 0;
/*
@@ -300,8 +301,8 @@ static int zstd_recv(MultiFDRecvParams *p, Error **errp)
do {
ret = ZSTD_decompressStream(z->zds, &z->out, &z->in);
} while (ret > 0 && (z->in.size - z->in.pos > 0)
- && (z->out.pos < p->page_size));
- if (ret > 0 && (z->out.pos < p->page_size)) {
+ && (z->out.pos < page_size));
+ if (ret > 0 && (z->out.pos < page_size)) {
error_setg(errp, "multifd %u: decompressStream buffer too small",
p->id);
return -1;
diff --git a/migration/multifd.c b/migration/multifd.c
index 0bd9c2253e..3dfed8a005 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -133,15 +133,17 @@ static void multifd_set_file_bitmap(MultiFDSendParams *p)
*/
static int nocomp_send_setup(MultiFDSendParams *p, Error **errp)
{
+ uint32_t page_count = multifd_ram_page_count();
+
if (migrate_zero_copy_send()) {
p->write_flags |= QIO_CHANNEL_WRITE_FLAG_ZERO_COPY;
}
if (multifd_use_packets()) {
/* We need one extra place for the packet header */
- p->iov = g_new0(struct iovec, p->page_count + 1);
+ p->iov = g_new0(struct iovec, page_count + 1);
} else {
- p->iov = g_new0(struct iovec, p->page_count);
+ p->iov = g_new0(struct iovec, page_count);
}
return 0;
@@ -165,14 +167,15 @@ static void nocomp_send_cleanup(MultiFDSendParams *p, Error **errp)
static void multifd_send_prepare_iovs(MultiFDSendParams *p)
{
MultiFDPages_t *pages = p->pages;
+ uint32_t page_size = multifd_ram_page_size();
for (int i = 0; i < pages->normal_num; i++) {
p->iov[p->iovs_num].iov_base = pages->block->host + pages->offset[i];
- p->iov[p->iovs_num].iov_len = p->page_size;
+ p->iov[p->iovs_num].iov_len = page_size;
p->iovs_num++;
}
- p->next_packet_size = pages->normal_num * p->page_size;
+ p->next_packet_size = pages->normal_num * page_size;
}
/**
@@ -237,7 +240,7 @@ static int nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
*/
static int nocomp_recv_setup(MultiFDRecvParams *p, Error **errp)
{
- p->iov = g_new0(struct iovec, p->page_count);
+ p->iov = g_new0(struct iovec, multifd_ram_page_count());
return 0;
}
@@ -288,7 +291,7 @@ static int nocomp_recv(MultiFDRecvParams *p, Error **errp)
for (int i = 0; i < p->normal_num; i++) {
p->iov[i].iov_base = p->host + p->normal[i];
- p->iov[i].iov_len = p->page_size;
+ p->iov[i].iov_len = multifd_ram_page_size();
ramblock_recv_bitmap_set_offset(p->block, p->normal[i]);
}
return qio_channel_readv_all(p->c, p->iov, p->normal_num, errp);
@@ -447,6 +450,8 @@ void multifd_send_fill_packet(MultiFDSendParams *p)
static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
{
MultiFDPacket_t *packet = p->packet;
+ uint32_t page_count = multifd_ram_page_count();
+ uint32_t page_size = multifd_ram_page_size();
int i;
packet->magic = be32_to_cpu(packet->magic);
@@ -472,10 +477,10 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
* If we received a packet that is 100 times bigger than expected
* just stop migration. It is a magic number.
*/
- if (packet->pages_alloc > p->page_count) {
+ if (packet->pages_alloc > page_count) {
error_setg(errp, "multifd: received packet "
"with size %u and expected a size of %u",
- packet->pages_alloc, p->page_count) ;
+ packet->pages_alloc, page_count) ;
return -1;
}
@@ -521,7 +526,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
for (i = 0; i < p->normal_num; i++) {
uint64_t offset = be64_to_cpu(packet->offset[i]);
- if (offset > (p->block->used_length - p->page_size)) {
+ if (offset > (p->block->used_length - page_size)) {
error_setg(errp, "multifd: offset too long %" PRIu64
" (max " RAM_ADDR_FMT ")",
offset, p->block->used_length);
@@ -533,7 +538,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
for (i = 0; i < p->zero_num; i++) {
uint64_t offset = be64_to_cpu(packet->offset[p->normal_num + i]);
- if (offset > (p->block->used_length - p->page_size)) {
+ if (offset > (p->block->used_length - page_size)) {
error_setg(errp, "multifd: offset too long %" PRIu64
" (max " RAM_ADDR_FMT ")",
offset, p->block->used_length);
@@ -1157,7 +1162,7 @@ bool multifd_send_setup(void)
{
MigrationState *s = migrate_get_current();
int thread_count, ret = 0;
- uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
+ uint32_t page_count = multifd_ram_page_count();
bool use_packets = multifd_use_packets();
uint8_t i;
@@ -1191,8 +1196,6 @@ bool multifd_send_setup(void)
p->packet->version = cpu_to_be32(MULTIFD_VERSION);
}
p->name = g_strdup_printf("mig/src/send_%d", i);
- p->page_size = qemu_target_page_size();
- p->page_count = page_count;
p->write_flags = 0;
if (!multifd_new_send_channel_create(p, &local_err)) {
@@ -1569,7 +1572,7 @@ static void *multifd_recv_thread(void *opaque)
int multifd_recv_setup(Error **errp)
{
int thread_count;
- uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
+ uint32_t page_count = multifd_ram_page_count();
bool use_packets = multifd_use_packets();
uint8_t i;
@@ -1613,8 +1616,6 @@ int multifd_recv_setup(Error **errp)
p->name = g_strdup_printf("mig/dst/recv_%d", i);
p->normal = g_new0(ram_addr_t, page_count);
p->zero = g_new0(ram_addr_t, page_count);
- p->page_count = page_count;
- p->page_size = qemu_target_page_size();
}
for (i = 0; i < thread_count; i++) {
diff --git a/migration/multifd.h b/migration/multifd.h
index 0ecd6f47d7..a2bba23af9 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -13,6 +13,7 @@
#ifndef QEMU_MIGRATION_MULTIFD_H
#define QEMU_MIGRATION_MULTIFD_H
+#include "exec/target_page.h"
#include "ram.h"
typedef struct MultiFDRecvData MultiFDRecvData;
@@ -106,10 +107,6 @@ typedef struct {
QIOChannel *c;
/* packet allocated len */
uint32_t packet_len;
- /* guest page size */
- uint32_t page_size;
- /* number of pages in a full packet */
- uint32_t page_count;
/* multifd flags for sending ram */
int write_flags;
@@ -173,10 +170,6 @@ typedef struct {
QIOChannel *c;
/* packet allocated len */
uint32_t packet_len;
- /* guest page size */
- uint32_t page_size;
- /* number of pages in a full packet */
- uint32_t page_count;
/* syncs main thread and channels */
QemuSemaphore sem_sync;
@@ -254,4 +247,13 @@ static inline void multifd_send_prepare_header(MultiFDSendParams *p)
void multifd_channel_connect(MultiFDSendParams *p, QIOChannel *ioc);
+static inline uint32_t multifd_ram_page_size(void)
+{
+ return qemu_target_page_size();
+}
+
+static inline uint32_t multifd_ram_page_count(void)
+{
+ return MULTIFD_PACKET_SIZE / qemu_target_page_size();
+}
#endif
--
2.35.3
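For context, the two new helpers replace per-channel stored fields with constant computations per call. A minimal standalone sketch of the idea follows; the packet size and 4 KiB page size below are illustrative stand-ins, not necessarily QEMU's actual values:

```c
#include <stdint.h>

/*
 * Simplified sketch of the helpers added to migration/multifd.h.
 * The real code uses qemu_target_page_size(); the constants here
 * are assumptions chosen for illustration only.
 */
#define MULTIFD_PACKET_SIZE (512 * 1024) /* assumed value */
#define TARGET_PAGE_SIZE    4096         /* stand-in for qemu_target_page_size() */

static inline uint32_t multifd_ram_page_size(void)
{
    return TARGET_PAGE_SIZE;
}

static inline uint32_t multifd_ram_page_count(void)
{
    /* number of pages that fit in a full multifd packet */
    return MULTIFD_PACKET_SIZE / multifd_ram_page_size();
}
```

Since both values are derived from process-wide constants, every channel computes the same result and nothing needs to be stored in MultiFD*Params.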
* [PATCH v6 03/19] migration/multifd: Remove pages->allocated
From: Fabiano Rosas @ 2024-08-27 17:45 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero, Philippe Mathieu-Daudé
This value never changes and is always the same as page_count. We
don't need a copy of it per-channel plus one in the extra slot. Remove
it.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
migration/multifd.c | 6 ++----
migration/multifd.h | 2 --
2 files changed, 2 insertions(+), 6 deletions(-)
diff --git a/migration/multifd.c b/migration/multifd.c
index 3dfed8a005..30e5c687d3 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -396,7 +396,6 @@ static MultiFDPages_t *multifd_pages_init(uint32_t n)
{
MultiFDPages_t *pages = g_new0(MultiFDPages_t, 1);
- pages->allocated = n;
pages->offset = g_new0(ram_addr_t, n);
return pages;
@@ -405,7 +404,6 @@ static MultiFDPages_t *multifd_pages_init(uint32_t n)
static void multifd_pages_clear(MultiFDPages_t *pages)
{
multifd_pages_reset(pages);
- pages->allocated = 0;
g_free(pages->offset);
pages->offset = NULL;
g_free(pages);
@@ -420,7 +418,7 @@ void multifd_send_fill_packet(MultiFDSendParams *p)
int i;
packet->flags = cpu_to_be32(p->flags);
- packet->pages_alloc = cpu_to_be32(pages->allocated);
+ packet->pages_alloc = cpu_to_be32(multifd_ram_page_count());
packet->normal_pages = cpu_to_be32(pages->normal_num);
packet->zero_pages = cpu_to_be32(zero_num);
packet->next_packet_size = cpu_to_be32(p->next_packet_size);
@@ -651,7 +649,7 @@ static inline bool multifd_queue_empty(MultiFDPages_t *pages)
static inline bool multifd_queue_full(MultiFDPages_t *pages)
{
- return pages->num == pages->allocated;
+ return pages->num == multifd_ram_page_count();
}
static inline void multifd_enqueue(MultiFDPages_t *pages, ram_addr_t offset)
diff --git a/migration/multifd.h b/migration/multifd.h
index a2bba23af9..660a9882c2 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -76,8 +76,6 @@ typedef struct {
uint32_t num;
/* number of normal pages */
uint32_t normal_num;
- /* number of allocated pages */
- uint32_t allocated;
/* offset of each page */
ram_addr_t *offset;
RAMBlock *block;
--
2.35.3
* [PATCH v6 04/19] migration/multifd: Pass in MultiFDPages_t to file_write_ramblock_iov
From: Fabiano Rosas @ 2024-08-27 17:45 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero, Philippe Mathieu-Daudé
We want to stop dereferencing 'pages' so it can be replaced by an
opaque pointer in the next patches.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
migration/file.c | 3 ++-
migration/file.h | 2 +-
migration/multifd.c | 2 +-
3 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/migration/file.c b/migration/file.c
index 6451a21c86..7f11e26f5c 100644
--- a/migration/file.c
+++ b/migration/file.c
@@ -196,12 +196,13 @@ void file_start_incoming_migration(FileMigrationArgs *file_args, Error **errp)
}
int file_write_ramblock_iov(QIOChannel *ioc, const struct iovec *iov,
- int niov, RAMBlock *block, Error **errp)
+ int niov, MultiFDPages_t *pages, Error **errp)
{
ssize_t ret = 0;
int i, slice_idx, slice_num;
uintptr_t base, next, offset;
size_t len;
+ RAMBlock *block = pages->block;
slice_idx = 0;
slice_num = 1;
diff --git a/migration/file.h b/migration/file.h
index 9f71e87f74..1a1115f7f1 100644
--- a/migration/file.h
+++ b/migration/file.h
@@ -21,6 +21,6 @@ int file_parse_offset(char *filespec, uint64_t *offsetp, Error **errp);
void file_cleanup_outgoing_migration(void);
bool file_send_channel_create(gpointer opaque, Error **errp);
int file_write_ramblock_iov(QIOChannel *ioc, const struct iovec *iov,
- int niov, RAMBlock *block, Error **errp);
+ int niov, MultiFDPages_t *pages, Error **errp);
int multifd_file_recv_data(MultiFDRecvParams *p, Error **errp);
#endif
diff --git a/migration/multifd.c b/migration/multifd.c
index 30e5c687d3..640e4450ff 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -956,7 +956,7 @@ static void *multifd_send_thread(void *opaque)
if (migrate_mapped_ram()) {
ret = file_write_ramblock_iov(p->c, p->iov, p->iovs_num,
- pages->block, &local_err);
+ pages, &local_err);
} else {
ret = qio_channel_writev_full_all(p->c, p->iov, p->iovs_num,
NULL, 0, p->write_flags,
--
2.35.3
* [PATCH v6 05/19] migration/multifd: Introduce MultiFDSendData
From: Fabiano Rosas @ 2024-08-27 17:45 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero, Philippe Mathieu-Daudé
Add a new data structure to replace p->pages in the multifd
channel. This new structure will hide the multifd payload type behind
a union, so we don't need to add a new field to the channel each time
we want to handle a different data type.
This also allows us to keep multifd_send_pages() as is, without needing
to complicate the pointer switching.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
migration/multifd.h | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)
diff --git a/migration/multifd.h b/migration/multifd.h
index 660a9882c2..7bb4a2cbc4 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -17,6 +17,7 @@
#include "ram.h"
typedef struct MultiFDRecvData MultiFDRecvData;
+typedef struct MultiFDSendData MultiFDSendData;
bool multifd_send_setup(void);
void multifd_send_shutdown(void);
@@ -88,6 +89,31 @@ struct MultiFDRecvData {
off_t file_offset;
};
+typedef enum {
+ MULTIFD_PAYLOAD_NONE,
+ MULTIFD_PAYLOAD_RAM,
+} MultiFDPayloadType;
+
+typedef union MultiFDPayload {
+ MultiFDPages_t ram;
+} MultiFDPayload;
+
+struct MultiFDSendData {
+ MultiFDPayloadType type;
+ MultiFDPayload u;
+};
+
+static inline bool multifd_payload_empty(MultiFDSendData *data)
+{
+ return data->type == MULTIFD_PAYLOAD_NONE;
+}
+
+static inline void multifd_set_payload_type(MultiFDSendData *data,
+ MultiFDPayloadType type)
+{
+ data->type = type;
+}
+
typedef struct {
/* Fields are only written at creating/deletion time */
/* No lock required for them, they are read only */
--
2.35.3
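To make the intent concrete, here is a minimal standalone sketch of the new payload container and helpers, mirroring the hunk above with MultiFDPages_t reduced to a stub:

```c
#include <stdbool.h>
#include <stdint.h>

/* Stub standing in for the real MultiFDPages_t */
typedef struct {
    uint32_t num;
} MultiFDPages_t;

typedef enum {
    MULTIFD_PAYLOAD_NONE,
    MULTIFD_PAYLOAD_RAM,
} MultiFDPayloadType;

typedef union MultiFDPayload {
    MultiFDPages_t ram;
} MultiFDPayload;

typedef struct MultiFDSendData {
    MultiFDPayloadType type;
    MultiFDPayload u;
} MultiFDSendData;

static inline bool multifd_payload_empty(MultiFDSendData *data)
{
    return data->type == MULTIFD_PAYLOAD_NONE;
}

static inline void multifd_set_payload_type(MultiFDSendData *data,
                                            MultiFDPayloadType type)
{
    data->type = type;
}
```

A hypothetical future payload (e.g. device state) would only add an enum value and a union member; the per-channel pointer type and the send-side pointer switching stay unchanged.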
* [PATCH v6 06/19] migration/multifd: Make MultiFDPages_t:offset a flexible array member
From: Fabiano Rosas @ 2024-08-27 17:45 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero, Philippe Mathieu-Daudé
We're about to use MultiFDPages_t from inside the MultiFDSendData
payload union, which means we cannot have pointers to allocated data
inside the pages structure, otherwise we'd lose the reference to that
memory once another payload type touches the union. Move the offset
array into the end of the structure and turn it into a flexible array
member, so it is allocated along with the rest of MultiFDSendData in
the next patches.
Note that other pointers, such as the ramblock pointer, are still fine
as long as the storage for them is not owned by the migration code and
can be correctly released at some point.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
migration/multifd.c | 19 ++++++++++++-------
migration/multifd.h | 4 ++--
2 files changed, 14 insertions(+), 9 deletions(-)
diff --git a/migration/multifd.c b/migration/multifd.c
index 640e4450ff..717e71f539 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -98,6 +98,17 @@ struct {
MultiFDMethods *ops;
} *multifd_recv_state;
+static size_t multifd_ram_payload_size(void)
+{
+ uint32_t n = multifd_ram_page_count();
+
+ /*
+ * We keep an array of page offsets at the end of MultiFDPages_t,
+ * add space for it in the allocation.
+ */
+ return sizeof(MultiFDPages_t) + n * sizeof(ram_addr_t);
+}
+
static bool multifd_use_packets(void)
{
return !migrate_mapped_ram();
@@ -394,18 +405,12 @@ static int multifd_recv_initial_packet(QIOChannel *c, Error **errp)
static MultiFDPages_t *multifd_pages_init(uint32_t n)
{
- MultiFDPages_t *pages = g_new0(MultiFDPages_t, 1);
-
- pages->offset = g_new0(ram_addr_t, n);
-
- return pages;
+ return g_malloc0(multifd_ram_payload_size());
}
static void multifd_pages_clear(MultiFDPages_t *pages)
{
multifd_pages_reset(pages);
- g_free(pages->offset);
- pages->offset = NULL;
g_free(pages);
}
diff --git a/migration/multifd.h b/migration/multifd.h
index 7bb4a2cbc4..a7fdd97f70 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -77,9 +77,9 @@ typedef struct {
uint32_t num;
/* number of normal pages */
uint32_t normal_num;
+ RAMBlock *block;
/* offset of each page */
- ram_addr_t *offset;
- RAMBlock *block;
+ ram_addr_t offset[];
} MultiFDPages_t;
struct MultiFDRecvData {
--
2.35.3
* [PATCH v6 07/19] migration/multifd: Replace p->pages with an union pointer
2024-08-27 17:45 [PATCH v6 00/19] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
` (5 preceding siblings ...)
2024-08-27 17:45 ` [PATCH v6 06/19] migration/multifd: Make MultiFDPages_t:offset a flexible array member Fabiano Rosas
@ 2024-08-27 17:45 ` Fabiano Rosas
2024-08-27 17:45 ` [PATCH v6 08/19] migration/multifd: Move pages accounting into multifd_send_zero_page_detect() Fabiano Rosas
` (11 subsequent siblings)
18 siblings, 0 replies; 34+ messages in thread
From: Fabiano Rosas @ 2024-08-27 17:45 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero, Philippe Mathieu-Daudé
We want multifd to be able to handle more types of data than just ram
pages. To start decoupling multifd from pages, replace p->pages
(MultiFDPages_t) with the new type MultiFDSendData, which hides the
client payload inside a union.
The general idea here is to isolate the functions that *need* to handle
MultiFDPages_t so they can move to multifd-ram.c in the future, while
multifd.c keeps only the core functions that handle
MultiFDSendData/MultiFDRecvData.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
migration/multifd-qpl.c | 6 +--
migration/multifd-uadk.c | 2 +-
migration/multifd-zero-page.c | 2 +-
migration/multifd-zlib.c | 2 +-
migration/multifd-zstd.c | 2 +-
migration/multifd.c | 83 +++++++++++++++++++++--------------
migration/multifd.h | 7 +--
7 files changed, 57 insertions(+), 47 deletions(-)
diff --git a/migration/multifd-qpl.c b/migration/multifd-qpl.c
index db60c05795..21153f1987 100644
--- a/migration/multifd-qpl.c
+++ b/migration/multifd-qpl.c
@@ -406,7 +406,7 @@ retry:
static void multifd_qpl_compress_pages_slow_path(MultiFDSendParams *p)
{
QplData *qpl = p->compress_data;
- MultiFDPages_t *pages = p->pages;
+ MultiFDPages_t *pages = &p->data->u.ram;
uint32_t size = p->page_size;
qpl_job *job = qpl->sw_job;
uint8_t *zbuf = qpl->zbuf;
@@ -437,7 +437,7 @@ static void multifd_qpl_compress_pages_slow_path(MultiFDSendParams *p)
static void multifd_qpl_compress_pages(MultiFDSendParams *p)
{
QplData *qpl = p->compress_data;
- MultiFDPages_t *pages = p->pages;
+ MultiFDPages_t *pages = &p->data->u.ram;
uint32_t size = p->page_size;
QplHwJob *hw_job;
uint8_t *buf;
@@ -501,7 +501,7 @@ static void multifd_qpl_compress_pages(MultiFDSendParams *p)
static int multifd_qpl_send_prepare(MultiFDSendParams *p, Error **errp)
{
QplData *qpl = p->compress_data;
- MultiFDPages_t *pages = p->pages;
+ MultiFDPages_t *pages = &p->data->u.ram;
uint32_t len = 0;
if (!multifd_send_prepare_common(p)) {
diff --git a/migration/multifd-uadk.c b/migration/multifd-uadk.c
index 1ed1c6afe6..9d99807af5 100644
--- a/migration/multifd-uadk.c
+++ b/migration/multifd-uadk.c
@@ -177,7 +177,7 @@ static int multifd_uadk_send_prepare(MultiFDSendParams *p, Error **errp)
uint32_t page_size = multifd_ram_page_size();
uint8_t *buf = uadk_data->buf;
int ret = 0;
- MultiFDPages_t *pages = p->pages;
+ MultiFDPages_t *pages = &p->data->u.ram;
if (!multifd_send_prepare_common(p)) {
goto out;
diff --git a/migration/multifd-zero-page.c b/migration/multifd-zero-page.c
index cc624e36b3..6506a4aa89 100644
--- a/migration/multifd-zero-page.c
+++ b/migration/multifd-zero-page.c
@@ -46,7 +46,7 @@ static void swap_page_offset(ram_addr_t *pages_offset, int a, int b)
*/
void multifd_send_zero_page_detect(MultiFDSendParams *p)
{
- MultiFDPages_t *pages = p->pages;
+ MultiFDPages_t *pages = &p->data->u.ram;
RAMBlock *rb = pages->block;
int i = 0;
int j = pages->num - 1;
diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index e47d7f70dc..66517c1067 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -123,7 +123,7 @@ static void zlib_send_cleanup(MultiFDSendParams *p, Error **errp)
*/
static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
{
- MultiFDPages_t *pages = p->pages;
+ MultiFDPages_t *pages = &p->data->u.ram;
struct zlib_data *z = p->compress_data;
z_stream *zs = &z->zs;
uint32_t out_size = 0;
diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index 1812fd1b48..04ac711cf4 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -119,7 +119,7 @@ static void zstd_send_cleanup(MultiFDSendParams *p, Error **errp)
*/
static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
{
- MultiFDPages_t *pages = p->pages;
+ MultiFDPages_t *pages = &p->data->u.ram;
struct zstd_data *z = p->compress_data;
int ret;
uint32_t i;
diff --git a/migration/multifd.c b/migration/multifd.c
index 717e71f539..c310d28532 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -49,8 +49,7 @@ typedef struct {
struct {
MultiFDSendParams *params;
- /* array of pages to sent */
- MultiFDPages_t *pages;
+ MultiFDSendData *data;
/*
* Global number of generated multifd packets.
*
@@ -109,6 +108,28 @@ static size_t multifd_ram_payload_size(void)
return sizeof(MultiFDPages_t) + n * sizeof(ram_addr_t);
}
+static MultiFDSendData *multifd_send_data_alloc(void)
+{
+ size_t max_payload_size, size_minus_payload;
+
+ /*
+ * MultiFDPages_t has a flexible array at the end, account for it
+ * when allocating MultiFDSendData. Use max() in case other types
+ * added to the union in the future are larger than
+ * (MultiFDPages_t + flex array).
+ */
+ max_payload_size = MAX(multifd_ram_payload_size(), sizeof(MultiFDPayload));
+
+ /*
+ * Account for any holes the compiler might insert. We can't pack
+ * the structure because that misaligns the members and triggers
+ * Waddress-of-packed-member.
+ */
+ size_minus_payload = sizeof(MultiFDSendData) - sizeof(MultiFDPayload);
+
+ return g_malloc0(size_minus_payload + max_payload_size);
+}
+
static bool multifd_use_packets(void)
{
return !migrate_mapped_ram();
@@ -121,7 +142,7 @@ void multifd_send_channel_created(void)
static void multifd_set_file_bitmap(MultiFDSendParams *p)
{
- MultiFDPages_t *pages = p->pages;
+ MultiFDPages_t *pages = &p->data->u.ram;
assert(pages->block);
@@ -177,7 +198,7 @@ static void nocomp_send_cleanup(MultiFDSendParams *p, Error **errp)
static void multifd_send_prepare_iovs(MultiFDSendParams *p)
{
- MultiFDPages_t *pages = p->pages;
+ MultiFDPages_t *pages = &p->data->u.ram;
uint32_t page_size = multifd_ram_page_size();
for (int i = 0; i < pages->normal_num; i++) {
@@ -403,21 +424,10 @@ static int multifd_recv_initial_packet(QIOChannel *c, Error **errp)
return msg.id;
}
-static MultiFDPages_t *multifd_pages_init(uint32_t n)
-{
- return g_malloc0(multifd_ram_payload_size());
-}
-
-static void multifd_pages_clear(MultiFDPages_t *pages)
-{
- multifd_pages_reset(pages);
- g_free(pages);
-}
-
void multifd_send_fill_packet(MultiFDSendParams *p)
{
MultiFDPacket_t *packet = p->packet;
- MultiFDPages_t *pages = p->pages;
+ MultiFDPages_t *pages = &p->data->u.ram;
uint64_t packet_num;
uint32_t zero_num = pages->num - pages->normal_num;
int i;
@@ -599,7 +609,7 @@ static bool multifd_send_pages(void)
int i;
static int next_channel;
MultiFDSendParams *p = NULL; /* make happy gcc */
- MultiFDPages_t *pages = multifd_send_state->pages;
+ MultiFDSendData *tmp;
if (multifd_send_should_exit()) {
return false;
@@ -634,11 +644,14 @@ static bool multifd_send_pages(void)
* qatomic_store_release() in multifd_send_thread().
*/
smp_mb_acquire();
- assert(!p->pages->num);
- multifd_send_state->pages = p->pages;
- p->pages = pages;
+
+ assert(!p->data->u.ram.num);
+
+ tmp = multifd_send_state->data;
+ multifd_send_state->data = p->data;
+ p->data = tmp;
/*
- * Making sure p->pages is setup before marking pending_job=true. Pairs
+ * Making sure p->data is setup before marking pending_job=true. Pairs
* with the qatomic_load_acquire() in multifd_send_thread().
*/
qatomic_store_release(&p->pending_job, true);
@@ -668,7 +681,7 @@ bool multifd_queue_page(RAMBlock *block, ram_addr_t offset)
MultiFDPages_t *pages;
retry:
- pages = multifd_send_state->pages;
+ pages = &multifd_send_state->data->u.ram;
/* If the queue is empty, we can already enqueue now */
if (multifd_queue_empty(pages)) {
@@ -798,8 +811,8 @@ static bool multifd_send_cleanup_channel(MultiFDSendParams *p, Error **errp)
qemu_sem_destroy(&p->sem_sync);
g_free(p->name);
p->name = NULL;
- multifd_pages_clear(p->pages);
- p->pages = NULL;
+ g_free(p->data);
+ p->data = NULL;
p->packet_len = 0;
g_free(p->packet);
p->packet = NULL;
@@ -816,8 +829,8 @@ static void multifd_send_cleanup_state(void)
qemu_sem_destroy(&multifd_send_state->channels_ready);
g_free(multifd_send_state->params);
multifd_send_state->params = NULL;
- multifd_pages_clear(multifd_send_state->pages);
- multifd_send_state->pages = NULL;
+ g_free(multifd_send_state->data);
+ multifd_send_state->data = NULL;
g_free(multifd_send_state);
multifd_send_state = NULL;
}
@@ -866,11 +879,13 @@ int multifd_send_sync_main(void)
{
int i;
bool flush_zero_copy;
+ MultiFDPages_t *pages;
if (!migrate_multifd()) {
return 0;
}
- if (multifd_send_state->pages->num) {
+ pages = &multifd_send_state->data->u.ram;
+ if (pages->num) {
if (!multifd_send_pages()) {
error_report("%s: multifd_send_pages fail", __func__);
return -1;
@@ -945,11 +960,11 @@ static void *multifd_send_thread(void *opaque)
}
/*
- * Read pending_job flag before p->pages. Pairs with the
+ * Read pending_job flag before p->data. Pairs with the
* qatomic_store_release() in multifd_send_pages().
*/
if (qatomic_load_acquire(&p->pending_job)) {
- MultiFDPages_t *pages = p->pages;
+ MultiFDPages_t *pages = &p->data->u.ram;
p->iovs_num = 0;
assert(pages->num);
@@ -961,7 +976,7 @@ static void *multifd_send_thread(void *opaque)
if (migrate_mapped_ram()) {
ret = file_write_ramblock_iov(p->c, p->iov, p->iovs_num,
- pages, &local_err);
+ &p->data->u.ram, &local_err);
} else {
ret = qio_channel_writev_full_all(p->c, p->iov, p->iovs_num,
NULL, 0, p->write_flags,
@@ -981,7 +996,7 @@ static void *multifd_send_thread(void *opaque)
p->next_packet_size = 0;
/*
- * Making sure p->pages is published before saying "we're
+ * Making sure p->data is published before saying "we're
* free". Pairs with the smp_mb_acquire() in
* multifd_send_pages().
*/
@@ -1176,7 +1191,7 @@ bool multifd_send_setup(void)
thread_count = migrate_multifd_channels();
multifd_send_state = g_malloc0(sizeof(*multifd_send_state));
multifd_send_state->params = g_new0(MultiFDSendParams, thread_count);
- multifd_send_state->pages = multifd_pages_init(page_count);
+ multifd_send_state->data = multifd_send_data_alloc();
qemu_sem_init(&multifd_send_state->channels_created, 0);
qemu_sem_init(&multifd_send_state->channels_ready, 0);
qatomic_set(&multifd_send_state->exiting, 0);
@@ -1189,7 +1204,7 @@ bool multifd_send_setup(void)
qemu_sem_init(&p->sem, 0);
qemu_sem_init(&p->sem_sync, 0);
p->id = i;
- p->pages = multifd_pages_init(page_count);
+ p->data = multifd_send_data_alloc();
if (use_packets) {
p->packet_len = sizeof(MultiFDPacket_t)
@@ -1694,7 +1709,7 @@ void multifd_recv_new_channel(QIOChannel *ioc, Error **errp)
bool multifd_send_prepare_common(MultiFDSendParams *p)
{
- MultiFDPages_t *pages = p->pages;
+ MultiFDPages_t *pages = &p->data->u.ram;
multifd_send_zero_page_detect(p);
if (!pages->normal_num) {
diff --git a/migration/multifd.h b/migration/multifd.h
index a7fdd97f70..c2ba4cad13 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -152,12 +152,7 @@ typedef struct {
*/
bool pending_job;
bool pending_sync;
- /* array of pages to sent.
- * The owner of 'pages' depends of 'pending_job' value:
- * pending_job == 0 -> migration_thread can use it.
- * pending_job != 0 -> multifd_channel can use it.
- */
- MultiFDPages_t *pages;
+ MultiFDSendData *data;
/* thread local variables. No locking required */
--
2.35.3
* [PATCH v6 08/19] migration/multifd: Move pages accounting into multifd_send_zero_page_detect()
2024-08-27 17:45 [PATCH v6 00/19] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
` (6 preceding siblings ...)
2024-08-27 17:45 ` [PATCH v6 07/19] migration/multifd: Replace p->pages with an union pointer Fabiano Rosas
@ 2024-08-27 17:45 ` Fabiano Rosas
2024-08-27 17:45 ` [PATCH v6 09/19] migration/multifd: Remove total pages tracing Fabiano Rosas
` (10 subsequent siblings)
18 siblings, 0 replies; 34+ messages in thread
From: Fabiano Rosas @ 2024-08-27 17:45 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero, Philippe Mathieu-Daudé
All references to pages are being removed from the multifd worker
threads in order to allow multifd to deal with different payload
types.
multifd_send_zero_page_detect() is called by all multifd migration
paths that deal with pages and is the last spot where zero pages and
normal page amounts are adjusted. Move the pages accounting into that
function.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
migration/multifd-zero-page.c | 7 ++++++-
migration/multifd.c | 2 --
2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/migration/multifd-zero-page.c b/migration/multifd-zero-page.c
index 6506a4aa89..f1e988a959 100644
--- a/migration/multifd-zero-page.c
+++ b/migration/multifd-zero-page.c
@@ -14,6 +14,7 @@
#include "qemu/cutils.h"
#include "exec/ramblock.h"
#include "migration.h"
+#include "migration-stats.h"
#include "multifd.h"
#include "options.h"
#include "ram.h"
@@ -53,7 +54,7 @@ void multifd_send_zero_page_detect(MultiFDSendParams *p)
if (!multifd_zero_page_enabled()) {
pages->normal_num = pages->num;
- return;
+ goto out;
}
/*
@@ -74,6 +75,10 @@ void multifd_send_zero_page_detect(MultiFDSendParams *p)
}
pages->normal_num = i;
+
+out:
+ stat64_add(&mig_stats.normal_pages, pages->normal_num);
+ stat64_add(&mig_stats.zero_pages, pages->num - pages->normal_num);
}
void multifd_recv_zero_page_process(MultiFDRecvParams *p)
diff --git a/migration/multifd.c b/migration/multifd.c
index c310d28532..410b7e12cc 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -989,8 +989,6 @@ static void *multifd_send_thread(void *opaque)
stat64_add(&mig_stats.multifd_bytes,
p->next_packet_size + p->packet_len);
- stat64_add(&mig_stats.normal_pages, pages->normal_num);
- stat64_add(&mig_stats.zero_pages, pages->num - pages->normal_num);
multifd_pages_reset(pages);
p->next_packet_size = 0;
--
2.35.3
* [PATCH v6 09/19] migration/multifd: Remove total pages tracing
2024-08-27 17:45 [PATCH v6 00/19] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
` (7 preceding siblings ...)
2024-08-27 17:45 ` [PATCH v6 08/19] migration/multifd: Move pages accounting into multifd_send_zero_page_detect() Fabiano Rosas
@ 2024-08-27 17:45 ` Fabiano Rosas
2024-08-27 17:45 ` [PATCH v6 10/19] migration/multifd: Isolate ram pages packet data Fabiano Rosas
` (9 subsequent siblings)
18 siblings, 0 replies; 34+ messages in thread
From: Fabiano Rosas @ 2024-08-27 17:45 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero, Philippe Mathieu-Daudé
The total_normal_pages and total_zero_pages elements are used only for
the end tracepoints of the multifd threads. These are of limited use
since they record per-channel numbers that are just the sum of the
pages transmitted per-packet, for which we already have
tracepoints. Remove the totals from the tracing.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
migration/multifd.c | 12 ++----------
migration/multifd.h | 8 --------
migration/trace-events | 4 ++--
3 files changed, 4 insertions(+), 20 deletions(-)
diff --git a/migration/multifd.c b/migration/multifd.c
index 410b7e12cc..df8dfcc98f 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -453,8 +453,6 @@ void multifd_send_fill_packet(MultiFDSendParams *p)
}
p->packets_sent++;
- p->total_normal_pages += pages->normal_num;
- p->total_zero_pages += zero_num;
trace_multifd_send(p->id, packet_num, pages->normal_num, zero_num,
p->flags, p->next_packet_size);
@@ -516,8 +514,6 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
p->next_packet_size = be32_to_cpu(packet->next_packet_size);
p->packet_num = be64_to_cpu(packet->packet_num);
p->packets_recved++;
- p->total_normal_pages += p->normal_num;
- p->total_zero_pages += p->zero_num;
trace_multifd_recv(p->id, p->packet_num, p->normal_num, p->zero_num,
p->flags, p->next_packet_size);
@@ -1036,8 +1032,7 @@ out:
rcu_unregister_thread();
migration_threads_remove(thread);
- trace_multifd_send_thread_end(p->id, p->packets_sent, p->total_normal_pages,
- p->total_zero_pages);
+ trace_multifd_send_thread_end(p->id, p->packets_sent);
return NULL;
}
@@ -1561,7 +1556,6 @@ static void *multifd_recv_thread(void *opaque)
qemu_sem_wait(&p->sem_sync);
}
} else {
- p->total_normal_pages += p->data->size / qemu_target_page_size();
p->data->size = 0;
/*
* Order data->size update before clearing
@@ -1578,9 +1572,7 @@ static void *multifd_recv_thread(void *opaque)
}
rcu_unregister_thread();
- trace_multifd_recv_thread_end(p->id, p->packets_recved,
- p->total_normal_pages,
- p->total_zero_pages);
+ trace_multifd_recv_thread_end(p->id, p->packets_recved);
return NULL;
}
diff --git a/migration/multifd.h b/migration/multifd.h
index c2ba4cad13..9175104aea 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -162,10 +162,6 @@ typedef struct {
uint32_t next_packet_size;
/* packets sent through this channel */
uint64_t packets_sent;
- /* non zero pages sent through this channel */
- uint64_t total_normal_pages;
- /* zero pages sent through this channel */
- uint64_t total_zero_pages;
/* buffers to send */
struct iovec *iov;
/* number of iovs used */
@@ -218,10 +214,6 @@ typedef struct {
RAMBlock *block;
/* ramblock host address */
uint8_t *host;
- /* non zero pages recv through this channel */
- uint64_t total_normal_pages;
- /* zero pages recv through this channel */
- uint64_t total_zero_pages;
/* buffers to recv */
struct iovec *iov;
/* Pages that are not zero */
diff --git a/migration/trace-events b/migration/trace-events
index 0b7c3324fb..0887cef912 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -134,7 +134,7 @@ multifd_recv_sync_main(long packet_num) "packet num %ld"
multifd_recv_sync_main_signal(uint8_t id) "channel %u"
multifd_recv_sync_main_wait(uint8_t id) "iter %u"
multifd_recv_terminate_threads(bool error) "error %d"
-multifd_recv_thread_end(uint8_t id, uint64_t packets, uint64_t normal_pages, uint64_t zero_pages) "channel %u packets %" PRIu64 " normal pages %" PRIu64 " zero pages %" PRIu64
+multifd_recv_thread_end(uint8_t id, uint64_t packets) "channel %u packets %" PRIu64
multifd_recv_thread_start(uint8_t id) "%u"
multifd_send(uint8_t id, uint64_t packet_num, uint32_t normal_pages, uint32_t zero_pages, uint32_t flags, uint32_t next_packet_size) "channel %u packet_num %" PRIu64 " normal pages %u zero pages %u flags 0x%x next packet size %u"
multifd_send_error(uint8_t id) "channel %u"
@@ -142,7 +142,7 @@ multifd_send_sync_main(long packet_num) "packet num %ld"
multifd_send_sync_main_signal(uint8_t id) "channel %u"
multifd_send_sync_main_wait(uint8_t id) "channel %u"
multifd_send_terminate_threads(void) ""
-multifd_send_thread_end(uint8_t id, uint64_t packets, uint64_t normal_pages, uint64_t zero_pages) "channel %u packets %" PRIu64 " normal pages %" PRIu64 " zero pages %" PRIu64
+multifd_send_thread_end(uint8_t id, uint64_t packets) "channel %u packets %" PRIu64
multifd_send_thread_start(uint8_t id) "%u"
multifd_tls_outgoing_handshake_start(void *ioc, void *tioc, const char *hostname) "ioc=%p tioc=%p hostname=%s"
multifd_tls_outgoing_handshake_error(void *ioc, const char *err) "ioc=%p err=%s"
--
2.35.3
* [PATCH v6 10/19] migration/multifd: Isolate ram pages packet data
2024-08-27 17:45 [PATCH v6 00/19] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
` (8 preceding siblings ...)
2024-08-27 17:45 ` [PATCH v6 09/19] migration/multifd: Remove total pages tracing Fabiano Rosas
@ 2024-08-27 17:45 ` Fabiano Rosas
2024-08-27 17:45 ` [PATCH v6 11/19] migration/multifd: Don't send ram data during SYNC Fabiano Rosas
` (8 subsequent siblings)
18 siblings, 0 replies; 34+ messages in thread
From: Fabiano Rosas @ 2024-08-27 17:45 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero, Philippe Mathieu-Daudé
While we cannot yet disentangle the multifd packet from page data, we
can make the code a bit cleaner by setting the page-related fields in
a separate function.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
migration/multifd.c | 99 +++++++++++++++++++++++++-----------------
migration/trace-events | 5 ++-
2 files changed, 63 insertions(+), 41 deletions(-)
diff --git a/migration/multifd.c b/migration/multifd.c
index df8dfcc98f..d64fcdf4ac 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -424,65 +424,61 @@ static int multifd_recv_initial_packet(QIOChannel *c, Error **errp)
return msg.id;
}
-void multifd_send_fill_packet(MultiFDSendParams *p)
+static void multifd_ram_fill_packet(MultiFDSendParams *p)
{
MultiFDPacket_t *packet = p->packet;
MultiFDPages_t *pages = &p->data->u.ram;
- uint64_t packet_num;
uint32_t zero_num = pages->num - pages->normal_num;
- int i;
- packet->flags = cpu_to_be32(p->flags);
packet->pages_alloc = cpu_to_be32(multifd_ram_page_count());
packet->normal_pages = cpu_to_be32(pages->normal_num);
packet->zero_pages = cpu_to_be32(zero_num);
- packet->next_packet_size = cpu_to_be32(p->next_packet_size);
-
- packet_num = qatomic_fetch_inc(&multifd_send_state->packet_num);
- packet->packet_num = cpu_to_be64(packet_num);
if (pages->block) {
strncpy(packet->ramblock, pages->block->idstr, 256);
}
- for (i = 0; i < pages->num; i++) {
+ for (int i = 0; i < pages->num; i++) {
/* there are architectures where ram_addr_t is 32 bit */
uint64_t temp = pages->offset[i];
packet->offset[i] = cpu_to_be64(temp);
}
+ trace_multifd_send_ram_fill(p->id, pages->normal_num, zero_num);
+}
+
+void multifd_send_fill_packet(MultiFDSendParams *p)
+{
+ MultiFDPacket_t *packet = p->packet;
+ uint64_t packet_num;
+
+ memset(packet, 0, p->packet_len);
+
+ packet->magic = cpu_to_be32(MULTIFD_MAGIC);
+ packet->version = cpu_to_be32(MULTIFD_VERSION);
+
+ packet->flags = cpu_to_be32(p->flags);
+ packet->next_packet_size = cpu_to_be32(p->next_packet_size);
+
+ packet_num = qatomic_fetch_inc(&multifd_send_state->packet_num);
+ packet->packet_num = cpu_to_be64(packet_num);
+
p->packets_sent++;
- trace_multifd_send(p->id, packet_num, pages->normal_num, zero_num,
- p->flags, p->next_packet_size);
+ multifd_ram_fill_packet(p);
+
+ trace_multifd_send_fill(p->id, packet_num,
+ p->flags, p->next_packet_size);
}
-static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
+static int multifd_ram_unfill_packet(MultiFDRecvParams *p, Error **errp)
{
MultiFDPacket_t *packet = p->packet;
uint32_t page_count = multifd_ram_page_count();
uint32_t page_size = multifd_ram_page_size();
int i;
- packet->magic = be32_to_cpu(packet->magic);
- if (packet->magic != MULTIFD_MAGIC) {
- error_setg(errp, "multifd: received packet "
- "magic %x and expected magic %x",
- packet->magic, MULTIFD_MAGIC);
- return -1;
- }
-
- packet->version = be32_to_cpu(packet->version);
- if (packet->version != MULTIFD_VERSION) {
- error_setg(errp, "multifd: received packet "
- "version %u and expected version %u",
- packet->version, MULTIFD_VERSION);
- return -1;
- }
-
- p->flags = be32_to_cpu(packet->flags);
-
packet->pages_alloc = be32_to_cpu(packet->pages_alloc);
/*
* If we received a packet that is 100 times bigger than expected
@@ -511,13 +507,6 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
return -1;
}
- p->next_packet_size = be32_to_cpu(packet->next_packet_size);
- p->packet_num = be64_to_cpu(packet->packet_num);
- p->packets_recved++;
-
- trace_multifd_recv(p->id, p->packet_num, p->normal_num, p->zero_num,
- p->flags, p->next_packet_size);
-
if (p->normal_num == 0 && p->zero_num == 0) {
return 0;
}
@@ -559,6 +548,40 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
return 0;
}
+static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
+{
+ MultiFDPacket_t *packet = p->packet;
+ int ret = 0;
+
+ packet->magic = be32_to_cpu(packet->magic);
+ if (packet->magic != MULTIFD_MAGIC) {
+ error_setg(errp, "multifd: received packet "
+ "magic %x and expected magic %x",
+ packet->magic, MULTIFD_MAGIC);
+ return -1;
+ }
+
+ packet->version = be32_to_cpu(packet->version);
+ if (packet->version != MULTIFD_VERSION) {
+ error_setg(errp, "multifd: received packet "
+ "version %u and expected version %u",
+ packet->version, MULTIFD_VERSION);
+ return -1;
+ }
+
+ p->flags = be32_to_cpu(packet->flags);
+ p->next_packet_size = be32_to_cpu(packet->next_packet_size);
+ p->packet_num = be64_to_cpu(packet->packet_num);
+ p->packets_recved++;
+
+ ret = multifd_ram_unfill_packet(p, errp);
+
+ trace_multifd_recv_unfill(p->id, p->packet_num, p->flags,
+ p->next_packet_size);
+
+ return ret;
+}
+
static bool multifd_send_should_exit(void)
{
return qatomic_read(&multifd_send_state->exiting);
@@ -1203,8 +1226,6 @@ bool multifd_send_setup(void)
p->packet_len = sizeof(MultiFDPacket_t)
+ sizeof(uint64_t) * page_count;
p->packet = g_malloc0(p->packet_len);
- p->packet->magic = cpu_to_be32(MULTIFD_MAGIC);
- p->packet->version = cpu_to_be32(MULTIFD_VERSION);
}
p->name = g_strdup_printf("mig/src/send_%d", i);
p->write_flags = 0;
diff --git a/migration/trace-events b/migration/trace-events
index 0887cef912..c65902f042 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -128,7 +128,7 @@ postcopy_preempt_reset_channel(void) ""
# multifd.c
multifd_new_send_channel_async(uint8_t id) "channel %u"
multifd_new_send_channel_async_error(uint8_t id, void *err) "channel=%u err=%p"
-multifd_recv(uint8_t id, uint64_t packet_num, uint32_t normal, uint32_t zero, uint32_t flags, uint32_t next_packet_size) "channel %u packet_num %" PRIu64 " normal pages %u zero pages %u flags 0x%x next packet size %u"
+multifd_recv_unfill(uint8_t id, uint64_t packet_num, uint32_t flags, uint32_t next_packet_size) "channel %u packet_num %" PRIu64 " flags 0x%x next packet size %u"
multifd_recv_new_channel(uint8_t id) "channel %u"
multifd_recv_sync_main(long packet_num) "packet num %ld"
multifd_recv_sync_main_signal(uint8_t id) "channel %u"
@@ -136,7 +136,8 @@ multifd_recv_sync_main_wait(uint8_t id) "iter %u"
multifd_recv_terminate_threads(bool error) "error %d"
multifd_recv_thread_end(uint8_t id, uint64_t packets) "channel %u packets %" PRIu64
multifd_recv_thread_start(uint8_t id) "%u"
-multifd_send(uint8_t id, uint64_t packet_num, uint32_t normal_pages, uint32_t zero_pages, uint32_t flags, uint32_t next_packet_size) "channel %u packet_num %" PRIu64 " normal pages %u zero pages %u flags 0x%x next packet size %u"
+multifd_send_fill(uint8_t id, uint64_t packet_num, uint32_t flags, uint32_t next_packet_size) "channel %u packet_num %" PRIu64 " flags 0x%x next packet size %u"
+multifd_send_ram_fill(uint8_t id, uint32_t normal, uint32_t zero) "channel %u normal pages %u zero pages %u"
multifd_send_error(uint8_t id) "channel %u"
multifd_send_sync_main(long packet_num) "packet num %ld"
multifd_send_sync_main_signal(uint8_t id) "channel %u"
--
2.35.3
* [PATCH v6 11/19] migration/multifd: Don't send ram data during SYNC
2024-08-27 17:45 [PATCH v6 00/19] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
` (9 preceding siblings ...)
2024-08-27 17:45 ` [PATCH v6 10/19] migration/multifd: Isolate ram pages packet data Fabiano Rosas
@ 2024-08-27 17:45 ` Fabiano Rosas
2024-08-27 17:45 ` [PATCH v6 12/19] migration/multifd: Replace multifd_send_state->pages with client data Fabiano Rosas
` (7 subsequent siblings)
18 siblings, 0 replies; 34+ messages in thread
From: Fabiano Rosas @ 2024-08-27 17:45 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero, Philippe Mathieu-Daudé
Skip saving and loading any ram data in the packet in the case of a
SYNC. This fixes a shortcoming of the current code, which requires a
reset of the MultiFDPages_t fields right after the previous
pending_job finishes, otherwise the very next job might be a SYNC and
multifd_send_fill_packet() will put the stale values in the packet.
By not calling multifd_ram_fill_packet(), we can stop resetting
MultiFDPages_t in the multifd core and leave that to the client code.
Actually moving the reset function is not yet done because
pages->num==0 is used by the client code to determine whether the
MultiFDPages_t needs to be flushed. The subsequent patches will
replace that with a generic flag that is not dependent on
MultiFDPages_t.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
migration/multifd.c | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/migration/multifd.c b/migration/multifd.c
index d64fcdf4ac..3a164c124d 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -452,6 +452,7 @@ void multifd_send_fill_packet(MultiFDSendParams *p)
{
MultiFDPacket_t *packet = p->packet;
uint64_t packet_num;
+ bool sync_packet = p->flags & MULTIFD_FLAG_SYNC;
memset(packet, 0, p->packet_len);
@@ -466,7 +467,9 @@ void multifd_send_fill_packet(MultiFDSendParams *p)
p->packets_sent++;
- multifd_ram_fill_packet(p);
+ if (!sync_packet) {
+ multifd_ram_fill_packet(p);
+ }
trace_multifd_send_fill(p->id, packet_num,
p->flags, p->next_packet_size);
@@ -574,7 +577,9 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
p->packet_num = be64_to_cpu(packet->packet_num);
p->packets_recved++;
- ret = multifd_ram_unfill_packet(p, errp);
+ if (!(p->flags & MULTIFD_FLAG_SYNC)) {
+ ret = multifd_ram_unfill_packet(p, errp);
+ }
trace_multifd_recv_unfill(p->id, p->packet_num, p->flags,
p->next_packet_size);
@@ -1536,7 +1541,9 @@ static void *multifd_recv_thread(void *opaque)
flags = p->flags;
/* recv methods don't know how to handle the SYNC flag */
p->flags &= ~MULTIFD_FLAG_SYNC;
- has_data = p->normal_num || p->zero_num;
+ if (!(flags & MULTIFD_FLAG_SYNC)) {
+ has_data = p->normal_num || p->zero_num;
+ }
qemu_mutex_unlock(&p->mutex);
} else {
/*
--
2.35.3
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH v6 12/19] migration/multifd: Replace multifd_send_state->pages with client data
2024-08-27 17:45 [PATCH v6 00/19] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
` (10 preceding siblings ...)
2024-08-27 17:45 ` [PATCH v6 11/19] migration/multifd: Don't send ram data during SYNC Fabiano Rosas
@ 2024-08-27 17:45 ` Fabiano Rosas
2024-08-27 17:46 ` [PATCH v6 13/19] migration/multifd: Allow multifd sync without flush Fabiano Rosas
` (6 subsequent siblings)
18 siblings, 0 replies; 34+ messages in thread
From: Fabiano Rosas @ 2024-08-27 17:45 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero, Philippe Mathieu-Daudé
Multifd currently has a simple scheduling mechanism that distributes
work to the various channels by keeping one storage slot within each
channel plus an extra slot that is handed to the client. Each time the
client fills its slot with data and calls into multifd, that slot is
given to the next idle channel, and a free slot is taken from that
channel and handed back to the client for the next iteration.
This means we always need (#multifd_channels + 1) memory slots to
operate multifd.
This is fine, except that the presence of this one extra memory slot
doesn't allow different types of payloads to be processed at the same
time in different channels, i.e. the data type of
multifd_send_state->pages needs to be the same as p->pages.
For each new data type different from MultiFDPage_t that is to be
handled, this logic would need to be duplicated by adding new fields
to multifd_send_state, to the channels and to multifd_send_pages().
Fix this situation by moving the extra slot into the client and using
only the generic type MultiFDSendData in the multifd core.
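The ownership exchange at the heart of this scheme can be sketched as
follows. This is a hedged illustration with hypothetical, simplified
types (the real MultiFDSendData and channel structures carry much more
state): the caller hands in a filled object and receives the channel's
empty one back, so each object always has exactly one owner.

```c
#include <assert.h>

/* Stand-in for MultiFDSendData; payload_type 0 means "empty". */
typedef struct {
    int payload_type;
} SendData;

/* Stand-in for a send channel, owning one data slot. */
typedef struct {
    SendData *data;
} Channel;

/* Swap the caller's filled object with the channel's idle slot.
 * In the real code this happens under the channel mutex with
 * memory barriers around the pending_job flag. */
static void exchange(Channel *ch, SendData **send_data)
{
    SendData *tmp;

    /* The channel's slot must be empty before it takes new work. */
    assert(ch->data->payload_type == 0);

    tmp = *send_data;
    *send_data = ch->data;
    ch->data = tmp;
}
```

Because only pointers move, no payload is ever copied, and the client
can hold any payload type without the core knowing about it.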
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
migration/multifd.c | 79 ++++++++++++++++++++++++++-------------------
migration/multifd.h | 3 ++
migration/ram.c | 2 ++
3 files changed, 50 insertions(+), 34 deletions(-)
diff --git a/migration/multifd.c b/migration/multifd.c
index 3a164c124d..cb7a121eb0 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -49,7 +49,6 @@ typedef struct {
struct {
MultiFDSendParams *params;
- MultiFDSendData *data;
/*
* Global number of generated multifd packets.
*
@@ -97,6 +96,8 @@ struct {
MultiFDMethods *ops;
} *multifd_recv_state;
+static MultiFDSendData *multifd_ram_send;
+
static size_t multifd_ram_payload_size(void)
{
uint32_t n = multifd_ram_page_count();
@@ -130,6 +131,17 @@ static MultiFDSendData *multifd_send_data_alloc(void)
return g_malloc0(size_minus_payload + max_payload_size);
}
+void multifd_ram_save_setup(void)
+{
+ multifd_ram_send = multifd_send_data_alloc();
+}
+
+void multifd_ram_save_cleanup(void)
+{
+ g_free(multifd_ram_send);
+ multifd_ram_send = NULL;
+}
+
static bool multifd_use_packets(void)
{
return !migrate_mapped_ram();
@@ -610,25 +622,20 @@ static void multifd_send_kick_main(MultiFDSendParams *p)
}
/*
- * How we use multifd_send_state->pages and channel->pages?
+ * multifd_send() works by exchanging the MultiFDSendData object
+ * provided by the caller with an unused MultiFDSendData object from
+ * the next channel that is found to be idle.
*
- * We create a pages for each channel, and a main one. Each time that
- * we need to send a batch of pages we interchange the ones between
- * multifd_send_state and the channel that is sending it. There are
- * two reasons for that:
- * - to not have to do so many mallocs during migration
- * - to make easier to know what to free at the end of migration
+ * The channel owns the data until it finishes transmitting and the
+ * caller owns the empty object until it fills it with data and calls
+ * this function again. No locking necessary.
*
- * This way we always know who is the owner of each "pages" struct,
- * and we don't need any locking. It belongs to the migration thread
- * or to the channel thread. Switching is safe because the migration
- * thread is using the channel mutex when changing it, and the channel
- * have to had finish with its own, otherwise pending_job can't be
- * false.
+ * Switching is safe because both the migration thread and the channel
+ * thread have barriers in place to serialize access.
*
* Returns true if succeed, false otherwise.
*/
-static bool multifd_send_pages(void)
+static bool multifd_send(MultiFDSendData **send_data)
{
int i;
static int next_channel;
@@ -669,11 +676,16 @@ static bool multifd_send_pages(void)
*/
smp_mb_acquire();
- assert(!p->data->u.ram.num);
+ assert(multifd_payload_empty(p->data));
- tmp = multifd_send_state->data;
- multifd_send_state->data = p->data;
+ /*
+ * Swap the pointers. The channel gets the client data for
+ * transferring and the client gets back an unused data slot.
+ */
+ tmp = *send_data;
+ *send_data = p->data;
p->data = tmp;
+
/*
* Making sure p->data is setup before marking pending_job=true. Pairs
* with the qatomic_load_acquire() in multifd_send_thread().
@@ -705,7 +717,12 @@ bool multifd_queue_page(RAMBlock *block, ram_addr_t offset)
MultiFDPages_t *pages;
retry:
- pages = &multifd_send_state->data->u.ram;
+ pages = &multifd_ram_send->u.ram;
+
+ if (multifd_payload_empty(multifd_ram_send)) {
+ multifd_pages_reset(pages);
+ multifd_set_payload_type(multifd_ram_send, MULTIFD_PAYLOAD_RAM);
+ }
/* If the queue is empty, we can already enqueue now */
if (multifd_queue_empty(pages)) {
@@ -723,7 +740,7 @@ retry:
* After flush, always retry.
*/
if (pages->block != block || multifd_queue_full(pages)) {
- if (!multifd_send_pages()) {
+ if (!multifd_send(&multifd_ram_send)) {
return false;
}
goto retry;
@@ -853,8 +870,6 @@ static void multifd_send_cleanup_state(void)
qemu_sem_destroy(&multifd_send_state->channels_ready);
g_free(multifd_send_state->params);
multifd_send_state->params = NULL;
- g_free(multifd_send_state->data);
- multifd_send_state->data = NULL;
g_free(multifd_send_state);
multifd_send_state = NULL;
}
@@ -903,15 +918,14 @@ int multifd_send_sync_main(void)
{
int i;
bool flush_zero_copy;
- MultiFDPages_t *pages;
if (!migrate_multifd()) {
return 0;
}
- pages = &multifd_send_state->data->u.ram;
- if (pages->num) {
- if (!multifd_send_pages()) {
- error_report("%s: multifd_send_pages fail", __func__);
+
+ if (!multifd_payload_empty(multifd_ram_send)) {
+ if (!multifd_send(&multifd_ram_send)) {
+ error_report("%s: multifd_send fail", __func__);
return -1;
}
}
@@ -985,13 +999,11 @@ static void *multifd_send_thread(void *opaque)
/*
* Read pending_job flag before p->data. Pairs with the
- * qatomic_store_release() in multifd_send_pages().
+ * qatomic_store_release() in multifd_send().
*/
if (qatomic_load_acquire(&p->pending_job)) {
- MultiFDPages_t *pages = &p->data->u.ram;
-
p->iovs_num = 0;
- assert(pages->num);
+ assert(!multifd_payload_empty(p->data));
ret = multifd_send_state->ops->send_prepare(p, &local_err);
if (ret != 0) {
@@ -1014,13 +1026,13 @@ static void *multifd_send_thread(void *opaque)
stat64_add(&mig_stats.multifd_bytes,
p->next_packet_size + p->packet_len);
- multifd_pages_reset(pages);
p->next_packet_size = 0;
+ multifd_set_payload_type(p->data, MULTIFD_PAYLOAD_NONE);
/*
* Making sure p->data is published before saying "we're
* free". Pairs with the smp_mb_acquire() in
- * multifd_send_pages().
+ * multifd_send().
*/
qatomic_store_release(&p->pending_job, false);
} else {
@@ -1212,7 +1224,6 @@ bool multifd_send_setup(void)
thread_count = migrate_multifd_channels();
multifd_send_state = g_malloc0(sizeof(*multifd_send_state));
multifd_send_state->params = g_new0(MultiFDSendParams, thread_count);
- multifd_send_state->data = multifd_send_data_alloc();
qemu_sem_init(&multifd_send_state->channels_created, 0);
qemu_sem_init(&multifd_send_state->channels_ready, 0);
qatomic_set(&multifd_send_state->exiting, 0);
diff --git a/migration/multifd.h b/migration/multifd.h
index 9175104aea..5fa384d9af 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -267,4 +267,7 @@ static inline uint32_t multifd_ram_page_count(void)
{
return MULTIFD_PACKET_SIZE / qemu_target_page_size();
}
+
+void multifd_ram_save_setup(void);
+void multifd_ram_save_cleanup(void);
#endif
diff --git a/migration/ram.c b/migration/ram.c
index edec1a2d07..1815b2557b 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2387,6 +2387,7 @@ static void ram_save_cleanup(void *opaque)
ram_bitmaps_destroy();
xbzrle_cleanup();
+ multifd_ram_save_cleanup();
ram_state_cleanup(rsp);
g_free(migration_ops);
migration_ops = NULL;
@@ -3058,6 +3059,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque, Error **errp)
migration_ops = g_malloc0(sizeof(MigrationOps));
if (migrate_multifd()) {
+ multifd_ram_save_setup();
migration_ops->ram_save_target_page = ram_save_target_page_multifd;
} else {
migration_ops->ram_save_target_page = ram_save_target_page_legacy;
--
2.35.3
* [PATCH v6 13/19] migration/multifd: Allow multifd sync without flush
2024-08-27 17:45 [PATCH v6 00/19] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
` (11 preceding siblings ...)
2024-08-27 17:45 ` [PATCH v6 12/19] migration/multifd: Replace multifd_send_state->pages with client data Fabiano Rosas
@ 2024-08-27 17:46 ` Fabiano Rosas
2024-08-27 17:46 ` [PATCH v6 14/19] migration/multifd: Standardize on multifd ops names Fabiano Rosas
` (5 subsequent siblings)
18 siblings, 0 replies; 34+ messages in thread
From: Fabiano Rosas @ 2024-08-27 17:46 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero, Philippe Mathieu-Daudé
Separate the multifd sync from the flushing of client data to the
channels. These two operations are closely related, but they do not
strictly need to be executed together.
The multifd sync is intrinsic to how multifd works. The multiple
channels operate independently and may finish IO out of order relative
to one another. The same applies between the source and destination
QEMU.
Flushing the data that is left in the client-owned data structures
(e.g. MultiFDPages_t) prior to sync is usually the right thing to do,
but that is particular to how the ram migration is implemented with
several passes over dirty data.
Make these two routines separate, allowing future code to call the
sync by itself if needed. This also allows the usage of
multifd_ram_send to be isolated to ram code.
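The split can be sketched with simplified stand-ins. This is an
illustration only (the counters and the payload flag are hypothetical,
and the real functions iterate over the channels): flush-and-sync
drains any pending client payload and then performs the plain sync,
while callers that need only the sync can call it directly.

```c
#include <assert.h>
#include <stdbool.h>

static int syncs_done;
static int flushes_done;
static bool payload_pending;  /* mimics a non-empty MultiFDPages_t */

/* Plain sync: make every channel drain; no payload involved. */
static int send_sync_main(void)
{
    syncs_done++;
    return 0;
}

/* Ram-specific wrapper: flush leftover payload first, then sync. */
static int ram_flush_and_sync(void)
{
    if (payload_pending) {
        flushes_done++;
        payload_pending = false;
    }
    return send_sync_main();
}
```

The ram code calls the wrapper at the end of each pass over dirty
pages; future clients with no leftover payload can use the bare sync.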
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
migration/multifd.c | 13 +++++++++----
migration/multifd.h | 1 +
migration/ram.c | 8 ++++----
3 files changed, 14 insertions(+), 8 deletions(-)
diff --git a/migration/multifd.c b/migration/multifd.c
index cb7a121eb0..ce08257706 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -914,11 +914,8 @@ static int multifd_zero_copy_flush(QIOChannel *c)
return ret;
}
-int multifd_send_sync_main(void)
+int multifd_ram_flush_and_sync(void)
{
- int i;
- bool flush_zero_copy;
-
if (!migrate_multifd()) {
return 0;
}
@@ -930,6 +927,14 @@ int multifd_send_sync_main(void)
}
}
+ return multifd_send_sync_main();
+}
+
+int multifd_send_sync_main(void)
+{
+ int i;
+ bool flush_zero_copy;
+
flush_zero_copy = migrate_zero_copy_send();
for (i = 0; i < migrate_multifd_channels(); i++) {
diff --git a/migration/multifd.h b/migration/multifd.h
index 5fa384d9af..00c872dfda 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -270,4 +270,5 @@ static inline uint32_t multifd_ram_page_count(void)
void multifd_ram_save_setup(void);
void multifd_ram_save_cleanup(void);
+int multifd_ram_flush_and_sync(void);
#endif
diff --git a/migration/ram.c b/migration/ram.c
index 1815b2557b..67ca3d5d51 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1326,7 +1326,7 @@ static int find_dirty_block(RAMState *rs, PageSearchStatus *pss)
(!migrate_multifd_flush_after_each_section() ||
migrate_mapped_ram())) {
QEMUFile *f = rs->pss[RAM_CHANNEL_PRECOPY].pss_channel;
- int ret = multifd_send_sync_main();
+ int ret = multifd_ram_flush_and_sync();
if (ret < 0) {
return ret;
}
@@ -3066,7 +3066,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque, Error **errp)
}
bql_unlock();
- ret = multifd_send_sync_main();
+ ret = multifd_ram_flush_and_sync();
bql_lock();
if (ret < 0) {
error_setg(errp, "%s: multifd synchronization failed", __func__);
@@ -3213,7 +3213,7 @@ out:
&& migration_is_setup_or_active()) {
if (migrate_multifd() && migrate_multifd_flush_after_each_section() &&
!migrate_mapped_ram()) {
- ret = multifd_send_sync_main();
+ ret = multifd_ram_flush_and_sync();
if (ret < 0) {
return ret;
}
@@ -3285,7 +3285,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
}
}
- ret = multifd_send_sync_main();
+ ret = multifd_ram_flush_and_sync();
if (ret < 0) {
return ret;
}
--
2.35.3
* [PATCH v6 14/19] migration/multifd: Standardize on multifd ops names
2024-08-27 17:45 [PATCH v6 00/19] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
` (12 preceding siblings ...)
2024-08-27 17:46 ` [PATCH v6 13/19] migration/multifd: Allow multifd sync without flush Fabiano Rosas
@ 2024-08-27 17:46 ` Fabiano Rosas
2024-08-27 17:46 ` [PATCH v6 15/19] migration/multifd: Register nocomp ops dynamically Fabiano Rosas
` (4 subsequent siblings)
18 siblings, 0 replies; 34+ messages in thread
From: Fabiano Rosas @ 2024-08-27 17:46 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero, Philippe Mathieu-Daudé
Add the multifd_ prefix to all functions and remove the useless
docstrings.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
migration/multifd-qpl.c | 57 ----------------------------
migration/multifd-uadk.c | 55 ---------------------------
migration/multifd-zlib.c | 81 ++++++----------------------------------
migration/multifd-zstd.c | 81 ++++++----------------------------------
migration/multifd.c | 78 ++++++--------------------------------
5 files changed, 36 insertions(+), 316 deletions(-)
diff --git a/migration/multifd-qpl.c b/migration/multifd-qpl.c
index 21153f1987..75041a4c4d 100644
--- a/migration/multifd-qpl.c
+++ b/migration/multifd-qpl.c
@@ -220,16 +220,6 @@ static void multifd_qpl_deinit(QplData *qpl)
}
}
-/**
- * multifd_qpl_send_setup: set up send side
- *
- * Set up the channel with QPL compression.
- *
- * Returns 0 on success or -1 on error
- *
- * @p: Params for the channel being used
- * @errp: pointer to an error
- */
static int multifd_qpl_send_setup(MultiFDSendParams *p, Error **errp)
{
QplData *qpl;
@@ -251,14 +241,6 @@ static int multifd_qpl_send_setup(MultiFDSendParams *p, Error **errp)
return 0;
}
-/**
- * multifd_qpl_send_cleanup: clean up send side
- *
- * Close the channel and free memory.
- *
- * @p: Params for the channel being used
- * @errp: pointer to an error
- */
static void multifd_qpl_send_cleanup(MultiFDSendParams *p, Error **errp)
{
multifd_qpl_deinit(p->compress_data);
@@ -487,17 +469,6 @@ static void multifd_qpl_compress_pages(MultiFDSendParams *p)
}
}
-/**
- * multifd_qpl_send_prepare: prepare data to be able to send
- *
- * Create a compressed buffer with all the pages that we are going to
- * send.
- *
- * Returns 0 on success or -1 on error
- *
- * @p: Params for the channel being used
- * @errp: pointer to an error
- */
static int multifd_qpl_send_prepare(MultiFDSendParams *p, Error **errp)
{
QplData *qpl = p->compress_data;
@@ -523,16 +494,6 @@ out:
return 0;
}
-/**
- * multifd_qpl_recv_setup: set up receive side
- *
- * Create the compressed channel and buffer.
- *
- * Returns 0 on success or -1 on error
- *
- * @p: Params for the channel being used
- * @errp: pointer to an error
- */
static int multifd_qpl_recv_setup(MultiFDRecvParams *p, Error **errp)
{
QplData *qpl;
@@ -547,13 +508,6 @@ static int multifd_qpl_recv_setup(MultiFDRecvParams *p, Error **errp)
return 0;
}
-/**
- * multifd_qpl_recv_cleanup: set up receive side
- *
- * Close the channel and free memory.
- *
- * @p: Params for the channel being used
- */
static void multifd_qpl_recv_cleanup(MultiFDRecvParams *p)
{
multifd_qpl_deinit(p->compress_data);
@@ -694,17 +648,6 @@ static int multifd_qpl_decompress_pages(MultiFDRecvParams *p, Error **errp)
}
return 0;
}
-/**
- * multifd_qpl_recv: read the data from the channel into actual pages
- *
- * Read the compressed buffer, and uncompress it into the actual
- * pages.
- *
- * Returns 0 on success or -1 on error
- *
- * @p: Params for the channel being used
- * @errp: pointer to an error
- */
static int multifd_qpl_recv(MultiFDRecvParams *p, Error **errp)
{
QplData *qpl = p->compress_data;
diff --git a/migration/multifd-uadk.c b/migration/multifd-uadk.c
index 9d99807af5..db2549f59b 100644
--- a/migration/multifd-uadk.c
+++ b/migration/multifd-uadk.c
@@ -103,14 +103,6 @@ static void multifd_uadk_uninit_sess(struct wd_data *wd)
g_free(wd);
}
-/**
- * multifd_uadk_send_setup: setup send side
- *
- * Returns 0 for success or -1 for error
- *
- * @p: Params for the channel that we are using
- * @errp: pointer to an error
- */
static int multifd_uadk_send_setup(MultiFDSendParams *p, Error **errp)
{
struct wd_data *wd;
@@ -134,14 +126,6 @@ static int multifd_uadk_send_setup(MultiFDSendParams *p, Error **errp)
return 0;
}
-/**
- * multifd_uadk_send_cleanup: cleanup send side
- *
- * Close the channel and return memory.
- *
- * @p: Params for the channel that we are using
- * @errp: pointer to an error
- */
static void multifd_uadk_send_cleanup(MultiFDSendParams *p, Error **errp)
{
struct wd_data *wd = p->compress_data;
@@ -159,17 +143,6 @@ static inline void prepare_next_iov(MultiFDSendParams *p, void *base,
p->iovs_num++;
}
-/**
- * multifd_uadk_send_prepare: prepare data to be able to send
- *
- * Create a compressed buffer with all the pages that we are going to
- * send.
- *
- * Returns 0 for success or -1 for error
- *
- * @p: Params for the channel that we are using
- * @errp: pointer to an error
- */
static int multifd_uadk_send_prepare(MultiFDSendParams *p, Error **errp)
{
struct wd_data *uadk_data = p->compress_data;
@@ -229,16 +202,6 @@ out:
return 0;
}
-/**
- * multifd_uadk_recv_setup: setup receive side
- *
- * Create the compressed channel and buffer.
- *
- * Returns 0 for success or -1 for error
- *
- * @p: Params for the channel that we are using
- * @errp: pointer to an error
- */
static int multifd_uadk_recv_setup(MultiFDRecvParams *p, Error **errp)
{
struct wd_data *wd;
@@ -253,13 +216,6 @@ static int multifd_uadk_recv_setup(MultiFDRecvParams *p, Error **errp)
return 0;
}
-/**
- * multifd_uadk_recv_cleanup: cleanup receive side
- *
- * Close the channel and return memory.
- *
- * @p: Params for the channel that we are using
- */
static void multifd_uadk_recv_cleanup(MultiFDRecvParams *p)
{
struct wd_data *wd = p->compress_data;
@@ -268,17 +224,6 @@ static void multifd_uadk_recv_cleanup(MultiFDRecvParams *p)
p->compress_data = NULL;
}
-/**
- * multifd_uadk_recv: read the data from the channel into actual pages
- *
- * Read the compressed buffer, and uncompress it into the actual
- * pages.
- *
- * Returns 0 for success or -1 for error
- *
- * @p: Params for the channel that we are using
- * @errp: pointer to an error
- */
static int multifd_uadk_recv(MultiFDRecvParams *p, Error **errp)
{
struct wd_data *uadk_data = p->compress_data;
diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index 66517c1067..6787538762 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -34,17 +34,7 @@ struct zlib_data {
/* Multifd zlib compression */
-/**
- * zlib_send_setup: setup send side
- *
- * Setup each channel with zlib compression.
- *
- * Returns 0 for success or -1 for error
- *
- * @p: Params for the channel that we are using
- * @errp: pointer to an error
- */
-static int zlib_send_setup(MultiFDSendParams *p, Error **errp)
+static int multifd_zlib_send_setup(MultiFDSendParams *p, Error **errp)
{
struct zlib_data *z = g_new0(struct zlib_data, 1);
z_stream *zs = &z->zs;
@@ -86,15 +76,7 @@ err_free_z:
return -1;
}
-/**
- * zlib_send_cleanup: cleanup send side
- *
- * Close the channel and return memory.
- *
- * @p: Params for the channel that we are using
- * @errp: pointer to an error
- */
-static void zlib_send_cleanup(MultiFDSendParams *p, Error **errp)
+static void multifd_zlib_send_cleanup(MultiFDSendParams *p, Error **errp)
{
struct zlib_data *z = p->compress_data;
@@ -110,18 +92,7 @@ static void zlib_send_cleanup(MultiFDSendParams *p, Error **errp)
p->iov = NULL;
}
-/**
- * zlib_send_prepare: prepare date to be able to send
- *
- * Create a compressed buffer with all the pages that we are going to
- * send.
- *
- * Returns 0 for success or -1 for error
- *
- * @p: Params for the channel that we are using
- * @errp: pointer to an error
- */
-static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
+static int multifd_zlib_send_prepare(MultiFDSendParams *p, Error **errp)
{
MultiFDPages_t *pages = &p->data->u.ram;
struct zlib_data *z = p->compress_data;
@@ -189,17 +160,7 @@ out:
return 0;
}
-/**
- * zlib_recv_setup: setup receive side
- *
- * Create the compressed channel and buffer.
- *
- * Returns 0 for success or -1 for error
- *
- * @p: Params for the channel that we are using
- * @errp: pointer to an error
- */
-static int zlib_recv_setup(MultiFDRecvParams *p, Error **errp)
+static int multifd_zlib_recv_setup(MultiFDRecvParams *p, Error **errp)
{
struct zlib_data *z = g_new0(struct zlib_data, 1);
z_stream *zs = &z->zs;
@@ -225,14 +186,7 @@ static int zlib_recv_setup(MultiFDRecvParams *p, Error **errp)
return 0;
}
-/**
- * zlib_recv_cleanup: setup receive side
- *
- * For no compression this function does nothing.
- *
- * @p: Params for the channel that we are using
- */
-static void zlib_recv_cleanup(MultiFDRecvParams *p)
+static void multifd_zlib_recv_cleanup(MultiFDRecvParams *p)
{
struct zlib_data *z = p->compress_data;
@@ -243,18 +197,7 @@ static void zlib_recv_cleanup(MultiFDRecvParams *p)
p->compress_data = NULL;
}
-/**
- * zlib_recv: read the data from the channel into actual pages
- *
- * Read the compressed buffer, and uncompress it into the actual
- * pages.
- *
- * Returns 0 for success or -1 for error
- *
- * @p: Params for the channel that we are using
- * @errp: pointer to an error
- */
-static int zlib_recv(MultiFDRecvParams *p, Error **errp)
+static int multifd_zlib_recv(MultiFDRecvParams *p, Error **errp)
{
struct zlib_data *z = p->compress_data;
z_stream *zs = &z->zs;
@@ -335,12 +278,12 @@ static int zlib_recv(MultiFDRecvParams *p, Error **errp)
}
static MultiFDMethods multifd_zlib_ops = {
- .send_setup = zlib_send_setup,
- .send_cleanup = zlib_send_cleanup,
- .send_prepare = zlib_send_prepare,
- .recv_setup = zlib_recv_setup,
- .recv_cleanup = zlib_recv_cleanup,
- .recv = zlib_recv
+ .send_setup = multifd_zlib_send_setup,
+ .send_cleanup = multifd_zlib_send_cleanup,
+ .send_prepare = multifd_zlib_send_prepare,
+ .recv_setup = multifd_zlib_recv_setup,
+ .recv_cleanup = multifd_zlib_recv_cleanup,
+ .recv = multifd_zlib_recv
};
static void multifd_zlib_register(void)
diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index 04ac711cf4..1576b1e2ad 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -37,17 +37,7 @@ struct zstd_data {
/* Multifd zstd compression */
-/**
- * zstd_send_setup: setup send side
- *
- * Setup each channel with zstd compression.
- *
- * Returns 0 for success or -1 for error
- *
- * @p: Params for the channel that we are using
- * @errp: pointer to an error
- */
-static int zstd_send_setup(MultiFDSendParams *p, Error **errp)
+static int multifd_zstd_send_setup(MultiFDSendParams *p, Error **errp)
{
struct zstd_data *z = g_new0(struct zstd_data, 1);
int res;
@@ -83,15 +73,7 @@ static int zstd_send_setup(MultiFDSendParams *p, Error **errp)
return 0;
}
-/**
- * zstd_send_cleanup: cleanup send side
- *
- * Close the channel and return memory.
- *
- * @p: Params for the channel that we are using
- * @errp: pointer to an error
- */
-static void zstd_send_cleanup(MultiFDSendParams *p, Error **errp)
+static void multifd_zstd_send_cleanup(MultiFDSendParams *p, Error **errp)
{
struct zstd_data *z = p->compress_data;
@@ -106,18 +88,7 @@ static void zstd_send_cleanup(MultiFDSendParams *p, Error **errp)
p->iov = NULL;
}
-/**
- * zstd_send_prepare: prepare date to be able to send
- *
- * Create a compressed buffer with all the pages that we are going to
- * send.
- *
- * Returns 0 for success or -1 for error
- *
- * @p: Params for the channel that we are using
- * @errp: pointer to an error
- */
-static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
+static int multifd_zstd_send_prepare(MultiFDSendParams *p, Error **errp)
{
MultiFDPages_t *pages = &p->data->u.ram;
struct zstd_data *z = p->compress_data;
@@ -176,17 +147,7 @@ out:
return 0;
}
-/**
- * zstd_recv_setup: setup receive side
- *
- * Create the compressed channel and buffer.
- *
- * Returns 0 for success or -1 for error
- *
- * @p: Params for the channel that we are using
- * @errp: pointer to an error
- */
-static int zstd_recv_setup(MultiFDRecvParams *p, Error **errp)
+static int multifd_zstd_recv_setup(MultiFDRecvParams *p, Error **errp)
{
struct zstd_data *z = g_new0(struct zstd_data, 1);
int ret;
@@ -220,14 +181,7 @@ static int zstd_recv_setup(MultiFDRecvParams *p, Error **errp)
return 0;
}
-/**
- * zstd_recv_cleanup: setup receive side
- *
- * For no compression this function does nothing.
- *
- * @p: Params for the channel that we are using
- */
-static void zstd_recv_cleanup(MultiFDRecvParams *p)
+static void multifd_zstd_recv_cleanup(MultiFDRecvParams *p)
{
struct zstd_data *z = p->compress_data;
@@ -239,18 +193,7 @@ static void zstd_recv_cleanup(MultiFDRecvParams *p)
p->compress_data = NULL;
}
-/**
- * zstd_recv: read the data from the channel into actual pages
- *
- * Read the compressed buffer, and uncompress it into the actual
- * pages.
- *
- * Returns 0 for success or -1 for error
- *
- * @p: Params for the channel that we are using
- * @errp: pointer to an error
- */
-static int zstd_recv(MultiFDRecvParams *p, Error **errp)
+static int multifd_zstd_recv(MultiFDRecvParams *p, Error **errp)
{
uint32_t in_size = p->next_packet_size;
uint32_t out_size = 0;
@@ -323,12 +266,12 @@ static int zstd_recv(MultiFDRecvParams *p, Error **errp)
}
static MultiFDMethods multifd_zstd_ops = {
- .send_setup = zstd_send_setup,
- .send_cleanup = zstd_send_cleanup,
- .send_prepare = zstd_send_prepare,
- .recv_setup = zstd_recv_setup,
- .recv_cleanup = zstd_recv_cleanup,
- .recv = zstd_recv
+ .send_setup = multifd_zstd_send_setup,
+ .send_cleanup = multifd_zstd_send_cleanup,
+ .send_prepare = multifd_zstd_send_prepare,
+ .recv_setup = multifd_zstd_recv_setup,
+ .recv_cleanup = multifd_zstd_recv_cleanup,
+ .recv = multifd_zstd_recv
};
static void multifd_zstd_register(void)
diff --git a/migration/multifd.c b/migration/multifd.c
index ce08257706..9f40bb2f16 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -167,15 +167,7 @@ static void multifd_set_file_bitmap(MultiFDSendParams *p)
}
}
-/* Multifd without compression */
-
-/**
- * nocomp_send_setup: setup send side
- *
- * @p: Params for the channel that we are using
- * @errp: pointer to an error
- */
-static int nocomp_send_setup(MultiFDSendParams *p, Error **errp)
+static int multifd_nocomp_send_setup(MultiFDSendParams *p, Error **errp)
{
uint32_t page_count = multifd_ram_page_count();
@@ -193,15 +185,7 @@ static int nocomp_send_setup(MultiFDSendParams *p, Error **errp)
return 0;
}
-/**
- * nocomp_send_cleanup: cleanup send side
- *
- * For no compression this function does nothing.
- *
- * @p: Params for the channel that we are using
- * @errp: pointer to an error
- */
-static void nocomp_send_cleanup(MultiFDSendParams *p, Error **errp)
+static void multifd_nocomp_send_cleanup(MultiFDSendParams *p, Error **errp)
{
g_free(p->iov);
p->iov = NULL;
@@ -222,18 +206,7 @@ static void multifd_send_prepare_iovs(MultiFDSendParams *p)
p->next_packet_size = pages->normal_num * page_size;
}
-/**
- * nocomp_send_prepare: prepare date to be able to send
- *
- * For no compression we just have to calculate the size of the
- * packet.
- *
- * Returns 0 for success or -1 for error
- *
- * @p: Params for the channel that we are using
- * @errp: pointer to an error
- */
-static int nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
+static int multifd_nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
{
bool use_zero_copy_send = migrate_zero_copy_send();
int ret;
@@ -272,46 +245,19 @@ static int nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
return 0;
}
-/**
- * nocomp_recv_setup: setup receive side
- *
- * For no compression this function does nothing.
- *
- * Returns 0 for success or -1 for error
- *
- * @p: Params for the channel that we are using
- * @errp: pointer to an error
- */
-static int nocomp_recv_setup(MultiFDRecvParams *p, Error **errp)
+static int multifd_nocomp_recv_setup(MultiFDRecvParams *p, Error **errp)
{
p->iov = g_new0(struct iovec, multifd_ram_page_count());
return 0;
}
-/**
- * nocomp_recv_cleanup: setup receive side
- *
- * For no compression this function does nothing.
- *
- * @p: Params for the channel that we are using
- */
-static void nocomp_recv_cleanup(MultiFDRecvParams *p)
+static void multifd_nocomp_recv_cleanup(MultiFDRecvParams *p)
{
g_free(p->iov);
p->iov = NULL;
}
-/**
- * nocomp_recv: read the data from the channel
- *
- * For no compression we just need to read things into the correct place.
- *
- * Returns 0 for success or -1 for error
- *
- * @p: Params for the channel that we are using
- * @errp: pointer to an error
- */
-static int nocomp_recv(MultiFDRecvParams *p, Error **errp)
+static int multifd_nocomp_recv(MultiFDRecvParams *p, Error **errp)
{
uint32_t flags;
@@ -342,12 +288,12 @@ static int nocomp_recv(MultiFDRecvParams *p, Error **errp)
}
static MultiFDMethods multifd_nocomp_ops = {
- .send_setup = nocomp_send_setup,
- .send_cleanup = nocomp_send_cleanup,
- .send_prepare = nocomp_send_prepare,
- .recv_setup = nocomp_recv_setup,
- .recv_cleanup = nocomp_recv_cleanup,
- .recv = nocomp_recv
+ .send_setup = multifd_nocomp_send_setup,
+ .send_cleanup = multifd_nocomp_send_cleanup,
+ .send_prepare = multifd_nocomp_send_prepare,
+ .recv_setup = multifd_nocomp_recv_setup,
+ .recv_cleanup = multifd_nocomp_recv_cleanup,
+ .recv = multifd_nocomp_recv
};
static MultiFDMethods *multifd_ops[MULTIFD_COMPRESSION__MAX] = {
--
2.35.3
* [PATCH v6 15/19] migration/multifd: Register nocomp ops dynamically
2024-08-27 17:45 [PATCH v6 00/19] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
` (13 preceding siblings ...)
2024-08-27 17:46 ` [PATCH v6 14/19] migration/multifd: Standardize on multifd ops names Fabiano Rosas
@ 2024-08-27 17:46 ` Fabiano Rosas
2024-08-27 17:46 ` [PATCH v6 16/19] migration/multifd: Move nocomp code into multifd-nocomp.c Fabiano Rosas
` (3 subsequent siblings)
18 siblings, 0 replies; 34+ messages in thread
From: Fabiano Rosas @ 2024-08-27 17:46 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Maciej S . Szmigiero, Philippe Mathieu-Daudé,
Prasad Pandit
Prior to moving the ram code into multifd-nocomp.c, change the code to
register the nocomp ops dynamically so we don't need to have the ops
structure defined in multifd.c.
While here, move the ops struct initialization to the end of the file
to make the next diff cleaner.
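The dynamic-registration pattern this patch switches to can be sketched roughly as follows. This is an illustrative stand-in with simplified names, not QEMU's actual symbols: each backend hands its ops table to a register function from its own init hook, so the core file no longer hard-codes any table entry.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for MULTIFD_COMPRESSION_* and MultiFDMethods */
enum { COMPRESSION_NONE, COMPRESSION_ZLIB, COMPRESSION__MAX };

typedef struct {
    int (*send_prepare)(void *opaque);
} Methods;

/* Zero-initialized; entries are filled in at init time, not statically */
static const Methods *method_table[COMPRESSION__MAX];

static void register_ops(int method, const Methods *ops)
{
    assert(0 <= method && method < COMPRESSION__MAX);
    assert(!method_table[method]);   /* reject double registration */
    method_table[method] = ops;
}

static int nocomp_send_prepare(void *opaque)
{
    return 0;
}

static const Methods nocomp_ops = {
    .send_prepare = nocomp_send_prepare,
};

/* In QEMU this runs via migration_init(); here it is a plain function */
static void nocomp_register(void)
{
    register_ops(COMPRESSION_NONE, &nocomp_ops);
}
```

Note how widening the assert to `0 <= method` is what allows MULTIFD_COMPRESSION_NONE (index 0) to go through the same registration path as the compression backends.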
Reviewed-by: Prasad Pandit <pjp@fedoraproject.org>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
migration/multifd.c | 32 +++++++++++++++++++-------------
1 file changed, 19 insertions(+), 13 deletions(-)
diff --git a/migration/multifd.c b/migration/multifd.c
index 9f40bb2f16..e100836cbe 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -287,22 +287,12 @@ static int multifd_nocomp_recv(MultiFDRecvParams *p, Error **errp)
return qio_channel_readv_all(p->c, p->iov, p->normal_num, errp);
}
-static MultiFDMethods multifd_nocomp_ops = {
- .send_setup = multifd_nocomp_send_setup,
- .send_cleanup = multifd_nocomp_send_cleanup,
- .send_prepare = multifd_nocomp_send_prepare,
- .recv_setup = multifd_nocomp_recv_setup,
- .recv_cleanup = multifd_nocomp_recv_cleanup,
- .recv = multifd_nocomp_recv
-};
-
-static MultiFDMethods *multifd_ops[MULTIFD_COMPRESSION__MAX] = {
- [MULTIFD_COMPRESSION_NONE] = &multifd_nocomp_ops,
-};
+static MultiFDMethods *multifd_ops[MULTIFD_COMPRESSION__MAX] = {};
void multifd_register_ops(int method, MultiFDMethods *ops)
{
- assert(0 < method && method < MULTIFD_COMPRESSION__MAX);
+ assert(0 <= method && method < MULTIFD_COMPRESSION__MAX);
+ assert(!multifd_ops[method]);
multifd_ops[method] = ops;
}
@@ -1701,3 +1691,19 @@ bool multifd_send_prepare_common(MultiFDSendParams *p)
return true;
}
+
+static MultiFDMethods multifd_nocomp_ops = {
+ .send_setup = multifd_nocomp_send_setup,
+ .send_cleanup = multifd_nocomp_send_cleanup,
+ .send_prepare = multifd_nocomp_send_prepare,
+ .recv_setup = multifd_nocomp_recv_setup,
+ .recv_cleanup = multifd_nocomp_recv_cleanup,
+ .recv = multifd_nocomp_recv
+};
+
+static void multifd_nocomp_register(void)
+{
+ multifd_register_ops(MULTIFD_COMPRESSION_NONE, &multifd_nocomp_ops);
+}
+
+migration_init(multifd_nocomp_register);
--
2.35.3
* [PATCH v6 16/19] migration/multifd: Move nocomp code into multifd-nocomp.c
2024-08-27 17:45 [PATCH v6 00/19] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
` (14 preceding siblings ...)
2024-08-27 17:46 ` [PATCH v6 15/19] migration/multifd: Register nocomp ops dynamically Fabiano Rosas
@ 2024-08-27 17:46 ` Fabiano Rosas
2024-08-27 17:46 ` [PATCH v6 17/19] migration/multifd: Make MultiFDMethods const Fabiano Rosas
` (2 subsequent siblings)
18 siblings, 0 replies; 34+ messages in thread
From: Fabiano Rosas @ 2024-08-27 17:46 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero, Philippe Mathieu-Daudé
In preparation for adding new payload types to multifd, move most of
the no-compression code into multifd-nocomp.c. Let's try to keep a
semblance of layering by not mixing general multifd control flow with
the details of transmitting pages of ram.
There are still some pieces left over, namely the p->normal, p->zero,
etc. variables that we use for zero page tracking, and the packet
allocation, which is heavily dependent on the ram code.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
migration/meson.build | 1 +
migration/multifd-nocomp.c | 394 +++++++++++++++++++++++++++++++++++++
migration/multifd.c | 377 +----------------------------------
migration/multifd.h | 5 +
4 files changed, 402 insertions(+), 375 deletions(-)
create mode 100644 migration/multifd-nocomp.c
diff --git a/migration/meson.build b/migration/meson.build
index 5ce2acb41e..77f3abf08e 100644
--- a/migration/meson.build
+++ b/migration/meson.build
@@ -21,6 +21,7 @@ system_ss.add(files(
'migration-hmp-cmds.c',
'migration.c',
'multifd.c',
+ 'multifd-nocomp.c',
'multifd-zlib.c',
'multifd-zero-page.c',
'options.c',
diff --git a/migration/multifd-nocomp.c b/migration/multifd-nocomp.c
new file mode 100644
index 0000000000..53ea9f9c83
--- /dev/null
+++ b/migration/multifd-nocomp.c
@@ -0,0 +1,394 @@
+/*
+ * Multifd RAM migration without compression
+ *
+ * Copyright (c) 2019-2020 Red Hat Inc
+ *
+ * Authors:
+ * Juan Quintela <quintela@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "exec/ramblock.h"
+#include "exec/target_page.h"
+#include "file.h"
+#include "multifd.h"
+#include "options.h"
+#include "qapi/error.h"
+#include "qemu/error-report.h"
+#include "trace.h"
+
+static MultiFDSendData *multifd_ram_send;
+
+size_t multifd_ram_payload_size(void)
+{
+ uint32_t n = multifd_ram_page_count();
+
+ /*
+ * We keep an array of page offsets at the end of MultiFDPages_t,
+ * add space for it in the allocation.
+ */
+ return sizeof(MultiFDPages_t) + n * sizeof(ram_addr_t);
+}
+
+void multifd_ram_save_setup(void)
+{
+ multifd_ram_send = multifd_send_data_alloc();
+}
+
+void multifd_ram_save_cleanup(void)
+{
+ g_free(multifd_ram_send);
+ multifd_ram_send = NULL;
+}
+
+static void multifd_set_file_bitmap(MultiFDSendParams *p)
+{
+ MultiFDPages_t *pages = &p->data->u.ram;
+
+ assert(pages->block);
+
+ for (int i = 0; i < pages->normal_num; i++) {
+ ramblock_set_file_bmap_atomic(pages->block, pages->offset[i], true);
+ }
+
+ for (int i = pages->normal_num; i < pages->num; i++) {
+ ramblock_set_file_bmap_atomic(pages->block, pages->offset[i], false);
+ }
+}
+
+static int multifd_nocomp_send_setup(MultiFDSendParams *p, Error **errp)
+{
+ uint32_t page_count = multifd_ram_page_count();
+
+ if (migrate_zero_copy_send()) {
+ p->write_flags |= QIO_CHANNEL_WRITE_FLAG_ZERO_COPY;
+ }
+
+ if (!migrate_mapped_ram()) {
+ /* We need one extra place for the packet header */
+ p->iov = g_new0(struct iovec, page_count + 1);
+ } else {
+ p->iov = g_new0(struct iovec, page_count);
+ }
+
+ return 0;
+}
+
+static void multifd_nocomp_send_cleanup(MultiFDSendParams *p, Error **errp)
+{
+ g_free(p->iov);
+ p->iov = NULL;
+ return;
+}
+
+static void multifd_send_prepare_iovs(MultiFDSendParams *p)
+{
+ MultiFDPages_t *pages = &p->data->u.ram;
+ uint32_t page_size = multifd_ram_page_size();
+
+ for (int i = 0; i < pages->normal_num; i++) {
+ p->iov[p->iovs_num].iov_base = pages->block->host + pages->offset[i];
+ p->iov[p->iovs_num].iov_len = page_size;
+ p->iovs_num++;
+ }
+
+ p->next_packet_size = pages->normal_num * page_size;
+}
+
+static int multifd_nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
+{
+ bool use_zero_copy_send = migrate_zero_copy_send();
+ int ret;
+
+ multifd_send_zero_page_detect(p);
+
+ if (migrate_mapped_ram()) {
+ multifd_send_prepare_iovs(p);
+ multifd_set_file_bitmap(p);
+
+ return 0;
+ }
+
+ if (!use_zero_copy_send) {
+ /*
+ * Only !zerocopy needs the header in IOV; zerocopy will
+ * send it separately.
+ */
+ multifd_send_prepare_header(p);
+ }
+
+ multifd_send_prepare_iovs(p);
+ p->flags |= MULTIFD_FLAG_NOCOMP;
+
+ multifd_send_fill_packet(p);
+
+ if (use_zero_copy_send) {
+ /* Send header first, without zerocopy */
+ ret = qio_channel_write_all(p->c, (void *)p->packet,
+ p->packet_len, errp);
+ if (ret != 0) {
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int multifd_nocomp_recv_setup(MultiFDRecvParams *p, Error **errp)
+{
+ p->iov = g_new0(struct iovec, multifd_ram_page_count());
+ return 0;
+}
+
+static void multifd_nocomp_recv_cleanup(MultiFDRecvParams *p)
+{
+ g_free(p->iov);
+ p->iov = NULL;
+}
+
+static int multifd_nocomp_recv(MultiFDRecvParams *p, Error **errp)
+{
+ uint32_t flags;
+
+ if (migrate_mapped_ram()) {
+ return multifd_file_recv_data(p, errp);
+ }
+
+ flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
+
+ if (flags != MULTIFD_FLAG_NOCOMP) {
+ error_setg(errp, "multifd %u: flags received %x flags expected %x",
+ p->id, flags, MULTIFD_FLAG_NOCOMP);
+ return -1;
+ }
+
+ multifd_recv_zero_page_process(p);
+
+ if (!p->normal_num) {
+ return 0;
+ }
+
+ for (int i = 0; i < p->normal_num; i++) {
+ p->iov[i].iov_base = p->host + p->normal[i];
+ p->iov[i].iov_len = multifd_ram_page_size();
+ ramblock_recv_bitmap_set_offset(p->block, p->normal[i]);
+ }
+ return qio_channel_readv_all(p->c, p->iov, p->normal_num, errp);
+}
+
+static void multifd_pages_reset(MultiFDPages_t *pages)
+{
+ /*
+ * We don't need to touch offset[] array, because it will be
+ * overwritten later when reused.
+ */
+ pages->num = 0;
+ pages->normal_num = 0;
+ pages->block = NULL;
+}
+
+void multifd_ram_fill_packet(MultiFDSendParams *p)
+{
+ MultiFDPacket_t *packet = p->packet;
+ MultiFDPages_t *pages = &p->data->u.ram;
+ uint32_t zero_num = pages->num - pages->normal_num;
+
+ packet->pages_alloc = cpu_to_be32(multifd_ram_page_count());
+ packet->normal_pages = cpu_to_be32(pages->normal_num);
+ packet->zero_pages = cpu_to_be32(zero_num);
+
+ if (pages->block) {
+ strncpy(packet->ramblock, pages->block->idstr, 256);
+ }
+
+ for (int i = 0; i < pages->num; i++) {
+ /* there are architectures where ram_addr_t is 32 bit */
+ uint64_t temp = pages->offset[i];
+
+ packet->offset[i] = cpu_to_be64(temp);
+ }
+
+ trace_multifd_send_ram_fill(p->id, pages->normal_num,
+ zero_num);
+}
+
+int multifd_ram_unfill_packet(MultiFDRecvParams *p, Error **errp)
+{
+ MultiFDPacket_t *packet = p->packet;
+ uint32_t page_count = multifd_ram_page_count();
+ uint32_t page_size = multifd_ram_page_size();
+ int i;
+
+ packet->pages_alloc = be32_to_cpu(packet->pages_alloc);
+ /*
+ * If we received a packet that is 100 times bigger than expected
+ * just stop migration. It is a magic number.
+ */
+ if (packet->pages_alloc > page_count) {
+ error_setg(errp, "multifd: received packet "
+ "with size %u and expected a size of %u",
+ packet->pages_alloc, page_count) ;
+ return -1;
+ }
+
+ p->normal_num = be32_to_cpu(packet->normal_pages);
+ if (p->normal_num > packet->pages_alloc) {
+ error_setg(errp, "multifd: received packet "
+ "with %u normal pages and expected maximum pages are %u",
+ p->normal_num, packet->pages_alloc) ;
+ return -1;
+ }
+
+ p->zero_num = be32_to_cpu(packet->zero_pages);
+ if (p->zero_num > packet->pages_alloc - p->normal_num) {
+ error_setg(errp, "multifd: received packet "
+ "with %u zero pages and expected maximum zero pages are %u",
+ p->zero_num, packet->pages_alloc - p->normal_num) ;
+ return -1;
+ }
+
+ if (p->normal_num == 0 && p->zero_num == 0) {
+ return 0;
+ }
+
+ /* make sure that ramblock is 0 terminated */
+ packet->ramblock[255] = 0;
+ p->block = qemu_ram_block_by_name(packet->ramblock);
+ if (!p->block) {
+ error_setg(errp, "multifd: unknown ram block %s",
+ packet->ramblock);
+ return -1;
+ }
+
+ p->host = p->block->host;
+ for (i = 0; i < p->normal_num; i++) {
+ uint64_t offset = be64_to_cpu(packet->offset[i]);
+
+ if (offset > (p->block->used_length - page_size)) {
+ error_setg(errp, "multifd: offset too long %" PRIu64
+ " (max " RAM_ADDR_FMT ")",
+ offset, p->block->used_length);
+ return -1;
+ }
+ p->normal[i] = offset;
+ }
+
+ for (i = 0; i < p->zero_num; i++) {
+ uint64_t offset = be64_to_cpu(packet->offset[p->normal_num + i]);
+
+ if (offset > (p->block->used_length - page_size)) {
+ error_setg(errp, "multifd: offset too long %" PRIu64
+ " (max " RAM_ADDR_FMT ")",
+ offset, p->block->used_length);
+ return -1;
+ }
+ p->zero[i] = offset;
+ }
+
+ return 0;
+}
+
+static inline bool multifd_queue_empty(MultiFDPages_t *pages)
+{
+ return pages->num == 0;
+}
+
+static inline bool multifd_queue_full(MultiFDPages_t *pages)
+{
+ return pages->num == multifd_ram_page_count();
+}
+
+static inline void multifd_enqueue(MultiFDPages_t *pages, ram_addr_t offset)
+{
+ pages->offset[pages->num++] = offset;
+}
+
+/* Returns true if enqueue successful, false otherwise */
+bool multifd_queue_page(RAMBlock *block, ram_addr_t offset)
+{
+ MultiFDPages_t *pages;
+
+retry:
+ pages = &multifd_ram_send->u.ram;
+
+ if (multifd_payload_empty(multifd_ram_send)) {
+ multifd_pages_reset(pages);
+ multifd_set_payload_type(multifd_ram_send, MULTIFD_PAYLOAD_RAM);
+ }
+
+ /* If the queue is empty, we can already enqueue now */
+ if (multifd_queue_empty(pages)) {
+ pages->block = block;
+ multifd_enqueue(pages, offset);
+ return true;
+ }
+
+ /*
+ * Not empty, meanwhile we need a flush. It can be because of either:
+ *
+ * (1) The page is not on the same ramblock of previous ones, or,
+ * (2) The queue is full.
+ *
+ * After flush, always retry.
+ */
+ if (pages->block != block || multifd_queue_full(pages)) {
+ if (!multifd_send(&multifd_ram_send)) {
+ return false;
+ }
+ goto retry;
+ }
+
+ /* Not empty, and we still have space, do it! */
+ multifd_enqueue(pages, offset);
+ return true;
+}
+
+int multifd_ram_flush_and_sync(void)
+{
+ if (!migrate_multifd()) {
+ return 0;
+ }
+
+ if (!multifd_payload_empty(multifd_ram_send)) {
+ if (!multifd_send(&multifd_ram_send)) {
+ error_report("%s: multifd_send fail", __func__);
+ return -1;
+ }
+ }
+
+ return multifd_send_sync_main();
+}
+
+bool multifd_send_prepare_common(MultiFDSendParams *p)
+{
+ MultiFDPages_t *pages = &p->data->u.ram;
+ multifd_send_zero_page_detect(p);
+
+ if (!pages->normal_num) {
+ p->next_packet_size = 0;
+ return false;
+ }
+
+ multifd_send_prepare_header(p);
+
+ return true;
+}
+
+static MultiFDMethods multifd_nocomp_ops = {
+ .send_setup = multifd_nocomp_send_setup,
+ .send_cleanup = multifd_nocomp_send_cleanup,
+ .send_prepare = multifd_nocomp_send_prepare,
+ .recv_setup = multifd_nocomp_recv_setup,
+ .recv_cleanup = multifd_nocomp_recv_cleanup,
+ .recv = multifd_nocomp_recv
+};
+
+static void multifd_nocomp_register(void)
+{
+ multifd_register_ops(MULTIFD_COMPRESSION_NONE, &multifd_nocomp_ops);
+}
+
+migration_init(multifd_nocomp_register);
diff --git a/migration/multifd.c b/migration/multifd.c
index e100836cbe..0c07a2040b 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -96,20 +96,7 @@ struct {
MultiFDMethods *ops;
} *multifd_recv_state;
-static MultiFDSendData *multifd_ram_send;
-
-static size_t multifd_ram_payload_size(void)
-{
- uint32_t n = multifd_ram_page_count();
-
- /*
- * We keep an array of page offsets at the end of MultiFDPages_t,
- * add space for it in the allocation.
- */
- return sizeof(MultiFDPages_t) + n * sizeof(ram_addr_t);
-}
-
-static MultiFDSendData *multifd_send_data_alloc(void)
+MultiFDSendData *multifd_send_data_alloc(void)
{
size_t max_payload_size, size_minus_payload;
@@ -131,17 +118,6 @@ static MultiFDSendData *multifd_send_data_alloc(void)
return g_malloc0(size_minus_payload + max_payload_size);
}
-void multifd_ram_save_setup(void)
-{
- multifd_ram_send = multifd_send_data_alloc();
-}
-
-void multifd_ram_save_cleanup(void)
-{
- g_free(multifd_ram_send);
- multifd_ram_send = NULL;
-}
-
static bool multifd_use_packets(void)
{
return !migrate_mapped_ram();
@@ -152,141 +128,6 @@ void multifd_send_channel_created(void)
qemu_sem_post(&multifd_send_state->channels_created);
}
-static void multifd_set_file_bitmap(MultiFDSendParams *p)
-{
- MultiFDPages_t *pages = &p->data->u.ram;
-
- assert(pages->block);
-
- for (int i = 0; i < pages->normal_num; i++) {
- ramblock_set_file_bmap_atomic(pages->block, pages->offset[i], true);
- }
-
- for (int i = pages->normal_num; i < pages->num; i++) {
- ramblock_set_file_bmap_atomic(pages->block, pages->offset[i], false);
- }
-}
-
-static int multifd_nocomp_send_setup(MultiFDSendParams *p, Error **errp)
-{
- uint32_t page_count = multifd_ram_page_count();
-
- if (migrate_zero_copy_send()) {
- p->write_flags |= QIO_CHANNEL_WRITE_FLAG_ZERO_COPY;
- }
-
- if (multifd_use_packets()) {
- /* We need one extra place for the packet header */
- p->iov = g_new0(struct iovec, page_count + 1);
- } else {
- p->iov = g_new0(struct iovec, page_count);
- }
-
- return 0;
-}
-
-static void multifd_nocomp_send_cleanup(MultiFDSendParams *p, Error **errp)
-{
- g_free(p->iov);
- p->iov = NULL;
- return;
-}
-
-static void multifd_send_prepare_iovs(MultiFDSendParams *p)
-{
- MultiFDPages_t *pages = &p->data->u.ram;
- uint32_t page_size = multifd_ram_page_size();
-
- for (int i = 0; i < pages->normal_num; i++) {
- p->iov[p->iovs_num].iov_base = pages->block->host + pages->offset[i];
- p->iov[p->iovs_num].iov_len = page_size;
- p->iovs_num++;
- }
-
- p->next_packet_size = pages->normal_num * page_size;
-}
-
-static int multifd_nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
-{
- bool use_zero_copy_send = migrate_zero_copy_send();
- int ret;
-
- multifd_send_zero_page_detect(p);
-
- if (!multifd_use_packets()) {
- multifd_send_prepare_iovs(p);
- multifd_set_file_bitmap(p);
-
- return 0;
- }
-
- if (!use_zero_copy_send) {
- /*
- * Only !zerocopy needs the header in IOV; zerocopy will
- * send it separately.
- */
- multifd_send_prepare_header(p);
- }
-
- multifd_send_prepare_iovs(p);
- p->flags |= MULTIFD_FLAG_NOCOMP;
-
- multifd_send_fill_packet(p);
-
- if (use_zero_copy_send) {
- /* Send header first, without zerocopy */
- ret = qio_channel_write_all(p->c, (void *)p->packet,
- p->packet_len, errp);
- if (ret != 0) {
- return -1;
- }
- }
-
- return 0;
-}
-
-static int multifd_nocomp_recv_setup(MultiFDRecvParams *p, Error **errp)
-{
- p->iov = g_new0(struct iovec, multifd_ram_page_count());
- return 0;
-}
-
-static void multifd_nocomp_recv_cleanup(MultiFDRecvParams *p)
-{
- g_free(p->iov);
- p->iov = NULL;
-}
-
-static int multifd_nocomp_recv(MultiFDRecvParams *p, Error **errp)
-{
- uint32_t flags;
-
- if (!multifd_use_packets()) {
- return multifd_file_recv_data(p, errp);
- }
-
- flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
-
- if (flags != MULTIFD_FLAG_NOCOMP) {
- error_setg(errp, "multifd %u: flags received %x flags expected %x",
- p->id, flags, MULTIFD_FLAG_NOCOMP);
- return -1;
- }
-
- multifd_recv_zero_page_process(p);
-
- if (!p->normal_num) {
- return 0;
- }
-
- for (int i = 0; i < p->normal_num; i++) {
- p->iov[i].iov_base = p->host + p->normal[i];
- p->iov[i].iov_len = multifd_ram_page_size();
- ramblock_recv_bitmap_set_offset(p->block, p->normal[i]);
- }
- return qio_channel_readv_all(p->c, p->iov, p->normal_num, errp);
-}
-
static MultiFDMethods *multifd_ops[MULTIFD_COMPRESSION__MAX] = {};
void multifd_register_ops(int method, MultiFDMethods *ops)
@@ -296,18 +137,6 @@ void multifd_register_ops(int method, MultiFDMethods *ops)
multifd_ops[method] = ops;
}
-/* Reset a MultiFDPages_t* object for the next use */
-static void multifd_pages_reset(MultiFDPages_t *pages)
-{
- /*
- * We don't need to touch offset[] array, because it will be
- * overwritten later when reused.
- */
- pages->num = 0;
- pages->normal_num = 0;
- pages->block = NULL;
-}
-
static int multifd_send_initial_packet(MultiFDSendParams *p, Error **errp)
{
MultiFDInit_t msg = {};
@@ -372,30 +201,6 @@ static int multifd_recv_initial_packet(QIOChannel *c, Error **errp)
return msg.id;
}
-static void multifd_ram_fill_packet(MultiFDSendParams *p)
-{
- MultiFDPacket_t *packet = p->packet;
- MultiFDPages_t *pages = &p->data->u.ram;
- uint32_t zero_num = pages->num - pages->normal_num;
-
- packet->pages_alloc = cpu_to_be32(multifd_ram_page_count());
- packet->normal_pages = cpu_to_be32(pages->normal_num);
- packet->zero_pages = cpu_to_be32(zero_num);
-
- if (pages->block) {
- strncpy(packet->ramblock, pages->block->idstr, 256);
- }
-
- for (int i = 0; i < pages->num; i++) {
- /* there are architectures where ram_addr_t is 32 bit */
- uint64_t temp = pages->offset[i];
-
- packet->offset[i] = cpu_to_be64(temp);
- }
-
- trace_multifd_send_ram_fill(p->id, pages->normal_num, zero_num);
-}
-
void multifd_send_fill_packet(MultiFDSendParams *p)
{
MultiFDPacket_t *packet = p->packet;
@@ -423,82 +228,6 @@ void multifd_send_fill_packet(MultiFDSendParams *p)
p->flags, p->next_packet_size);
}
-static int multifd_ram_unfill_packet(MultiFDRecvParams *p, Error **errp)
-{
- MultiFDPacket_t *packet = p->packet;
- uint32_t page_count = multifd_ram_page_count();
- uint32_t page_size = multifd_ram_page_size();
- int i;
-
- packet->pages_alloc = be32_to_cpu(packet->pages_alloc);
- /*
- * If we received a packet that is 100 times bigger than expected
- * just stop migration. It is a magic number.
- */
- if (packet->pages_alloc > page_count) {
- error_setg(errp, "multifd: received packet "
- "with size %u and expected a size of %u",
- packet->pages_alloc, page_count) ;
- return -1;
- }
-
- p->normal_num = be32_to_cpu(packet->normal_pages);
- if (p->normal_num > packet->pages_alloc) {
- error_setg(errp, "multifd: received packet "
- "with %u normal pages and expected maximum pages are %u",
- p->normal_num, packet->pages_alloc) ;
- return -1;
- }
-
- p->zero_num = be32_to_cpu(packet->zero_pages);
- if (p->zero_num > packet->pages_alloc - p->normal_num) {
- error_setg(errp, "multifd: received packet "
- "with %u zero pages and expected maximum zero pages are %u",
- p->zero_num, packet->pages_alloc - p->normal_num) ;
- return -1;
- }
-
- if (p->normal_num == 0 && p->zero_num == 0) {
- return 0;
- }
-
- /* make sure that ramblock is 0 terminated */
- packet->ramblock[255] = 0;
- p->block = qemu_ram_block_by_name(packet->ramblock);
- if (!p->block) {
- error_setg(errp, "multifd: unknown ram block %s",
- packet->ramblock);
- return -1;
- }
-
- p->host = p->block->host;
- for (i = 0; i < p->normal_num; i++) {
- uint64_t offset = be64_to_cpu(packet->offset[i]);
-
- if (offset > (p->block->used_length - page_size)) {
- error_setg(errp, "multifd: offset too long %" PRIu64
- " (max " RAM_ADDR_FMT ")",
- offset, p->block->used_length);
- return -1;
- }
- p->normal[i] = offset;
- }
-
- for (i = 0; i < p->zero_num; i++) {
- uint64_t offset = be64_to_cpu(packet->offset[p->normal_num + i]);
-
- if (offset > (p->block->used_length - page_size)) {
- error_setg(errp, "multifd: offset too long %" PRIu64
- " (max " RAM_ADDR_FMT ")",
- offset, p->block->used_length);
- return -1;
- }
- p->zero[i] = offset;
- }
-
- return 0;
-}
-
static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
{
MultiFDPacket_t *packet = p->packet;
@@ -571,7 +300,7 @@ static void multifd_send_kick_main(MultiFDSendParams *p)
*
* Returns true if succeed, false otherwise.
*/
-static bool multifd_send(MultiFDSendData **send_data)
+bool multifd_send(MultiFDSendData **send_data)
{
int i;
static int next_channel;
@@ -632,61 +361,6 @@ static bool multifd_send(MultiFDSendData **send_data)
return true;
}
-static inline bool multifd_queue_empty(MultiFDPages_t *pages)
-{
- return pages->num == 0;
-}
-
-static inline bool multifd_queue_full(MultiFDPages_t *pages)
-{
- return pages->num == multifd_ram_page_count();
-}
-
-static inline void multifd_enqueue(MultiFDPages_t *pages, ram_addr_t offset)
-{
- pages->offset[pages->num++] = offset;
-}
-
-/* Returns true if enqueue successful, false otherwise */
-bool multifd_queue_page(RAMBlock *block, ram_addr_t offset)
-{
- MultiFDPages_t *pages;
-
-retry:
- pages = &multifd_ram_send->u.ram;
-
- if (multifd_payload_empty(multifd_ram_send)) {
- multifd_pages_reset(pages);
- multifd_set_payload_type(multifd_ram_send, MULTIFD_PAYLOAD_RAM);
- }
-
- /* If the queue is empty, we can already enqueue now */
- if (multifd_queue_empty(pages)) {
- pages->block = block;
- multifd_enqueue(pages, offset);
- return true;
- }
-
- /*
- * Not empty, meanwhile we need a flush. It can because of either:
- *
- * (1) The page is not on the same ramblock of previous ones, or,
- * (2) The queue is full.
- *
- * After flush, always retry.
- */
- if (pages->block != block || multifd_queue_full(pages)) {
- if (!multifd_send(&multifd_ram_send)) {
- return false;
- }
- goto retry;
- }
-
- /* Not empty, and we still have space, do it! */
- multifd_enqueue(pages, offset);
- return true;
-}
-
/* Multifd send side hit an error; remember it and prepare to quit */
static void multifd_send_set_error(Error *err)
{
@@ -850,22 +524,6 @@ static int multifd_zero_copy_flush(QIOChannel *c)
return ret;
}
-int multifd_ram_flush_and_sync(void)
-{
- if (!migrate_multifd()) {
- return 0;
- }
-
- if (!multifd_payload_empty(multifd_ram_send)) {
- if (!multifd_send(&multifd_ram_send)) {
- error_report("%s: multifd_send fail", __func__);
- return -1;
- }
- }
-
- return multifd_send_sync_main();
-}
-
int multifd_send_sync_main(void)
{
int i;
@@ -1676,34 +1334,3 @@ void multifd_recv_new_channel(QIOChannel *ioc, Error **errp)
QEMU_THREAD_JOINABLE);
qatomic_inc(&multifd_recv_state->count);
}
-
-bool multifd_send_prepare_common(MultiFDSendParams *p)
-{
- MultiFDPages_t *pages = &p->data->u.ram;
- multifd_send_zero_page_detect(p);
-
- if (!pages->normal_num) {
- p->next_packet_size = 0;
- return false;
- }
-
- multifd_send_prepare_header(p);
-
- return true;
-}
-
-static MultiFDMethods multifd_nocomp_ops = {
- .send_setup = multifd_nocomp_send_setup,
- .send_cleanup = multifd_nocomp_send_cleanup,
- .send_prepare = multifd_nocomp_send_prepare,
- .recv_setup = multifd_nocomp_recv_setup,
- .recv_cleanup = multifd_nocomp_recv_cleanup,
- .recv = multifd_nocomp_recv
-};
-
-static void multifd_nocomp_register(void)
-{
- multifd_register_ops(MULTIFD_COMPRESSION_NONE, &multifd_nocomp_ops);
-}
-
-migration_init(multifd_nocomp_register);
diff --git a/migration/multifd.h b/migration/multifd.h
index 00c872dfda..a3e35196d1 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -257,6 +257,8 @@ static inline void multifd_send_prepare_header(MultiFDSendParams *p)
}
void multifd_channel_connect(MultiFDSendParams *p, QIOChannel *ioc);
+bool multifd_send(MultiFDSendData **send_data);
+MultiFDSendData *multifd_send_data_alloc(void);
static inline uint32_t multifd_ram_page_size(void)
{
@@ -271,4 +273,7 @@ static inline uint32_t multifd_ram_page_count(void)
void multifd_ram_save_setup(void);
void multifd_ram_save_cleanup(void);
int multifd_ram_flush_and_sync(void);
+size_t multifd_ram_payload_size(void);
+void multifd_ram_fill_packet(MultiFDSendParams *p);
+int multifd_ram_unfill_packet(MultiFDRecvParams *p, Error **errp);
#endif
--
2.35.3
* [PATCH v6 17/19] migration/multifd: Make MultiFDMethods const
2024-08-27 17:45 [PATCH v6 00/19] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
` (15 preceding siblings ...)
2024-08-27 17:46 ` [PATCH v6 16/19] migration/multifd: Move nocomp code into multifd-nocomp.c Fabiano Rosas
@ 2024-08-27 17:46 ` Fabiano Rosas
2024-08-27 17:46 ` [PATCH v6 18/19] migration/multifd: Stop changing the packet on recv side Fabiano Rosas
2024-08-27 17:46 ` [PATCH v6 19/19] migration/multifd: Add documentation for multifd methods Fabiano Rosas
18 siblings, 0 replies; 34+ messages in thread
From: Fabiano Rosas @ 2024-08-27 17:46 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero, Philippe Mathieu-Daudé
The methods are defined at module_init time and don't ever
change. Make them const.
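The benefit of the const-qualification can be sketched with a minimal stand-in (simplified names, not QEMU's actual types): a `const` method table can be placed in read-only memory, and consumers taking a pointer-to-const can call through it but cannot accidentally rebind a method.

```c
#include <assert.h>
#include <stddef.h>

typedef struct {
    int (*recv)(void *p);
} Methods;

static int nocomp_recv(void *p)
{
    return 0;
}

/* const: initialized once, never written again after registration */
static const Methods nocomp_ops = {
    .recv = nocomp_recv,
};

/* Callers can dispatch through the table, but an assignment such as
 * ops->recv = other_fn would now be a compile-time error. */
static int dispatch(const Methods *ops, void *p)
{
    return ops->recv(p);
}
```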
Suggested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
migration/multifd-nocomp.c | 2 +-
migration/multifd-qpl.c | 2 +-
migration/multifd-uadk.c | 2 +-
migration/multifd-zlib.c | 2 +-
migration/multifd-zstd.c | 2 +-
migration/multifd.c | 8 ++++----
migration/multifd.h | 2 +-
7 files changed, 10 insertions(+), 10 deletions(-)
diff --git a/migration/multifd-nocomp.c b/migration/multifd-nocomp.c
index 53ea9f9c83..f294d1b0b2 100644
--- a/migration/multifd-nocomp.c
+++ b/migration/multifd-nocomp.c
@@ -377,7 +377,7 @@ bool multifd_send_prepare_common(MultiFDSendParams *p)
return true;
}
-static MultiFDMethods multifd_nocomp_ops = {
+static const MultiFDMethods multifd_nocomp_ops = {
.send_setup = multifd_nocomp_send_setup,
.send_cleanup = multifd_nocomp_send_cleanup,
.send_prepare = multifd_nocomp_send_prepare,
diff --git a/migration/multifd-qpl.c b/migration/multifd-qpl.c
index 75041a4c4d..b0f1e2ba46 100644
--- a/migration/multifd-qpl.c
+++ b/migration/multifd-qpl.c
@@ -694,7 +694,7 @@ static int multifd_qpl_recv(MultiFDRecvParams *p, Error **errp)
return multifd_qpl_decompress_pages_slow_path(p, errp);
}
-static MultiFDMethods multifd_qpl_ops = {
+static const MultiFDMethods multifd_qpl_ops = {
.send_setup = multifd_qpl_send_setup,
.send_cleanup = multifd_qpl_send_cleanup,
.send_prepare = multifd_qpl_send_prepare,
diff --git a/migration/multifd-uadk.c b/migration/multifd-uadk.c
index db2549f59b..89f6a72f0e 100644
--- a/migration/multifd-uadk.c
+++ b/migration/multifd-uadk.c
@@ -305,7 +305,7 @@ static int multifd_uadk_recv(MultiFDRecvParams *p, Error **errp)
return 0;
}
-static MultiFDMethods multifd_uadk_ops = {
+static const MultiFDMethods multifd_uadk_ops = {
.send_setup = multifd_uadk_send_setup,
.send_cleanup = multifd_uadk_send_cleanup,
.send_prepare = multifd_uadk_send_prepare,
diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index 6787538762..8cf8a26bb4 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -277,7 +277,7 @@ static int multifd_zlib_recv(MultiFDRecvParams *p, Error **errp)
return 0;
}
-static MultiFDMethods multifd_zlib_ops = {
+static const MultiFDMethods multifd_zlib_ops = {
.send_setup = multifd_zlib_send_setup,
.send_cleanup = multifd_zlib_send_cleanup,
.send_prepare = multifd_zlib_send_prepare,
diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index 1576b1e2ad..53da33e048 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -265,7 +265,7 @@ static int multifd_zstd_recv(MultiFDRecvParams *p, Error **errp)
return 0;
}
-static MultiFDMethods multifd_zstd_ops = {
+static const MultiFDMethods multifd_zstd_ops = {
.send_setup = multifd_zstd_send_setup,
.send_cleanup = multifd_zstd_send_cleanup,
.send_prepare = multifd_zstd_send_prepare,
diff --git a/migration/multifd.c b/migration/multifd.c
index 0c07a2040b..b89715fdc2 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -76,7 +76,7 @@ struct {
*/
int exiting;
/* multifd ops */
- MultiFDMethods *ops;
+ const MultiFDMethods *ops;
} *multifd_send_state;
struct {
@@ -93,7 +93,7 @@ struct {
uint64_t packet_num;
int exiting;
/* multifd ops */
- MultiFDMethods *ops;
+ const MultiFDMethods *ops;
} *multifd_recv_state;
MultiFDSendData *multifd_send_data_alloc(void)
@@ -128,9 +128,9 @@ void multifd_send_channel_created(void)
qemu_sem_post(&multifd_send_state->channels_created);
}
-static MultiFDMethods *multifd_ops[MULTIFD_COMPRESSION__MAX] = {};
+static const MultiFDMethods *multifd_ops[MULTIFD_COMPRESSION__MAX] = {};
-void multifd_register_ops(int method, MultiFDMethods *ops)
+void multifd_register_ops(int method, const MultiFDMethods *ops)
{
assert(0 <= method && method < MULTIFD_COMPRESSION__MAX);
assert(!multifd_ops[method]);
diff --git a/migration/multifd.h b/migration/multifd.h
index a3e35196d1..13e7a88c01 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -243,7 +243,7 @@ typedef struct {
int (*recv)(MultiFDRecvParams *p, Error **errp);
} MultiFDMethods;
-void multifd_register_ops(int method, MultiFDMethods *ops);
+void multifd_register_ops(int method, const MultiFDMethods *ops);
void multifd_send_fill_packet(MultiFDSendParams *p);
bool multifd_send_prepare_common(MultiFDSendParams *p);
void multifd_send_zero_page_detect(MultiFDSendParams *p);
--
2.35.3
* [PATCH v6 18/19] migration/multifd: Stop changing the packet on recv side
2024-08-27 17:45 [PATCH v6 00/19] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
` (16 preceding siblings ...)
2024-08-27 17:46 ` [PATCH v6 17/19] migration/multifd: Make MultiFDMethods const Fabiano Rosas
@ 2024-08-27 17:46 ` Fabiano Rosas
2024-08-27 18:07 ` Peter Xu
2024-08-27 17:46 ` [PATCH v6 19/19] migration/multifd: Add documentation for multifd methods Fabiano Rosas
18 siblings, 1 reply; 34+ messages in thread
From: Fabiano Rosas @ 2024-08-27 17:46 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero, Philippe Mathieu-Daudé
As observed by Philippe, the multifd_ram_unfill_packet() function
currently leaves the MultiFDPacket structure with mixed
endianness. This is harmless, but ultimately not very clean. Aside
from that, the packet is also written to on the recv side to ensure
the ramblock name is null-terminated.
Stop touching the received packet and do the necessary work using
stack variables instead.
While here, tweak the error strings and fix the spaces before
semicolons. Also remove the "100 times bigger" comment, since it's
just one possible explanation for a size mismatch and it doesn't even
match the code.
CC: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
migration/multifd-nocomp.c | 40 ++++++++++++++++----------------------
migration/multifd.c | 20 +++++++++----------
2 files changed, 26 insertions(+), 34 deletions(-)
diff --git a/migration/multifd-nocomp.c b/migration/multifd-nocomp.c
index f294d1b0b2..a759470c9c 100644
--- a/migration/multifd-nocomp.c
+++ b/migration/multifd-nocomp.c
@@ -217,36 +217,32 @@ void multifd_ram_fill_packet(MultiFDSendParams *p)
int multifd_ram_unfill_packet(MultiFDRecvParams *p, Error **errp)
{
- MultiFDPacket_t *packet = p->packet;
+ const MultiFDPacket_t *packet = p->packet;
uint32_t page_count = multifd_ram_page_count();
uint32_t page_size = multifd_ram_page_size();
+ uint32_t pages_per_packet = be32_to_cpu(packet->pages_alloc);
+ g_autofree const char *ramblock_name = NULL;
int i;
- packet->pages_alloc = be32_to_cpu(packet->pages_alloc);
- /*
- * If we received a packet that is 100 times bigger than expected
- * just stop migration. It is a magic number.
- */
- if (packet->pages_alloc > page_count) {
- error_setg(errp, "multifd: received packet "
- "with size %u and expected a size of %u",
- packet->pages_alloc, page_count) ;
+ if (pages_per_packet > page_count) {
+ error_setg(errp, "multifd: received packet with %u pages, expected %u",
+ pages_per_packet, page_count);
return -1;
}
p->normal_num = be32_to_cpu(packet->normal_pages);
- if (p->normal_num > packet->pages_alloc) {
- error_setg(errp, "multifd: received packet "
- "with %u normal pages and expected maximum pages are %u",
- p->normal_num, packet->pages_alloc) ;
+ if (p->normal_num > pages_per_packet) {
+ error_setg(errp, "multifd: received packet with %u non-zero pages, "
+ "which exceeds maximum expected pages %u",
+ p->normal_num, pages_per_packet);
return -1;
}
p->zero_num = be32_to_cpu(packet->zero_pages);
- if (p->zero_num > packet->pages_alloc - p->normal_num) {
- error_setg(errp, "multifd: received packet "
- "with %u zero pages and expected maximum zero pages are %u",
- p->zero_num, packet->pages_alloc - p->normal_num) ;
+ if (p->zero_num > pages_per_packet - p->normal_num) {
+ error_setg(errp,
+ "multifd: received packet with %u zero pages, expected maximum %u",
+ p->zero_num, pages_per_packet - p->normal_num);
return -1;
}
@@ -254,12 +250,10 @@ int multifd_ram_unfill_packet(MultiFDRecvParams *p, Error **errp)
return 0;
}
- /* make sure that ramblock is 0 terminated */
- packet->ramblock[255] = 0;
- p->block = qemu_ram_block_by_name(packet->ramblock);
+ ramblock_name = g_strndup(packet->ramblock, 255);
+ p->block = qemu_ram_block_by_name(ramblock_name);
if (!p->block) {
- error_setg(errp, "multifd: unknown ram block %s",
- packet->ramblock);
+ error_setg(errp, "multifd: unknown ram block %s", ramblock_name);
return -1;
}
diff --git a/migration/multifd.c b/migration/multifd.c
index b89715fdc2..2a8cd9174c 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -230,22 +230,20 @@ void multifd_send_fill_packet(MultiFDSendParams *p)
static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
{
- MultiFDPacket_t *packet = p->packet;
+ const MultiFDPacket_t *packet = p->packet;
+ uint32_t magic = be32_to_cpu(packet->magic);
+ uint32_t version = be32_to_cpu(packet->version);
int ret = 0;
- packet->magic = be32_to_cpu(packet->magic);
- if (packet->magic != MULTIFD_MAGIC) {
- error_setg(errp, "multifd: received packet "
- "magic %x and expected magic %x",
- packet->magic, MULTIFD_MAGIC);
+ if (magic != MULTIFD_MAGIC) {
+ error_setg(errp, "multifd: received packet magic %x, expected %x",
+ magic, MULTIFD_MAGIC);
return -1;
}
- packet->version = be32_to_cpu(packet->version);
- if (packet->version != MULTIFD_VERSION) {
- error_setg(errp, "multifd: received packet "
- "version %u and expected version %u",
- packet->version, MULTIFD_VERSION);
+ if (version != MULTIFD_VERSION) {
+ error_setg(errp, "multifd: received packet version %u, expected %u",
+ version, MULTIFD_VERSION);
return -1;
}
--
2.35.3
* Re: [PATCH v6 18/19] migration/multifd: Stop changing the packet on recv side
2024-08-27 17:46 ` [PATCH v6 18/19] migration/multifd: Stop changing the packet on recv side Fabiano Rosas
@ 2024-08-27 18:07 ` Peter Xu
2024-08-27 18:45 ` Fabiano Rosas
0 siblings, 1 reply; 34+ messages in thread
From: Peter Xu @ 2024-08-27 18:07 UTC (permalink / raw)
To: Fabiano Rosas
Cc: qemu-devel, Maciej S . Szmigiero, Philippe Mathieu-Daudé
On Tue, Aug 27, 2024 at 02:46:05PM -0300, Fabiano Rosas wrote:
> @@ -254,12 +250,10 @@ int multifd_ram_unfill_packet(MultiFDRecvParams *p, Error **errp)
> return 0;
> }
>
> - /* make sure that ramblock is 0 terminated */
> - packet->ramblock[255] = 0;
> - p->block = qemu_ram_block_by_name(packet->ramblock);
> + ramblock_name = g_strndup(packet->ramblock, 255);
I understand we want to move to a const*, however this introduces a 256B
allocation per multifd packet, which we definitely want to avoid.. I wonder
whether that's worthwhile just to make it const. :-(
I don't worry too much on the const* and vars pointed being abused /
updated when without it - the packet struct is pretty much limited only to
be referenced in this unfill function, and then we will do the load based
on MultiFDRecvParams* later anyway. So personally I'd rather lose the
const* than pay for an allocation.
Or we could also sanity check byte 255 to be '\0' (which, AFAIU, should
always be the case..), then we can get both benefits.
> + p->block = qemu_ram_block_by_name(ramblock_name);
> if (!p->block) {
> - error_setg(errp, "multifd: unknown ram block %s",
> - packet->ramblock);
> + error_setg(errp, "multifd: unknown ram block %s", ramblock_name);
> return -1;
> }
--
Peter Xu
* Re: [PATCH v6 18/19] migration/multifd: Stop changing the packet on recv side
2024-08-27 18:07 ` Peter Xu
@ 2024-08-27 18:45 ` Fabiano Rosas
2024-08-27 19:05 ` Peter Xu
0 siblings, 1 reply; 34+ messages in thread
From: Fabiano Rosas @ 2024-08-27 18:45 UTC (permalink / raw)
To: Peter Xu; +Cc: qemu-devel, Maciej S . Szmigiero, Philippe Mathieu-Daudé
Peter Xu <peterx@redhat.com> writes:
> On Tue, Aug 27, 2024 at 02:46:05PM -0300, Fabiano Rosas wrote:
>> @@ -254,12 +250,10 @@ int multifd_ram_unfill_packet(MultiFDRecvParams *p, Error **errp)
>> return 0;
>> }
>>
>> - /* make sure that ramblock is 0 terminated */
>> - packet->ramblock[255] = 0;
>> - p->block = qemu_ram_block_by_name(packet->ramblock);
>> + ramblock_name = g_strndup(packet->ramblock, 255);
>
> I understand we want to move to a const*, however this introduces a 256B
> allocation per multifd packet, which we definitely want to avoid.. I wonder
> whether that's worthwhile just to make it const. :-(
>
> I don't worry too much on the const* and vars pointed being abused /
> updated when without it - the packet struct is pretty much limited only to
> be referenced in this unfill function, and then we will do the load based
> on MultiFDRecvParams* later anyway. So personally I'd rather lose the
> const* v.s. one allocation.
>
> Or we could also sanity check byte 255 to be '\0' (which, AFAIU, should
> always be the case..), then we can get both benefits.
We can't because it breaks compat. Previous QEMUs didn't zero the
packet.
>
>> + p->block = qemu_ram_block_by_name(ramblock_name);
>> if (!p->block) {
>> - error_setg(errp, "multifd: unknown ram block %s",
>> - packet->ramblock);
>> + error_setg(errp, "multifd: unknown ram block %s", ramblock_name);
>> return -1;
>> }
* Re: [PATCH v6 18/19] migration/multifd: Stop changing the packet on recv side
2024-08-27 18:45 ` Fabiano Rosas
@ 2024-08-27 19:05 ` Peter Xu
2024-08-27 19:27 ` Fabiano Rosas
0 siblings, 1 reply; 34+ messages in thread
From: Peter Xu @ 2024-08-27 19:05 UTC (permalink / raw)
To: Fabiano Rosas
Cc: qemu-devel, Maciej S . Szmigiero, Philippe Mathieu-Daudé
On Tue, Aug 27, 2024 at 03:45:11PM -0300, Fabiano Rosas wrote:
> Peter Xu <peterx@redhat.com> writes:
>
> > On Tue, Aug 27, 2024 at 02:46:05PM -0300, Fabiano Rosas wrote:
> >> @@ -254,12 +250,10 @@ int multifd_ram_unfill_packet(MultiFDRecvParams *p, Error **errp)
> >> return 0;
> >> }
> >>
> >> - /* make sure that ramblock is 0 terminated */
> >> - packet->ramblock[255] = 0;
> >> - p->block = qemu_ram_block_by_name(packet->ramblock);
> >> + ramblock_name = g_strndup(packet->ramblock, 255);
> >
> > I understand we want to move to a const*, however this introduces a 256B
> > allocation per multifd packet, which we definitely want to avoid.. I wonder
> > whether that's worthwhile just to make it const. :-(
> >
> > I don't worry too much on the const* and vars pointed being abused /
> > updated when without it - the packet struct is pretty much limited only to
> > be referenced in this unfill function, and then we will do the load based
> > on MultiFDRecvParams* later anyway. So personally I'd rather lose the
> > const* v.s. one allocation.
> >
> > Or we could also sanity check byte 255 to be '\0' (which, AFAIU, should
> > always be the case..), then we can get both benefits.
>
> We can't because it breaks compat. Previous QEMUs didn't zero the
> packet.
Ouch!
Then.. shall we still try to avoid the allocation?
--
Peter Xu
* Re: [PATCH v6 18/19] migration/multifd: Stop changing the packet on recv side
2024-08-27 19:05 ` Peter Xu
@ 2024-08-27 19:27 ` Fabiano Rosas
2024-08-27 19:49 ` Peter Xu
0 siblings, 1 reply; 34+ messages in thread
From: Fabiano Rosas @ 2024-08-27 19:27 UTC (permalink / raw)
To: Peter Xu; +Cc: qemu-devel, Maciej S . Szmigiero, Philippe Mathieu-Daudé
Peter Xu <peterx@redhat.com> writes:
> On Tue, Aug 27, 2024 at 03:45:11PM -0300, Fabiano Rosas wrote:
>> Peter Xu <peterx@redhat.com> writes:
>>
>> > On Tue, Aug 27, 2024 at 02:46:05PM -0300, Fabiano Rosas wrote:
>> >> @@ -254,12 +250,10 @@ int multifd_ram_unfill_packet(MultiFDRecvParams *p, Error **errp)
>> >> return 0;
>> >> }
>> >>
>> >> - /* make sure that ramblock is 0 terminated */
>> >> - packet->ramblock[255] = 0;
>> >> - p->block = qemu_ram_block_by_name(packet->ramblock);
>> >> + ramblock_name = g_strndup(packet->ramblock, 255);
>> >
>> > I understand we want to move to a const*, however this introduces a 256B
>> > allocation per multifd packet, which we definitely want to avoid.. I wonder
>> > whether that's worthwhile just to make it const. :-(
>> >
>> > I don't worry too much on the const* and vars pointed being abused /
>> > updated when without it - the packet struct is pretty much limited only to
>> > be referenced in this unfill function, and then we will do the load based
>> > on MultiFDRecvParams* later anyway. So personally I'd rather lose the
>> > const* v.s. one allocation.
>> >
>> > Or we could also sanity check byte 255 to be '\0' (which, AFAIU, should
>> > always be the case..), then we can get both benefits.
>>
>> We can't because it breaks compat. Previous QEMUs didn't zero the
>> packet.
>
> Ouch!
>
> Then.. shall we still try to avoid the allocation?
Can I strcpy it to the stack?
char idstr[256];
strncpy(idstr, packet->ramblock, 256);
idstr[255] = 0;
* Re: [PATCH v6 18/19] migration/multifd: Stop changing the packet on recv side
2024-08-27 19:27 ` Fabiano Rosas
@ 2024-08-27 19:49 ` Peter Xu
0 siblings, 0 replies; 34+ messages in thread
From: Peter Xu @ 2024-08-27 19:49 UTC (permalink / raw)
To: Fabiano Rosas
Cc: qemu-devel, Maciej S . Szmigiero, Philippe Mathieu-Daudé
On Tue, Aug 27, 2024 at 04:27:42PM -0300, Fabiano Rosas wrote:
> Peter Xu <peterx@redhat.com> writes:
>
> > On Tue, Aug 27, 2024 at 03:45:11PM -0300, Fabiano Rosas wrote:
> >> Peter Xu <peterx@redhat.com> writes:
> >>
> >> > On Tue, Aug 27, 2024 at 02:46:05PM -0300, Fabiano Rosas wrote:
> >> >> @@ -254,12 +250,10 @@ int multifd_ram_unfill_packet(MultiFDRecvParams *p, Error **errp)
> >> >> return 0;
> >> >> }
> >> >>
> >> >> - /* make sure that ramblock is 0 terminated */
> >> >> - packet->ramblock[255] = 0;
> >> >> - p->block = qemu_ram_block_by_name(packet->ramblock);
> >> >> + ramblock_name = g_strndup(packet->ramblock, 255);
> >> >
> >> > I understand we want to move to a const*, however this introduces a 256B
> >> > allocation per multifd packet, which we definitely want to avoid.. I wonder
> >> > whether that's worthwhile just to make it const. :-(
> >> >
> >> > I don't worry too much on the const* and vars pointed being abused /
> >> > updated when without it - the packet struct is pretty much limited only to
> >> > be referenced in this unfill function, and then we will do the load based
> >> > on MultiFDRecvParams* later anyway. So personally I'd rather lose the
> >> > const* v.s. one allocation.
> >> >
> >> > Or we could also sanity check byte 255 to be '\0' (which, AFAIU, should
> >> > always be the case..), then we can get both benefits.
> >>
> >> We can't because it breaks compat. Previous QEMUs didn't zero the
> >> packet.
> >
> > Ouch!
> >
> > Then.. shall we still try to avoid the allocation?
>
> Can I strcpy it to the stack?
>
> char idstr[256];
>
> strncpy(idstr, packet->ramblock, 256);
> idstr[255] = 0;
Should be much better than an allocation, yes. However personally I'd
still try to avoid that.
Multifd is a performance feature, after all, so we care about perf here
more than elsewhere. Meanwhile, this is exactly the hot path on the recv
side, so it might still be wise to leave all non-trivial cosmetic changes
for later when they work against it.
--
Peter Xu
* [PATCH v6 19/19] migration/multifd: Add documentation for multifd methods
2024-08-27 17:45 [PATCH v6 00/19] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
` (17 preceding siblings ...)
2024-08-27 17:46 ` [PATCH v6 18/19] migration/multifd: Stop changing the packet on recv side Fabiano Rosas
@ 2024-08-27 17:46 ` Fabiano Rosas
2024-08-27 18:30 ` Peter Xu
18 siblings, 1 reply; 34+ messages in thread
From: Fabiano Rosas @ 2024-08-27 17:46 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero, Philippe Mathieu-Daudé
Add documentation clarifying the usage of the multifd methods. The
general idea is that the client code calls into multifd to trigger
send/recv of data and multifd then calls these hooks back from the
worker threads at opportune moments so the client can process a
portion of the data.
Suggested-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
Note that the doc is not symmetrical among send/recv because the recv
side is still wonky. It doesn't give the packet to the hooks, which
forces the p->normal, p->zero, etc. to be processed at the top level
of the threads, where no client-specific information should be.
---
migration/multifd.h | 76 +++++++++++++++++++++++++++++++++++++++++----
1 file changed, 70 insertions(+), 6 deletions(-)
diff --git a/migration/multifd.h b/migration/multifd.h
index 13e7a88c01..ebb17bdbcf 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -229,17 +229,81 @@ typedef struct {
} MultiFDRecvParams;
typedef struct {
- /* Setup for sending side */
+ /*
+ * The send_setup, send_cleanup, send_prepare are only called on
+ * the QEMU instance at the migration source.
+ */
+
+ /*
+ * Setup for sending side. Called once per channel during channel
+ * setup phase.
+ *
+ * Must allocate p->iov. If packets are in use (default), one
+ * extra iovec must be allocated for the packet header. Any memory
+ * allocated in this hook must be released at send_cleanup.
+ *
+ * p->write_flags may be used for passing flags to the QIOChannel.
+ *
+ * p->compression_data may be used by compression methods to store
+ * compression data.
+ */
int (*send_setup)(MultiFDSendParams *p, Error **errp);
- /* Cleanup for sending side */
+
+ /*
+ * Cleanup for sending side. Called once per channel during
+ * channel cleanup phase. May be empty.
+ */
void (*send_cleanup)(MultiFDSendParams *p, Error **errp);
- /* Prepare the send packet */
+
+ /*
+ * Prepare the send packet. Called from multifd_send(), with p
+ * pointing to the MultiFDSendParams of a channel that is
+ * currently idle.
+ *
+ * Must populate p->iov with the data to be sent, increment
+ * p->iovs_num to match the amount of iovecs used and set
+ * p->next_packet_size with the amount of data currently present
+ * in p->iov.
+ *
+ * Must indicate whether this is a compression packet by setting
+ * p->flags.
+ *
+ * As a last step, if packets are in use (default), must prepare
+ * the packet by calling multifd_send_fill_packet().
+ */
int (*send_prepare)(MultiFDSendParams *p, Error **errp);
- /* Setup for receiving side */
+
+ /*
+ * The recv_setup, recv_cleanup, recv are only called on the QEMU
+ * instance at the migration destination.
+ */
+
+ /*
+ * Setup for receiving side. Called once per channel during
+ * channel setup phase. May be empty.
+ *
+ * May allocate data structures for the receiving of data. May use
+ * p->iov. Compression methods may use p->compress_data.
+ */
int (*recv_setup)(MultiFDRecvParams *p, Error **errp);
- /* Cleanup for receiving side */
+
+ /*
+ * Cleanup for receiving side. Called once per channel during
+ * channel cleanup phase. May be empty.
+ */
void (*recv_cleanup)(MultiFDRecvParams *p);
- /* Read all data */
+
+ /*
+ * Data receive method. Called from multifd_recv(), with p
+ * pointing to the MultiFDRecvParams of a channel that is
+ * currently idle. Only called if there is data available to
+ * receive.
+ *
+ * Must validate p->flags according to what was set at
+ * send_prepare.
+ *
+ * Must read the data from the QIOChannel p->c.
+ */
int (*recv)(MultiFDRecvParams *p, Error **errp);
} MultiFDMethods;
--
2.35.3
* Re: [PATCH v6 19/19] migration/multifd: Add documentation for multifd methods
2024-08-27 17:46 ` [PATCH v6 19/19] migration/multifd: Add documentation for multifd methods Fabiano Rosas
@ 2024-08-27 18:30 ` Peter Xu
2024-08-27 18:54 ` Fabiano Rosas
0 siblings, 1 reply; 34+ messages in thread
From: Peter Xu @ 2024-08-27 18:30 UTC (permalink / raw)
To: Fabiano Rosas
Cc: qemu-devel, Maciej S . Szmigiero, Philippe Mathieu-Daudé
On Tue, Aug 27, 2024 at 02:46:06PM -0300, Fabiano Rosas wrote:
> Add documentation clarifying the usage of the multifd methods. The
> general idea is that the client code calls into multifd to trigger
> send/recv of data and multifd then calls these hooks back from the
> worker threads at opportune moments so the client can process a
> portion of the data.
>
> Suggested-by: Peter Xu <peterx@redhat.com>
> Signed-off-by: Fabiano Rosas <farosas@suse.de>
> ---
> Note that the doc is not symmetrical among send/recv because the recv
> side is still wonky. It doesn't give the packet to the hooks, which
> forces the p->normal, p->zero, etc. to be processed at the top level
> of the threads, where no client-specific information should be.
> ---
> migration/multifd.h | 76 +++++++++++++++++++++++++++++++++++++++++----
> 1 file changed, 70 insertions(+), 6 deletions(-)
>
> diff --git a/migration/multifd.h b/migration/multifd.h
> index 13e7a88c01..ebb17bdbcf 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -229,17 +229,81 @@ typedef struct {
> } MultiFDRecvParams;
>
> typedef struct {
> - /* Setup for sending side */
> + /*
> + * The send_setup, send_cleanup, send_prepare are only called on
> + * the QEMU instance at the migration source.
> + */
> +
> + /*
> + * Setup for sending side. Called once per channel during channel
> + * setup phase.
> + *
> + * Must allocate p->iov. If packets are in use (default), one
Pure thoughts: wonder whether we can assert(p->iov) after the hook
returns, so the code matches this line.
> + * extra iovec must be allocated for the packet header. Any memory
> + * allocated in this hook must be released at send_cleanup.
> + *
> + * p->write_flags may be used for passing flags to the QIOChannel.
> + *
> + * p->compression_data may be used by compression methods to store
> + * compression data.
> + */
> int (*send_setup)(MultiFDSendParams *p, Error **errp);
> - /* Cleanup for sending side */
> +
> + /*
> + * Cleanup for sending side. Called once per channel during
> + * channel cleanup phase. May be empty.
Hmm, if we require p->iov allocation per-ops, then they must free it here?
I wonder whether we leaked it in most compressors.
With that, I wonder whether we should also assert(p->iov == NULL) after
this one returns (squash in this same patch).
> + */
> void (*send_cleanup)(MultiFDSendParams *p, Error **errp);
> - /* Prepare the send packet */
> +
> + /*
> + * Prepare the send packet. Called from multifd_send(), with p
multifd_send_thread()?
> + * pointing to the MultiFDSendParams of a channel that is
> + * currently idle.
> + *
> + * Must populate p->iov with the data to be sent, increment
> + * p->iovs_num to match the amount of iovecs used and set
> + * p->next_packet_size with the amount of data currently present
> + * in p->iov.
> + *
> + * Must indicate whether this is a compression packet by setting
> + * p->flags.
Sigh.. I wonder whether we could avoid mentioning this, and also avoid
adding new flags for new compressors, relying on libvirt to guard things.
Then when we have the handshakes, that's something we could verify there.
> + *
> + * As a last step, if packets are in use (default), must prepare
> + * the packet by calling multifd_send_fill_packet().
> + */
> int (*send_prepare)(MultiFDSendParams *p, Error **errp);
> - /* Setup for receiving side */
> +
> + /*
> + * The recv_setup, recv_cleanup, recv are only called on the QEMU
> + * instance at the migration destination.
> + */
> +
> + /*
> + * Setup for receiving side. Called once per channel during
> + * channel setup phase. May be empty.
> + *
> + * May allocate data structures for the receiving of data. May use
> + * p->iov. Compression methods may use p->compress_data.
> + */
> int (*recv_setup)(MultiFDRecvParams *p, Error **errp);
> - /* Cleanup for receiving side */
> +
> + /*
> + * Cleanup for receiving side. Called once per channel during
> + * channel cleanup phase. May be empty.
> + */
> void (*recv_cleanup)(MultiFDRecvParams *p);
> - /* Read all data */
> +
> + /*
> + * Data receive method. Called from multifd_recv(), with p
multifd_recv_thread()?
> + * pointing to the MultiFDRecvParams of a channel that is
> + * currently idle. Only called if there is data available to
> + * receive.
> + *
> + * Must validate p->flags according to what was set at
> + * send_prepare.
> + *
> + * Must read the data from the QIOChannel p->c.
> + */
> int (*recv)(MultiFDRecvParams *p, Error **errp);
> } MultiFDMethods;
>
> --
> 2.35.3
>
--
Peter Xu
* Re: [PATCH v6 19/19] migration/multifd: Add documentation for multifd methods
2024-08-27 18:30 ` Peter Xu
@ 2024-08-27 18:54 ` Fabiano Rosas
2024-08-27 19:09 ` Peter Xu
0 siblings, 1 reply; 34+ messages in thread
From: Fabiano Rosas @ 2024-08-27 18:54 UTC (permalink / raw)
To: Peter Xu; +Cc: qemu-devel, Maciej S . Szmigiero, Philippe Mathieu-Daudé
Peter Xu <peterx@redhat.com> writes:
> On Tue, Aug 27, 2024 at 02:46:06PM -0300, Fabiano Rosas wrote:
>> Add documentation clarifying the usage of the multifd methods. The
>> general idea is that the client code calls into multifd to trigger
>> send/recv of data and multifd then calls these hooks back from the
>> worker threads at opportune moments so the client can process a
>> portion of the data.
>>
>> Suggested-by: Peter Xu <peterx@redhat.com>
>> Signed-off-by: Fabiano Rosas <farosas@suse.de>
>> ---
>> Note that the doc is not symmetrical among send/recv because the recv
>> side is still wonky. It doesn't give the packet to the hooks, which
>> forces the p->normal, p->zero, etc. to be processed at the top level
>> of the threads, where no client-specific information should be.
>> ---
>> migration/multifd.h | 76 +++++++++++++++++++++++++++++++++++++++++----
>> 1 file changed, 70 insertions(+), 6 deletions(-)
>>
>> diff --git a/migration/multifd.h b/migration/multifd.h
>> index 13e7a88c01..ebb17bdbcf 100644
>> --- a/migration/multifd.h
>> +++ b/migration/multifd.h
>> @@ -229,17 +229,81 @@ typedef struct {
>> } MultiFDRecvParams;
>>
>> typedef struct {
>> - /* Setup for sending side */
>> + /*
>> + * The send_setup, send_cleanup, send_prepare are only called on
>> + * the QEMU instance at the migration source.
>> + */
>> +
>> + /*
>> + * Setup for sending side. Called once per channel during channel
>> + * setup phase.
>> + *
>> + * Must allocate p->iov. If packets are in use (default), one
>
> Pure thoughts: wonder whether we can assert(p->iov) that after the hook
> returns in code to match this line.
Not worth the extra instructions in my opinion. It would crash
immediately once the thread touches p->iov anyway.
>
>> + * extra iovec must be allocated for the packet header. Any memory
>> + * allocated in this hook must be released at send_cleanup.
>> + *
>> + * p->write_flags may be used for passing flags to the QIOChannel.
>> + *
>> + * p->compression_data may be used by compression methods to store
>> + * compression data.
>> + */
>> int (*send_setup)(MultiFDSendParams *p, Error **errp);
>> - /* Cleanup for sending side */
>> +
>> + /*
>> + * Cleanup for sending side. Called once per channel during
>> + * channel cleanup phase. May be empty.
>
> Hmm, if we require p->iov allocation per-ops, then they must free it here?
> I wonder whether we leaked it in most compressors.
Sorry, this one shouldn't have that text.
>
> With that, I wonder whether we should also assert(p->iov == NULL) after
> this one returns (squash in this same patch).
>
>> + */
>> void (*send_cleanup)(MultiFDSendParams *p, Error **errp);
>> - /* Prepare the send packet */
>> +
>> + /*
>> + * Prepare the send packet. Called from multifd_send(), with p
>
> multifd_send_thread()?
No, I meant called as a result of multifd_send(), which is the function
the client uses to trigger a send on the thread.
>
>> + * pointing to the MultiFDSendParams of a channel that is
>> + * currently idle.
>> + *
>> + * Must populate p->iov with the data to be sent, increment
>> + * p->iovs_num to match the amount of iovecs used and set
>> + * p->next_packet_size with the amount of data currently present
>> + * in p->iov.
>> + *
>> + * Must indicate whether this is a compression packet by setting
>> + * p->flags.
>
> Sigh.. I wonder whether we could avoid mentioning this, and also we avoid
> adding new flags for new compressors, relying on libvirt guarding things.
> Then when we have the handshakes that's something we verify there.
>
I understand that part is not in the best shape, but we must document
the current state. There's no problem changing this later.
Besides, there's the whole "the migration stream should be considered
hostile" aspect, which might mean we should really keep these sanity-check
flags around in case something really weird happens, so we don't carry on
with a bad stream.
>> + *
>> + * As a last step, if packets are in use (default), must prepare
>> + * the packet by calling multifd_send_fill_packet().
>> + */
>> int (*send_prepare)(MultiFDSendParams *p, Error **errp);
>> - /* Setup for receiving side */
>> +
>> + /*
>> + * The recv_setup, recv_cleanup, recv are only called on the QEMU
>> + * instance at the migration destination.
>> + */
>> +
>> + /*
>> + * Setup for receiving side. Called once per channel during
>> + * channel setup phase. May be empty.
>> + *
>> + * May allocate data structures for the receiving of data. May use
>> + * p->iov. Compression methods may use p->compress_data.
>> + */
>> int (*recv_setup)(MultiFDRecvParams *p, Error **errp);
>> - /* Cleanup for receiving side */
>> +
>> + /*
>> + * Cleanup for receiving side. Called once per channel during
>> + * channel cleanup phase. May be empty.
>> + */
>> void (*recv_cleanup)(MultiFDRecvParams *p);
>> - /* Read all data */
>> +
>> + /*
>> + * Data receive method. Called from multifd_recv(), with p
>
> multifd_recv_thread()?
Same as before. I'll reword this somehow.
>
>> + * pointing to the MultiFDRecvParams of a channel that is
>> + * currently idle. Only called if there is data available to
>> + * receive.
>> + *
>> + * Must validate p->flags according to what was set at
>> + * send_prepare.
>> + *
>> + * Must read the data from the QIOChannel p->c.
>> + */
>> int (*recv)(MultiFDRecvParams *p, Error **errp);
>> } MultiFDMethods;
>>
>> --
>> 2.35.3
>>
* Re: [PATCH v6 19/19] migration/multifd: Add documentation for multifd methods
2024-08-27 18:54 ` Fabiano Rosas
@ 2024-08-27 19:09 ` Peter Xu
2024-08-27 19:17 ` Fabiano Rosas
0 siblings, 1 reply; 34+ messages in thread
From: Peter Xu @ 2024-08-27 19:09 UTC (permalink / raw)
To: Fabiano Rosas
Cc: qemu-devel, Maciej S . Szmigiero, Philippe Mathieu-Daudé
On Tue, Aug 27, 2024 at 03:54:51PM -0300, Fabiano Rosas wrote:
> Peter Xu <peterx@redhat.com> writes:
>
> > On Tue, Aug 27, 2024 at 02:46:06PM -0300, Fabiano Rosas wrote:
> >> Add documentation clarifying the usage of the multifd methods. The
> >> general idea is that the client code calls into multifd to trigger
> >> send/recv of data and multifd then calls these hooks back from the
> >> worker threads at opportune moments so the client can process a
> >> portion of the data.
> >>
> >> Suggested-by: Peter Xu <peterx@redhat.com>
> >> Signed-off-by: Fabiano Rosas <farosas@suse.de>
> >> ---
> >> Note that the doc is not symmetrical among send/recv because the recv
> >> side is still wonky. It doesn't give the packet to the hooks, which
> >> forces the p->normal, p->zero, etc. to be processed at the top level
> >> of the threads, where no client-specific information should be.
> >> ---
> >> migration/multifd.h | 76 +++++++++++++++++++++++++++++++++++++++++----
> >> 1 file changed, 70 insertions(+), 6 deletions(-)
> >>
> >> diff --git a/migration/multifd.h b/migration/multifd.h
> >> index 13e7a88c01..ebb17bdbcf 100644
> >> --- a/migration/multifd.h
> >> +++ b/migration/multifd.h
> >> @@ -229,17 +229,81 @@ typedef struct {
> >> } MultiFDRecvParams;
> >>
> >> typedef struct {
> >> - /* Setup for sending side */
> >> + /*
> >> + * The send_setup, send_cleanup, send_prepare are only called on
> >> + * the QEMU instance at the migration source.
> >> + */
> >> +
> >> + /*
> >> + * Setup for sending side. Called once per channel during channel
> >> + * setup phase.
> >> + *
> >> + * Must allocate p->iov. If packets are in use (default), one
> >
> > Pure thoughts: wonder whether we can assert(p->iov) that after the hook
> > returns in code to match this line.
>
> Not worth the extra instructions in my opinion. It would crash
> immediately once the thread touches p->iov anyway.
It might still be good IMHO to have that assert(), not only to abort
earlier, but also as a code-styled comment. Your call when resend.
PS: feel free to queue existing patches into your own tree without
resending the whole series!
>
> >
> >> + * extra iovec must be allocated for the packet header. Any memory
> >> + * allocated in this hook must be released at send_cleanup.
> >> + *
> >> + * p->write_flags may be used for passing flags to the QIOChannel.
> >> + *
> >> + * p->compress_data may be used by compression methods to store
> >> + * compression data.
> >> + */
> >> int (*send_setup)(MultiFDSendParams *p, Error **errp);
> >> - /* Cleanup for sending side */
> >> +
> >> + /*
> >> + * Cleanup for sending side. Called once per channel during
> >> + * channel cleanup phase. May be empty.
> >
> > Hmm, if we require p->iov allocation per-ops, then they must free it here?
> > I wonder whether we leaked it in most compressors.
>
> Sorry, this one shouldn't have that text.
I still want to double check with you: did we leak the iov[] in most
compressors here, or did I overlook something?
That's definitely more important than the doc update itself.
>
> >
> > With that, I wonder whether we should also assert(p->iov == NULL) after
> > this one returns (squash in this same patch).
> >
> >> + */
> >> void (*send_cleanup)(MultiFDSendParams *p, Error **errp);
> >> - /* Prepare the send packet */
> >> +
> >> + /*
> >> + * Prepare the send packet. Called from multifd_send(), with p
> >
> > multifd_send_thread()?
>
> No, I meant called as a result of multifd_send(), which is the function
> the client uses to trigger a send on the thread.
OK, but it's confusing. Some of the rewordings you mentioned below could work.
>
> >
> >> + * pointing to the MultiFDSendParams of a channel that is
> >> + * currently idle.
> >> + *
> >> + * Must populate p->iov with the data to be sent, increment
> >> + * p->iovs_num to match the amount of iovecs used and set
> >> + * p->next_packet_size with the amount of data currently present
> >> + * in p->iov.
> >> + *
> >> + * Must indicate whether this is a compression packet by setting
> >> + * p->flags.
> >
> > Sigh.. I wonder whether we could avoid mentioning this, and also we avoid
> > adding new flags for new compressors, relying on libvirt guarding things.
> > Then when we have the handshakes that's something we verify there.
> >
>
> I understand that part is not in the best shape, but we must document
> the current state. There's no problem changing this later.
>
> Besides, there's the whole "the migration stream should be considered
> hostile" which might mean we should really be keeping these sanity check
> flags around in case something really weird happens so we don't carry on
> with a bad stream.
Yep, it's OK.
>
> >> + *
> >> + * As a last step, if packets are in use (default), must prepare
> >> + * the packet by calling multifd_send_fill_packet().
> >> + */
> >> int (*send_prepare)(MultiFDSendParams *p, Error **errp);
> >> - /* Setup for receiving side */
> >> +
> >> + /*
> >> + * The recv_setup, recv_cleanup, recv are only called on the QEMU
> >> + * instance at the migration destination.
> >> + */
> >> +
> >> + /*
> >> + * Setup for receiving side. Called once per channel during
> >> + * channel setup phase. May be empty.
> >> + *
> >> + * May allocate data structures for the receiving of data. May use
> >> + * p->iov. Compression methods may use p->compress_data.
> >> + */
> >> int (*recv_setup)(MultiFDRecvParams *p, Error **errp);
> >> - /* Cleanup for receiving side */
> >> +
> >> + /*
> >> + * Cleanup for receiving side. Called once per channel during
> >> + * channel cleanup phase. May be empty.
> >> + */
> >> void (*recv_cleanup)(MultiFDRecvParams *p);
> >> - /* Read all data */
> >> +
> >> + /*
> >> + * Data receive method. Called from multifd_recv(), with p
> >
> > multifd_recv_thread()?
>
> Same as before. I'll reword this somehow.
>
> >
> >> + * pointing to the MultiFDRecvParams of a channel that is
> >> + * currently idle. Only called if there is data available to
> >> + * receive.
> >> + *
> >> + * Must validate p->flags according to what was set at
> >> + * send_prepare.
> >> + *
> >> + * Must read the data from the QIOChannel p->c.
> >> + */
> >> int (*recv)(MultiFDRecvParams *p, Error **errp);
> >> } MultiFDMethods;
> >>
> >> --
> >> 2.35.3
> >>
>
--
Peter Xu
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v6 19/19] migration/multifd: Add documentation for multifd methods
2024-08-27 19:09 ` Peter Xu
@ 2024-08-27 19:17 ` Fabiano Rosas
2024-08-27 19:44 ` Peter Xu
0 siblings, 1 reply; 34+ messages in thread
From: Fabiano Rosas @ 2024-08-27 19:17 UTC (permalink / raw)
To: Peter Xu; +Cc: qemu-devel, Maciej S . Szmigiero, Philippe Mathieu-Daudé
Peter Xu <peterx@redhat.com> writes:
> On Tue, Aug 27, 2024 at 03:54:51PM -0300, Fabiano Rosas wrote:
>> Peter Xu <peterx@redhat.com> writes:
>>
>> > On Tue, Aug 27, 2024 at 02:46:06PM -0300, Fabiano Rosas wrote:
>> >> Add documentation clarifying the usage of the multifd methods. The
>> >> general idea is that the client code calls into multifd to trigger
>> >> send/recv of data and multifd then calls these hooks back from the
>> >> worker threads at opportune moments so the client can process a
>> >> portion of the data.
>> >>
>> >> Suggested-by: Peter Xu <peterx@redhat.com>
>> >> Signed-off-by: Fabiano Rosas <farosas@suse.de>
>> >> ---
>> >> Note that the doc is not symmetrical among send/recv because the recv
>> >> side is still wonky. It doesn't give the packet to the hooks, which
>> >> forces the p->normal, p->zero, etc. to be processed at the top level
>> >> of the threads, where no client-specific information should be.
>> >> ---
>> >> migration/multifd.h | 76 +++++++++++++++++++++++++++++++++++++++++----
>> >> 1 file changed, 70 insertions(+), 6 deletions(-)
>> >>
>> >> diff --git a/migration/multifd.h b/migration/multifd.h
>> >> index 13e7a88c01..ebb17bdbcf 100644
>> >> --- a/migration/multifd.h
>> >> +++ b/migration/multifd.h
>> >> @@ -229,17 +229,81 @@ typedef struct {
>> >> } MultiFDRecvParams;
>> >>
>> >> typedef struct {
>> >> - /* Setup for sending side */
>> >> + /*
>> >> + * The send_setup, send_cleanup, send_prepare are only called on
>> >> + * the QEMU instance at the migration source.
>> >> + */
>> >> +
>> >> + /*
>> >> + * Setup for sending side. Called once per channel during channel
>> >> + * setup phase.
>> >> + *
>> >> + * Must allocate p->iov. If packets are in use (default), one
>> >
>> > Pure thoughts: wonder whether we can assert(p->iov) that after the hook
>> > returns in code to match this line.
>>
>> Not worth the extra instructions in my opinion. It would crash
>> immediately once the thread touches p->iov anyway.
>
> It might still be good IMHO to have that assert(), not only to abort
> earlier, but also as a code-styled comment. Your call when resend.
>
> PS: feel free to queue existing patches into your own tree without
> resending the whole series!
>
>>
>> >
>> >> + * extra iovec must be allocated for the packet header. Any memory
>> >> + * allocated in this hook must be released at send_cleanup.
>> >> + *
>> >> + * p->write_flags may be used for passing flags to the QIOChannel.
>> >> + *
>> >> + * p->compress_data may be used by compression methods to store
>> >> + * compression data.
>> >> + */
>> >> int (*send_setup)(MultiFDSendParams *p, Error **errp);
>> >> - /* Cleanup for sending side */
>> >> +
>> >> + /*
>> >> + * Cleanup for sending side. Called once per channel during
>> >> + * channel cleanup phase. May be empty.
>> >
>> > Hmm, if we require p->iov allocation per-ops, then they must free it here?
>> > I wonder whether we leaked it in most compressors.
>>
>> Sorry, this one shouldn't have that text.
>
> I still want to double check with you: we leaked iov[] in most compressors
> here, or did I overlook something?
They have their own send_cleanup function where p->iov is freed.
* Re: [PATCH v6 19/19] migration/multifd: Add documentation for multifd methods
2024-08-27 19:17 ` Fabiano Rosas
@ 2024-08-27 19:44 ` Peter Xu
2024-08-27 20:22 ` Fabiano Rosas
0 siblings, 1 reply; 34+ messages in thread
From: Peter Xu @ 2024-08-27 19:44 UTC (permalink / raw)
To: Fabiano Rosas
Cc: qemu-devel, Maciej S . Szmigiero, Philippe Mathieu-Daudé
On Tue, Aug 27, 2024 at 04:17:59PM -0300, Fabiano Rosas wrote:
> Peter Xu <peterx@redhat.com> writes:
>
> > On Tue, Aug 27, 2024 at 03:54:51PM -0300, Fabiano Rosas wrote:
> >> Peter Xu <peterx@redhat.com> writes:
> >>
> >> > On Tue, Aug 27, 2024 at 02:46:06PM -0300, Fabiano Rosas wrote:
> >> >> Add documentation clarifying the usage of the multifd methods. The
> >> >> general idea is that the client code calls into multifd to trigger
> >> >> send/recv of data and multifd then calls these hooks back from the
> >> >> worker threads at opportune moments so the client can process a
> >> >> portion of the data.
> >> >>
> >> >> Suggested-by: Peter Xu <peterx@redhat.com>
> >> >> Signed-off-by: Fabiano Rosas <farosas@suse.de>
> >> >> ---
> >> >> Note that the doc is not symmetrical among send/recv because the recv
> >> >> side is still wonky. It doesn't give the packet to the hooks, which
> >> >> forces the p->normal, p->zero, etc. to be processed at the top level
> >> >> of the threads, where no client-specific information should be.
> >> >> ---
> >> >> migration/multifd.h | 76 +++++++++++++++++++++++++++++++++++++++++----
> >> >> 1 file changed, 70 insertions(+), 6 deletions(-)
> >> >>
> >> >> diff --git a/migration/multifd.h b/migration/multifd.h
> >> >> index 13e7a88c01..ebb17bdbcf 100644
> >> >> --- a/migration/multifd.h
> >> >> +++ b/migration/multifd.h
> >> >> @@ -229,17 +229,81 @@ typedef struct {
> >> >> } MultiFDRecvParams;
> >> >>
> >> >> typedef struct {
> >> >> - /* Setup for sending side */
> >> >> + /*
> >> >> + * The send_setup, send_cleanup, send_prepare are only called on
> >> >> + * the QEMU instance at the migration source.
> >> >> + */
> >> >> +
> >> >> + /*
> >> >> + * Setup for sending side. Called once per channel during channel
> >> >> + * setup phase.
> >> >> + *
> >> >> + * Must allocate p->iov. If packets are in use (default), one
> >> >
> >> > Pure thoughts: wonder whether we can assert(p->iov) that after the hook
> >> > returns in code to match this line.
> >>
> >> Not worth the extra instructions in my opinion. It would crash
> >> immediately once the thread touches p->iov anyway.
> >
> > It might still be good IMHO to have that assert(), not only to abort
> > earlier, but also as a code-styled comment. Your call when resend.
> >
> > PS: feel free to queue existing patches into your own tree without
> > resending the whole series!
> >
> >>
> >> >
> >> >> + * extra iovec must be allocated for the packet header. Any memory
> >> >> + * allocated in this hook must be released at send_cleanup.
> >> >> + *
> >> >> + * p->write_flags may be used for passing flags to the QIOChannel.
> >> >> + *
> >> >> + * p->compress_data may be used by compression methods to store
> >> >> + * compression data.
> >> >> + */
> >> >> int (*send_setup)(MultiFDSendParams *p, Error **errp);
> >> >> - /* Cleanup for sending side */
> >> >> +
> >> >> + /*
> >> >> + * Cleanup for sending side. Called once per channel during
> >> >> + * channel cleanup phase. May be empty.
> >> >
> >> > Hmm, if we require p->iov allocation per-ops, then they must free it here?
> >> > I wonder whether we leaked it in most compressors.
> >>
> >> Sorry, this one shouldn't have that text.
> >
> > I still want to double check with you: we leaked iov[] in most compressors
> > here, or did I overlook something?
>
> They have their own send_cleanup function where p->iov is freed.
Oh, so I guess I just accidentally stumbled upon
multifd_uadk_send_cleanup() when looking..
I thought I had looked at a few more, but now when I check, most of them are
indeed there; it looks like uadk is missing that.
I think it might still be a good idea to assert(iov==NULL) after the
cleanup..
--
Peter Xu
* Re: [PATCH v6 19/19] migration/multifd: Add documentation for multifd methods
2024-08-27 19:44 ` Peter Xu
@ 2024-08-27 20:22 ` Fabiano Rosas
2024-08-27 21:40 ` Peter Xu
0 siblings, 1 reply; 34+ messages in thread
From: Fabiano Rosas @ 2024-08-27 20:22 UTC (permalink / raw)
To: Peter Xu; +Cc: qemu-devel, Maciej S . Szmigiero, Philippe Mathieu-Daudé
Peter Xu <peterx@redhat.com> writes:
> On Tue, Aug 27, 2024 at 04:17:59PM -0300, Fabiano Rosas wrote:
>> Peter Xu <peterx@redhat.com> writes:
>>
>> > On Tue, Aug 27, 2024 at 03:54:51PM -0300, Fabiano Rosas wrote:
>> >> Peter Xu <peterx@redhat.com> writes:
>> >>
>> >> > On Tue, Aug 27, 2024 at 02:46:06PM -0300, Fabiano Rosas wrote:
>> >> >> Add documentation clarifying the usage of the multifd methods. The
>> >> >> general idea is that the client code calls into multifd to trigger
>> >> >> send/recv of data and multifd then calls these hooks back from the
>> >> >> worker threads at opportune moments so the client can process a
>> >> >> portion of the data.
>> >> >>
>> >> >> Suggested-by: Peter Xu <peterx@redhat.com>
>> >> >> Signed-off-by: Fabiano Rosas <farosas@suse.de>
>> >> >> ---
>> >> >> Note that the doc is not symmetrical among send/recv because the recv
>> >> >> side is still wonky. It doesn't give the packet to the hooks, which
>> >> >> forces the p->normal, p->zero, etc. to be processed at the top level
>> >> >> of the threads, where no client-specific information should be.
>> >> >> ---
>> >> >> migration/multifd.h | 76 +++++++++++++++++++++++++++++++++++++++++----
>> >> >> 1 file changed, 70 insertions(+), 6 deletions(-)
>> >> >>
>> >> >> diff --git a/migration/multifd.h b/migration/multifd.h
>> >> >> index 13e7a88c01..ebb17bdbcf 100644
>> >> >> --- a/migration/multifd.h
>> >> >> +++ b/migration/multifd.h
>> >> >> @@ -229,17 +229,81 @@ typedef struct {
>> >> >> } MultiFDRecvParams;
>> >> >>
>> >> >> typedef struct {
>> >> >> - /* Setup for sending side */
>> >> >> + /*
>> >> >> + * The send_setup, send_cleanup, send_prepare are only called on
>> >> >> + * the QEMU instance at the migration source.
>> >> >> + */
>> >> >> +
>> >> >> + /*
>> >> >> + * Setup for sending side. Called once per channel during channel
>> >> >> + * setup phase.
>> >> >> + *
>> >> >> + * Must allocate p->iov. If packets are in use (default), one
>> >> >
>> >> > Pure thoughts: wonder whether we can assert(p->iov) that after the hook
>> >> > returns in code to match this line.
>> >>
>> >> Not worth the extra instructions in my opinion. It would crash
>> >> immediately once the thread touches p->iov anyway.
>> >
>> > It might still be good IMHO to have that assert(), not only to abort
>> > earlier, but also as a code-styled comment. Your call when resend.
>> >
>> > PS: feel free to queue existing patches into your own tree without
>> > resending the whole series!
>> >
>> >>
>> >> >
>> >> >> + * extra iovec must be allocated for the packet header. Any memory
>> >> >> + * allocated in this hook must be released at send_cleanup.
>> >> >> + *
>> >> >> + * p->write_flags may be used for passing flags to the QIOChannel.
>> >> >> + *
>> >> >> + * p->compress_data may be used by compression methods to store
>> >> >> + * compression data.
>> >> >> + */
>> >> >> int (*send_setup)(MultiFDSendParams *p, Error **errp);
>> >> >> - /* Cleanup for sending side */
>> >> >> +
>> >> >> + /*
>> >> >> + * Cleanup for sending side. Called once per channel during
>> >> >> + * channel cleanup phase. May be empty.
>> >> >
>> >> > Hmm, if we require p->iov allocation per-ops, then they must free it here?
>> >> > I wonder whether we leaked it in most compressors.
>> >>
>> >> Sorry, this one shouldn't have that text.
>> >
>> > I still want to double check with you: we leaked iov[] in most compressors
>> > here, or did I overlook something?
>>
>> They have their own send_cleanup function where p->iov is freed.
>
> Oh, so I guess I just accidentally stumbled upon
> multifd_uadk_send_cleanup() when looking..
Yeah, this is a bit worrying. The reason this has not shown up in the
valgrind or asan runs that Peter did recently is that uadk, qpl and soon qat are
never enabled in a regular build. I have myself introduced compilation
errors in those files that I only caught by accident at a later point
(before sending to the ml).
>
> I thought I looked a few more but now when I check most of them are indeed
> there but looks like uadk is missing that.
>
> I think it might still be a good idea to assert(iov==NULL) after the
> cleanup..
Should we maybe just free p->iov at the top level then?
* Re: [PATCH v6 19/19] migration/multifd: Add documentation for multifd methods
2024-08-27 20:22 ` Fabiano Rosas
@ 2024-08-27 21:40 ` Peter Xu
2024-08-28 13:04 ` Fabiano Rosas
0 siblings, 1 reply; 34+ messages in thread
From: Peter Xu @ 2024-08-27 21:40 UTC (permalink / raw)
To: Fabiano Rosas
Cc: qemu-devel, Maciej S . Szmigiero, Philippe Mathieu-Daudé
On Tue, Aug 27, 2024 at 05:22:32PM -0300, Fabiano Rosas wrote:
> Peter Xu <peterx@redhat.com> writes:
>
> > On Tue, Aug 27, 2024 at 04:17:59PM -0300, Fabiano Rosas wrote:
> >> Peter Xu <peterx@redhat.com> writes:
> >>
> >> > On Tue, Aug 27, 2024 at 03:54:51PM -0300, Fabiano Rosas wrote:
> >> >> Peter Xu <peterx@redhat.com> writes:
> >> >>
> >> >> > On Tue, Aug 27, 2024 at 02:46:06PM -0300, Fabiano Rosas wrote:
> >> >> >> Add documentation clarifying the usage of the multifd methods. The
> >> >> >> general idea is that the client code calls into multifd to trigger
> >> >> >> send/recv of data and multifd then calls these hooks back from the
> >> >> >> worker threads at opportune moments so the client can process a
> >> >> >> portion of the data.
> >> >> >>
> >> >> >> Suggested-by: Peter Xu <peterx@redhat.com>
> >> >> >> Signed-off-by: Fabiano Rosas <farosas@suse.de>
> >> >> >> ---
> >> >> >> Note that the doc is not symmetrical among send/recv because the recv
> >> >> >> side is still wonky. It doesn't give the packet to the hooks, which
> >> >> >> forces the p->normal, p->zero, etc. to be processed at the top level
> >> >> >> of the threads, where no client-specific information should be.
> >> >> >> ---
> >> >> >> migration/multifd.h | 76 +++++++++++++++++++++++++++++++++++++++++----
> >> >> >> 1 file changed, 70 insertions(+), 6 deletions(-)
> >> >> >>
> >> >> >> diff --git a/migration/multifd.h b/migration/multifd.h
> >> >> >> index 13e7a88c01..ebb17bdbcf 100644
> >> >> >> --- a/migration/multifd.h
> >> >> >> +++ b/migration/multifd.h
> >> >> >> @@ -229,17 +229,81 @@ typedef struct {
> >> >> >> } MultiFDRecvParams;
> >> >> >>
> >> >> >> typedef struct {
> >> >> >> - /* Setup for sending side */
> >> >> >> + /*
> >> >> >> + * The send_setup, send_cleanup, send_prepare are only called on
> >> >> >> + * the QEMU instance at the migration source.
> >> >> >> + */
> >> >> >> +
> >> >> >> + /*
> >> >> >> + * Setup for sending side. Called once per channel during channel
> >> >> >> + * setup phase.
> >> >> >> + *
> >> >> >> + * Must allocate p->iov. If packets are in use (default), one
> >> >> >
> >> >> > Pure thoughts: wonder whether we can assert(p->iov) that after the hook
> >> >> > returns in code to match this line.
> >> >>
> >> >> Not worth the extra instructions in my opinion. It would crash
> >> >> immediately once the thread touches p->iov anyway.
> >> >
> >> > It might still be good IMHO to have that assert(), not only to abort
> >> > earlier, but also as a code-styled comment. Your call when resend.
> >> >
> >> > PS: feel free to queue existing patches into your own tree without
> >> > resending the whole series!
> >> >
> >> >>
> >> >> >
> >> >> >> + * extra iovec must be allocated for the packet header. Any memory
> >> >> >> + * allocated in this hook must be released at send_cleanup.
> >> >> >> + *
> >> >> >> + * p->write_flags may be used for passing flags to the QIOChannel.
> >> >> >> + *
> >> >> >> + * p->compress_data may be used by compression methods to store
> >> >> >> + * compression data.
> >> >> >> + */
> >> >> >> int (*send_setup)(MultiFDSendParams *p, Error **errp);
> >> >> >> - /* Cleanup for sending side */
> >> >> >> +
> >> >> >> + /*
> >> >> >> + * Cleanup for sending side. Called once per channel during
> >> >> >> + * channel cleanup phase. May be empty.
> >> >> >
> >> >> > Hmm, if we require p->iov allocation per-ops, then they must free it here?
> >> >> > I wonder whether we leaked it in most compressors.
> >> >>
> >> >> Sorry, this one shouldn't have that text.
> >> >
> >> > I still want to double check with you: we leaked iov[] in most compressors
> >> > here, or did I overlook something?
> >>
> >> They have their own send_cleanup function where p->iov is freed.
> >
> > Oh, so I guess I just accidentally stumbled upon
> > multifd_uadk_send_cleanup() when looking..
>
> Yeah, this is a bit worrying. The reason this has not shown on valgrind
> or the asan that Peter ran recently is that uadk, qpl and soon qat are
> never enabled in a regular build. I have myself introduced compilation
> errors in those files that I only caught by accident at a later point
> (before sending to the ml).
I tried to manually install qpl and uadk just now but neither of them is
trivial to compile and install; I hit random errors here and there on my
first attempt.
OTOH, qatzip packages are around at least in Fedora repositories, so that
might be the easiest to reach. Not sure how that is with OpenSUSE.
Shall we perhaps draft an email and check with them? E.g., would that be
better if there were a plan for them to provide RPMs for these libraries at
some point, so that we could integrate that into CI routines?
>
> >
> > I thought I looked a few more but now when I check most of them are indeed
> > there but looks like uadk is missing that.
> >
> > I think it might still be a good idea to assert(iov==NULL) after the
> > cleanup..
>
> Should we maybe just free p->iov at the top level then?
We could, but if so it might be good to also allocate at the top level so
the hooks are paired up on these allocations/frees.
IMHO we could already always allocate iov[] to page_count+2 which is the
maximum of all compressors - currently we've got 128 pages per packet by
default, which is 128*16=2KB iov[] per channel. Not so bad when only used
during migrations.
Or we can perhaps do send_setup(..., &iovs_needed), returns how many iovs
are needed, then multifd allocates them.
--
Peter Xu
* Re: [PATCH v6 19/19] migration/multifd: Add documentation for multifd methods
2024-08-27 21:40 ` Peter Xu
@ 2024-08-28 13:04 ` Fabiano Rosas
2024-08-28 13:13 ` Peter Xu
0 siblings, 1 reply; 34+ messages in thread
From: Fabiano Rosas @ 2024-08-28 13:04 UTC (permalink / raw)
To: Peter Xu; +Cc: qemu-devel, Maciej S . Szmigiero, Philippe Mathieu-Daudé
Peter Xu <peterx@redhat.com> writes:
> On Tue, Aug 27, 2024 at 05:22:32PM -0300, Fabiano Rosas wrote:
>> Peter Xu <peterx@redhat.com> writes:
>>
>> > On Tue, Aug 27, 2024 at 04:17:59PM -0300, Fabiano Rosas wrote:
>> >> Peter Xu <peterx@redhat.com> writes:
>> >>
>> >> > On Tue, Aug 27, 2024 at 03:54:51PM -0300, Fabiano Rosas wrote:
>> >> >> Peter Xu <peterx@redhat.com> writes:
>> >> >>
>> >> >> > On Tue, Aug 27, 2024 at 02:46:06PM -0300, Fabiano Rosas wrote:
>> >> >> >> Add documentation clarifying the usage of the multifd methods. The
>> >> >> >> general idea is that the client code calls into multifd to trigger
>> >> >> >> send/recv of data and multifd then calls these hooks back from the
>> >> >> >> worker threads at opportune moments so the client can process a
>> >> >> >> portion of the data.
>> >> >> >>
>> >> >> >> Suggested-by: Peter Xu <peterx@redhat.com>
>> >> >> >> Signed-off-by: Fabiano Rosas <farosas@suse.de>
>> >> >> >> ---
>> >> >> >> Note that the doc is not symmetrical among send/recv because the recv
>> >> >> >> side is still wonky. It doesn't give the packet to the hooks, which
>> >> >> >> forces the p->normal, p->zero, etc. to be processed at the top level
>> >> >> >> of the threads, where no client-specific information should be.
>> >> >> >> ---
>> >> >> >> migration/multifd.h | 76 +++++++++++++++++++++++++++++++++++++++++----
>> >> >> >> 1 file changed, 70 insertions(+), 6 deletions(-)
>> >> >> >>
>> >> >> >> diff --git a/migration/multifd.h b/migration/multifd.h
>> >> >> >> index 13e7a88c01..ebb17bdbcf 100644
>> >> >> >> --- a/migration/multifd.h
>> >> >> >> +++ b/migration/multifd.h
>> >> >> >> @@ -229,17 +229,81 @@ typedef struct {
>> >> >> >> } MultiFDRecvParams;
>> >> >> >>
>> >> >> >> typedef struct {
>> >> >> >> - /* Setup for sending side */
>> >> >> >> + /*
>> >> >> >> + * The send_setup, send_cleanup, send_prepare are only called on
>> >> >> >> + * the QEMU instance at the migration source.
>> >> >> >> + */
>> >> >> >> +
>> >> >> >> + /*
>> >> >> >> + * Setup for sending side. Called once per channel during channel
>> >> >> >> + * setup phase.
>> >> >> >> + *
>> >> >> >> + * Must allocate p->iov. If packets are in use (default), one
>> >> >> >
>> >> >> > Pure thoughts: wonder whether we can assert(p->iov) that after the hook
>> >> >> > returns in code to match this line.
>> >> >>
>> >> >> Not worth the extra instructions in my opinion. It would crash
>> >> >> immediately once the thread touches p->iov anyway.
>> >> >
>> >> > It might still be good IMHO to have that assert(), not only to abort
>> >> > earlier, but also as a code-styled comment. Your call when resend.
>> >> >
>> >> > PS: feel free to queue existing patches into your own tree without
>> >> > resending the whole series!
>> >> >
>> >> >>
>> >> >> >
>> >> >> >> + * extra iovec must be allocated for the packet header. Any memory
>> >> >> >> + * allocated in this hook must be released at send_cleanup.
>> >> >> >> + *
>> >> >> >> + * p->write_flags may be used for passing flags to the QIOChannel.
>> >> >> >> + *
>> >> >> >> + * p->compress_data may be used by compression methods to store
>> >> >> >> + * compression data.
>> >> >> >> + */
>> >> >> >> int (*send_setup)(MultiFDSendParams *p, Error **errp);
>> >> >> >> - /* Cleanup for sending side */
>> >> >> >> +
>> >> >> >> + /*
>> >> >> >> + * Cleanup for sending side. Called once per channel during
>> >> >> >> + * channel cleanup phase. May be empty.
>> >> >> >
>> >> >> > Hmm, if we require p->iov allocation per-ops, then they must free it here?
>> >> >> > I wonder whether we leaked it in most compressors.
>> >> >>
>> >> >> Sorry, this one shouldn't have that text.
>> >> >
>> >> > I still want to double check with you: we leaked iov[] in most compressors
>> >> > here, or did I overlook something?
>> >>
>> >> They have their own send_cleanup function where p->iov is freed.
>> >
>> > Oh, so I guess I just accidentally stumbled upon
>> > multifd_uadk_send_cleanup() when looking..
>>
>> Yeah, this is a bit worrying. The reason this has not shown on valgrind
>> or the asan that Peter ran recently is that uadk, qpl and soon qat are
>> never enabled in a regular build. I have myself introduced compilation
>> errors in those files that I only caught by accident at a later point
>> (before sending to the ml).
>
> I tried to manually install qpl and uadk just now but neither of them is
> trivial to compile and install.. I hit random errors here and there in my
> first shot.
>
> OTOH, qatzip packages are around at least in Fedora repositories, so that
> might be the easiest to reach. Not sure how that is with OpenSUSE.
>
> Shall we perhaps draft an email and check with them? E.g., would that be
> better if there were a plan for them to provide RPMs for these libraries at
> some point, so that we could integrate that into CI routines?
We merged most of these things already. Now even if rpms show up at some
point we still have to deal with not being able to build that code until
then. Perhaps we could have a container that has all of these
pre-installed just to exercise the code a bit. But it still wouldn't
catch some issues because we cannot run the code due to the lack of
hardware.
>
>>
>> >
>> > I thought I looked a few more but now when I check most of them are indeed
>> > there but looks like uadk is missing that.
>> >
>> > I think it might still be a good idea to assert(iov==NULL) after the
>> > cleanup..
>>
>> Should we maybe just free p->iov at the top level then?
>
> We could, but if so it might be good to also allocate at the top level so
> the hooks are paired up on these allocations/frees.
>
> IMHO we could already always allocate iov[] to page_count+2 which is the
> maximum of all compressors - currently we've got 128 pages per packet by
> default, which is 128*16=2KB iov[] per channel. Not so bad when only used
> during migrations.
>
> Or we can perhaps do send_setup(..., &iovs_needed), returns how many iovs
> are needed, then multifd allocates them.
Let me play around with these a bit. I might just fix uadk and leave
everything else as it is for now.
* Re: [PATCH v6 19/19] migration/multifd: Add documentation for multifd methods
2024-08-28 13:04 ` Fabiano Rosas
@ 2024-08-28 13:13 ` Peter Xu
0 siblings, 0 replies; 34+ messages in thread
From: Peter Xu @ 2024-08-28 13:13 UTC (permalink / raw)
To: Fabiano Rosas
Cc: qemu-devel, Maciej S . Szmigiero, Philippe Mathieu-Daudé
On Wed, Aug 28, 2024 at 10:04:47AM -0300, Fabiano Rosas wrote:
> We merged most of these things already. Now even if rpms show up at some
> point we still have to deal with not being able to build that code until
> then. Perhaps we could have a container that has all of these
> pre-installed just to exercize the code a bit. But it still wouldn't
> catch some issues becase we cannot run the code due to the lack of
> hardware.
Yes, ultimately we may need help from the relevant people..
One last fallback plan is that we can consult them for help, at least to
make sure it's working at the end of each release; it might be helpful if
they verified the code at soft-freeze for each release. Then we can keep
development as usual and ignore these backends during dev cycles.
If we find some feature broken (e.g. failing to compile) for multiple
releases, it may mean that upstream has nobody using it, and then we can
suggest obsoleting it.
--
Peter Xu