* [PATCH v3 00/14] migration/multifd: Remove multifd_send_state->pages
@ 2024-08-01 12:35 Fabiano Rosas
  2024-08-01 12:35 ` [PATCH v3 01/14] migration/multifd: Reduce access to p->pages Fabiano Rosas
                   ` (14 more replies)
  0 siblings, 15 replies; 37+ messages in thread
From: Fabiano Rosas @ 2024-08-01 12:35 UTC
  To: qemu-devel; +Cc: Peter Xu, Maciej S. Szmigiero

Hi,

This v3 incorporates the suggestions made by Peter on v2. Aside from
those, of note:

- fixed the allocation of MultiFDSendData. The previous version
  didn't account for compiler-inserted holes (see the sketch after
  this list);

- kept the packet split patch;

- added some patches to remove p->page_count, p->page_size,
  pages->allocated. These are all constants and don't need to be
  per-channel;

- moved the code into multifd-ram.c.

  However, I left the p->packet allocation (depends on page_count) and
  p->normal + p->zero behind, because I need to see how the device
  state patches will deal with the packet handling before I can come
  up with a way to move those out of the MultiFD*Params. It might not
  be worth adding another struct just for the ram code to store
  p->normal and p->zero.
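
A rough sketch of what I mean by accounting for the holes. This is
illustrative only: it assumes the final layout of the series, where
MultiFDSendData wraps a payload union 'u' whose RAM member ends in a
flexible offset[] array:

    static MultiFDSendData *multifd_send_data_alloc(void)
    {
        size_t payload = sizeof(MultiFDPages_t) +
                         multifd_ram_page_count() * sizeof(ram_addr_t);

        /*
         * Size the allocation from offsetof(), not
         * sizeof(MultiFDSendData): that counts any compiler-inserted
         * hole before 'u' while still letting the flexible array
         * extend past the declared end of the union.
         */
        return g_malloc0(offsetof(MultiFDSendData, u) + payload);
    }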

With this I'm pretty much done with what I think needs to be changed
as a prereq for the device state work. I don't have anything else in
mind to add to this series.

CI run: https://gitlab.com/farosas/qemu/-/pipelines/1395572680

v2:
https://lore.kernel.org/r/20240722175914.24022-1-farosas@suse.de

v1:
https://lore.kernel.org/r/20240620212111.29319-1-farosas@suse.de

First of all, apologies for the roughness of the series. I'm off for
the next couple of weeks and wanted to put something together early
for your consideration.

This series is a refactoring (based on an earlier, off-list
attempt[0]) aimed at removing the usage of the MultiFDPages_t type
from the multifd core. If we're going to add support for more data
types to multifd, we first need to clean that up.

This time around, the work was prompted by Maciej's series[1]. I see
you're having to add a bunch of is_device_state checks to work around
the rigidity of the code.

Aside from the VFIO work, there is also the intent (going back to
Juan's ideas) to make multifd the default code path for migration,
which will have to include the vmstate migration and anything else we
put on the stream via QEMUFile.

I have long been bothered by having 'pages' sprinkled all over the
code, so I might be coming at this with a bit of a narrow focus, but
I believe that in order to support more types of payloads in multifd,
we first need to allow the scheduling at multifd_send_pages() to be
independent of MultiFDPages_t. So here it is. Let me know what you
think.
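
To make that concrete, here is roughly the shape of scheduling I have
in mind. The names below are illustrative (locking and error handling
elided), not necessarily what the final code will look like:

    /*
     * Hand a filled payload to an idle channel. The core never looks
     * inside MultiFDSendData, so RAM pages, device state, etc. can
     * all share this path.
     */
    static bool multifd_send(MultiFDSendData **send_data)
    {
        MultiFDSendParams *p =
            &multifd_send_state->params[next_channel++ %
                                        migrate_multifd_channels()];
        MultiFDSendData *tmp = p->data;

        p->data = *send_data;    /* channel takes the filled payload */
        *send_data = tmp;        /* caller gets an idle one back */

        qemu_sem_post(&p->sem);  /* kick the channel thread */
        return true;
    }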

(As I said, I'll be off for a couple of weeks, so feel free to
incorporate any of this code if it's useful, or to ignore it
completely.)

CI run: https://gitlab.com/farosas/qemu/-/pipelines/1340992028

0- https://github.com/farosas/qemu/commits/multifd-packet-cleanups/
1- https://lore.kernel.org/r/cover.1718717584.git.maciej.szmigiero@oracle.com

Fabiano Rosas (14):
  migration/multifd: Reduce access to p->pages
  migration/multifd: Inline page_size and page_count
  migration/multifd: Remove pages->allocated
  migration/multifd: Pass in MultiFDPages_t to file_write_ramblock_iov
  migration/multifd: Introduce MultiFDSendData
  migration/multifd: Make MultiFDPages_t:offset a flexible array member
  migration/multifd: Replace p->pages with an union pointer
  migration/multifd: Move pages accounting into
    multifd_send_zero_page_detect()
  migration/multifd: Isolate ram pages packet data
  migration/multifd: Don't send ram data during SYNC
  migration/multifd: Replace multifd_send_state->pages with client data
  migration/multifd: Allow multifd sync without flush
  migration/multifd: Register nocomp ops dynamically
  migration/multifd: Move ram code into multifd-ram.c

 migration/file.c              |   3 +-
 migration/file.h              |   2 +-
 migration/meson.build         |   1 +
 migration/multifd-qpl.c       |  20 +-
 migration/multifd-ram.c       | 400 +++++++++++++++++++++++++
 migration/multifd-uadk.c      |  45 +--
 migration/multifd-zero-page.c |  13 +-
 migration/multifd-zlib.c      |  16 +-
 migration/multifd-zstd.c      |  15 +-
 migration/multifd.c           | 535 ++++++----------------------------
 migration/multifd.h           |  66 +++--
 migration/ram.c               |  10 +-
 migration/trace-events        |   9 +-
 13 files changed, 613 insertions(+), 522 deletions(-)
 create mode 100644 migration/multifd-ram.c


base-commit: 3cce8bd4d737f2ca688bbdcb92cd5cc683245bbd
prerequisite-patch-id: 13fb8c983c330913887987533e1e69bb6a4ed613
prerequisite-patch-id: 3d95428f94bd2126417f309aa7448f6261a8f2f8
prerequisite-patch-id: 71687ba9c29408b37eb25a784965690cfb9698f8
prerequisite-patch-id: 4b935fefd5ef932d45323c5ebeef130cc1587f72
prerequisite-patch-id: d81df6a9e428a82fe9f0a04c13e3b36341c9ffda
prerequisite-patch-id: b0f171771de7cd44e68d8ade8588d49f6bf03d8a
prerequisite-patch-id: e0e973dee5d548a5667e0512650fa1c5b88c90c2
prerequisite-patch-id: 944d3a14068acf1f3ff5ae04beae442347474ae5
prerequisite-patch-id: 35e4690cfcb51cccacec5ba7c03fd32004c0ca67
prerequisite-patch-id: 9f3085d6702c691cae5b5767fe20b008d0e128b2
prerequisite-patch-id: dc6423ec895b9e28e8f6edd97113536242e95c01
prerequisite-patch-id: 53b1c7680fac959f5c2ef786a0fd2c5fd5711524
prerequisite-patch-id: 9392b7caa794ed8a871a4d1986914c99309c9b96
prerequisite-patch-id: 15a7b92cd04c871f2577360475be4c0233837422
prerequisite-patch-id: 5424a98773296ea3f782f199c61f40d948222d6b
prerequisite-patch-id: ea27cd0a9855caf3b5d88940c0988a4d1a76c809
prerequisite-patch-id: f922d0deec9376de12f83dccdf9c146ad96d5953
prerequisite-patch-id: 06ec6991abae625119bc4fb1021fef068a34531b
prerequisite-patch-id: dd0d83e0bc7407ac090eee2c04a4179ac7a29ed7
prerequisite-patch-id: 56d976d50f3179c2ad72ce9757b905cd88be43b5
prerequisite-patch-id: 21b2e1e07f57622742647c1f1be1010f7c1a345c
prerequisite-patch-id: 690216d1b858779a9d5b8b87e5740428c084eff8
prerequisite-patch-id: dd7720e6a8a37181726eb1382236e54a89b5d518
prerequisite-patch-id: e52a32da796a00ace9d5b6aeaecc9238719a222f
prerequisite-patch-id: bc3a88d2a1a55257df56404e1b41b52e24e2bb6a
prerequisite-patch-id: e04e47ceced2b728a04034f08ec4d6b39060ca04
prerequisite-patch-id: 7a450ee99f6a11bb1a511175989b615c818d792f
prerequisite-patch-id: a27b6e2cfa162eb2294920983389c82e2e2aafc4
prerequisite-patch-id: 81d915f5136a90552f794e28f007bc5eada6ced3
prerequisite-patch-id: 1e6be5ba2ca0869009aa2e98269c6d07cd1c073c
prerequisite-patch-id: a7071e107ca7012c21a272063aece5bf6a165234
prerequisite-patch-id: acf2e421fbbe7480311224f0080321bd24bace72
prerequisite-patch-id: c6b305efc098b716b5fd86974486610c35310bf9
prerequisite-patch-id: b2ea0671b1ca8ace09a5be5bc8789e1f9ed362e9
prerequisite-patch-id: dbb0b9c95dc9be1b5f4397afa0e88ec0cc61e17d
prerequisite-patch-id: 20b540d8c51706dfad625fa6747b03f87f85b214
prerequisite-patch-id: 221419287e45a25205d7d854a7fca8f3515d89bd
prerequisite-patch-id: 041d82f9a313815769a294e2dd5323f1e47a9960
prerequisite-patch-id: f23f851a6c0bcb9bf7c3c35e8991f35308275e29
prerequisite-patch-id: 079dc1b0988e3895b2e587909fd569dd1e3e4bfa
prerequisite-patch-id: ef20808cb5817567b151c290304d2407ead5b4d2
prerequisite-patch-id: 11ba2c5120a650e489e49e1dd8859cbeb0385081
prerequisite-patch-id: d06198e7597766a7f620c26c53f1f4ddff088c0c
prerequisite-patch-id: c0e5474ff1e06e3650ac210f0f6d23c71b26f694
prerequisite-patch-id: d190df2ffd206a2f473a47d53da8302254d3c0b7
prerequisite-patch-id: 9bae8b536d05070c5b142ca238ad98c08fcb892f
prerequisite-patch-id: 39719510e3d8000a693f59966c5d6be0a51c60e7
prerequisite-patch-id: 0ada1d54ec5ca0e87bb497b042aa9e447afc9b3c
prerequisite-patch-id: 52bd66de61e3b394eacf88debb333b352cdd27f1
prerequisite-patch-id: a4cb433552382d323e361491a10f21bd3ea5b146
prerequisite-patch-id: 3e1bc2bce418c96703e328db3366aea3050c685f
prerequisite-patch-id: 0f3f6d353e7274ae6c042b32f51ef89af5a913ca
prerequisite-patch-id: bc75f89192c8d1420556c7977f93102ed38cf1f7
prerequisite-patch-id: 7fb07c996e4229d1edaeb94fed9c03eb710e894c
prerequisite-patch-id: d62bbbc43d995fbd0016e05e7c71aed38478ab07
prerequisite-patch-id: 3faca00d83f7cfc29d31df2cb972b1b97f57e505
prerequisite-patch-id: d276940aaf4fa4ef2358a28008ef13b731a39629
prerequisite-patch-id: b8a268e1585faac7c99a5a49cd0ba02b4ea04fc7
prerequisite-patch-id: 8ceeef85cb7108e965defb5dcd38ced0b1e4eb43
prerequisite-patch-id: 17c85eb4e53c93833d2983fafa8e74ac11f3ce9a
prerequisite-patch-id: 9db5aa84bdb09ccd1c89a65f9fe00a5b967a1fa4
prerequisite-patch-id: 8a98f58e3d04dabce63e5a73605052ec8f99e77d
prerequisite-patch-id: d0bbfe026091c750b52a87a837f25444a1671e2d
prerequisite-patch-id: 56ec9da207926c5cee8bcd090fb01782e3c8f94d
prerequisite-patch-id: 5efc4ef57b1fb632e38dc1a0c8db294745fe7651
prerequisite-patch-id: aa4f714be6f0e4f47edaa2c7ab08c0a124cc8f10
prerequisite-patch-id: 96161dfcc9b8be7ab2f7d8b45b84c6dab46e261e
prerequisite-patch-id: fa09bb268a192039476c21e0826a5cd6b3a3814e
prerequisite-patch-id: 9dea8db103a584ef8217bc8b8cef6577b082810e
prerequisite-patch-id: 7486f1a89780da95370873a5ef54cba0768a3a76
prerequisite-patch-id: 743c04b4fca6f3c872571fd6409260fc022a378d
prerequisite-patch-id: f72522e4b88a8f0253bb42b7a0f2c8b2cd556faf
prerequisite-patch-id: b87feef9f763eb9abb52ff4002fab9e383170b7c
prerequisite-patch-id: 4d1a35c120848cefac2cae61525d68cc7f2e9225
prerequisite-patch-id: 5a17fd53cc2d5b1ce9648f5da88159fc1b71f386
prerequisite-patch-id: efabe93b4dd446e98a18a95e0d6458f5754d0b22
prerequisite-patch-id: d5096ebd2a916dbb553d1fa87afaab2f022b1ccc
prerequisite-patch-id: e127d6b1738baac49f5771be941bf46e9249ed1e
prerequisite-patch-id: ec959712db5cf3994d67a8e63901f8e9bc9a5ff6
prerequisite-patch-id: ca7dfa5a6be16b670b4f3ef78ea30dae32441785
prerequisite-patch-id: 6eefed2e2e07fecf02ff59082750d15d69da3308
prerequisite-patch-id: f51f1fc9d9551d23d593d1fd00fc1705fc13bbe3
prerequisite-patch-id: a26a1a49f1c0aa37facd754357c7c662e906302f
prerequisite-patch-id: b9842ad176b1464d14c6c2c3738513a4a9dee4aa
prerequisite-patch-id: 60bda711f65550428d0c71db40c40e988488ddd4
prerequisite-patch-id: 34eb77fabced2ee263ac741c0655a7bf08a4ab6d
prerequisite-patch-id: 963ee62cced18c3c670ac37205d214a2e7685e81
prerequisite-patch-id: b89b171328298b015df107d929f51cbf4d014cf7
prerequisite-patch-id: a72b6c6e6466da6d3331b0f81342ec10d4d5b284
prerequisite-patch-id: 03c31a68fce3b7c376b7cc5af90682e5b01fd0df
prerequisite-patch-id: 873751126726c46b946273d2be7f7f27248cd1bb
prerequisite-patch-id: ecbe43e43396941f0c53da3c37d8c5d802b9ac05
prerequisite-patch-id: b681715a6d736c2be8d8fec32aba8eed1e6f9078
prerequisite-patch-id: 939558921113ff8049eef6670a8cc029ef05543e
prerequisite-patch-id: bd17dc99ece68b5bab3ff9daf7fc8e499e1ebc6b
prerequisite-patch-id: f98b097eb060620a5abab23e6b1e7a844d93e34f
prerequisite-patch-id: d18c958b3170513ba516a684a012e614e5fffb24
prerequisite-patch-id: c5dc0d8523e1ec5350aade0f26fdf7b0fe2512dc
prerequisite-patch-id: 85a9994d9c832ce1b93b88596a4907fca9f6eef9
prerequisite-patch-id: db1e0e6af4396d4d1aea022bbf0accb42df1d95c
prerequisite-patch-id: aeec629b49de4924e258f703c80004ab2372028a
prerequisite-patch-id: 6e7604117c7a60a516d040f96a5b2e0003e769fa
prerequisite-patch-id: a7603aee0aa6ffeca0e6cf61bfa3f86999f03075
prerequisite-patch-id: 37a53f691313007b37d0710aca98d2135017c1df
prerequisite-patch-id: 9ae8ac17cc02abe3e1edf0433882f25a56d071e0
prerequisite-patch-id: 09bf6d466076368108476f838f1cedb69bbfc1d3
prerequisite-patch-id: 6b2d5a9254dad9516992a254c4222a5cabd820b0
prerequisite-patch-id: d5ac270f9afe46a59c9cd242c8c6891b2ef11dbc
prerequisite-patch-id: 891d6e87aa83dc113c14136f5f68aa1ac3407bcb
prerequisite-patch-id: f620362deac56665b85f3fb3feee9a52c380a642
prerequisite-patch-id: bc3c5c647832c31e554f4db529b652ad611a6f6d
prerequisite-patch-id: 2cf2edb8b7fb86f2b15e0577919539a4968e6c55
prerequisite-patch-id: 9adea6cc53fd14a958fa6526609069e1d2f08b17
prerequisite-patch-id: 2b2449c503dd43b1fc076c7aeb6f9a88d414f9aa
prerequisite-patch-id: 6f3f7bae420d5cd3f5336d03586c74649bdec5ae
prerequisite-patch-id: 72e3fa6afc8abeffe053d0b4fda8a1a92f439ccb
prerequisite-patch-id: 399beb1082afa887e2ed8f6366e9ff0fadb1add2
prerequisite-patch-id: 6d2257580f53bbcc3019372bfa71b7802fb443be
prerequisite-patch-id: 5373234aa561a7ad4882ee1297a722a2cd563483
prerequisite-patch-id: 72770bf2d749881a5050ef1967e376e323a953fa
prerequisite-patch-id: ea475a629d03117fab36bf15479a8b0f9d2c3d44
prerequisite-patch-id: 6194f6893c70ee15284f8004be19322ef27a7153
prerequisite-patch-id: b817779acaeef58d74508709c5315c6ff9639761
prerequisite-patch-id: 56fdfa5291464155be2a22d5f1773a3875ba53ea
prerequisite-patch-id: c5f55ac00f159a79d625ca24239c0198c663e807
prerequisite-patch-id: 1742d4904cae5a1faf79668ac2b18be284b9aa30
prerequisite-patch-id: bcf9a9f7e117c6d135dc53f5e0e41d2e10a24b4d
prerequisite-patch-id: 2801f2428ec6fa368e55556cd1c4f1633f82b371
prerequisite-patch-id: e48d9b78d4e0c59797a93c27366c0e8e7b40831a
prerequisite-patch-id: 9d378c3fcc3defb5990e9b4fac0481b4d9cbf998
prerequisite-patch-id: 30a79f47815477c6626c59770ba09e401785a095
prerequisite-patch-id: 8bd103d6b2d89ae3ac5e38a041b47a531ed41e40
prerequisite-patch-id: 61e8832b8c1f293cddeeceaaaf30ae9709d99cf2
prerequisite-patch-id: 0c27b5ed30dc34d6c6b1158822d687afc570af53
prerequisite-patch-id: 30218ba738c05231a97a239a7b7db263b9fa3e00
prerequisite-patch-id: de4606a7b886fe844979bf396dec51fcbb6dde0a
prerequisite-patch-id: 96c8fd8f3835ea9ab2dd1494a7716f4a65579554
prerequisite-patch-id: d7eb400ef0b6474bd1a4ae562a2989ee7f09a38e
prerequisite-patch-id: 10100b6534f56912a29a165b8709f9e00ebfb423
prerequisite-patch-id: 4aa8159251d656ee0ac1d23930533ac3e2a63aec
prerequisite-patch-id: 1583345b2b64479d2a3255e8e1f633c46c36bb94
prerequisite-patch-id: a7e6527566e0c715a3b07af5d6b1ec2477cd2e0e
prerequisite-patch-id: 84ef10eb1a1b0456fadbb166267a94662d0606ff
prerequisite-patch-id: f2049657d1fa03cd95a6be6ce152b6a155aff063
prerequisite-patch-id: a6474c935071bdf3cba1a1b2e067f7119f77513a
prerequisite-patch-id: 6ba32fbc28136539958b000c47032d8ccd9fba51
prerequisite-patch-id: e5adff7e7c035831ddb7fc2e518c23e1e6cf6369
prerequisite-patch-id: 4f07f2a5c4ac917a2d3bb5b1e988c11264a0fc93
prerequisite-patch-id: 99a1e21173e5f25502866539e23a8be9beee9d3c
prerequisite-patch-id: 22fe9bf6a8a79c33d4f87b5ca395e65bff1721b7
prerequisite-patch-id: 3e1ab4b997e26be7ded8af84ff934fa38337f517
prerequisite-patch-id: feab895cda6daab7368c9eccfad561262178e772
prerequisite-patch-id: 1488d6151031bc38309677c5b34b83322a062eff
prerequisite-patch-id: 3a533bdde4bf66a9096e0fda7ee0b945d99e7f53
prerequisite-patch-id: 6524f2c8d3366b5767a937b15b3e0c256d18ceea
prerequisite-patch-id: 4578afb3b16b6fb60edc4309f0b6895867707396
prerequisite-patch-id: d17ec591d5951b6b0929fc0cb0eedcbad473d5e0
prerequisite-patch-id: c2c0be108c4cfc5fbd7f371b3bcf8e3a7fbb293c
prerequisite-patch-id: 7c72b99380179c2342c6d4a45445709ccfcc1525
prerequisite-patch-id: 6eff34ed40cef45df4b3fa9831e4bdd3fb6cbe01
prerequisite-patch-id: 28051ebaadce4361b951f0e33d8b6729eec44fe0
prerequisite-patch-id: 4145960f8ce4d8ac3a2e6c1b983245a545e7fbd2
prerequisite-patch-id: 567ab210666266b8566798223b6f86cf511981c7
prerequisite-patch-id: 7039619fae616a73255d0e309b827795c8fcdae4
prerequisite-patch-id: f802c83e5ca45642f1efb2ee24be6d69a831e654
prerequisite-patch-id: 6fc655fd9dd8e6e33e1dae0b603d89b59d0889e3
prerequisite-patch-id: 0f17452e2988269385fbec6a7c49eee867cf6995
prerequisite-patch-id: e00b304f28d1b602bfb228a99c7338cd352577ce
prerequisite-patch-id: d409d879b604278d1142da46257cdb506c6a327c
prerequisite-patch-id: abbd22811acacdbcf1bc3e7aab071df609f4b5e7
prerequisite-patch-id: 48154ad8726f2e7839ee5b7ab937527d2752f670
prerequisite-patch-id: 693d29957da2544f09651bf63f2df66335c784af
prerequisite-patch-id: d4fdd1f5cc03bf7544edc9ffab8eb6576053ec8d
prerequisite-patch-id: ed843b76ff082fcd8fe2bb2098333e67049bb724
prerequisite-patch-id: 7b1184dbb311148e8cd7638fac8ef0da5bcc7736
prerequisite-patch-id: 463e54d104b912c7e5a889c472f108dada39935b
prerequisite-patch-id: cdf627d2db55db40ff6135ccd1caef4ba040b65e
prerequisite-patch-id: 505f3b47cf39030805b5bc55271222bd5269ecb7
prerequisite-patch-id: afe92aa2cc06dd6c2ead36a8a75e5d4db3e39216
prerequisite-patch-id: abcc6b86f2739c9bf24d8311ac1085d196ec86ae
prerequisite-patch-id: bfe8fa1d7b92e78f868a1115fe34b0e7705e7270
prerequisite-patch-id: f6c12160e8c18342e212913ab446abaf0c1fa4ef
prerequisite-patch-id: 4f77617bceba99ff696734c39787cf13e637e4aa
prerequisite-patch-id: 19fc9d4f0fb10ceb5f9b44cc06014de8ccbe9b08
prerequisite-patch-id: 145fa70dbad65ce73309a9eb3b68698dd4d87bce
prerequisite-patch-id: 632ece0cbce4600b2920d8692eb54f29cc3780f7
prerequisite-patch-id: 492005e007760cb30207694288683d1afa2b8da2
prerequisite-patch-id: aef029cdfe24b3eb7a6e57202d3937c0973b66a2
prerequisite-patch-id: 2b1421f93dcaf115e7d95af6dbe1e187a4206224
prerequisite-patch-id: f37070ef2ea84b81a3c0166b3e48e06f9c42b7fd
prerequisite-patch-id: c491e100f717942929306a711fed782954f619f0
prerequisite-patch-id: 709fe575604297bc824e269787d6214f271eab44
prerequisite-patch-id: cb5b779a69c709562a00b20b5f17edcec759584b
prerequisite-patch-id: 70e23e7d5d17b0a40b0222ed1e9f601de0641e43
prerequisite-patch-id: 406a69894a1b8b1d8569eda0708c3c2ce8512ac7
prerequisite-patch-id: 4aa66ad0c58e5db7d3006e5b01ec07cd334d194a
prerequisite-patch-id: 46e029947ece9be7d6dd62c52777ff663f603513
prerequisite-patch-id: f7e74cf4e29abbf89f757d0057de236be65ccbbb
prerequisite-patch-id: e4cde74b826dfcda173263446482fa2279b22daf
prerequisite-patch-id: 60df1811efdf86a39285528668c49064d5b635fc
prerequisite-patch-id: 07ee1e34207e626feaa9383e95e46744946fcf7d
prerequisite-patch-id: 1218747de1943091b3e2a2b7edba586204711b94
prerequisite-patch-id: 0ea0a831ea8d77a44feed1312184292a1933b406
prerequisite-patch-id: 2fb210c9318e3baa065ad0fe319cd0bf5500253c
prerequisite-patch-id: afb12a368a0eab1bd0fc8ec8d2031d87b1400628
prerequisite-patch-id: bcf116b7685ab717b31a73d6204e2c3396945a4d
prerequisite-patch-id: ccf321d3d877ad63aafae004d61e677c3b9e0e4d
prerequisite-patch-id: 785c406be98b9abcdf556f32b87bd3abc291b565
prerequisite-patch-id: ce08905deae2f43f5785c7c3cefe6e2ff7202566
prerequisite-patch-id: 67a48c522f6917694df79a7da2cacb124639b1fc
prerequisite-patch-id: 28d33665f3057d9c568b1be55793186ee8bf2a3b
prerequisite-patch-id: 141d789226542c2208b6cc93c8c730dae78f3206
prerequisite-patch-id: 4cae9bcfc9da364a22443c0d3388d5656130f3a7
prerequisite-patch-id: 9b0bcc495cd71e84c091144a946b3072fedd6cef
prerequisite-patch-id: 378a972aa8c88bc4e13de334be029e28b7324390
prerequisite-patch-id: 769159abc5615c77664ccf96cddb999d1615a60d
prerequisite-patch-id: 241522598e54a7f2e5e447d4387c1d362c9f3db9
prerequisite-patch-id: 886c2e24efa498a77d02df142ac1774de79d1bc2
prerequisite-patch-id: c6baa832d495ea6b198f90da386f08ef3bd862b8
prerequisite-patch-id: 68f5c44b96aaf718340edb9966c816a16dbb8592
prerequisite-patch-id: 7ceb5858eb2bd5898d0c77e275df7cebdb4f7e3a
prerequisite-patch-id: 1c91f782de2190318034e2d57e5c87e927fb50a6
prerequisite-patch-id: 664634677db2ec49ce28bbf074f6c154ead544c7
prerequisite-patch-id: 9556e94a915387c81566b0ca7c60d281e7375612
prerequisite-patch-id: ea59a5c8d09896a094a82781da9139f6fcadea0e
prerequisite-patch-id: efebadc8ba5f01a43e24516eb573ab9b98a12861
prerequisite-patch-id: 2999043154c3940962c8f26f4cfe96757ea2b73c
prerequisite-patch-id: f15fe9c0881b7f2c20dfbd9b0f07bdc6d62fbeb1
prerequisite-patch-id: 412ef91a0859ccbe9ef069095c4b79f0437e160f
prerequisite-patch-id: bbfdf78a5e6cfc46c23cc84a87d56db1b5f2e789
prerequisite-patch-id: 3a2f80b2de6b840432e6733dc8da4cbecab59256
prerequisite-patch-id: b6bb72fccc5289501d277b60650064cf6761a02b
prerequisite-patch-id: dec5790d783ad8311ba2713c89c630311f7bfe5c
prerequisite-patch-id: 2b165ec700bd20a09fed5981b9152f477fe1aaed
prerequisite-patch-id: 13334380aba5ef0dbc9307ab77f741fe1bb263c6
prerequisite-patch-id: 75aab74d9d395b905a5f18cee9e50eb57b931c54
prerequisite-patch-id: e29d1706bdb9035f8d9b167ac961b843601da27d
prerequisite-patch-id: 75dce6dc0a6974218a1ffe8012eea6c37d43c38a
prerequisite-patch-id: 916fa02f68d06a204db518e0fc14655bda5b9f3e
prerequisite-patch-id: 396c5598baa18a4ff1d387a720e90279d4e84b32
prerequisite-patch-id: b1a6de346564ee6cd734026fc73886abf820ef27
prerequisite-patch-id: d42fb9c343fd156d75ec9fc9385f3a102ad61e69
prerequisite-patch-id: f58200f6a9646b8bcf3885df434eb01c0b72e463
prerequisite-patch-id: dc1810c55e1873f4fed2d1ed46cd6293e77729ff
prerequisite-patch-id: f31326c9f9aa5b1add2affc97552abcbb65b24da
prerequisite-patch-id: c62adb1b1209a1b5b415f44f7cbcd6d3463378a7
prerequisite-patch-id: 5d72b5607e606b78c17dc3225c401d74b126b1d4
prerequisite-patch-id: 56ee1388e8241ea3f1eab4a90a8f0e59352e356f
prerequisite-patch-id: e4ac9158310c24026fc3a0d1efca03263387bc91
prerequisite-patch-id: 6c9af50cce8d8326a8b1dd936dc2e979e6acb105
prerequisite-patch-id: 2e2ba3c600a7b17b0e3078594602599fbc9eb105
prerequisite-patch-id: c652c2d7694e9739b273dbd77e36d44c50e06387
prerequisite-patch-id: d06de4cdba195d531aeffdd69202f98e0c188351
prerequisite-patch-id: 13bad1eb0834f971fac89a768978c98a6d910f64
prerequisite-patch-id: d849f83159c17c55369be6ef85df1583ee419aab
prerequisite-patch-id: d600006c8e584eeb61223733a016a8bc618627d9
prerequisite-patch-id: 2ff12f99fbc489c5329f2fec1bbb26186aa8f9f6
prerequisite-patch-id: 48c736893871c5d1157746f7ac9242ce0db11bb3
prerequisite-patch-id: 3ad2087492141825c298e301deee1e4cfe624679
prerequisite-patch-id: d971d4c794cddeac920f6f99a93af0e7bd99f245
prerequisite-patch-id: 7812ab1b04f502164366797259a33cf7ee9a56b9
prerequisite-patch-id: efcdab55b35dc1ae7a1bbef1119687667306cd58
prerequisite-patch-id: c79dbe78ce5d943b7a02adf3777b9923107c27f4
prerequisite-patch-id: 56125b0bbbcb907bfbd08e45c476d5bb067dbf02
prerequisite-patch-id: 34e5515352cac12ddfbf031916d2e158cef8ee37
prerequisite-patch-id: 2a3d9e59f9d314d6c3b40025f065b51401d94aaf
prerequisite-patch-id: bea3bdd12c52a3b7f95b4aec4e29c201db932517
prerequisite-patch-id: 301eb6c90f186e5752d5b81396052b049a9e0539
prerequisite-patch-id: 8ddfb7aede4fd11bbd904450cf167e8812f19e36
prerequisite-patch-id: aadeae7c6978d4b11c30944cbf71c85d17a1751c
prerequisite-patch-id: 2befd9c5a50db6814a7d01010c655642ec914717
prerequisite-patch-id: 47fbf48e91e065fec612dd5615f54e252b649b0c
prerequisite-patch-id: 49c8f7619a807a1e02cec593973db1fb542bbe4d
prerequisite-patch-id: 8fdb47e6c8d670c65f907a790abae32d39da780f
prerequisite-patch-id: 4c1108ea87cba0bbaadaec52d4ecd25f87f5d206
prerequisite-patch-id: 96c5da475e96cddb3a69a26d36240934446af4b5
prerequisite-patch-id: 9d44defb04c763fe9039160dd51d63aee3aa6f32
prerequisite-patch-id: fac5f84794e83cd4e64b452ec7a89e81a653f78b
prerequisite-patch-id: 39a76f46b875f097bb4016be1423f981ffd7c803
prerequisite-patch-id: ccf635b303726f328d5e314b3903aa516bc7088d
prerequisite-patch-id: 853c7ce13865cbdeffb20619d2a90c3679eb7a80
prerequisite-patch-id: 8038e519eed44408ad8712fc6c2b6529562c9efb
prerequisite-patch-id: 45f5c900fa54c8a8854fde60a796b21182456d78
prerequisite-patch-id: 0b3644404517e9a07e8ce562da4b9118e8c62114
prerequisite-patch-id: 322059db28c778851ccc8be51f99307bc723550b
prerequisite-patch-id: 3ca9206fa843f693765baa56509fc3827f37d63f
prerequisite-patch-id: f16c111772453869e52e2136bcf579287e86a56d
prerequisite-patch-id: 96a5aa53078ce3da06950e3c82dfd22abbe03e8b
prerequisite-patch-id: 2cbe2ea6ef41c14a3fdf1db339d15b2515fa50da
prerequisite-patch-id: 8879f4043b7f2905c37c5e7be16bbcb7c281a397
prerequisite-patch-id: baa117d25b0e9680f39d4e2ff2416c5c3bcf4b40
prerequisite-patch-id: 23310473e76a776b2b948fe35ee730bc64e88d88
prerequisite-patch-id: 5d61126c42534dcaa2695ee379e6ce1f8cb99a28
prerequisite-patch-id: c9d9c91a1b9c0261aab7acc79d73f46eeb1e01b8
prerequisite-patch-id: 3cb48173784349c652c593ec294539f70195bf84
prerequisite-patch-id: 0ffcc57ba70989723ded660482e9d323e8ab9374
prerequisite-patch-id: df54768710d624bb304bfbdedd94c245934b44bc
prerequisite-patch-id: c371905e5bc98c1f25d41af1b4568fc5de4a408d
prerequisite-patch-id: 277ae9aea67428a1a2b822cbd79f860ec57d19d2
prerequisite-patch-id: eb1c53b4be23b527c5e12a9d45862a94e874bb25
prerequisite-patch-id: 147e16e6aa66276e1eaab589396a05f215a98d9f
prerequisite-patch-id: de3e5c45e2f41caabdfe2924a6eea5403ba84005
prerequisite-patch-id: 905379d7e277957b68bc8b1620e99d7d20eab2ab
prerequisite-patch-id: d88c5521bfadda59ca49f7c851ce8c6b77066de9
prerequisite-patch-id: 1ce55ae23b721b6f9dc7a9e71a436a54e124ab6c
prerequisite-patch-id: 1ba305947ab60e8597af982ca908ca31bdd5219a
prerequisite-patch-id: 1413d26013fb85655c26a79254087a124c0addfb
prerequisite-patch-id: f41f1b8988b52ca3dd1ebc910475fca064650b98
prerequisite-patch-id: 6e3cec0197efb5d9733758cb160b4dcff1bf0d11
prerequisite-patch-id: 450de6916e7ea02414df2ffccb2df4444d520e67
prerequisite-patch-id: 1b37abc1e237f22f80c0dda647764e754169752e
prerequisite-patch-id: 04b3b9b48c01e32abfb6efd3bf9be5f02e9d1c53
prerequisite-patch-id: e6c320c58fc1bf51d86e9e408c0f2882d9f3a2c9
prerequisite-patch-id: 0032ecefff5b77534359c6cb0981fc9d1f80b7a7
prerequisite-patch-id: 5812bf877034429957e4ad0181de2f7c98325d65
prerequisite-patch-id: d99f11246891f6bc6569162e3a497e442d72af03
prerequisite-patch-id: 77d87342c5fc325bcc9a233b144b577355f01d74
prerequisite-patch-id: e15e24c79fa689fd86a3dee9656d9bf02dc28f51
prerequisite-patch-id: 776174c7f7bea4443a51a654cd65f2f2d36eb8fc
prerequisite-patch-id: c415036c8a2a0fb8a9d0f035eb8a696ab5b6b837
prerequisite-patch-id: 01a13de44433e986d7f9f4486c88666a7b631454
prerequisite-patch-id: 66438751ea421e920c7429ef69005722bc91f114
prerequisite-patch-id: d506fe9a8feef7daa8fcf4e1b2e557431cc129b6
prerequisite-patch-id: 74fc5477a7d9c6ab0a600d0738e21ead087808f9
prerequisite-patch-id: 0a889b3f7a62226c4a8ee993163a240fc3bfcc91
prerequisite-patch-id: 17f3dadad0df0802e251ef45fabbced1b0604a03
-- 
2.35.3




* [PATCH v3 01/14] migration/multifd: Reduce access to p->pages
  2024-08-01 12:35 [PATCH v3 00/14] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
@ 2024-08-01 12:35 ` Fabiano Rosas
  2024-08-01 12:35 ` [PATCH v3 02/14] migration/multifd: Inline page_size and page_count Fabiano Rosas
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 37+ messages in thread
From: Fabiano Rosas @ 2024-08-01 12:35 UTC
  To: qemu-devel; +Cc: Peter Xu, Maciej S. Szmigiero

I'm about to replace the p->pages pointer with an opaque pointer, so
do a cleanup now to reduce direct accesses to p->pages, which makes
the next diffs cleaner.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Peter Xu <peterx@redhat.com>
---
 migration/multifd-qpl.c  |  8 +++++---
 migration/multifd-uadk.c |  9 +++++----
 migration/multifd-zlib.c |  2 +-
 migration/multifd-zstd.c |  2 +-
 migration/multifd.c      | 13 +++++++------
 5 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/migration/multifd-qpl.c b/migration/multifd-qpl.c
index 9265098ee7..f8c84c52cf 100644
--- a/migration/multifd-qpl.c
+++ b/migration/multifd-qpl.c
@@ -404,13 +404,14 @@ retry:
 static void multifd_qpl_compress_pages_slow_path(MultiFDSendParams *p)
 {
     QplData *qpl = p->compress_data;
+    MultiFDPages_t *pages = p->pages;
     uint32_t size = p->page_size;
     qpl_job *job = qpl->sw_job;
     uint8_t *zbuf = qpl->zbuf;
     uint8_t *buf;
 
-    for (int i = 0; i < p->pages->normal_num; i++) {
-        buf = p->pages->block->host + p->pages->offset[i];
+    for (int i = 0; i < pages->normal_num; i++) {
+        buf = pages->block->host + pages->offset[i];
         multifd_qpl_prepare_comp_job(job, buf, zbuf, size);
         if (qpl_execute_job(job) == QPL_STS_OK) {
             multifd_qpl_fill_packet(i, p, zbuf, job->total_out);
@@ -498,6 +499,7 @@ static void multifd_qpl_compress_pages(MultiFDSendParams *p)
 static int multifd_qpl_send_prepare(MultiFDSendParams *p, Error **errp)
 {
     QplData *qpl = p->compress_data;
+    MultiFDPages_t *pages = p->pages;
     uint32_t len = 0;
 
     if (!multifd_send_prepare_common(p)) {
@@ -505,7 +507,7 @@ static int multifd_qpl_send_prepare(MultiFDSendParams *p, Error **errp)
     }
 
     /* The first IOV is used to store the compressed page lengths */
-    len = p->pages->normal_num * sizeof(uint32_t);
+    len = pages->normal_num * sizeof(uint32_t);
     multifd_qpl_fill_iov(p, (uint8_t *) qpl->zlen, len);
     if (qpl->hw_avail) {
         multifd_qpl_compress_pages(p);
diff --git a/migration/multifd-uadk.c b/migration/multifd-uadk.c
index d12353fb21..b8ba3cd9c1 100644
--- a/migration/multifd-uadk.c
+++ b/migration/multifd-uadk.c
@@ -174,19 +174,20 @@ static int multifd_uadk_send_prepare(MultiFDSendParams *p, Error **errp)
     uint32_t hdr_size;
     uint8_t *buf = uadk_data->buf;
     int ret = 0;
+    MultiFDPages_t *pages = p->pages;
 
     if (!multifd_send_prepare_common(p)) {
         goto out;
     }
 
-    hdr_size = p->pages->normal_num * sizeof(uint32_t);
+    hdr_size = pages->normal_num * sizeof(uint32_t);
     /* prepare the header that stores the lengths of all compressed data */
     prepare_next_iov(p, uadk_data->buf_hdr, hdr_size);
 
-    for (int i = 0; i < p->pages->normal_num; i++) {
+    for (int i = 0; i < pages->normal_num; i++) {
         struct wd_comp_req creq = {
             .op_type = WD_DIR_COMPRESS,
-            .src     = p->pages->block->host + p->pages->offset[i],
+            .src     = pages->block->host + pages->offset[i],
             .src_len = p->page_size,
             .dst     = buf,
             /* Set dst_len to double the src in case compressed out >= page_size */
@@ -214,7 +215,7 @@ static int multifd_uadk_send_prepare(MultiFDSendParams *p, Error **errp)
          */
         if (!uadk_data->handle || creq.dst_len >= p->page_size) {
             uadk_data->buf_hdr[i] = cpu_to_be32(p->page_size);
-            prepare_next_iov(p, p->pages->block->host + p->pages->offset[i],
+            prepare_next_iov(p, pages->block->host + pages->offset[i],
                              p->page_size);
             buf += p->page_size;
         }
diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index 2ced69487e..65f8aba5c8 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -147,7 +147,7 @@ static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
          * with compression. zlib does not guarantee that this is safe,
          * therefore copy the page before calling deflate().
          */
-        memcpy(z->buf, p->pages->block->host + pages->offset[i], p->page_size);
+        memcpy(z->buf, pages->block->host + pages->offset[i], p->page_size);
         zs->avail_in = p->page_size;
         zs->next_in = z->buf;
 
diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index ca17b7e310..cb6075a9a5 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -138,7 +138,7 @@ static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
         if (i == pages->normal_num - 1) {
             flush = ZSTD_e_flush;
         }
-        z->in.src = p->pages->block->host + pages->offset[i];
+        z->in.src = pages->block->host + pages->offset[i];
         z->in.size = p->page_size;
         z->in.pos = 0;
 
diff --git a/migration/multifd.c b/migration/multifd.c
index 0b4cbaddfe..c95fce2222 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -114,11 +114,11 @@ static void multifd_set_file_bitmap(MultiFDSendParams *p)
 
     assert(pages->block);
 
-    for (int i = 0; i < p->pages->normal_num; i++) {
+    for (int i = 0; i < pages->normal_num; i++) {
         ramblock_set_file_bmap_atomic(pages->block, pages->offset[i], true);
     }
 
-    for (int i = p->pages->normal_num; i < p->pages->num; i++) {
+    for (int i = pages->normal_num; i < pages->num; i++) {
         ramblock_set_file_bmap_atomic(pages->block, pages->offset[i], false);
     }
 }
@@ -417,7 +417,7 @@ void multifd_send_fill_packet(MultiFDSendParams *p)
     int i;
 
     packet->flags = cpu_to_be32(p->flags);
-    packet->pages_alloc = cpu_to_be32(p->pages->allocated);
+    packet->pages_alloc = cpu_to_be32(pages->allocated);
     packet->normal_pages = cpu_to_be32(pages->normal_num);
     packet->zero_pages = cpu_to_be32(zero_num);
     packet->next_packet_size = cpu_to_be32(p->next_packet_size);
@@ -953,7 +953,7 @@ static void *multifd_send_thread(void *opaque)
 
             if (migrate_mapped_ram()) {
                 ret = file_write_ramblock_iov(p->c, p->iov, p->iovs_num,
-                                              p->pages->block, &local_err);
+                                              pages->block, &local_err);
             } else {
                 ret = qio_channel_writev_full_all(p->c, p->iov, p->iovs_num,
                                                   NULL, 0, p->write_flags,
@@ -969,7 +969,7 @@ static void *multifd_send_thread(void *opaque)
             stat64_add(&mig_stats.normal_pages, pages->normal_num);
             stat64_add(&mig_stats.zero_pages, pages->num - pages->normal_num);
 
-            multifd_pages_reset(p->pages);
+            multifd_pages_reset(pages);
             p->next_packet_size = 0;
 
             /*
@@ -1684,9 +1684,10 @@ void multifd_recv_new_channel(QIOChannel *ioc, Error **errp)
 
 bool multifd_send_prepare_common(MultiFDSendParams *p)
 {
+    MultiFDPages_t *pages = p->pages;
     multifd_send_zero_page_detect(p);
 
-    if (!p->pages->normal_num) {
+    if (!pages->normal_num) {
         p->next_packet_size = 0;
         return false;
     }
-- 
2.35.3




* [PATCH v3 02/14] migration/multifd: Inline page_size and page_count
  2024-08-01 12:35 [PATCH v3 00/14] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
  2024-08-01 12:35 ` [PATCH v3 01/14] migration/multifd: Reduce access to p->pages Fabiano Rosas
@ 2024-08-01 12:35 ` Fabiano Rosas
  2024-08-21 20:25   ` Peter Xu
  2024-08-01 12:35 ` [PATCH v3 03/14] migration/multifd: Remove pages->allocated Fabiano Rosas
                   ` (12 subsequent siblings)
  14 siblings, 1 reply; 37+ messages in thread
From: Fabiano Rosas @ 2024-08-01 12:35 UTC
  To: qemu-devel; +Cc: Peter Xu, Maciej S. Szmigiero

The MultiFD*Params structures hold per-channel data. Constant values
should not be there, because that needlessly wastes cycles and
storage. The page_size and page_count fall into this category, so
replace them with inline helpers in multifd.h.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
 migration/multifd-qpl.c       | 10 +++++++---
 migration/multifd-uadk.c      | 36 ++++++++++++++++++++---------------
 migration/multifd-zero-page.c |  4 ++--
 migration/multifd-zlib.c      | 14 ++++++++------
 migration/multifd-zstd.c      | 11 ++++++-----
 migration/multifd.c           | 33 ++++++++++++++++----------------
 migration/multifd.h           | 18 ++++++++++--------
 7 files changed, 71 insertions(+), 55 deletions(-)

diff --git a/migration/multifd-qpl.c b/migration/multifd-qpl.c
index f8c84c52cf..db60c05795 100644
--- a/migration/multifd-qpl.c
+++ b/migration/multifd-qpl.c
@@ -233,8 +233,10 @@ static void multifd_qpl_deinit(QplData *qpl)
 static int multifd_qpl_send_setup(MultiFDSendParams *p, Error **errp)
 {
     QplData *qpl;
+    uint32_t page_size = multifd_ram_page_size();
+    uint32_t page_count = multifd_ram_page_count();
 
-    qpl = multifd_qpl_init(p->page_count, p->page_size, errp);
+    qpl = multifd_qpl_init(page_count, page_size, errp);
     if (!qpl) {
         return -1;
     }
@@ -245,7 +247,7 @@ static int multifd_qpl_send_setup(MultiFDSendParams *p, Error **errp)
      * additional two IOVs are used to store packet header and compressed data
      * length
      */
-    p->iov = g_new0(struct iovec, p->page_count + 2);
+    p->iov = g_new0(struct iovec, page_count + 2);
     return 0;
 }
 
@@ -534,8 +536,10 @@ out:
 static int multifd_qpl_recv_setup(MultiFDRecvParams *p, Error **errp)
 {
     QplData *qpl;
+    uint32_t page_size = multifd_ram_page_size();
+    uint32_t page_count = multifd_ram_page_count();
 
-    qpl = multifd_qpl_init(p->page_count, p->page_size, errp);
+    qpl = multifd_qpl_init(page_count, page_size, errp);
     if (!qpl) {
         return -1;
     }
diff --git a/migration/multifd-uadk.c b/migration/multifd-uadk.c
index b8ba3cd9c1..1ed1c6afe6 100644
--- a/migration/multifd-uadk.c
+++ b/migration/multifd-uadk.c
@@ -114,8 +114,10 @@ static void multifd_uadk_uninit_sess(struct wd_data *wd)
 static int multifd_uadk_send_setup(MultiFDSendParams *p, Error **errp)
 {
     struct wd_data *wd;
+    uint32_t page_size = multifd_ram_page_size();
+    uint32_t page_count = multifd_ram_page_count();
 
-    wd = multifd_uadk_init_sess(p->page_count, p->page_size, true, errp);
+    wd = multifd_uadk_init_sess(page_count, page_size, true, errp);
     if (!wd) {
         return -1;
     }
@@ -128,7 +130,7 @@ static int multifd_uadk_send_setup(MultiFDSendParams *p, Error **errp)
      * length
      */
 
-    p->iov = g_new0(struct iovec, p->page_count + 2);
+    p->iov = g_new0(struct iovec, page_count + 2);
     return 0;
 }
 
@@ -172,6 +174,7 @@ static int multifd_uadk_send_prepare(MultiFDSendParams *p, Error **errp)
 {
     struct wd_data *uadk_data = p->compress_data;
     uint32_t hdr_size;
+    uint32_t page_size = multifd_ram_page_size();
     uint8_t *buf = uadk_data->buf;
     int ret = 0;
     MultiFDPages_t *pages = p->pages;
@@ -188,7 +191,7 @@ static int multifd_uadk_send_prepare(MultiFDSendParams *p, Error **errp)
         struct wd_comp_req creq = {
             .op_type = WD_DIR_COMPRESS,
             .src     = pages->block->host + pages->offset[i],
-            .src_len = p->page_size,
+            .src_len = page_size,
             .dst     = buf,
             /* Set dst_len to double the src in case compressed out >= page_size */
             .dst_len = p->page_size * 2,
@@ -201,7 +204,7 @@ static int multifd_uadk_send_prepare(MultiFDSendParams *p, Error **errp)
                            p->id, ret, creq.status);
                 return -1;
             }
-            if (creq.dst_len < p->page_size) {
+            if (creq.dst_len < page_size) {
                 uadk_data->buf_hdr[i] = cpu_to_be32(creq.dst_len);
                 prepare_next_iov(p, buf, creq.dst_len);
                 buf += creq.dst_len;
@@ -213,11 +216,11 @@ static int multifd_uadk_send_prepare(MultiFDSendParams *p, Error **errp)
          * than page_size as well because at the receive end we can skip the
          * decompression. But it is tricky to find the right number here.
          */
-        if (!uadk_data->handle || creq.dst_len >= p->page_size) {
-            uadk_data->buf_hdr[i] = cpu_to_be32(p->page_size);
+        if (!uadk_data->handle || creq.dst_len >= page_size) {
+            uadk_data->buf_hdr[i] = cpu_to_be32(page_size);
             prepare_next_iov(p, pages->block->host + pages->offset[i],
-                             p->page_size);
-            buf += p->page_size;
+                             page_size);
+            buf += page_size;
         }
     }
 out:
@@ -239,8 +242,10 @@ out:
 static int multifd_uadk_recv_setup(MultiFDRecvParams *p, Error **errp)
 {
     struct wd_data *wd;
+    uint32_t page_size = multifd_ram_page_size();
+    uint32_t page_count = multifd_ram_page_count();
 
-    wd = multifd_uadk_init_sess(p->page_count, p->page_size, false, errp);
+    wd = multifd_uadk_init_sess(page_count, page_size, false, errp);
     if (!wd) {
         return -1;
     }
@@ -281,6 +286,7 @@ static int multifd_uadk_recv(MultiFDRecvParams *p, Error **errp)
     uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
     uint32_t hdr_len = p->normal_num * sizeof(uint32_t);
     uint32_t data_len = 0;
+    uint32_t page_size = multifd_ram_page_size();
     uint8_t *buf = uadk_data->buf;
     int ret = 0;
 
@@ -307,7 +313,7 @@ static int multifd_uadk_recv(MultiFDRecvParams *p, Error **errp)
     for (int i = 0; i < p->normal_num; i++) {
         uadk_data->buf_hdr[i] = be32_to_cpu(uadk_data->buf_hdr[i]);
         data_len += uadk_data->buf_hdr[i];
-        assert(uadk_data->buf_hdr[i] <= p->page_size);
+        assert(uadk_data->buf_hdr[i] <= page_size);
     }
 
     /* read compressed data */
@@ -323,12 +329,12 @@ static int multifd_uadk_recv(MultiFDRecvParams *p, Error **errp)
             .src     = buf,
             .src_len = uadk_data->buf_hdr[i],
             .dst     = p->host + p->normal[i],
-            .dst_len = p->page_size,
+            .dst_len = page_size,
         };
 
-        if (uadk_data->buf_hdr[i] == p->page_size) {
-            memcpy(p->host + p->normal[i], buf, p->page_size);
-            buf += p->page_size;
+        if (uadk_data->buf_hdr[i] == page_size) {
+            memcpy(p->host + p->normal[i], buf, page_size);
+            buf += page_size;
             continue;
         }
 
@@ -344,7 +350,7 @@ static int multifd_uadk_recv(MultiFDRecvParams *p, Error **errp)
                        p->id, ret, creq.status);
             return -1;
         }
-        if (creq.dst_len != p->page_size) {
+        if (creq.dst_len != page_size) {
             error_setg(errp, "multifd %u: decompressed length error", p->id);
             return -1;
         }
diff --git a/migration/multifd-zero-page.c b/migration/multifd-zero-page.c
index e1b8370f88..cc624e36b3 100644
--- a/migration/multifd-zero-page.c
+++ b/migration/multifd-zero-page.c
@@ -63,7 +63,7 @@ void multifd_send_zero_page_detect(MultiFDSendParams *p)
     while (i <= j) {
         uint64_t offset = pages->offset[i];
 
-        if (!buffer_is_zero(rb->host + offset, p->page_size)) {
+        if (!buffer_is_zero(rb->host + offset, multifd_ram_page_size())) {
             i++;
             continue;
         }
@@ -81,7 +81,7 @@ void multifd_recv_zero_page_process(MultiFDRecvParams *p)
     for (int i = 0; i < p->zero_num; i++) {
         void *page = p->host + p->zero[i];
         if (ramblock_recv_bitmap_test_byte_offset(p->block, p->zero[i])) {
-            memset(page, 0, p->page_size);
+            memset(page, 0, multifd_ram_page_size());
         } else {
             ramblock_recv_bitmap_set_offset(p->block, p->zero[i]);
         }
diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index 65f8aba5c8..e47d7f70dc 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -127,6 +127,7 @@ static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
     struct zlib_data *z = p->compress_data;
     z_stream *zs = &z->zs;
     uint32_t out_size = 0;
+    uint32_t page_size = multifd_ram_page_size();
     int ret;
     uint32_t i;
 
@@ -147,8 +148,8 @@ static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
          * with compression. zlib does not guarantee that this is safe,
          * therefore copy the page before calling deflate().
          */
-        memcpy(z->buf, pages->block->host + pages->offset[i], p->page_size);
-        zs->avail_in = p->page_size;
+        memcpy(z->buf, pages->block->host + pages->offset[i], page_size);
+        zs->avail_in = page_size;
         zs->next_in = z->buf;
 
         zs->avail_out = available;
@@ -260,7 +261,8 @@ static int zlib_recv(MultiFDRecvParams *p, Error **errp)
     uint32_t in_size = p->next_packet_size;
     /* we measure the change of total_out */
     uint32_t out_size = zs->total_out;
-    uint32_t expected_size = p->normal_num * p->page_size;
+    uint32_t page_size = multifd_ram_page_size();
+    uint32_t expected_size = p->normal_num * page_size;
     uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
     int ret;
     int i;
@@ -296,7 +298,7 @@ static int zlib_recv(MultiFDRecvParams *p, Error **errp)
             flush = Z_SYNC_FLUSH;
         }
 
-        zs->avail_out = p->page_size;
+        zs->avail_out = page_size;
         zs->next_out = p->host + p->normal[i];
 
         /*
@@ -310,8 +312,8 @@ static int zlib_recv(MultiFDRecvParams *p, Error **errp)
         do {
             ret = inflate(zs, flush);
         } while (ret == Z_OK && zs->avail_in
-                             && (zs->total_out - start) < p->page_size);
-        if (ret == Z_OK && (zs->total_out - start) < p->page_size) {
+                             && (zs->total_out - start) < page_size);
+        if (ret == Z_OK && (zs->total_out - start) < page_size) {
             error_setg(errp, "multifd %u: inflate generated too few output",
                        p->id);
             return -1;
diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index cb6075a9a5..1812fd1b48 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -139,7 +139,7 @@ static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
             flush = ZSTD_e_flush;
         }
         z->in.src = pages->block->host + pages->offset[i];
-        z->in.size = p->page_size;
+        z->in.size = multifd_ram_page_size();
         z->in.pos = 0;
 
         /*
@@ -254,7 +254,8 @@ static int zstd_recv(MultiFDRecvParams *p, Error **errp)
 {
     uint32_t in_size = p->next_packet_size;
     uint32_t out_size = 0;
-    uint32_t expected_size = p->normal_num * p->page_size;
+    uint32_t page_size = multifd_ram_page_size();
+    uint32_t expected_size = p->normal_num * page_size;
     uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
     struct zstd_data *z = p->compress_data;
     int ret;
@@ -286,7 +287,7 @@ static int zstd_recv(MultiFDRecvParams *p, Error **errp)
     for (i = 0; i < p->normal_num; i++) {
         ramblock_recv_bitmap_set_offset(p->block, p->normal[i]);
         z->out.dst = p->host + p->normal[i];
-        z->out.size = p->page_size;
+        z->out.size = page_size;
         z->out.pos = 0;
 
         /*
@@ -300,8 +301,8 @@ static int zstd_recv(MultiFDRecvParams *p, Error **errp)
         do {
             ret = ZSTD_decompressStream(z->zds, &z->out, &z->in);
         } while (ret > 0 && (z->in.size - z->in.pos > 0)
-                         && (z->out.pos < p->page_size));
-        if (ret > 0 && (z->out.pos < p->page_size)) {
+                         && (z->out.pos < page_size));
+        if (ret > 0 && (z->out.pos < page_size)) {
             error_setg(errp, "multifd %u: decompressStream buffer too small",
                        p->id);
             return -1;
diff --git a/migration/multifd.c b/migration/multifd.c
index c95fce2222..33f91c9561 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -133,15 +133,17 @@ static void multifd_set_file_bitmap(MultiFDSendParams *p)
  */
 static int nocomp_send_setup(MultiFDSendParams *p, Error **errp)
 {
+    uint32_t page_count = multifd_ram_page_count();
+
     if (migrate_zero_copy_send()) {
         p->write_flags |= QIO_CHANNEL_WRITE_FLAG_ZERO_COPY;
     }
 
     if (multifd_use_packets()) {
         /* We need one extra place for the packet header */
-        p->iov = g_new0(struct iovec, p->page_count + 1);
+        p->iov = g_new0(struct iovec, page_count + 1);
     } else {
-        p->iov = g_new0(struct iovec, p->page_count);
+        p->iov = g_new0(struct iovec, page_count);
     }
 
     return 0;
@@ -165,14 +167,15 @@ static void nocomp_send_cleanup(MultiFDSendParams *p, Error **errp)
 static void multifd_send_prepare_iovs(MultiFDSendParams *p)
 {
     MultiFDPages_t *pages = p->pages;
+    uint32_t page_size = multifd_ram_page_size();
 
     for (int i = 0; i < pages->normal_num; i++) {
         p->iov[p->iovs_num].iov_base = pages->block->host + pages->offset[i];
-        p->iov[p->iovs_num].iov_len = p->page_size;
+        p->iov[p->iovs_num].iov_len = page_size;
         p->iovs_num++;
     }
 
-    p->next_packet_size = pages->normal_num * p->page_size;
+    p->next_packet_size = pages->normal_num * page_size;
 }
 
 /**
@@ -237,7 +240,7 @@ static int nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
  */
 static int nocomp_recv_setup(MultiFDRecvParams *p, Error **errp)
 {
-    p->iov = g_new0(struct iovec, p->page_count);
+    p->iov = g_new0(struct iovec, multifd_ram_page_count());
     return 0;
 }
 
@@ -288,7 +291,7 @@ static int nocomp_recv(MultiFDRecvParams *p, Error **errp)
 
     for (int i = 0; i < p->normal_num; i++) {
         p->iov[i].iov_base = p->host + p->normal[i];
-        p->iov[i].iov_len = p->page_size;
+        p->iov[i].iov_len = multifd_ram_page_size();
         ramblock_recv_bitmap_set_offset(p->block, p->normal[i]);
     }
     return qio_channel_readv_all(p->c, p->iov, p->normal_num, errp);
@@ -447,6 +450,8 @@ void multifd_send_fill_packet(MultiFDSendParams *p)
 static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
 {
     MultiFDPacket_t *packet = p->packet;
+    uint32_t page_count = multifd_ram_page_count();
+    uint32_t page_size = multifd_ram_page_size();
     int i;
 
     packet->magic = be32_to_cpu(packet->magic);
@@ -472,10 +477,10 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
      * If we received a packet that is 100 times bigger than expected
      * just stop migration.  It is a magic number.
      */
-    if (packet->pages_alloc > p->page_count) {
+    if (packet->pages_alloc > page_count) {
         error_setg(errp, "multifd: received packet "
                    "with size %u and expected a size of %u",
-                   packet->pages_alloc, p->page_count) ;
+                   packet->pages_alloc, page_count) ;
         return -1;
     }
 
@@ -521,7 +526,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
     for (i = 0; i < p->normal_num; i++) {
         uint64_t offset = be64_to_cpu(packet->offset[i]);
 
-        if (offset > (p->block->used_length - p->page_size)) {
+        if (offset > (p->block->used_length - page_size)) {
             error_setg(errp, "multifd: offset too long %" PRIu64
                        " (max " RAM_ADDR_FMT ")",
                        offset, p->block->used_length);
@@ -533,7 +538,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
     for (i = 0; i < p->zero_num; i++) {
         uint64_t offset = be64_to_cpu(packet->offset[p->normal_num + i]);
 
-        if (offset > (p->block->used_length - p->page_size)) {
+        if (offset > (p->block->used_length - page_size)) {
             error_setg(errp, "multifd: offset too long %" PRIu64
                        " (max " RAM_ADDR_FMT ")",
                        offset, p->block->used_length);
@@ -1158,7 +1163,7 @@ bool multifd_send_setup(void)
     MigrationState *s = migrate_get_current();
     Error *local_err = NULL;
     int thread_count, ret = 0;
-    uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
+    uint32_t page_count = multifd_ram_page_count();
     bool use_packets = multifd_use_packets();
     uint8_t i;
 
@@ -1191,8 +1196,6 @@ bool multifd_send_setup(void)
             p->packet->version = cpu_to_be32(MULTIFD_VERSION);
         }
         p->name = g_strdup_printf("mig/src/send_%d", i);
-        p->page_size = qemu_target_page_size();
-        p->page_count = page_count;
         p->write_flags = 0;
 
         if (!multifd_new_send_channel_create(p, &local_err)) {
@@ -1563,7 +1566,7 @@ static void *multifd_recv_thread(void *opaque)
 int multifd_recv_setup(Error **errp)
 {
     int thread_count;
-    uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
+    uint32_t page_count = multifd_ram_page_count();
     bool use_packets = multifd_use_packets();
     uint8_t i;
 
@@ -1607,8 +1610,6 @@ int multifd_recv_setup(Error **errp)
         p->name = g_strdup_printf("mig/dst/recv_%d", i);
         p->normal = g_new0(ram_addr_t, page_count);
         p->zero = g_new0(ram_addr_t, page_count);
-        p->page_count = page_count;
-        p->page_size = qemu_target_page_size();
     }
 
     for (i = 0; i < thread_count; i++) {
diff --git a/migration/multifd.h b/migration/multifd.h
index 0ecd6f47d7..a2bba23af9 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -13,6 +13,7 @@
 #ifndef QEMU_MIGRATION_MULTIFD_H
 #define QEMU_MIGRATION_MULTIFD_H
 
+#include "exec/target_page.h"
 #include "ram.h"
 
 typedef struct MultiFDRecvData MultiFDRecvData;
@@ -106,10 +107,6 @@ typedef struct {
     QIOChannel *c;
     /* packet allocated len */
     uint32_t packet_len;
-    /* guest page size */
-    uint32_t page_size;
-    /* number of pages in a full packet */
-    uint32_t page_count;
     /* multifd flags for sending ram */
     int write_flags;
 
@@ -173,10 +170,6 @@ typedef struct {
     QIOChannel *c;
     /* packet allocated len */
     uint32_t packet_len;
-    /* guest page size */
-    uint32_t page_size;
-    /* number of pages in a full packet */
-    uint32_t page_count;
 
     /* syncs main thread and channels */
     QemuSemaphore sem_sync;
@@ -254,4 +247,13 @@ static inline void multifd_send_prepare_header(MultiFDSendParams *p)
 
 void multifd_channel_connect(MultiFDSendParams *p, QIOChannel *ioc);
 
+static inline uint32_t multifd_ram_page_size(void)
+{
+    return qemu_target_page_size();
+}
+
+static inline uint32_t multifd_ram_page_count(void)
+{
+    return MULTIFD_PACKET_SIZE / qemu_target_page_size();
+}
 #endif
-- 
2.35.3




* [PATCH v3 03/14] migration/multifd: Remove pages->allocated
  2024-08-01 12:35 [PATCH v3 00/14] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
  2024-08-01 12:35 ` [PATCH v3 01/14] migration/multifd: Reduce access to p->pages Fabiano Rosas
  2024-08-01 12:35 ` [PATCH v3 02/14] migration/multifd: Inline page_size and page_count Fabiano Rosas
@ 2024-08-01 12:35 ` Fabiano Rosas
  2024-08-21 20:32   ` Peter Xu
  2024-08-01 12:35 ` [PATCH v3 04/14] migration/multifd: Pass in MultiFDPages_t to file_write_ramblock_iov Fabiano Rosas
                   ` (11 subsequent siblings)
  14 siblings, 1 reply; 37+ messages in thread
From: Fabiano Rosas @ 2024-08-01 12:35 UTC
  To: qemu-devel; +Cc: Peter Xu, Maciej S. Szmigiero

This value never changes and is always the same as page_count. We
don't need a per-channel copy of it, plus one more in the extra
slot. Remove it.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
 migration/multifd.c | 6 ++----
 migration/multifd.h | 2 --
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/migration/multifd.c b/migration/multifd.c
index 33f91c9561..b4ca135b47 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -396,7 +396,6 @@ static MultiFDPages_t *multifd_pages_init(uint32_t n)
 {
     MultiFDPages_t *pages = g_new0(MultiFDPages_t, 1);
 
-    pages->allocated = n;
     pages->offset = g_new0(ram_addr_t, n);
 
     return pages;
@@ -405,7 +404,6 @@ static MultiFDPages_t *multifd_pages_init(uint32_t n)
 static void multifd_pages_clear(MultiFDPages_t *pages)
 {
     multifd_pages_reset(pages);
-    pages->allocated = 0;
     g_free(pages->offset);
     pages->offset = NULL;
     g_free(pages);
@@ -420,7 +418,7 @@ void multifd_send_fill_packet(MultiFDSendParams *p)
     int i;
 
     packet->flags = cpu_to_be32(p->flags);
-    packet->pages_alloc = cpu_to_be32(pages->allocated);
+    packet->pages_alloc = cpu_to_be32(multifd_ram_page_count());
     packet->normal_pages = cpu_to_be32(pages->normal_num);
     packet->zero_pages = cpu_to_be32(zero_num);
     packet->next_packet_size = cpu_to_be32(p->next_packet_size);
@@ -651,7 +649,7 @@ static inline bool multifd_queue_empty(MultiFDPages_t *pages)
 
 static inline bool multifd_queue_full(MultiFDPages_t *pages)
 {
-    return pages->num == pages->allocated;
+    return pages->num == multifd_ram_page_count();
 }
 
 static inline void multifd_enqueue(MultiFDPages_t *pages, ram_addr_t offset)
diff --git a/migration/multifd.h b/migration/multifd.h
index a2bba23af9..660a9882c2 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -76,8 +76,6 @@ typedef struct {
     uint32_t num;
     /* number of normal pages */
     uint32_t normal_num;
-    /* number of allocated pages */
-    uint32_t allocated;
     /* offset of each page */
     ram_addr_t *offset;
     RAMBlock *block;
-- 
2.35.3




* [PATCH v3 04/14] migration/multifd: Pass in MultiFDPages_t to file_write_ramblock_iov
  2024-08-01 12:35 [PATCH v3 00/14] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
                   ` (2 preceding siblings ...)
  2024-08-01 12:35 ` [PATCH v3 03/14] migration/multifd: Remove pages->allocated Fabiano Rosas
@ 2024-08-01 12:35 ` Fabiano Rosas
  2024-08-01 12:35 ` [PATCH v3 05/14] migration/multifd: Introduce MultiFDSendData Fabiano Rosas
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 37+ messages in thread
From: Fabiano Rosas @ 2024-08-01 12:35 UTC
  To: qemu-devel; +Cc: Peter Xu, Maciej S. Szmigiero

We want to stop dereferencing 'pages' so it can be replaced by an
opaque pointer in the next patches.

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
 migration/file.c    | 3 ++-
 migration/file.h    | 2 +-
 migration/multifd.c | 2 +-
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/migration/file.c b/migration/file.c
index db870f2cf0..7a8c9e3957 100644
--- a/migration/file.c
+++ b/migration/file.c
@@ -198,12 +198,13 @@ void file_start_incoming_migration(FileMigrationArgs *file_args, Error **errp)
 }
 
 int file_write_ramblock_iov(QIOChannel *ioc, const struct iovec *iov,
-                            int niov, RAMBlock *block, Error **errp)
+                            int niov, MultiFDPages_t *pages, Error **errp)
 {
     ssize_t ret = 0;
     int i, slice_idx, slice_num;
     uintptr_t base, next, offset;
     size_t len;
+    RAMBlock *block = pages->block;
 
     slice_idx = 0;
     slice_num = 1;
diff --git a/migration/file.h b/migration/file.h
index 9f71e87f74..1a1115f7f1 100644
--- a/migration/file.h
+++ b/migration/file.h
@@ -21,6 +21,6 @@ int file_parse_offset(char *filespec, uint64_t *offsetp, Error **errp);
 void file_cleanup_outgoing_migration(void);
 bool file_send_channel_create(gpointer opaque, Error **errp);
 int file_write_ramblock_iov(QIOChannel *ioc, const struct iovec *iov,
-                            int niov, RAMBlock *block, Error **errp);
+                            int niov, MultiFDPages_t *pages, Error **errp);
 int multifd_file_recv_data(MultiFDRecvParams *p, Error **errp);
 #endif
diff --git a/migration/multifd.c b/migration/multifd.c
index b4ca135b47..44d4c3ca11 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -956,7 +956,7 @@ static void *multifd_send_thread(void *opaque)
 
             if (migrate_mapped_ram()) {
                 ret = file_write_ramblock_iov(p->c, p->iov, p->iovs_num,
-                                              pages->block, &local_err);
+                                              pages, &local_err);
             } else {
                 ret = qio_channel_writev_full_all(p->c, p->iov, p->iovs_num,
                                                   NULL, 0, p->write_flags,
-- 
2.35.3



^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 05/14] migration/multifd: Introduce MultiFDSendData
  2024-08-01 12:35 [PATCH v3 00/14] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
                   ` (3 preceding siblings ...)
  2024-08-01 12:35 ` [PATCH v3 04/14] migration/multifd: Pass in MultiFDPages_t to file_write_ramblock_iov Fabiano Rosas
@ 2024-08-01 12:35 ` Fabiano Rosas
  2024-08-01 12:35 ` [PATCH v3 06/14] migration/multifd: Make MultiFDPages_t:offset a flexible array member Fabiano Rosas
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 37+ messages in thread
From: Fabiano Rosas @ 2024-08-01 12:35 UTC (permalink / raw)
  To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero

Add a new data structure to replace p->pages in the multifd
channel. This new structure will hide the multifd payload type behind
a union, so we don't need to add a new field to the channel each time
we want to handle a different data type.

This also allows us to keep multifd_send_pages() as is, without
needing to complicate the pointer switching.
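
As an illustration, the intended usage pattern looks roughly like
this (a sketch, not code from this patch; it mirrors what
multifd_queue_page() does later in the series):

    MultiFDPages_t *pages = &data->u.ram;

    if (multifd_payload_empty(data)) {
        multifd_set_payload_type(data, MULTIFD_PAYLOAD_RAM);
    }
    /* ... fill pages->offset[], pages->num, then hand 'data' over */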

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
 migration/multifd.h | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/migration/multifd.h b/migration/multifd.h
index 660a9882c2..7bb4a2cbc4 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -17,6 +17,7 @@
 #include "ram.h"
 
 typedef struct MultiFDRecvData MultiFDRecvData;
+typedef struct MultiFDSendData MultiFDSendData;
 
 bool multifd_send_setup(void);
 void multifd_send_shutdown(void);
@@ -88,6 +89,31 @@ struct MultiFDRecvData {
     off_t file_offset;
 };
 
+typedef enum {
+    MULTIFD_PAYLOAD_NONE,
+    MULTIFD_PAYLOAD_RAM,
+} MultiFDPayloadType;
+
+typedef union MultiFDPayload {
+    MultiFDPages_t ram;
+} MultiFDPayload;
+
+struct MultiFDSendData {
+    MultiFDPayloadType type;
+    MultiFDPayload u;
+};
+
+static inline bool multifd_payload_empty(MultiFDSendData *data)
+{
+    return data->type == MULTIFD_PAYLOAD_NONE;
+}
+
+static inline void multifd_set_payload_type(MultiFDSendData *data,
+                                            MultiFDPayloadType type)
+{
+    data->type = type;
+}
+
 typedef struct {
     /* Fields are only written at creating/deletion time */
     /* No lock required for them, they are read only */
-- 
2.35.3



^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 06/14] migration/multifd: Make MultiFDPages_t:offset a flexible array member
  2024-08-01 12:35 [PATCH v3 00/14] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
                   ` (4 preceding siblings ...)
  2024-08-01 12:35 ` [PATCH v3 05/14] migration/multifd: Introduce MultiFDSendData Fabiano Rosas
@ 2024-08-01 12:35 ` Fabiano Rosas
  2024-08-21 20:38   ` Peter Xu
  2024-08-01 12:35 ` [PATCH v3 07/14] migration/multifd: Replace p->pages with an union pointer Fabiano Rosas
                   ` (8 subsequent siblings)
  14 siblings, 1 reply; 37+ messages in thread
From: Fabiano Rosas @ 2024-08-01 12:35 UTC (permalink / raw)
  To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero

We're about to use MultiFDPages_t from inside the MultiFDSendData
payload union, which means we cannot have pointers to allocated data
inside the pages structure; otherwise we'd lose the reference to that
memory once another payload type touches the union. Move the offset
array to the end of the structure and turn it into a flexible array
member, so it is allocated along with the rest of MultiFDSendData in
the next patches.

Note that other pointers, such as the ramblock pointer, are still fine
as long as the storage for them is not owned by the migration code and
can be correctly released at some point.
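
To illustrate the hazard being avoided (the second union member is
hypothetical, simplified for the example):

    union MultiFDPayload {
        MultiFDPages_t ram;      /* previously held ram_addr_t *offset */
        DeviceStatePayload dev;  /* hypothetical future member */
    } u;

    u.ram.offset = g_new0(ram_addr_t, n); /* if 'offset' were a pointer */
    memset(&u.dev, 0, sizeof(u.dev));     /* clobbers u.ram: the
                                           * allocation leaks */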

Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
 migration/multifd.c | 18 ++++++++++++------
 migration/multifd.h |  4 ++--
 2 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/migration/multifd.c b/migration/multifd.c
index 44d4c3ca11..64503604cf 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -98,6 +98,17 @@ struct {
     MultiFDMethods *ops;
 } *multifd_recv_state;
 
+static size_t multifd_ram_payload_size(void)
+{
+    uint32_t n = multifd_ram_page_count();
+
+    /*
+     * We keep an array of page offsets at the end of MultiFDPages_t,
+     * add space for it in the allocation.
+     */
+    return sizeof(MultiFDPages_t) + n * sizeof(ram_addr_t);
+}
+
 static bool multifd_use_packets(void)
 {
     return !migrate_mapped_ram();
@@ -394,9 +405,7 @@ static int multifd_recv_initial_packet(QIOChannel *c, Error **errp)
 
 static MultiFDPages_t *multifd_pages_init(uint32_t n)
 {
-    MultiFDPages_t *pages = g_new0(MultiFDPages_t, 1);
-
-    pages->offset = g_new0(ram_addr_t, n);
+    MultiFDPages_t *pages = g_malloc0(multifd_ram_payload_size());
 
     return pages;
 }
@@ -404,8 +413,6 @@ static MultiFDPages_t *multifd_pages_init(uint32_t n)
 static void multifd_pages_clear(MultiFDPages_t *pages)
 {
     multifd_pages_reset(pages);
-    g_free(pages->offset);
-    pages->offset = NULL;
     g_free(pages);
 }
 
@@ -1185,7 +1192,6 @@ bool multifd_send_setup(void)
         qemu_sem_init(&p->sem_sync, 0);
         p->id = i;
         p->pages = multifd_pages_init(page_count);
-
         if (use_packets) {
             p->packet_len = sizeof(MultiFDPacket_t)
                           + sizeof(uint64_t) * page_count;
diff --git a/migration/multifd.h b/migration/multifd.h
index 7bb4a2cbc4..a7fdd97f70 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -77,9 +77,9 @@ typedef struct {
     uint32_t num;
     /* number of normal pages */
     uint32_t normal_num;
+    RAMBlock *block;
     /* offset of each page */
-    ram_addr_t *offset;
-    RAMBlock *block;
+    ram_addr_t offset[];
 } MultiFDPages_t;
 
 struct MultiFDRecvData {
-- 
2.35.3



^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 07/14] migration/multifd: Replace p->pages with an union pointer
  2024-08-01 12:35 [PATCH v3 00/14] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
                   ` (5 preceding siblings ...)
  2024-08-01 12:35 ` [PATCH v3 06/14] migration/multifd: Make MultiFDPages_t:offset a flexible array member Fabiano Rosas
@ 2024-08-01 12:35 ` Fabiano Rosas
  2024-08-21 21:27   ` Peter Xu
  2024-08-01 12:35 ` [PATCH v3 08/14] migration/multifd: Move pages accounting into multifd_send_zero_page_detect() Fabiano Rosas
                   ` (7 subsequent siblings)
  14 siblings, 1 reply; 37+ messages in thread
From: Fabiano Rosas @ 2024-08-01 12:35 UTC (permalink / raw)
  To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero

We want multifd to be able to handle more types of data than just ram
pages. To start decoupling multifd from pages, replace p->pages
(MultiFDPages_t) with the new type MultiFDSendData, which hides the
client payload inside a union.

The general idea here is to isolate functions that *need* to handle
MultiFDPages_t and eventually move them to multifd-ram.c, while
multifd.c keeps only the core functions that handle
MultiFDSendData/MultiFDRecvData.
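
Since the flexible array lives inside the union's largest member, the
allocation size is computed conceptually as follows (condensed from
multifd_send_data_alloc() in the patch below):

    size_t payload = MAX(sizeof(MultiFDPages_t) + n * sizeof(ram_addr_t),
                         sizeof(MultiFDPayload));
    size_t header  = sizeof(MultiFDSendData) - sizeof(MultiFDPayload);

    return g_malloc0(header + payload);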

Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
 migration/multifd-qpl.c       |  6 +--
 migration/multifd-uadk.c      |  2 +-
 migration/multifd-zero-page.c |  2 +-
 migration/multifd-zlib.c      |  2 +-
 migration/multifd-zstd.c      |  2 +-
 migration/multifd.c           | 85 ++++++++++++++++++++---------------
 migration/multifd.h           |  7 +--
 7 files changed, 57 insertions(+), 49 deletions(-)

diff --git a/migration/multifd-qpl.c b/migration/multifd-qpl.c
index db60c05795..21153f1987 100644
--- a/migration/multifd-qpl.c
+++ b/migration/multifd-qpl.c
@@ -406,7 +406,7 @@ retry:
 static void multifd_qpl_compress_pages_slow_path(MultiFDSendParams *p)
 {
     QplData *qpl = p->compress_data;
-    MultiFDPages_t *pages = p->pages;
+    MultiFDPages_t *pages = &p->data->u.ram;
     uint32_t size = p->page_size;
     qpl_job *job = qpl->sw_job;
     uint8_t *zbuf = qpl->zbuf;
@@ -437,7 +437,7 @@ static void multifd_qpl_compress_pages_slow_path(MultiFDSendParams *p)
 static void multifd_qpl_compress_pages(MultiFDSendParams *p)
 {
     QplData *qpl = p->compress_data;
-    MultiFDPages_t *pages = p->pages;
+    MultiFDPages_t *pages = &p->data->u.ram;
     uint32_t size = p->page_size;
     QplHwJob *hw_job;
     uint8_t *buf;
@@ -501,7 +501,7 @@ static void multifd_qpl_compress_pages(MultiFDSendParams *p)
 static int multifd_qpl_send_prepare(MultiFDSendParams *p, Error **errp)
 {
     QplData *qpl = p->compress_data;
-    MultiFDPages_t *pages = p->pages;
+    MultiFDPages_t *pages = &p->data->u.ram;
     uint32_t len = 0;
 
     if (!multifd_send_prepare_common(p)) {
diff --git a/migration/multifd-uadk.c b/migration/multifd-uadk.c
index 1ed1c6afe6..9d99807af5 100644
--- a/migration/multifd-uadk.c
+++ b/migration/multifd-uadk.c
@@ -177,7 +177,7 @@ static int multifd_uadk_send_prepare(MultiFDSendParams *p, Error **errp)
     uint32_t page_size = multifd_ram_page_size();
     uint8_t *buf = uadk_data->buf;
     int ret = 0;
-    MultiFDPages_t *pages = p->pages;
+    MultiFDPages_t *pages = &p->data->u.ram;
 
     if (!multifd_send_prepare_common(p)) {
         goto out;
diff --git a/migration/multifd-zero-page.c b/migration/multifd-zero-page.c
index cc624e36b3..6506a4aa89 100644
--- a/migration/multifd-zero-page.c
+++ b/migration/multifd-zero-page.c
@@ -46,7 +46,7 @@ static void swap_page_offset(ram_addr_t *pages_offset, int a, int b)
  */
 void multifd_send_zero_page_detect(MultiFDSendParams *p)
 {
-    MultiFDPages_t *pages = p->pages;
+    MultiFDPages_t *pages = &p->data->u.ram;
     RAMBlock *rb = pages->block;
     int i = 0;
     int j = pages->num - 1;
diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index e47d7f70dc..66517c1067 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -123,7 +123,7 @@ static void zlib_send_cleanup(MultiFDSendParams *p, Error **errp)
  */
 static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
 {
-    MultiFDPages_t *pages = p->pages;
+    MultiFDPages_t *pages = &p->data->u.ram;
     struct zlib_data *z = p->compress_data;
     z_stream *zs = &z->zs;
     uint32_t out_size = 0;
diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index 1812fd1b48..04ac711cf4 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -119,7 +119,7 @@ static void zstd_send_cleanup(MultiFDSendParams *p, Error **errp)
  */
 static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
 {
-    MultiFDPages_t *pages = p->pages;
+    MultiFDPages_t *pages = &p->data->u.ram;
     struct zstd_data *z = p->compress_data;
     int ret;
     uint32_t i;
diff --git a/migration/multifd.c b/migration/multifd.c
index 64503604cf..8771cda734 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -49,8 +49,7 @@ typedef struct {
 
 struct {
     MultiFDSendParams *params;
-    /* array of pages to sent */
-    MultiFDPages_t *pages;
+    MultiFDSendData *data;
     /*
      * Global number of generated multifd packets.
      *
@@ -109,6 +108,28 @@ static size_t multifd_ram_payload_size(void)
     return sizeof(MultiFDPages_t) + n * sizeof(ram_addr_t);
 }
 
+static MultiFDSendData *multifd_send_data_alloc(void)
+{
+    size_t max_payload_size, size_minus_payload;
+
+    /*
+     * MultiFDPages_t has a flexible array at the end, account for it
+     * when allocating MultiFDSendData. Use max() in case other types
+     * added to the union in the future are larger than
+     * (MultiFDPages_t + flex array).
+     */
+    max_payload_size = MAX(multifd_ram_payload_size(), sizeof(MultiFDPayload));
+
+    /*
+     * Account for any holes the compiler might insert. We can't pack
+     * the structure because that misaligns the members and triggers
+     * Waddress-of-packed-member.
+     */
+    size_minus_payload = sizeof(MultiFDSendData) - sizeof(MultiFDPayload);
+
+    return g_malloc0(size_minus_payload + max_payload_size);
+}
+
 static bool multifd_use_packets(void)
 {
     return !migrate_mapped_ram();
@@ -121,7 +142,7 @@ void multifd_send_channel_created(void)
 
 static void multifd_set_file_bitmap(MultiFDSendParams *p)
 {
-    MultiFDPages_t *pages = p->pages;
+    MultiFDPages_t *pages = &p->data->u.ram;
 
     assert(pages->block);
 
@@ -177,7 +198,7 @@ static void nocomp_send_cleanup(MultiFDSendParams *p, Error **errp)
 
 static void multifd_send_prepare_iovs(MultiFDSendParams *p)
 {
-    MultiFDPages_t *pages = p->pages;
+    MultiFDPages_t *pages = &p->data->u.ram;
     uint32_t page_size = multifd_ram_page_size();
 
     for (int i = 0; i < pages->normal_num; i++) {
@@ -403,23 +424,10 @@ static int multifd_recv_initial_packet(QIOChannel *c, Error **errp)
     return msg.id;
 }
 
-static MultiFDPages_t *multifd_pages_init(uint32_t n)
-{
-    MultiFDPages_t *pages = g_malloc0(multifd_ram_payload_size());
-
-    return pages;
-}
-
-static void multifd_pages_clear(MultiFDPages_t *pages)
-{
-    multifd_pages_reset(pages);
-    g_free(pages);
-}
-
 void multifd_send_fill_packet(MultiFDSendParams *p)
 {
     MultiFDPacket_t *packet = p->packet;
-    MultiFDPages_t *pages = p->pages;
+    MultiFDPages_t *pages = &p->data->u.ram;
     uint64_t packet_num;
     uint32_t zero_num = pages->num - pages->normal_num;
     int i;
@@ -601,7 +609,7 @@ static bool multifd_send_pages(void)
     int i;
     static int next_channel;
     MultiFDSendParams *p = NULL; /* make happy gcc */
-    MultiFDPages_t *pages = multifd_send_state->pages;
+    MultiFDSendData *tmp;
 
     if (multifd_send_should_exit()) {
         return false;
@@ -636,11 +644,14 @@ static bool multifd_send_pages(void)
      * qatomic_store_release() in multifd_send_thread().
      */
     smp_mb_acquire();
-    assert(!p->pages->num);
-    multifd_send_state->pages = p->pages;
-    p->pages = pages;
+
+    assert(!p->data->u.ram.num);
+
+    tmp = multifd_send_state->data;
+    multifd_send_state->data = p->data;
+    p->data = tmp;
     /*
-     * Making sure p->pages is setup before marking pending_job=true. Pairs
+     * Making sure p->data is setup before marking pending_job=true. Pairs
      * with the qatomic_load_acquire() in multifd_send_thread().
      */
     qatomic_store_release(&p->pending_job, true);
@@ -670,7 +681,7 @@ bool multifd_queue_page(RAMBlock *block, ram_addr_t offset)
     MultiFDPages_t *pages;
 
 retry:
-    pages = multifd_send_state->pages;
+    pages = &multifd_send_state->data->u.ram;
 
     /* If the queue is empty, we can already enqueue now */
     if (multifd_queue_empty(pages)) {
@@ -800,8 +811,8 @@ static bool multifd_send_cleanup_channel(MultiFDSendParams *p, Error **errp)
     qemu_sem_destroy(&p->sem_sync);
     g_free(p->name);
     p->name = NULL;
-    multifd_pages_clear(p->pages);
-    p->pages = NULL;
+    g_free(p->data);
+    p->data = NULL;
     p->packet_len = 0;
     g_free(p->packet);
     p->packet = NULL;
@@ -818,8 +829,8 @@ static void multifd_send_cleanup_state(void)
     qemu_sem_destroy(&multifd_send_state->channels_ready);
     g_free(multifd_send_state->params);
     multifd_send_state->params = NULL;
-    multifd_pages_clear(multifd_send_state->pages);
-    multifd_send_state->pages = NULL;
+    g_free(multifd_send_state->data);
+    multifd_send_state->data = NULL;
     g_free(multifd_send_state);
     multifd_send_state = NULL;
 }
@@ -868,11 +879,13 @@ int multifd_send_sync_main(void)
 {
     int i;
     bool flush_zero_copy;
+    MultiFDPages_t *pages;
 
     if (!migrate_multifd()) {
         return 0;
     }
-    if (multifd_send_state->pages->num) {
+    pages = &multifd_send_state->data->u.ram;
+    if (pages->num) {
         if (!multifd_send_pages()) {
             error_report("%s: multifd_send_pages fail", __func__);
             return -1;
@@ -947,11 +960,11 @@ static void *multifd_send_thread(void *opaque)
         }
 
         /*
-         * Read pending_job flag before p->pages.  Pairs with the
+         * Read pending_job flag before p->data.  Pairs with the
          * qatomic_store_release() in multifd_send_pages().
          */
         if (qatomic_load_acquire(&p->pending_job)) {
-            MultiFDPages_t *pages = p->pages;
+            MultiFDPages_t *pages = &p->data->u.ram;
 
             p->iovs_num = 0;
             assert(pages->num);
@@ -963,7 +976,7 @@ static void *multifd_send_thread(void *opaque)
 
             if (migrate_mapped_ram()) {
                 ret = file_write_ramblock_iov(p->c, p->iov, p->iovs_num,
-                                              pages, &local_err);
+                                              &p->data->u.ram, &local_err);
             } else {
                 ret = qio_channel_writev_full_all(p->c, p->iov, p->iovs_num,
                                                   NULL, 0, p->write_flags,
@@ -983,7 +996,7 @@ static void *multifd_send_thread(void *opaque)
             p->next_packet_size = 0;
 
             /*
-             * Making sure p->pages is published before saying "we're
+             * Making sure p->data is published before saying "we're
              * free".  Pairs with the smp_mb_acquire() in
              * multifd_send_pages().
              */
@@ -1179,7 +1192,7 @@ bool multifd_send_setup(void)
     thread_count = migrate_multifd_channels();
     multifd_send_state = g_malloc0(sizeof(*multifd_send_state));
     multifd_send_state->params = g_new0(MultiFDSendParams, thread_count);
-    multifd_send_state->pages = multifd_pages_init(page_count);
+    multifd_send_state->data = multifd_send_data_alloc();
     qemu_sem_init(&multifd_send_state->channels_created, 0);
     qemu_sem_init(&multifd_send_state->channels_ready, 0);
     qatomic_set(&multifd_send_state->exiting, 0);
@@ -1191,7 +1204,7 @@ bool multifd_send_setup(void)
         qemu_sem_init(&p->sem, 0);
         qemu_sem_init(&p->sem_sync, 0);
         p->id = i;
-        p->pages = multifd_pages_init(page_count);
+        p->data = multifd_send_data_alloc();
         if (use_packets) {
             p->packet_len = sizeof(MultiFDPacket_t)
                           + sizeof(uint64_t) * page_count;
@@ -1689,7 +1702,7 @@ void multifd_recv_new_channel(QIOChannel *ioc, Error **errp)
 
 bool multifd_send_prepare_common(MultiFDSendParams *p)
 {
-    MultiFDPages_t *pages = p->pages;
+    MultiFDPages_t *pages = &p->data->u.ram;
     multifd_send_zero_page_detect(p);
 
     if (!pages->normal_num) {
diff --git a/migration/multifd.h b/migration/multifd.h
index a7fdd97f70..c2ba4cad13 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -152,12 +152,7 @@ typedef struct {
      */
     bool pending_job;
     bool pending_sync;
-    /* array of pages to sent.
-     * The owner of 'pages' depends of 'pending_job' value:
-     * pending_job == 0 -> migration_thread can use it.
-     * pending_job != 0 -> multifd_channel can use it.
-     */
-    MultiFDPages_t *pages;
+    MultiFDSendData *data;
 
     /* thread local variables. No locking required */
 
-- 
2.35.3



^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 08/14] migration/multifd: Move pages accounting into multifd_send_zero_page_detect()
  2024-08-01 12:35 [PATCH v3 00/14] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
                   ` (6 preceding siblings ...)
  2024-08-01 12:35 ` [PATCH v3 07/14] migration/multifd: Replace p->pages with an union pointer Fabiano Rosas
@ 2024-08-01 12:35 ` Fabiano Rosas
  2024-08-01 12:35 ` [PATCH v3 09/14] migration/multifd: Isolate ram pages packet data Fabiano Rosas
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 37+ messages in thread
From: Fabiano Rosas @ 2024-08-01 12:35 UTC (permalink / raw)
  To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero

All references to pages are being removed from the multifd worker
threads in order to allow multifd to deal with different payload
types.

multifd_send_zero_page_detect() is called by all multifd migration
paths that deal with pages and is the last spot where the zero page
and normal page counts are adjusted. Move the pages accounting into
that function.

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
 migration/multifd-zero-page.c | 7 ++++++-
 migration/multifd.c           | 2 --
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/migration/multifd-zero-page.c b/migration/multifd-zero-page.c
index 6506a4aa89..f1e988a959 100644
--- a/migration/multifd-zero-page.c
+++ b/migration/multifd-zero-page.c
@@ -14,6 +14,7 @@
 #include "qemu/cutils.h"
 #include "exec/ramblock.h"
 #include "migration.h"
+#include "migration-stats.h"
 #include "multifd.h"
 #include "options.h"
 #include "ram.h"
@@ -53,7 +54,7 @@ void multifd_send_zero_page_detect(MultiFDSendParams *p)
 
     if (!multifd_zero_page_enabled()) {
         pages->normal_num = pages->num;
-        return;
+        goto out;
     }
 
     /*
@@ -74,6 +75,10 @@ void multifd_send_zero_page_detect(MultiFDSendParams *p)
     }
 
     pages->normal_num = i;
+
+out:
+    stat64_add(&mig_stats.normal_pages, pages->normal_num);
+    stat64_add(&mig_stats.zero_pages, pages->num - pages->normal_num);
 }
 
 void multifd_recv_zero_page_process(MultiFDRecvParams *p)
diff --git a/migration/multifd.c b/migration/multifd.c
index 8771cda734..739fc01cbe 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -989,8 +989,6 @@ static void *multifd_send_thread(void *opaque)
 
             stat64_add(&mig_stats.multifd_bytes,
                        p->next_packet_size + p->packet_len);
-            stat64_add(&mig_stats.normal_pages, pages->normal_num);
-            stat64_add(&mig_stats.zero_pages, pages->num - pages->normal_num);
 
             multifd_pages_reset(pages);
             p->next_packet_size = 0;
-- 
2.35.3



^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 09/14] migration/multifd: Isolate ram pages packet data
  2024-08-01 12:35 [PATCH v3 00/14] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
                   ` (7 preceding siblings ...)
  2024-08-01 12:35 ` [PATCH v3 08/14] migration/multifd: Move pages accounting into multifd_send_zero_page_detect() Fabiano Rosas
@ 2024-08-01 12:35 ` Fabiano Rosas
  2024-08-21 21:38   ` Peter Xu
  2024-08-01 12:35 ` [PATCH v3 10/14] migration/multifd: Don't send ram data during SYNC Fabiano Rosas
                   ` (5 subsequent siblings)
  14 siblings, 1 reply; 37+ messages in thread
From: Fabiano Rosas @ 2024-08-01 12:35 UTC (permalink / raw)
  To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero

While we cannot yet disentangle the multifd packet from page data, we
can make the code a bit cleaner by setting the page-related fields in
a separate function.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
 migration/multifd.c    | 110 ++++++++++++++++++++++++-----------------
 migration/trace-events |   9 ++--
 2 files changed, 70 insertions(+), 49 deletions(-)

diff --git a/migration/multifd.c b/migration/multifd.c
index 739fc01cbe..7d946e6661 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -424,67 +424,65 @@ static int multifd_recv_initial_packet(QIOChannel *c, Error **errp)
     return msg.id;
 }
 
-void multifd_send_fill_packet(MultiFDSendParams *p)
+static void multifd_ram_fill_packet(MultiFDSendParams *p)
 {
     MultiFDPacket_t *packet = p->packet;
     MultiFDPages_t *pages = &p->data->u.ram;
-    uint64_t packet_num;
     uint32_t zero_num = pages->num - pages->normal_num;
-    int i;
 
-    packet->flags = cpu_to_be32(p->flags);
     packet->pages_alloc = cpu_to_be32(multifd_ram_page_count());
     packet->normal_pages = cpu_to_be32(pages->normal_num);
     packet->zero_pages = cpu_to_be32(zero_num);
-    packet->next_packet_size = cpu_to_be32(p->next_packet_size);
-
-    packet_num = qatomic_fetch_inc(&multifd_send_state->packet_num);
-    packet->packet_num = cpu_to_be64(packet_num);
 
     if (pages->block) {
         strncpy(packet->ramblock, pages->block->idstr, 256);
     }
 
-    for (i = 0; i < pages->num; i++) {
+    for (int i = 0; i < pages->num; i++) {
         /* there are architectures where ram_addr_t is 32 bit */
         uint64_t temp = pages->offset[i];
 
         packet->offset[i] = cpu_to_be64(temp);
     }
 
-    p->packets_sent++;
     p->total_normal_pages += pages->normal_num;
     p->total_zero_pages += zero_num;
 
-    trace_multifd_send(p->id, packet_num, pages->normal_num, zero_num,
-                       p->flags, p->next_packet_size);
+    trace_multifd_send_ram_fill(p->id, p->total_normal_pages,
+                                p->total_zero_pages);
 }
 
-static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
+void multifd_send_fill_packet(MultiFDSendParams *p)
+{
+    MultiFDPacket_t *packet = p->packet;
+    uint64_t packet_num;
+
+    memset(packet, 0, p->packet_len);
+
+    packet->magic = cpu_to_be32(MULTIFD_MAGIC);
+    packet->version = cpu_to_be32(MULTIFD_VERSION);
+
+    packet->flags = cpu_to_be32(p->flags);
+    packet->next_packet_size = cpu_to_be32(p->next_packet_size);
+
+    packet_num = qatomic_fetch_inc(&multifd_send_state->packet_num);
+    packet->packet_num = cpu_to_be64(packet_num);
+
+    p->packets_sent++;
+
+    multifd_ram_fill_packet(p);
+
+    trace_multifd_send_fill(p->id, packet_num,
+                            p->flags, p->next_packet_size);
+}
+
+static int multifd_ram_unfill_packet(MultiFDRecvParams *p, Error **errp)
 {
     MultiFDPacket_t *packet = p->packet;
     uint32_t page_count = multifd_ram_page_count();
     uint32_t page_size = multifd_ram_page_size();
     int i;
 
-    packet->magic = be32_to_cpu(packet->magic);
-    if (packet->magic != MULTIFD_MAGIC) {
-        error_setg(errp, "multifd: received packet "
-                   "magic %x and expected magic %x",
-                   packet->magic, MULTIFD_MAGIC);
-        return -1;
-    }
-
-    packet->version = be32_to_cpu(packet->version);
-    if (packet->version != MULTIFD_VERSION) {
-        error_setg(errp, "multifd: received packet "
-                   "version %u and expected version %u",
-                   packet->version, MULTIFD_VERSION);
-        return -1;
-    }
-
-    p->flags = be32_to_cpu(packet->flags);
-
     packet->pages_alloc = be32_to_cpu(packet->pages_alloc);
     /*
      * If we received a packet that is 100 times bigger than expected
@@ -513,15 +511,9 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
         return -1;
     }
 
-    p->next_packet_size = be32_to_cpu(packet->next_packet_size);
-    p->packet_num = be64_to_cpu(packet->packet_num);
-    p->packets_recved++;
     p->total_normal_pages += p->normal_num;
     p->total_zero_pages += p->zero_num;
 
-    trace_multifd_recv(p->id, p->packet_num, p->normal_num, p->zero_num,
-                       p->flags, p->next_packet_size);
-
     if (p->normal_num == 0 && p->zero_num == 0) {
         return 0;
     }
@@ -563,6 +555,40 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
     return 0;
 }
 
+static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
+{
+    MultiFDPacket_t *packet = p->packet;
+    int ret = 0;
+
+    packet->magic = be32_to_cpu(packet->magic);
+    if (packet->magic != MULTIFD_MAGIC) {
+        error_setg(errp, "multifd: received packet "
+                   "magic %x and expected magic %x",
+                   packet->magic, MULTIFD_MAGIC);
+        return -1;
+    }
+
+    packet->version = be32_to_cpu(packet->version);
+    if (packet->version != MULTIFD_VERSION) {
+        error_setg(errp, "multifd: received packet "
+                   "version %u and expected version %u",
+                   packet->version, MULTIFD_VERSION);
+        return -1;
+    }
+
+    p->flags = be32_to_cpu(packet->flags);
+    p->next_packet_size = be32_to_cpu(packet->next_packet_size);
+    p->packet_num = be64_to_cpu(packet->packet_num);
+    p->packets_recved++;
+
+    ret = multifd_ram_unfill_packet(p, errp);
+
+    trace_multifd_recv_unfill(p->id, p->packet_num, p->flags,
+                              p->next_packet_size);
+
+    return ret;
+}
+
 static bool multifd_send_should_exit(void)
 {
     return qatomic_read(&multifd_send_state->exiting);
@@ -1036,8 +1062,7 @@ out:
 
     rcu_unregister_thread();
     migration_threads_remove(thread);
-    trace_multifd_send_thread_end(p->id, p->packets_sent, p->total_normal_pages,
-                                  p->total_zero_pages);
+    trace_multifd_send_thread_end(p->id, p->packets_sent);
 
     return NULL;
 }
@@ -1207,8 +1232,6 @@ bool multifd_send_setup(void)
             p->packet_len = sizeof(MultiFDPacket_t)
                           + sizeof(uint64_t) * page_count;
             p->packet = g_malloc0(p->packet_len);
-            p->packet->magic = cpu_to_be32(MULTIFD_MAGIC);
-            p->packet->version = cpu_to_be32(MULTIFD_VERSION);
         }
         p->name = g_strdup_printf("mig/src/send_%d", i);
         p->write_flags = 0;
@@ -1554,7 +1577,6 @@ static void *multifd_recv_thread(void *opaque)
                 qemu_sem_wait(&p->sem_sync);
             }
         } else {
-            p->total_normal_pages += p->data->size / qemu_target_page_size();
             p->data->size = 0;
             /*
              * Order data->size update before clearing
@@ -1571,9 +1593,7 @@ static void *multifd_recv_thread(void *opaque)
     }
 
     rcu_unregister_thread();
-    trace_multifd_recv_thread_end(p->id, p->packets_recved,
-                                  p->total_normal_pages,
-                                  p->total_zero_pages);
+    trace_multifd_recv_thread_end(p->id, p->packets_recved);
 
     return NULL;
 }
diff --git a/migration/trace-events b/migration/trace-events
index 0b7c3324fb..c65902f042 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -128,21 +128,22 @@ postcopy_preempt_reset_channel(void) ""
 # multifd.c
 multifd_new_send_channel_async(uint8_t id) "channel %u"
 multifd_new_send_channel_async_error(uint8_t id, void *err) "channel=%u err=%p"
-multifd_recv(uint8_t id, uint64_t packet_num, uint32_t normal, uint32_t zero, uint32_t flags, uint32_t next_packet_size) "channel %u packet_num %" PRIu64 " normal pages %u zero pages %u flags 0x%x next packet size %u"
+multifd_recv_unfill(uint8_t id, uint64_t packet_num, uint32_t flags, uint32_t next_packet_size) "channel %u packet_num %" PRIu64 " flags 0x%x next packet size %u"
 multifd_recv_new_channel(uint8_t id) "channel %u"
 multifd_recv_sync_main(long packet_num) "packet num %ld"
 multifd_recv_sync_main_signal(uint8_t id) "channel %u"
 multifd_recv_sync_main_wait(uint8_t id) "iter %u"
 multifd_recv_terminate_threads(bool error) "error %d"
-multifd_recv_thread_end(uint8_t id, uint64_t packets, uint64_t normal_pages, uint64_t zero_pages) "channel %u packets %" PRIu64 " normal pages %" PRIu64 " zero pages %" PRIu64
+multifd_recv_thread_end(uint8_t id, uint64_t packets) "channel %u packets %" PRIu64
 multifd_recv_thread_start(uint8_t id) "%u"
-multifd_send(uint8_t id, uint64_t packet_num, uint32_t normal_pages, uint32_t zero_pages, uint32_t flags, uint32_t next_packet_size) "channel %u packet_num %" PRIu64 " normal pages %u zero pages %u flags 0x%x next packet size %u"
+multifd_send_fill(uint8_t id, uint64_t packet_num, uint32_t flags, uint32_t next_packet_size) "channel %u packet_num %" PRIu64 " flags 0x%x next packet size %u"
+multifd_send_ram_fill(uint8_t id, uint32_t normal, uint32_t zero) "channel %u normal pages %u zero pages %u"
 multifd_send_error(uint8_t id) "channel %u"
 multifd_send_sync_main(long packet_num) "packet num %ld"
 multifd_send_sync_main_signal(uint8_t id) "channel %u"
 multifd_send_sync_main_wait(uint8_t id) "channel %u"
 multifd_send_terminate_threads(void) ""
-multifd_send_thread_end(uint8_t id, uint64_t packets, uint64_t normal_pages, uint64_t zero_pages) "channel %u packets %" PRIu64 " normal pages %"  PRIu64 " zero pages %"  PRIu64
+multifd_send_thread_end(uint8_t id, uint64_t packets) "channel %u packets %" PRIu64
 multifd_send_thread_start(uint8_t id) "%u"
 multifd_tls_outgoing_handshake_start(void *ioc, void *tioc, const char *hostname) "ioc=%p tioc=%p hostname=%s"
 multifd_tls_outgoing_handshake_error(void *ioc, const char *err) "ioc=%p err=%s"
-- 
2.35.3



^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 10/14] migration/multifd: Don't send ram data during SYNC
  2024-08-01 12:35 [PATCH v3 00/14] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
                   ` (8 preceding siblings ...)
  2024-08-01 12:35 ` [PATCH v3 09/14] migration/multifd: Isolate ram pages packet data Fabiano Rosas
@ 2024-08-01 12:35 ` Fabiano Rosas
  2024-08-22 15:50   ` Peter Xu
  2024-08-01 12:35 ` [PATCH v3 11/14] migration/multifd: Replace multifd_send_state->pages with client data Fabiano Rosas
                   ` (4 subsequent siblings)
  14 siblings, 1 reply; 37+ messages in thread
From: Fabiano Rosas @ 2024-08-01 12:35 UTC (permalink / raw)
  To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero

Skip saving and loading any ram data in the packet in the case of a
SYNC. This fixes a shortcoming of the current code, which requires
the MultiFDPages_t fields to be reset right after the previous
pending_job finishes; otherwise the very next job might be a SYNC and
multifd_send_fill_packet() would put the stale values in the packet.

By not calling multifd_ram_fill_packet(), we can stop resetting
MultiFDPages_t in the multifd core and leave that to the client code.

The reset function is not actually moved yet because pages->num == 0
is used by the client code to determine whether the MultiFDPages_t
needs to be flushed. The subsequent patches will replace that with a
generic flag that does not depend on MultiFDPages_t.
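
For illustration, the sequence the old code depends on looks roughly
like this (simplified):

    /* channel thread, previous job */
    multifd_send_fill_packet(p);   /* serializes pages->num etc. */
    multifd_pages_reset(pages);    /* reset must happen right here */

    /* next job might be a SYNC; without the reset above, this call
     * would serialize stale page counts into the SYNC packet */
    multifd_send_fill_packet(p);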

Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
 migration/multifd.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/migration/multifd.c b/migration/multifd.c
index 7d946e6661..ec6dcb9999 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -456,6 +456,7 @@ void multifd_send_fill_packet(MultiFDSendParams *p)
 {
     MultiFDPacket_t *packet = p->packet;
     uint64_t packet_num;
+    bool sync_packet = p->flags & MULTIFD_FLAG_SYNC;
 
     memset(packet, 0, p->packet_len);
 
@@ -470,7 +471,9 @@ void multifd_send_fill_packet(MultiFDSendParams *p)
 
     p->packets_sent++;
 
-    multifd_ram_fill_packet(p);
+    if (!sync_packet) {
+        multifd_ram_fill_packet(p);
+    }
 
     trace_multifd_send_fill(p->id, packet_num,
                             p->flags, p->next_packet_size);
@@ -581,7 +584,9 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
     p->packet_num = be64_to_cpu(packet->packet_num);
     p->packets_recved++;
 
-    ret = multifd_ram_unfill_packet(p, errp);
+    if (!(p->flags & MULTIFD_FLAG_SYNC)) {
+        ret = multifd_ram_unfill_packet(p, errp);
+    }
 
     trace_multifd_recv_unfill(p->id, p->packet_num, p->flags,
                               p->next_packet_size);
@@ -1536,7 +1541,9 @@ static void *multifd_recv_thread(void *opaque)
             flags = p->flags;
             /* recv methods don't know how to handle the SYNC flag */
             p->flags &= ~MULTIFD_FLAG_SYNC;
-            has_data = p->normal_num || p->zero_num;
+            if (!(flags & MULTIFD_FLAG_SYNC)) {
+                has_data = p->normal_num || p->zero_num;
+            }
             qemu_mutex_unlock(&p->mutex);
         } else {
             /*
-- 
2.35.3



^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 11/14] migration/multifd: Replace multifd_send_state->pages with client data
  2024-08-01 12:35 [PATCH v3 00/14] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
                   ` (9 preceding siblings ...)
  2024-08-01 12:35 ` [PATCH v3 10/14] migration/multifd: Don't send ram data during SYNC Fabiano Rosas
@ 2024-08-01 12:35 ` Fabiano Rosas
  2024-08-22 15:59   ` Peter Xu
  2024-08-01 12:35 ` [PATCH v3 12/14] migration/multifd: Allow multifd sync without flush Fabiano Rosas
                   ` (3 subsequent siblings)
  14 siblings, 1 reply; 37+ messages in thread
From: Fabiano Rosas @ 2024-08-01 12:35 UTC (permalink / raw)
  To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero

Multifd currently has a simple scheduling mechanism that distributes
work to the various channels by keeping storage space within each
channel and an extra space that is given to the client. Each time the
client fills the space with data and calls into multifd, that space is
given to the next idle channel and a free storage space is taken from
the channel and given to the client for the next iteration.

This means we always need (#multifd_channels + 1) memory slots to
operate multifd.

This is fine, except that the presence of this one extra memory slot
doesn't allow different types of payloads to be processed at the same
time in different channels, i.e. the data type of
multifd_send_state->pages needs to be the same as p->pages.

For each new data type different from MultiFDPages_t that is to be
handled, this logic would need to be duplicated by adding new fields
to multifd_send_state, to the channels and to multifd_send_pages().

Fix this situation by moving the extra slot into the client and using
only the generic type MultiFDSendData in the multifd core.
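
The exchange at the heart of multifd_send() then reduces to a plain
pointer swap (simplified from the patch below):

    MultiFDSendData *tmp = *send_data;

    *send_data = p->data;   /* caller gets the channel's unused slot */
    p->data = tmp;          /* channel gets the filled slot to send */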

Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
 migration/multifd.c | 79 ++++++++++++++++++++++++++-------------------
 migration/multifd.h |  3 ++
 migration/ram.c     |  2 ++
 3 files changed, 50 insertions(+), 34 deletions(-)

diff --git a/migration/multifd.c b/migration/multifd.c
index ec6dcb9999..1c16879495 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -49,7 +49,6 @@ typedef struct {
 
 struct {
     MultiFDSendParams *params;
-    MultiFDSendData *data;
     /*
      * Global number of generated multifd packets.
      *
@@ -97,6 +96,8 @@ struct {
     MultiFDMethods *ops;
 } *multifd_recv_state;
 
+static MultiFDSendData *multifd_ram_send;
+
 static size_t multifd_ram_payload_size(void)
 {
     uint32_t n = multifd_ram_page_count();
@@ -130,6 +131,17 @@ static MultiFDSendData *multifd_send_data_alloc(void)
     return g_malloc0(size_minus_payload + max_payload_size);
 }
 
+void multifd_ram_save_setup(void)
+{
+    multifd_ram_send = multifd_send_data_alloc();
+}
+
+void multifd_ram_save_cleanup(void)
+{
+    g_free(multifd_ram_send);
+    multifd_ram_send = NULL;
+}
+
 static bool multifd_use_packets(void)
 {
     return !migrate_mapped_ram();
@@ -617,25 +629,20 @@ static void multifd_send_kick_main(MultiFDSendParams *p)
 }
 
 /*
- * How we use multifd_send_state->pages and channel->pages?
+ * multifd_send() works by exchanging the MultiFDSendData object
+ * provided by the caller with an unused MultiFDSendData object from
+ * the next channel that is found to be idle.
  *
- * We create a pages for each channel, and a main one.  Each time that
- * we need to send a batch of pages we interchange the ones between
- * multifd_send_state and the channel that is sending it.  There are
- * two reasons for that:
- *    - to not have to do so many mallocs during migration
- *    - to make easier to know what to free at the end of migration
+ * The channel owns the data until it finishes transmitting and the
+ * caller owns the empty object until it fills it with data and calls
+ * this function again. No locking necessary.
  *
- * This way we always know who is the owner of each "pages" struct,
- * and we don't need any locking.  It belongs to the migration thread
- * or to the channel thread.  Switching is safe because the migration
- * thread is using the channel mutex when changing it, and the channel
- * have to had finish with its own, otherwise pending_job can't be
- * false.
+ * Switching is safe because both the migration thread and the channel
+ * thread have barriers in place to serialize access.
  *
  * Returns true if succeed, false otherwise.
  */
-static bool multifd_send_pages(void)
+static bool multifd_send(MultiFDSendData **send_data)
 {
     int i;
     static int next_channel;
@@ -676,11 +683,16 @@ static bool multifd_send_pages(void)
      */
     smp_mb_acquire();
 
-    assert(!p->data->u.ram.num);
+    assert(multifd_payload_empty(p->data));
 
-    tmp = multifd_send_state->data;
-    multifd_send_state->data = p->data;
+    /*
+     * Swap the pointers. The channel gets the client data for
+     * transferring and the client gets back an unused data slot.
+     */
+    tmp = *send_data;
+    *send_data = p->data;
     p->data = tmp;
+
     /*
      * Making sure p->data is setup before marking pending_job=true. Pairs
      * with the qatomic_load_acquire() in multifd_send_thread().
@@ -712,7 +724,12 @@ bool multifd_queue_page(RAMBlock *block, ram_addr_t offset)
     MultiFDPages_t *pages;
 
 retry:
-    pages = &multifd_send_state->data->u.ram;
+    pages = &multifd_ram_send->u.ram;
+
+    if (multifd_payload_empty(multifd_ram_send)) {
+        multifd_pages_reset(pages);
+        multifd_set_payload_type(multifd_ram_send, MULTIFD_PAYLOAD_RAM);
+    }
 
     /* If the queue is empty, we can already enqueue now */
     if (multifd_queue_empty(pages)) {
@@ -730,7 +747,7 @@ retry:
      * After flush, always retry.
      */
     if (pages->block != block || multifd_queue_full(pages)) {
-        if (!multifd_send_pages()) {
+        if (!multifd_send(&multifd_ram_send)) {
             return false;
         }
         goto retry;
@@ -860,8 +877,6 @@ static void multifd_send_cleanup_state(void)
     qemu_sem_destroy(&multifd_send_state->channels_ready);
     g_free(multifd_send_state->params);
     multifd_send_state->params = NULL;
-    g_free(multifd_send_state->data);
-    multifd_send_state->data = NULL;
     g_free(multifd_send_state);
     multifd_send_state = NULL;
 }
@@ -910,15 +925,14 @@ int multifd_send_sync_main(void)
 {
     int i;
     bool flush_zero_copy;
-    MultiFDPages_t *pages;
 
     if (!migrate_multifd()) {
         return 0;
     }
-    pages = &multifd_send_state->data->u.ram;
-    if (pages->num) {
-        if (!multifd_send_pages()) {
-            error_report("%s: multifd_send_pages fail", __func__);
+
+    if (!multifd_payload_empty(multifd_ram_send)) {
+        if (!multifd_send(&multifd_ram_send)) {
+            error_report("%s: multifd_send fail", __func__);
             return -1;
         }
     }
@@ -992,13 +1006,11 @@ static void *multifd_send_thread(void *opaque)
 
         /*
          * Read pending_job flag before p->data.  Pairs with the
-         * qatomic_store_release() in multifd_send_pages().
+         * qatomic_store_release() in multifd_send().
          */
         if (qatomic_load_acquire(&p->pending_job)) {
-            MultiFDPages_t *pages = &p->data->u.ram;
-
             p->iovs_num = 0;
-            assert(pages->num);
+            assert(!multifd_payload_empty(p->data));
 
             ret = multifd_send_state->ops->send_prepare(p, &local_err);
             if (ret != 0) {
@@ -1021,13 +1033,13 @@ static void *multifd_send_thread(void *opaque)
             stat64_add(&mig_stats.multifd_bytes,
                        p->next_packet_size + p->packet_len);
 
-            multifd_pages_reset(pages);
             p->next_packet_size = 0;
+            multifd_set_payload_type(p->data, MULTIFD_PAYLOAD_NONE);
 
             /*
              * Making sure p->data is published before saying "we're
              * free".  Pairs with the smp_mb_acquire() in
-             * multifd_send_pages().
+             * multifd_send().
              */
             qatomic_store_release(&p->pending_job, false);
         } else {
@@ -1220,7 +1232,6 @@ bool multifd_send_setup(void)
     thread_count = migrate_multifd_channels();
     multifd_send_state = g_malloc0(sizeof(*multifd_send_state));
     multifd_send_state->params = g_new0(MultiFDSendParams, thread_count);
-    multifd_send_state->data = multifd_send_data_alloc();
     qemu_sem_init(&multifd_send_state->channels_created, 0);
     qemu_sem_init(&multifd_send_state->channels_ready, 0);
     qatomic_set(&multifd_send_state->exiting, 0);
diff --git a/migration/multifd.h b/migration/multifd.h
index c2ba4cad13..e3d0120837 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -275,4 +275,7 @@ static inline uint32_t multifd_ram_page_count(void)
 {
     return MULTIFD_PACKET_SIZE / qemu_target_page_size();
 }
+
+void multifd_ram_save_setup(void);
+void multifd_ram_save_cleanup(void);
 #endif
diff --git a/migration/ram.c b/migration/ram.c
index edec1a2d07..1815b2557b 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2387,6 +2387,7 @@ static void ram_save_cleanup(void *opaque)
     ram_bitmaps_destroy();
 
     xbzrle_cleanup();
+    multifd_ram_save_cleanup();
     ram_state_cleanup(rsp);
     g_free(migration_ops);
     migration_ops = NULL;
@@ -3058,6 +3059,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque, Error **errp)
     migration_ops = g_malloc0(sizeof(MigrationOps));
 
     if (migrate_multifd()) {
+        multifd_ram_save_setup();
         migration_ops->ram_save_target_page = ram_save_target_page_multifd;
     } else {
         migration_ops->ram_save_target_page = ram_save_target_page_legacy;
-- 
2.35.3



^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 12/14] migration/multifd: Allow multifd sync without flush
  2024-08-01 12:35 [PATCH v3 00/14] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
                   ` (10 preceding siblings ...)
  2024-08-01 12:35 ` [PATCH v3 11/14] migration/multifd: Replace multifd_send_state->pages with client data Fabiano Rosas
@ 2024-08-01 12:35 ` Fabiano Rosas
  2024-08-22 16:03   ` Peter Xu
  2024-08-01 12:35 ` [PATCH v3 13/14] migration/multifd: Register nocomp ops dynamically Fabiano Rosas
                   ` (2 subsequent siblings)
  14 siblings, 1 reply; 37+ messages in thread
From: Fabiano Rosas @ 2024-08-01 12:35 UTC (permalink / raw)
  To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero

Separate the multifd sync from flushing the client data to the
channels. These two operations are closely related but not strictly
required to be executed together.

The multifd sync is intrinsic to how multifd works. The multiple
channels operate independently and may finish IO out of order in
relation to each other. This also applies between the source and
destination QEMU.

Flushing the data that is left in the client-owned data structures
(e.g. MultiFDPages_t) prior to sync is usually the right thing to do,
but that is particular to how the ram migration is implemented, with
several passes over dirty data.

Make these two routines separate, allowing future code to call the
sync by itself if needed. This also allows the usage of
multifd_ram_send to be isolated to the ram code.
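
After this patch, the ram code keeps the combined behaviour while a
future client could sync without flushing (the second caller below is
hypothetical):

    /* ram path: flush pending pages, then sync all channels */
    ret = multifd_ram_flush_and_sync();

    /* a future non-ram client: synchronize the channels only */
    ret = multifd_send_sync_main();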

Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
 migration/multifd.c | 13 +++++++++----
 migration/multifd.h |  1 +
 migration/ram.c     |  8 ++++----
 3 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/migration/multifd.c b/migration/multifd.c
index 1c16879495..c25ab4924c 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -921,11 +921,8 @@ static int multifd_zero_copy_flush(QIOChannel *c)
     return ret;
 }
 
-int multifd_send_sync_main(void)
+int multifd_ram_flush_and_sync(void)
 {
-    int i;
-    bool flush_zero_copy;
-
     if (!migrate_multifd()) {
         return 0;
     }
@@ -937,6 +934,14 @@ int multifd_send_sync_main(void)
         }
     }
 
+    return multifd_send_sync_main();
+}
+
+int multifd_send_sync_main(void)
+{
+    int i;
+    bool flush_zero_copy;
+
     flush_zero_copy = migrate_zero_copy_send();
 
     for (i = 0; i < migrate_multifd_channels(); i++) {
diff --git a/migration/multifd.h b/migration/multifd.h
index e3d0120837..19a43d46c0 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -278,4 +278,5 @@ static inline uint32_t multifd_ram_page_count(void)
 
 void multifd_ram_save_setup(void);
 void multifd_ram_save_cleanup(void);
+int multifd_ram_flush_and_sync(void);
 #endif
diff --git a/migration/ram.c b/migration/ram.c
index 1815b2557b..67ca3d5d51 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1326,7 +1326,7 @@ static int find_dirty_block(RAMState *rs, PageSearchStatus *pss)
                 (!migrate_multifd_flush_after_each_section() ||
                  migrate_mapped_ram())) {
                 QEMUFile *f = rs->pss[RAM_CHANNEL_PRECOPY].pss_channel;
-                int ret = multifd_send_sync_main();
+                int ret = multifd_ram_flush_and_sync();
                 if (ret < 0) {
                     return ret;
                 }
@@ -3066,7 +3066,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque, Error **errp)
     }
 
     bql_unlock();
-    ret = multifd_send_sync_main();
+    ret = multifd_ram_flush_and_sync();
     bql_lock();
     if (ret < 0) {
         error_setg(errp, "%s: multifd synchronization failed", __func__);
@@ -3213,7 +3213,7 @@ out:
         && migration_is_setup_or_active()) {
         if (migrate_multifd() && migrate_multifd_flush_after_each_section() &&
             !migrate_mapped_ram()) {
-            ret = multifd_send_sync_main();
+            ret = multifd_ram_flush_and_sync();
             if (ret < 0) {
                 return ret;
             }
@@ -3285,7 +3285,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
         }
     }
 
-    ret = multifd_send_sync_main();
+    ret = multifd_ram_flush_and_sync();
     if (ret < 0) {
         return ret;
     }
-- 
2.35.3



^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 13/14] migration/multifd: Register nocomp ops dynamically
  2024-08-01 12:35 [PATCH v3 00/14] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
                   ` (11 preceding siblings ...)
  2024-08-01 12:35 ` [PATCH v3 12/14] migration/multifd: Allow multifd sync without flush Fabiano Rosas
@ 2024-08-01 12:35 ` Fabiano Rosas
  2024-08-22 16:23   ` Peter Xu
  2024-08-01 12:35 ` [PATCH v3 14/14] migration/multifd: Move ram code into multifd-ram.c Fabiano Rosas
  2024-08-01 12:45 ` [PATCH v3 00/14] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
  14 siblings, 1 reply; 37+ messages in thread
From: Fabiano Rosas @ 2024-08-01 12:35 UTC (permalink / raw)
  To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero

Prior to moving the ram code into multifd-ram.c, change the code to
register the nocomp ops dynamically, so we don't need to have the ops
structure defined in multifd.c.

While here, rename s/nocomp/ram/ and remove the docstrings, which are
mostly useless (if anything, it's the function pointers in multifd.h
that should be documented like that).

Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
 migration/multifd.c | 101 ++++++++++++--------------------------------
 1 file changed, 28 insertions(+), 73 deletions(-)

diff --git a/migration/multifd.c b/migration/multifd.c
index c25ab4924c..d5be784b6f 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -167,15 +167,7 @@ static void multifd_set_file_bitmap(MultiFDSendParams *p)
     }
 }
 
-/* Multifd without compression */
-
-/**
- * nocomp_send_setup: setup send side
- *
- * @p: Params for the channel that we are using
- * @errp: pointer to an error
- */
-static int nocomp_send_setup(MultiFDSendParams *p, Error **errp)
+static int ram_send_setup(MultiFDSendParams *p, Error **errp)
 {
     uint32_t page_count = multifd_ram_page_count();
 
@@ -193,15 +185,7 @@ static int nocomp_send_setup(MultiFDSendParams *p, Error **errp)
     return 0;
 }
 
-/**
- * nocomp_send_cleanup: cleanup send side
- *
- * For no compression this function does nothing.
- *
- * @p: Params for the channel that we are using
- * @errp: pointer to an error
- */
-static void nocomp_send_cleanup(MultiFDSendParams *p, Error **errp)
+static void ram_send_cleanup(MultiFDSendParams *p, Error **errp)
 {
     g_free(p->iov);
     p->iov = NULL;
@@ -222,18 +206,7 @@ static void multifd_send_prepare_iovs(MultiFDSendParams *p)
     p->next_packet_size = pages->normal_num * page_size;
 }
 
-/**
- * nocomp_send_prepare: prepare date to be able to send
- *
- * For no compression we just have to calculate the size of the
- * packet.
- *
- * Returns 0 for success or -1 for error
- *
- * @p: Params for the channel that we are using
- * @errp: pointer to an error
- */
-static int nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
+static int ram_send_prepare(MultiFDSendParams *p, Error **errp)
 {
     bool use_zero_copy_send = migrate_zero_copy_send();
     int ret;
@@ -272,46 +245,19 @@ static int nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
     return 0;
 }
 
-/**
- * nocomp_recv_setup: setup receive side
- *
- * For no compression this function does nothing.
- *
- * Returns 0 for success or -1 for error
- *
- * @p: Params for the channel that we are using
- * @errp: pointer to an error
- */
-static int nocomp_recv_setup(MultiFDRecvParams *p, Error **errp)
+static int ram_recv_setup(MultiFDRecvParams *p, Error **errp)
 {
     p->iov = g_new0(struct iovec, multifd_ram_page_count());
     return 0;
 }
 
-/**
- * nocomp_recv_cleanup: setup receive side
- *
- * For no compression this function does nothing.
- *
- * @p: Params for the channel that we are using
- */
-static void nocomp_recv_cleanup(MultiFDRecvParams *p)
+static void ram_recv_cleanup(MultiFDRecvParams *p)
 {
     g_free(p->iov);
     p->iov = NULL;
 }
 
-/**
- * nocomp_recv: read the data from the channel
- *
- * For no compression we just need to read things into the correct place.
- *
- * Returns 0 for success or -1 for error
- *
- * @p: Params for the channel that we are using
- * @errp: pointer to an error
- */
-static int nocomp_recv(MultiFDRecvParams *p, Error **errp)
+static int ram_recv(MultiFDRecvParams *p, Error **errp)
 {
     uint32_t flags;
 
@@ -341,22 +287,15 @@ static int nocomp_recv(MultiFDRecvParams *p, Error **errp)
     return qio_channel_readv_all(p->c, p->iov, p->normal_num, errp);
 }
 
-static MultiFDMethods multifd_nocomp_ops = {
-    .send_setup = nocomp_send_setup,
-    .send_cleanup = nocomp_send_cleanup,
-    .send_prepare = nocomp_send_prepare,
-    .recv_setup = nocomp_recv_setup,
-    .recv_cleanup = nocomp_recv_cleanup,
-    .recv = nocomp_recv
-};
-
-static MultiFDMethods *multifd_ops[MULTIFD_COMPRESSION__MAX] = {
-    [MULTIFD_COMPRESSION_NONE] = &multifd_nocomp_ops,
-};
+static MultiFDMethods *multifd_ops[MULTIFD_COMPRESSION__MAX] = {};
 
 void multifd_register_ops(int method, MultiFDMethods *ops)
 {
-    assert(0 < method && method < MULTIFD_COMPRESSION__MAX);
+    if (method == MULTIFD_COMPRESSION_NONE) {
+        assert(!multifd_ops[method]);
+    } else {
+        assert(0 < method && method < MULTIFD_COMPRESSION__MAX);
+    }
     multifd_ops[method] = ops;
 }
 
@@ -1755,3 +1694,19 @@ bool multifd_send_prepare_common(MultiFDSendParams *p)
 
     return true;
 }
+
+static MultiFDMethods multifd_ram_ops = {
+    .send_setup = ram_send_setup,
+    .send_cleanup = ram_send_cleanup,
+    .send_prepare = ram_send_prepare,
+    .recv_setup = ram_recv_setup,
+    .recv_cleanup = ram_recv_cleanup,
+    .recv = ram_recv
+};
+
+static void multifd_ram_register(void)
+{
+    multifd_register_ops(MULTIFD_COMPRESSION_NONE, &multifd_ram_ops);
+}
+
+migration_init(multifd_ram_register);
-- 
2.35.3



^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 14/14] migration/multifd: Move ram code into multifd-ram.c
  2024-08-01 12:35 [PATCH v3 00/14] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
                   ` (12 preceding siblings ...)
  2024-08-01 12:35 ` [PATCH v3 13/14] migration/multifd: Register nocomp ops dynamically Fabiano Rosas
@ 2024-08-01 12:35 ` Fabiano Rosas
  2024-08-22 16:25   ` Peter Xu
  2024-08-01 12:45 ` [PATCH v3 00/14] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
  14 siblings, 1 reply; 37+ messages in thread
From: Fabiano Rosas @ 2024-08-01 12:35 UTC (permalink / raw)
  To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero

In preparation for adding new payload types to multifd, move most of
the ram-related code into multifd-ram.c. Let's try to keep a semblance
of layering by not mixing general multifd control flow with the
details of transmitting pages of ram.

There are still some pieces left over, namely the p->normal, p->zero,
etc. variables that we use for zero page tracking, and the packet
allocation, which is heavily dependent on the ram code.
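
The decoupling relies on the ops-table registration introduced in the
previous patch. As a minimal sketch of that pattern, with illustrative
names only (migration_init() is approximated here by a constructor
attribute, standing in for QEMU's module-init machinery):

    #include <assert.h>
    #include <stddef.h>

    typedef struct {
        int (*send_prepare)(void *opaque);
    } Methods;

    enum { COMP_NONE, COMP_MAX };

    static Methods *ops[COMP_MAX];

    /* each payload module registers its ops table exactly once */
    static void register_ops(int method, Methods *m)
    {
        assert(method >= 0 && method < COMP_MAX && !ops[method]);
        ops[method] = m;
    }

    static int ram_prepare(void *opaque)
    {
        return 0;
    }

    static Methods ram_ops = { .send_prepare = ram_prepare };

    /* runs at startup, standing in for migration_init() */
    __attribute__((constructor))
    static void ram_register(void)
    {
        register_ops(COMP_NONE, &ram_ops);
    }

    int main(void)
    {
        return ops[COMP_NONE]->send_prepare(NULL);
    }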

Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
 migration/meson.build   |   1 +
 migration/multifd-ram.c | 400 ++++++++++++++++++++++++++++++++++++++++
 migration/multifd.c     | 384 +-------------------------------------
 migration/multifd.h     |   5 +
 4 files changed, 408 insertions(+), 382 deletions(-)
 create mode 100644 migration/multifd-ram.c

diff --git a/migration/meson.build b/migration/meson.build
index 5ce2acb41e..0d1c79cffa 100644
--- a/migration/meson.build
+++ b/migration/meson.build
@@ -21,6 +21,7 @@ system_ss.add(files(
   'migration-hmp-cmds.c',
   'migration.c',
   'multifd.c',
+  'multifd-ram.c',
   'multifd-zlib.c',
   'multifd-zero-page.c',
   'options.c',
diff --git a/migration/multifd-ram.c b/migration/multifd-ram.c
new file mode 100644
index 0000000000..81dda140b5
--- /dev/null
+++ b/migration/multifd-ram.c
@@ -0,0 +1,400 @@
+/*
+ * Multifd ram code
+ *
+ * Copyright (c) 2019-2020 Red Hat Inc
+ *
+ * Authors:
+ *  Juan Quintela <quintela@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "exec/ramblock.h"
+#include "exec/target_page.h"
+#include "file.h"
+#include "multifd.h"
+#include "options.h"
+#include "qapi/error.h"
+#include "qemu/error-report.h"
+#include "trace.h"
+
+static MultiFDSendData *multifd_ram_send;
+
+size_t multifd_ram_payload_size(void)
+{
+    uint32_t n = multifd_ram_page_count();
+
+    /*
+     * We keep an array of page offsets at the end of MultiFDPages_t,
+     * add space for it in the allocation.
+     */
+    return sizeof(MultiFDPages_t) + n * sizeof(ram_addr_t);
+}
+
+void multifd_ram_save_setup(void)
+{
+    multifd_ram_send = multifd_send_data_alloc();
+}
+
+void multifd_ram_save_cleanup(void)
+{
+    g_free(multifd_ram_send);
+    multifd_ram_send = NULL;
+}
+
+static void multifd_set_file_bitmap(MultiFDSendParams *p)
+{
+    MultiFDPages_t *pages = &p->data->u.ram;
+
+    assert(pages->block);
+
+    for (int i = 0; i < pages->normal_num; i++) {
+        ramblock_set_file_bmap_atomic(pages->block, pages->offset[i], true);
+    }
+
+    for (int i = pages->normal_num; i < pages->num; i++) {
+        ramblock_set_file_bmap_atomic(pages->block, pages->offset[i], false);
+    }
+}
+
+static int ram_send_setup(MultiFDSendParams *p, Error **errp)
+{
+    uint32_t page_count = multifd_ram_page_count();
+
+    if (migrate_zero_copy_send()) {
+        p->write_flags |= QIO_CHANNEL_WRITE_FLAG_ZERO_COPY;
+    }
+
+    if (!migrate_mapped_ram()) {
+        /* We need one extra place for the packet header */
+        p->iov = g_new0(struct iovec, page_count + 1);
+    } else {
+        p->iov = g_new0(struct iovec, page_count);
+    }
+
+    return 0;
+}
+
+static void ram_send_cleanup(MultiFDSendParams *p, Error **errp)
+{
+    g_free(p->iov);
+    p->iov = NULL;
+    return;
+}
+
+static void multifd_send_prepare_iovs(MultiFDSendParams *p)
+{
+    MultiFDPages_t *pages = &p->data->u.ram;
+    uint32_t page_size = multifd_ram_page_size();
+
+    for (int i = 0; i < pages->normal_num; i++) {
+        p->iov[p->iovs_num].iov_base = pages->block->host + pages->offset[i];
+        p->iov[p->iovs_num].iov_len = page_size;
+        p->iovs_num++;
+    }
+
+    p->next_packet_size = pages->normal_num * page_size;
+}
+
+static int ram_send_prepare(MultiFDSendParams *p, Error **errp)
+{
+    bool use_zero_copy_send = migrate_zero_copy_send();
+    int ret;
+
+    multifd_send_zero_page_detect(p);
+
+    if (migrate_mapped_ram()) {
+        multifd_send_prepare_iovs(p);
+        multifd_set_file_bitmap(p);
+
+        return 0;
+    }
+
+    if (!use_zero_copy_send) {
+        /*
+         * Only !zerocopy needs the header in IOV; zerocopy will
+         * send it separately.
+         */
+        multifd_send_prepare_header(p);
+    }
+
+    multifd_send_prepare_iovs(p);
+    p->flags |= MULTIFD_FLAG_NOCOMP;
+
+    multifd_send_fill_packet(p);
+
+    if (use_zero_copy_send) {
+        /* Send header first, without zerocopy */
+        ret = qio_channel_write_all(p->c, (void *)p->packet,
+                                    p->packet_len, errp);
+        if (ret != 0) {
+            return -1;
+        }
+    }
+
+    return 0;
+}
+
+static int ram_recv_setup(MultiFDRecvParams *p, Error **errp)
+{
+    p->iov = g_new0(struct iovec, multifd_ram_page_count());
+    return 0;
+}
+
+static void ram_recv_cleanup(MultiFDRecvParams *p)
+{
+    g_free(p->iov);
+    p->iov = NULL;
+}
+
+static int ram_recv(MultiFDRecvParams *p, Error **errp)
+{
+    uint32_t flags;
+
+    if (migrate_mapped_ram()) {
+        return multifd_file_recv_data(p, errp);
+    }
+
+    flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
+
+    if (flags != MULTIFD_FLAG_NOCOMP) {
+        error_setg(errp, "multifd %u: flags received %x flags expected %x",
+                   p->id, flags, MULTIFD_FLAG_NOCOMP);
+        return -1;
+    }
+
+    multifd_recv_zero_page_process(p);
+
+    if (!p->normal_num) {
+        return 0;
+    }
+
+    for (int i = 0; i < p->normal_num; i++) {
+        p->iov[i].iov_base = p->host + p->normal[i];
+        p->iov[i].iov_len = multifd_ram_page_size();
+        ramblock_recv_bitmap_set_offset(p->block, p->normal[i]);
+    }
+    return qio_channel_readv_all(p->c, p->iov, p->normal_num, errp);
+}
+
+static void multifd_pages_reset(MultiFDPages_t *pages)
+{
+    /*
+     * We don't need to touch offset[] array, because it will be
+     * overwritten later when reused.
+     */
+    pages->num = 0;
+    pages->normal_num = 0;
+    pages->block = NULL;
+}
+
+void multifd_ram_fill_packet(MultiFDSendParams *p)
+{
+    MultiFDPacket_t *packet = p->packet;
+    MultiFDPages_t *pages = &p->data->u.ram;
+    uint32_t zero_num = pages->num - pages->normal_num;
+
+    packet->pages_alloc = cpu_to_be32(multifd_ram_page_count());
+    packet->normal_pages = cpu_to_be32(pages->normal_num);
+    packet->zero_pages = cpu_to_be32(zero_num);
+
+    if (pages->block) {
+        strncpy(packet->ramblock, pages->block->idstr, 256);
+    }
+
+    for (int i = 0; i < pages->num; i++) {
+        /* there are architectures where ram_addr_t is 32 bit */
+        uint64_t temp = pages->offset[i];
+
+        packet->offset[i] = cpu_to_be64(temp);
+    }
+
+    p->total_normal_pages += pages->normal_num;
+    p->total_zero_pages += zero_num;
+
+    trace_multifd_send_ram_fill(p->id, p->total_normal_pages,
+                                p->total_zero_pages);
+}
+
+int multifd_ram_unfill_packet(MultiFDRecvParams *p, Error **errp)
+{
+    MultiFDPacket_t *packet = p->packet;
+    uint32_t page_count = multifd_ram_page_count();
+    uint32_t page_size = multifd_ram_page_size();
+    int i;
+
+    packet->pages_alloc = be32_to_cpu(packet->pages_alloc);
+    /*
+     * A packet that advertises more pages than the configured page
+     * count indicates a corrupt (or hostile) stream; stop migration.
+     */
+    if (packet->pages_alloc > page_count) {
+        error_setg(errp, "multifd: received packet "
+                   "with size %u and expected a size of %u",
+                   packet->pages_alloc, page_count);
+        return -1;
+    }
+
+    p->normal_num = be32_to_cpu(packet->normal_pages);
+    if (p->normal_num > packet->pages_alloc) {
+        error_setg(errp, "multifd: received packet "
+                   "with %u normal pages and expected maximum pages are %u",
+                   p->normal_num, packet->pages_alloc);
+        return -1;
+    }
+
+    p->zero_num = be32_to_cpu(packet->zero_pages);
+    if (p->zero_num > packet->pages_alloc - p->normal_num) {
+        error_setg(errp, "multifd: received packet "
+                   "with %u zero pages and expected maximum zero pages are %u",
+                   p->zero_num, packet->pages_alloc - p->normal_num);
+        return -1;
+    }
+
+    p->total_normal_pages += p->normal_num;
+    p->total_zero_pages += p->zero_num;
+
+    if (p->normal_num == 0 && p->zero_num == 0) {
+        return 0;
+    }
+
+    /* make sure the ramblock name string is NUL-terminated */
+    packet->ramblock[255] = 0;
+    p->block = qemu_ram_block_by_name(packet->ramblock);
+    if (!p->block) {
+        error_setg(errp, "multifd: unknown ram block %s",
+                   packet->ramblock);
+        return -1;
+    }
+
+    p->host = p->block->host;
+    for (i = 0; i < p->normal_num; i++) {
+        uint64_t offset = be64_to_cpu(packet->offset[i]);
+
+        if (offset > (p->block->used_length - page_size)) {
+            error_setg(errp, "multifd: offset too long %" PRIu64
+                       " (max " RAM_ADDR_FMT ")",
+                       offset, p->block->used_length);
+            return -1;
+        }
+        p->normal[i] = offset;
+    }
+
+    for (i = 0; i < p->zero_num; i++) {
+        uint64_t offset = be64_to_cpu(packet->offset[p->normal_num + i]);
+
+        if (offset > (p->block->used_length - page_size)) {
+            error_setg(errp, "multifd: offset too long %" PRIu64
+                       " (max " RAM_ADDR_FMT ")",
+                       offset, p->block->used_length);
+            return -1;
+        }
+        p->zero[i] = offset;
+    }
+
+    return 0;
+}
+
+static inline bool multifd_queue_empty(MultiFDPages_t *pages)
+{
+    return pages->num == 0;
+}
+
+static inline bool multifd_queue_full(MultiFDPages_t *pages)
+{
+    return pages->num == multifd_ram_page_count();
+}
+
+static inline void multifd_enqueue(MultiFDPages_t *pages, ram_addr_t offset)
+{
+    pages->offset[pages->num++] = offset;
+}
+
+/* Returns true if enqueue successful, false otherwise */
+bool multifd_queue_page(RAMBlock *block, ram_addr_t offset)
+{
+    MultiFDPages_t *pages;
+
+retry:
+    pages = &multifd_ram_send->u.ram;
+
+    if (multifd_payload_empty(multifd_ram_send)) {
+        multifd_pages_reset(pages);
+        multifd_set_payload_type(multifd_ram_send, MULTIFD_PAYLOAD_RAM);
+    }
+
+    /* If the queue is empty, we can already enqueue now */
+    if (multifd_queue_empty(pages)) {
+        pages->block = block;
+        multifd_enqueue(pages, offset);
+        return true;
+    }
+
+    /*
+     * Not empty, but we need a flush.  That can be because of either:
+     *
+     * (1) The page is not on the same ramblock as the previous ones, or,
+     * (2) The queue is full.
+     *
+     * After flush, always retry.
+     */
+    if (pages->block != block || multifd_queue_full(pages)) {
+        if (!multifd_send(&multifd_ram_send)) {
+            return false;
+        }
+        goto retry;
+    }
+
+    /* Not empty, and we still have space, do it! */
+    multifd_enqueue(pages, offset);
+    return true;
+}
+
+int multifd_ram_flush_and_sync(void)
+{
+    if (!migrate_multifd()) {
+        return 0;
+    }
+
+    if (!multifd_payload_empty(multifd_ram_send)) {
+        if (!multifd_send(&multifd_ram_send)) {
+            error_report("%s: multifd_send fail", __func__);
+            return -1;
+        }
+    }
+
+    return multifd_send_sync_main();
+}
+
+bool multifd_send_prepare_common(MultiFDSendParams *p)
+{
+    MultiFDPages_t *pages = &p->data->u.ram;
+    multifd_send_zero_page_detect(p);
+
+    if (!pages->normal_num) {
+        p->next_packet_size = 0;
+        return false;
+    }
+
+    multifd_send_prepare_header(p);
+
+    return true;
+}
+
+static MultiFDMethods multifd_ram_ops = {
+    .send_setup = ram_send_setup,
+    .send_cleanup = ram_send_cleanup,
+    .send_prepare = ram_send_prepare,
+    .recv_setup = ram_recv_setup,
+    .recv_cleanup = ram_recv_cleanup,
+    .recv = ram_recv
+};
+
+static void multifd_ram_register(void)
+{
+    multifd_register_ops(MULTIFD_COMPRESSION_NONE, &multifd_ram_ops);
+}
+
+migration_init(multifd_ram_register);
diff --git a/migration/multifd.c b/migration/multifd.c
index d5be784b6f..99197d6174 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -96,20 +96,7 @@ struct {
     MultiFDMethods *ops;
 } *multifd_recv_state;
 
-static MultiFDSendData *multifd_ram_send;
-
-static size_t multifd_ram_payload_size(void)
-{
-    uint32_t n = multifd_ram_page_count();
-
-    /*
-     * We keep an array of page offsets at the end of MultiFDPages_t,
-     * add space for it in the allocation.
-     */
-    return sizeof(MultiFDPages_t) + n * sizeof(ram_addr_t);
-}
-
-static MultiFDSendData *multifd_send_data_alloc(void)
+MultiFDSendData *multifd_send_data_alloc(void)
 {
     size_t max_payload_size, size_minus_payload;
 
@@ -131,17 +118,6 @@ static MultiFDSendData *multifd_send_data_alloc(void)
     return g_malloc0(size_minus_payload + max_payload_size);
 }
 
-void multifd_ram_save_setup(void)
-{
-    multifd_ram_send = multifd_send_data_alloc();
-}
-
-void multifd_ram_save_cleanup(void)
-{
-    g_free(multifd_ram_send);
-    multifd_ram_send = NULL;
-}
-
 static bool multifd_use_packets(void)
 {
     return !migrate_mapped_ram();
@@ -152,141 +128,6 @@ void multifd_send_channel_created(void)
     qemu_sem_post(&multifd_send_state->channels_created);
 }
 
-static void multifd_set_file_bitmap(MultiFDSendParams *p)
-{
-    MultiFDPages_t *pages = &p->data->u.ram;
-
-    assert(pages->block);
-
-    for (int i = 0; i < pages->normal_num; i++) {
-        ramblock_set_file_bmap_atomic(pages->block, pages->offset[i], true);
-    }
-
-    for (int i = pages->normal_num; i < pages->num; i++) {
-        ramblock_set_file_bmap_atomic(pages->block, pages->offset[i], false);
-    }
-}
-
-static int ram_send_setup(MultiFDSendParams *p, Error **errp)
-{
-    uint32_t page_count = multifd_ram_page_count();
-
-    if (migrate_zero_copy_send()) {
-        p->write_flags |= QIO_CHANNEL_WRITE_FLAG_ZERO_COPY;
-    }
-
-    if (multifd_use_packets()) {
-        /* We need one extra place for the packet header */
-        p->iov = g_new0(struct iovec, page_count + 1);
-    } else {
-        p->iov = g_new0(struct iovec, page_count);
-    }
-
-    return 0;
-}
-
-static void ram_send_cleanup(MultiFDSendParams *p, Error **errp)
-{
-    g_free(p->iov);
-    p->iov = NULL;
-    return;
-}
-
-static void multifd_send_prepare_iovs(MultiFDSendParams *p)
-{
-    MultiFDPages_t *pages = &p->data->u.ram;
-    uint32_t page_size = multifd_ram_page_size();
-
-    for (int i = 0; i < pages->normal_num; i++) {
-        p->iov[p->iovs_num].iov_base = pages->block->host + pages->offset[i];
-        p->iov[p->iovs_num].iov_len = page_size;
-        p->iovs_num++;
-    }
-
-    p->next_packet_size = pages->normal_num * page_size;
-}
-
-static int ram_send_prepare(MultiFDSendParams *p, Error **errp)
-{
-    bool use_zero_copy_send = migrate_zero_copy_send();
-    int ret;
-
-    multifd_send_zero_page_detect(p);
-
-    if (!multifd_use_packets()) {
-        multifd_send_prepare_iovs(p);
-        multifd_set_file_bitmap(p);
-
-        return 0;
-    }
-
-    if (!use_zero_copy_send) {
-        /*
-         * Only !zerocopy needs the header in IOV; zerocopy will
-         * send it separately.
-         */
-        multifd_send_prepare_header(p);
-    }
-
-    multifd_send_prepare_iovs(p);
-    p->flags |= MULTIFD_FLAG_NOCOMP;
-
-    multifd_send_fill_packet(p);
-
-    if (use_zero_copy_send) {
-        /* Send header first, without zerocopy */
-        ret = qio_channel_write_all(p->c, (void *)p->packet,
-                                    p->packet_len, errp);
-        if (ret != 0) {
-            return -1;
-        }
-    }
-
-    return 0;
-}
-
-static int ram_recv_setup(MultiFDRecvParams *p, Error **errp)
-{
-    p->iov = g_new0(struct iovec, multifd_ram_page_count());
-    return 0;
-}
-
-static void ram_recv_cleanup(MultiFDRecvParams *p)
-{
-    g_free(p->iov);
-    p->iov = NULL;
-}
-
-static int ram_recv(MultiFDRecvParams *p, Error **errp)
-{
-    uint32_t flags;
-
-    if (!multifd_use_packets()) {
-        return multifd_file_recv_data(p, errp);
-    }
-
-    flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
-
-    if (flags != MULTIFD_FLAG_NOCOMP) {
-        error_setg(errp, "multifd %u: flags received %x flags expected %x",
-                   p->id, flags, MULTIFD_FLAG_NOCOMP);
-        return -1;
-    }
-
-    multifd_recv_zero_page_process(p);
-
-    if (!p->normal_num) {
-        return 0;
-    }
-
-    for (int i = 0; i < p->normal_num; i++) {
-        p->iov[i].iov_base = p->host + p->normal[i];
-        p->iov[i].iov_len = multifd_ram_page_size();
-        ramblock_recv_bitmap_set_offset(p->block, p->normal[i]);
-    }
-    return qio_channel_readv_all(p->c, p->iov, p->normal_num, errp);
-}
-
 static MultiFDMethods *multifd_ops[MULTIFD_COMPRESSION__MAX] = {};
 
 void multifd_register_ops(int method, MultiFDMethods *ops)
@@ -299,18 +140,6 @@ void multifd_register_ops(int method, MultiFDMethods *ops)
     multifd_ops[method] = ops;
 }
 
-/* Reset a MultiFDPages_t* object for the next use */
-static void multifd_pages_reset(MultiFDPages_t *pages)
-{
-    /*
-     * We don't need to touch offset[] array, because it will be
-     * overwritten later when reused.
-     */
-    pages->num = 0;
-    pages->normal_num = 0;
-    pages->block = NULL;
-}
-
 static int multifd_send_initial_packet(MultiFDSendParams *p, Error **errp)
 {
     MultiFDInit_t msg = {};
@@ -375,34 +204,6 @@ static int multifd_recv_initial_packet(QIOChannel *c, Error **errp)
     return msg.id;
 }
 
-static void multifd_ram_fill_packet(MultiFDSendParams *p)
-{
-    MultiFDPacket_t *packet = p->packet;
-    MultiFDPages_t *pages = &p->data->u.ram;
-    uint32_t zero_num = pages->num - pages->normal_num;
-
-    packet->pages_alloc = cpu_to_be32(multifd_ram_page_count());
-    packet->normal_pages = cpu_to_be32(pages->normal_num);
-    packet->zero_pages = cpu_to_be32(zero_num);
-
-    if (pages->block) {
-        strncpy(packet->ramblock, pages->block->idstr, 256);
-    }
-
-    for (int i = 0; i < pages->num; i++) {
-        /* there are architectures where ram_addr_t is 32 bit */
-        uint64_t temp = pages->offset[i];
-
-        packet->offset[i] = cpu_to_be64(temp);
-    }
-
-    p->total_normal_pages += pages->normal_num;
-    p->total_zero_pages += zero_num;
-
-    trace_multifd_send_ram_fill(p->id, p->total_normal_pages,
-                                p->total_zero_pages);
-}
-
 void multifd_send_fill_packet(MultiFDSendParams *p)
 {
     MultiFDPacket_t *packet = p->packet;
@@ -430,85 +231,6 @@ void multifd_send_fill_packet(MultiFDSendParams *p)
                             p->flags, p->next_packet_size);
 }
 
-static int multifd_ram_unfill_packet(MultiFDRecvParams *p, Error **errp)
-{
-    MultiFDPacket_t *packet = p->packet;
-    uint32_t page_count = multifd_ram_page_count();
-    uint32_t page_size = multifd_ram_page_size();
-    int i;
-
-    packet->pages_alloc = be32_to_cpu(packet->pages_alloc);
-    /*
-     * If we received a packet that is 100 times bigger than expected
-     * just stop migration.  It is a magic number.
-     */
-    if (packet->pages_alloc > page_count) {
-        error_setg(errp, "multifd: received packet "
-                   "with size %u and expected a size of %u",
-                   packet->pages_alloc, page_count) ;
-        return -1;
-    }
-
-    p->normal_num = be32_to_cpu(packet->normal_pages);
-    if (p->normal_num > packet->pages_alloc) {
-        error_setg(errp, "multifd: received packet "
-                   "with %u normal pages and expected maximum pages are %u",
-                   p->normal_num, packet->pages_alloc) ;
-        return -1;
-    }
-
-    p->zero_num = be32_to_cpu(packet->zero_pages);
-    if (p->zero_num > packet->pages_alloc - p->normal_num) {
-        error_setg(errp, "multifd: received packet "
-                   "with %u zero pages and expected maximum zero pages are %u",
-                   p->zero_num, packet->pages_alloc - p->normal_num) ;
-        return -1;
-    }
-
-    p->total_normal_pages += p->normal_num;
-    p->total_zero_pages += p->zero_num;
-
-    if (p->normal_num == 0 && p->zero_num == 0) {
-        return 0;
-    }
-
-    /* make sure that ramblock is 0 terminated */
-    packet->ramblock[255] = 0;
-    p->block = qemu_ram_block_by_name(packet->ramblock);
-    if (!p->block) {
-        error_setg(errp, "multifd: unknown ram block %s",
-                   packet->ramblock);
-        return -1;
-    }
-
-    p->host = p->block->host;
-    for (i = 0; i < p->normal_num; i++) {
-        uint64_t offset = be64_to_cpu(packet->offset[i]);
-
-        if (offset > (p->block->used_length - page_size)) {
-            error_setg(errp, "multifd: offset too long %" PRIu64
-                       " (max " RAM_ADDR_FMT ")",
-                       offset, p->block->used_length);
-            return -1;
-        }
-        p->normal[i] = offset;
-    }
-
-    for (i = 0; i < p->zero_num; i++) {
-        uint64_t offset = be64_to_cpu(packet->offset[p->normal_num + i]);
-
-        if (offset > (p->block->used_length - page_size)) {
-            error_setg(errp, "multifd: offset too long %" PRIu64
-                       " (max " RAM_ADDR_FMT ")",
-                       offset, p->block->used_length);
-            return -1;
-        }
-        p->zero[i] = offset;
-    }
-
-    return 0;
-}
-
 static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
 {
     MultiFDPacket_t *packet = p->packet;
@@ -581,7 +303,7 @@ static void multifd_send_kick_main(MultiFDSendParams *p)
  *
  * Returns true if succeed, false otherwise.
  */
-static bool multifd_send(MultiFDSendData **send_data)
+bool multifd_send(MultiFDSendData **send_data)
 {
     int i;
     static int next_channel;
@@ -642,61 +364,6 @@ static bool multifd_send(MultiFDSendData **send_data)
     return true;
 }
 
-static inline bool multifd_queue_empty(MultiFDPages_t *pages)
-{
-    return pages->num == 0;
-}
-
-static inline bool multifd_queue_full(MultiFDPages_t *pages)
-{
-    return pages->num == multifd_ram_page_count();
-}
-
-static inline void multifd_enqueue(MultiFDPages_t *pages, ram_addr_t offset)
-{
-    pages->offset[pages->num++] = offset;
-}
-
-/* Returns true if enqueue successful, false otherwise */
-bool multifd_queue_page(RAMBlock *block, ram_addr_t offset)
-{
-    MultiFDPages_t *pages;
-
-retry:
-    pages = &multifd_ram_send->u.ram;
-
-    if (multifd_payload_empty(multifd_ram_send)) {
-        multifd_pages_reset(pages);
-        multifd_set_payload_type(multifd_ram_send, MULTIFD_PAYLOAD_RAM);
-    }
-
-    /* If the queue is empty, we can already enqueue now */
-    if (multifd_queue_empty(pages)) {
-        pages->block = block;
-        multifd_enqueue(pages, offset);
-        return true;
-    }
-
-    /*
-     * Not empty, meanwhile we need a flush.  It can because of either:
-     *
-     * (1) The page is not on the same ramblock of previous ones, or,
-     * (2) The queue is full.
-     *
-     * After flush, always retry.
-     */
-    if (pages->block != block || multifd_queue_full(pages)) {
-        if (!multifd_send(&multifd_ram_send)) {
-            return false;
-        }
-        goto retry;
-    }
-
-    /* Not empty, and we still have space, do it! */
-    multifd_enqueue(pages, offset);
-    return true;
-}
-
 /* Multifd send side hit an error; remember it and prepare to quit */
 static void multifd_send_set_error(Error *err)
 {
@@ -860,22 +527,6 @@ static int multifd_zero_copy_flush(QIOChannel *c)
     return ret;
 }
 
-int multifd_ram_flush_and_sync(void)
-{
-    if (!migrate_multifd()) {
-        return 0;
-    }
-
-    if (!multifd_payload_empty(multifd_ram_send)) {
-        if (!multifd_send(&multifd_ram_send)) {
-            error_report("%s: multifd_send fail", __func__);
-            return -1;
-        }
-    }
-
-    return multifd_send_sync_main();
-}
-
 int multifd_send_sync_main(void)
 {
     int i;
@@ -1679,34 +1330,3 @@ void multifd_recv_new_channel(QIOChannel *ioc, Error **errp)
                        QEMU_THREAD_JOINABLE);
     qatomic_inc(&multifd_recv_state->count);
 }
-
-bool multifd_send_prepare_common(MultiFDSendParams *p)
-{
-    MultiFDPages_t *pages = &p->data->u.ram;
-    multifd_send_zero_page_detect(p);
-
-    if (!pages->normal_num) {
-        p->next_packet_size = 0;
-        return false;
-    }
-
-    multifd_send_prepare_header(p);
-
-    return true;
-}
-
-static MultiFDMethods multifd_ram_ops = {
-    .send_setup = ram_send_setup,
-    .send_cleanup = ram_send_cleanup,
-    .send_prepare = ram_send_prepare,
-    .recv_setup = ram_recv_setup,
-    .recv_cleanup = ram_recv_cleanup,
-    .recv = ram_recv
-};
-
-static void multifd_ram_register(void)
-{
-    multifd_register_ops(MULTIFD_COMPRESSION_NONE, &multifd_ram_ops);
-}
-
-migration_init(multifd_ram_register);
diff --git a/migration/multifd.h b/migration/multifd.h
index 19a43d46c0..2197d0b44f 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -265,6 +265,8 @@ static inline void multifd_send_prepare_header(MultiFDSendParams *p)
 }
 
 void multifd_channel_connect(MultiFDSendParams *p, QIOChannel *ioc);
+bool multifd_send(MultiFDSendData **send_data);
+MultiFDSendData *multifd_send_data_alloc(void);
 
 static inline uint32_t multifd_ram_page_size(void)
 {
@@ -279,4 +281,7 @@ static inline uint32_t multifd_ram_page_count(void)
 void multifd_ram_save_setup(void);
 void multifd_ram_save_cleanup(void);
 int multifd_ram_flush_and_sync(void);
+size_t multifd_ram_payload_size(void);
+void multifd_ram_fill_packet(MultiFDSendParams *p);
+int multifd_ram_unfill_packet(MultiFDRecvParams *p, Error **errp);
 #endif
-- 
2.35.3



^ permalink raw reply related	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 00/14] migration/multifd: Remove multifd_send_state->pages
  2024-08-01 12:35 [PATCH v3 00/14] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
                   ` (13 preceding siblings ...)
  2024-08-01 12:35 ` [PATCH v3 14/14] migration/multifd: Move ram code into multifd-ram.c Fabiano Rosas
@ 2024-08-01 12:45 ` Fabiano Rosas
  14 siblings, 0 replies; 37+ messages in thread
From: Fabiano Rosas @ 2024-08-01 12:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: Peter Xu, Maciej S . Szmigiero

Fabiano Rosas <farosas@suse.de> writes:

> Hi,
>
> This v3 incorporates the suggestions done by Peter in v2. Aside from
> those, of note:
>
> - fixed the allocation of MultiFDSendData. The previous version didn't
>   account for compiler-inserted holes;
>
> - kept the packet split patch;
>
> - added some patches to remove p->page_count, p->page_size,
>   pages->allocated. These are all constants and don't need to be
>   per-channel;
>
> - moved the code into multifd-ram.c.
>
>   However, I left the p->packet allocation (depends on page_count) and
>   p->normal + p->zero behind because I need to see how the device
>   state patches will deal with the packet stuff before I can come up
>   with a way to move those out of the MultiFD*Params. It might not be
>   worth it adding another struct just for the ram code to store
>   p->normal, p->zero.
>
> With this I'm pretty much done with what I think needs to be changed
> as a prereq for the device state work. I don't have anything else in
> mind to add to this series.
>
> CI run: https://gitlab.com/farosas/qemu/-/pipelines/1395572680
>
> v2:
> https://lore.kernel.org/r/20240722175914.24022-1-farosas@suse.de
>
> v1:
> https://lore.kernel.org/r/20240620212111.29319-1-farosas@suse.de
>
> First of all, apologies for the roughness of the series. I'm off for
> the next couple of weeks and wanted to put something together early
> for your consideration.
>
> This series is a refactoring (based on an earlier, off-list
> attempt[0]), aimed to remove the usage of the MultiFDPages_t type in
> the multifd core. If we're going to add support for more data types to
> multifd, we first need to clean that up.
>
> This time around this work was prompted by Maciej's series[1]. I see
> you're having to add a bunch of is_device_state checks to work around
> the rigidity of the code.
>
> Aside from the VFIO work, there is also the intent (coming back from
> Juan's ideas) to make multifd the default code path for migration,
> which will have to include the vmstate migration and anything else we
> put on the stream via QEMUFile.
>
> I have long since been bothered by having 'pages' sprinkled all over
> the code, so I might be coming at this with a bit of a narrow focus,
> but I believe in order to support more types of payloads in multifd,
> we need to first allow the scheduling at multifd_send_pages() to be
> independent of MultiFDPages_t. So here it is. Let me know what you
> think.
>
> (as I said, I'll be off for a couple of weeks, so feel free to
> incorporate any of this code if it's useful. Or to ignore it
> completely).
>
> CI run: https://gitlab.com/farosas/qemu/-/pipelines/1340992028
>
> 0- https://github.com/farosas/qemu/commits/multifd-packet-cleanups/
> 1- https://lore.kernel.org/r/cover.1718717584.git.maciej.szmigiero@oracle.com
>
> Fabiano Rosas (14):
>   migration/multifd: Reduce access to p->pages
>   migration/multifd: Inline page_size and page_count
>   migration/multifd: Remove pages->allocated
>   migration/multifd: Pass in MultiFDPages_t to file_write_ramblock_iov
>   migration/multifd: Introduce MultiFDSendData
>   migration/multifd: Make MultiFDPages_t:offset a flexible array member
>   migration/multifd: Replace p->pages with an union pointer
>   migration/multifd: Move pages accounting into
>     multifd_send_zero_page_detect()
>   migration/multifd: Isolate ram pages packet data
>   migration/multifd: Don't send ram data during SYNC
>   migration/multifd: Replace multifd_send_state->pages with client data
>   migration/multifd: Allow multifd sync without flush
>   migration/multifd: Register nocomp ops dynamically
>   migration/multifd: Move ram code into multifd-ram.c
>
>  migration/file.c              |   3 +-
>  migration/file.h              |   2 +-
>  migration/meson.build         |   1 +
>  migration/multifd-qpl.c       |  20 +-
>  migration/multifd-ram.c       | 400 +++++++++++++++++++++++++
>  migration/multifd-uadk.c      |  45 +--
>  migration/multifd-zero-page.c |  13 +-
>  migration/multifd-zlib.c      |  16 +-
>  migration/multifd-zstd.c      |  15 +-
>  migration/multifd.c           | 535 ++++++----------------------------
>  migration/multifd.h           |  66 +++--
>  migration/ram.c               |  10 +-
>  migration/trace-events        |   9 +-
>  13 files changed, 613 insertions(+), 522 deletions(-)
>  create mode 100644 migration/multifd-ram.c
>
>
> base-commit: 3cce8bd4d737f2ca688bbdcb92cd5cc683245bbd
> prerequisite-patch-id: 13fb8c983c330913887987533e1e69bb6a4ed613
> prerequisite-patch-id: 3d95428f94bd2126417f309aa7448f6261a8f2f8
> prerequisite-patch-id: 71687ba9c29408b37eb25a784965690cfb9698f8
> prerequisite-patch-id: 4b935fefd5ef932d45323c5ebeef130cc1587f72
> prerequisite-patch-id: d81df6a9e428a82fe9f0a04c13e3b36341c9ffda
> prerequisite-patch-id: b0f171771de7cd44e68d8ade8588d49f6bf03d8a
> prerequisite-patch-id: e0e973dee5d548a5667e0512650fa1c5b88c90c2
> prerequisite-patch-id: 944d3a14068acf1f3ff5ae04beae442347474ae5
> prerequisite-patch-id: 35e4690cfcb51cccacec5ba7c03fd32004c0ca67
> prerequisite-patch-id: 9f3085d6702c691cae5b5767fe20b008d0e128b2
> prerequisite-patch-id: dc6423ec895b9e28e8f6edd97113536242e95c01
> prerequisite-patch-id: 53b1c7680fac959f5c2ef786a0fd2c5fd5711524
> prerequisite-patch-id: 9392b7caa794ed8a871a4d1986914c99309c9b96
> prerequisite-patch-id: 15a7b92cd04c871f2577360475be4c0233837422
> prerequisite-patch-id: 5424a98773296ea3f782f199c61f40d948222d6b
> prerequisite-patch-id: ea27cd0a9855caf3b5d88940c0988a4d1a76c809
> prerequisite-patch-id: f922d0deec9376de12f83dccdf9c146ad96d5953
> prerequisite-patch-id: 06ec6991abae625119bc4fb1021fef068a34531b
> prerequisite-patch-id: dd0d83e0bc7407ac090eee2c04a4179ac7a29ed7
> prerequisite-patch-id: 56d976d50f3179c2ad72ce9757b905cd88be43b5
> prerequisite-patch-id: 21b2e1e07f57622742647c1f1be1010f7c1a345c
> prerequisite-patch-id: 690216d1b858779a9d5b8b87e5740428c084eff8
> prerequisite-patch-id: dd7720e6a8a37181726eb1382236e54a89b5d518
> prerequisite-patch-id: e52a32da796a00ace9d5b6aeaecc9238719a222f
> prerequisite-patch-id: bc3a88d2a1a55257df56404e1b41b52e24e2bb6a
> prerequisite-patch-id: e04e47ceced2b728a04034f08ec4d6b39060ca04
> prerequisite-patch-id: 7a450ee99f6a11bb1a511175989b615c818d792f
> prerequisite-patch-id: a27b6e2cfa162eb2294920983389c82e2e2aafc4
> prerequisite-patch-id: 81d915f5136a90552f794e28f007bc5eada6ced3
> prerequisite-patch-id: 1e6be5ba2ca0869009aa2e98269c6d07cd1c073c
> prerequisite-patch-id: a7071e107ca7012c21a272063aece5bf6a165234
> prerequisite-patch-id: acf2e421fbbe7480311224f0080321bd24bace72
> prerequisite-patch-id: c6b305efc098b716b5fd86974486610c35310bf9
> prerequisite-patch-id: b2ea0671b1ca8ace09a5be5bc8789e1f9ed362e9
> prerequisite-patch-id: dbb0b9c95dc9be1b5f4397afa0e88ec0cc61e17d
> prerequisite-patch-id: 20b540d8c51706dfad625fa6747b03f87f85b214
> prerequisite-patch-id: 221419287e45a25205d7d854a7fca8f3515d89bd
> prerequisite-patch-id: 041d82f9a313815769a294e2dd5323f1e47a9960
> prerequisite-patch-id: f23f851a6c0bcb9bf7c3c35e8991f35308275e29
> prerequisite-patch-id: 079dc1b0988e3895b2e587909fd569dd1e3e4bfa
> prerequisite-patch-id: ef20808cb5817567b151c290304d2407ead5b4d2
> prerequisite-patch-id: 11ba2c5120a650e489e49e1dd8859cbeb0385081
> prerequisite-patch-id: d06198e7597766a7f620c26c53f1f4ddff088c0c
> prerequisite-patch-id: c0e5474ff1e06e3650ac210f0f6d23c71b26f694
> prerequisite-patch-id: d190df2ffd206a2f473a47d53da8302254d3c0b7
> prerequisite-patch-id: 9bae8b536d05070c5b142ca238ad98c08fcb892f
> prerequisite-patch-id: 39719510e3d8000a693f59966c5d6be0a51c60e7
> prerequisite-patch-id: 0ada1d54ec5ca0e87bb497b042aa9e447afc9b3c
> prerequisite-patch-id: 52bd66de61e3b394eacf88debb333b352cdd27f1
> prerequisite-patch-id: a4cb433552382d323e361491a10f21bd3ea5b146
> prerequisite-patch-id: 3e1bc2bce418c96703e328db3366aea3050c685f
> prerequisite-patch-id: 0f3f6d353e7274ae6c042b32f51ef89af5a913ca
> prerequisite-patch-id: bc75f89192c8d1420556c7977f93102ed38cf1f7
> prerequisite-patch-id: 7fb07c996e4229d1edaeb94fed9c03eb710e894c
> prerequisite-patch-id: d62bbbc43d995fbd0016e05e7c71aed38478ab07
> prerequisite-patch-id: 3faca00d83f7cfc29d31df2cb972b1b97f57e505
> prerequisite-patch-id: d276940aaf4fa4ef2358a28008ef13b731a39629
> prerequisite-patch-id: b8a268e1585faac7c99a5a49cd0ba02b4ea04fc7
> prerequisite-patch-id: 8ceeef85cb7108e965defb5dcd38ced0b1e4eb43
> prerequisite-patch-id: 17c85eb4e53c93833d2983fafa8e74ac11f3ce9a
> prerequisite-patch-id: 9db5aa84bdb09ccd1c89a65f9fe00a5b967a1fa4
> prerequisite-patch-id: 8a98f58e3d04dabce63e5a73605052ec8f99e77d
> prerequisite-patch-id: d0bbfe026091c750b52a87a837f25444a1671e2d
> prerequisite-patch-id: 56ec9da207926c5cee8bcd090fb01782e3c8f94d
> prerequisite-patch-id: 5efc4ef57b1fb632e38dc1a0c8db294745fe7651
> prerequisite-patch-id: aa4f714be6f0e4f47edaa2c7ab08c0a124cc8f10
> prerequisite-patch-id: 96161dfcc9b8be7ab2f7d8b45b84c6dab46e261e
> prerequisite-patch-id: fa09bb268a192039476c21e0826a5cd6b3a3814e
> prerequisite-patch-id: 9dea8db103a584ef8217bc8b8cef6577b082810e
> prerequisite-patch-id: 7486f1a89780da95370873a5ef54cba0768a3a76
> prerequisite-patch-id: 743c04b4fca6f3c872571fd6409260fc022a378d
> prerequisite-patch-id: f72522e4b88a8f0253bb42b7a0f2c8b2cd556faf
> prerequisite-patch-id: b87feef9f763eb9abb52ff4002fab9e383170b7c
> prerequisite-patch-id: 4d1a35c120848cefac2cae61525d68cc7f2e9225
> prerequisite-patch-id: 5a17fd53cc2d5b1ce9648f5da88159fc1b71f386
> prerequisite-patch-id: efabe93b4dd446e98a18a95e0d6458f5754d0b22
> prerequisite-patch-id: d5096ebd2a916dbb553d1fa87afaab2f022b1ccc
> prerequisite-patch-id: e127d6b1738baac49f5771be941bf46e9249ed1e
> prerequisite-patch-id: ec959712db5cf3994d67a8e63901f8e9bc9a5ff6
> prerequisite-patch-id: ca7dfa5a6be16b670b4f3ef78ea30dae32441785
> prerequisite-patch-id: 6eefed2e2e07fecf02ff59082750d15d69da3308
> prerequisite-patch-id: f51f1fc9d9551d23d593d1fd00fc1705fc13bbe3
> prerequisite-patch-id: a26a1a49f1c0aa37facd754357c7c662e906302f
> prerequisite-patch-id: b9842ad176b1464d14c6c2c3738513a4a9dee4aa
> prerequisite-patch-id: 60bda711f65550428d0c71db40c40e988488ddd4
> prerequisite-patch-id: 34eb77fabced2ee263ac741c0655a7bf08a4ab6d
> prerequisite-patch-id: 963ee62cced18c3c670ac37205d214a2e7685e81
> prerequisite-patch-id: b89b171328298b015df107d929f51cbf4d014cf7
> prerequisite-patch-id: a72b6c6e6466da6d3331b0f81342ec10d4d5b284
> prerequisite-patch-id: 03c31a68fce3b7c376b7cc5af90682e5b01fd0df
> prerequisite-patch-id: 873751126726c46b946273d2be7f7f27248cd1bb
> prerequisite-patch-id: ecbe43e43396941f0c53da3c37d8c5d802b9ac05
> prerequisite-patch-id: b681715a6d736c2be8d8fec32aba8eed1e6f9078
> prerequisite-patch-id: 939558921113ff8049eef6670a8cc029ef05543e
> prerequisite-patch-id: bd17dc99ece68b5bab3ff9daf7fc8e499e1ebc6b
> prerequisite-patch-id: f98b097eb060620a5abab23e6b1e7a844d93e34f
> prerequisite-patch-id: d18c958b3170513ba516a684a012e614e5fffb24
> prerequisite-patch-id: c5dc0d8523e1ec5350aade0f26fdf7b0fe2512dc
> prerequisite-patch-id: 85a9994d9c832ce1b93b88596a4907fca9f6eef9
> prerequisite-patch-id: db1e0e6af4396d4d1aea022bbf0accb42df1d95c
> prerequisite-patch-id: aeec629b49de4924e258f703c80004ab2372028a
> prerequisite-patch-id: 6e7604117c7a60a516d040f96a5b2e0003e769fa
> prerequisite-patch-id: a7603aee0aa6ffeca0e6cf61bfa3f86999f03075
> prerequisite-patch-id: 37a53f691313007b37d0710aca98d2135017c1df
> prerequisite-patch-id: 9ae8ac17cc02abe3e1edf0433882f25a56d071e0
> prerequisite-patch-id: 09bf6d466076368108476f838f1cedb69bbfc1d3
> prerequisite-patch-id: 6b2d5a9254dad9516992a254c4222a5cabd820b0
> prerequisite-patch-id: d5ac270f9afe46a59c9cd242c8c6891b2ef11dbc
> prerequisite-patch-id: 891d6e87aa83dc113c14136f5f68aa1ac3407bcb
> prerequisite-patch-id: f620362deac56665b85f3fb3feee9a52c380a642
> prerequisite-patch-id: bc3c5c647832c31e554f4db529b652ad611a6f6d
> prerequisite-patch-id: 2cf2edb8b7fb86f2b15e0577919539a4968e6c55
> prerequisite-patch-id: 9adea6cc53fd14a958fa6526609069e1d2f08b17
> prerequisite-patch-id: 2b2449c503dd43b1fc076c7aeb6f9a88d414f9aa
> prerequisite-patch-id: 6f3f7bae420d5cd3f5336d03586c74649bdec5ae
> prerequisite-patch-id: 72e3fa6afc8abeffe053d0b4fda8a1a92f439ccb
> prerequisite-patch-id: 399beb1082afa887e2ed8f6366e9ff0fadb1add2
> prerequisite-patch-id: 6d2257580f53bbcc3019372bfa71b7802fb443be
> prerequisite-patch-id: 5373234aa561a7ad4882ee1297a722a2cd563483
> prerequisite-patch-id: 72770bf2d749881a5050ef1967e376e323a953fa
> prerequisite-patch-id: ea475a629d03117fab36bf15479a8b0f9d2c3d44
> prerequisite-patch-id: 6194f6893c70ee15284f8004be19322ef27a7153
> prerequisite-patch-id: b817779acaeef58d74508709c5315c6ff9639761
> prerequisite-patch-id: 56fdfa5291464155be2a22d5f1773a3875ba53ea
> prerequisite-patch-id: c5f55ac00f159a79d625ca24239c0198c663e807
> prerequisite-patch-id: 1742d4904cae5a1faf79668ac2b18be284b9aa30
> prerequisite-patch-id: bcf9a9f7e117c6d135dc53f5e0e41d2e10a24b4d
> prerequisite-patch-id: 2801f2428ec6fa368e55556cd1c4f1633f82b371
> prerequisite-patch-id: e48d9b78d4e0c59797a93c27366c0e8e7b40831a
> prerequisite-patch-id: 9d378c3fcc3defb5990e9b4fac0481b4d9cbf998
> prerequisite-patch-id: 30a79f47815477c6626c59770ba09e401785a095
> prerequisite-patch-id: 8bd103d6b2d89ae3ac5e38a041b47a531ed41e40
> prerequisite-patch-id: 61e8832b8c1f293cddeeceaaaf30ae9709d99cf2
> prerequisite-patch-id: 0c27b5ed30dc34d6c6b1158822d687afc570af53
> prerequisite-patch-id: 30218ba738c05231a97a239a7b7db263b9fa3e00
> prerequisite-patch-id: de4606a7b886fe844979bf396dec51fcbb6dde0a
> prerequisite-patch-id: 96c8fd8f3835ea9ab2dd1494a7716f4a65579554
> prerequisite-patch-id: d7eb400ef0b6474bd1a4ae562a2989ee7f09a38e
> prerequisite-patch-id: 10100b6534f56912a29a165b8709f9e00ebfb423
> prerequisite-patch-id: 4aa8159251d656ee0ac1d23930533ac3e2a63aec
> prerequisite-patch-id: 1583345b2b64479d2a3255e8e1f633c46c36bb94
> prerequisite-patch-id: a7e6527566e0c715a3b07af5d6b1ec2477cd2e0e
> prerequisite-patch-id: 84ef10eb1a1b0456fadbb166267a94662d0606ff
> prerequisite-patch-id: f2049657d1fa03cd95a6be6ce152b6a155aff063
> prerequisite-patch-id: a6474c935071bdf3cba1a1b2e067f7119f77513a
> prerequisite-patch-id: 6ba32fbc28136539958b000c47032d8ccd9fba51
> prerequisite-patch-id: e5adff7e7c035831ddb7fc2e518c23e1e6cf6369
> prerequisite-patch-id: 4f07f2a5c4ac917a2d3bb5b1e988c11264a0fc93
> prerequisite-patch-id: 99a1e21173e5f25502866539e23a8be9beee9d3c
> prerequisite-patch-id: 22fe9bf6a8a79c33d4f87b5ca395e65bff1721b7
> prerequisite-patch-id: 3e1ab4b997e26be7ded8af84ff934fa38337f517
> prerequisite-patch-id: feab895cda6daab7368c9eccfad561262178e772
> prerequisite-patch-id: 1488d6151031bc38309677c5b34b83322a062eff
> prerequisite-patch-id: 3a533bdde4bf66a9096e0fda7ee0b945d99e7f53
> prerequisite-patch-id: 6524f2c8d3366b5767a937b15b3e0c256d18ceea
> prerequisite-patch-id: 4578afb3b16b6fb60edc4309f0b6895867707396
> prerequisite-patch-id: d17ec591d5951b6b0929fc0cb0eedcbad473d5e0
> prerequisite-patch-id: c2c0be108c4cfc5fbd7f371b3bcf8e3a7fbb293c
> prerequisite-patch-id: 7c72b99380179c2342c6d4a45445709ccfcc1525
> prerequisite-patch-id: 6eff34ed40cef45df4b3fa9831e4bdd3fb6cbe01
> prerequisite-patch-id: 28051ebaadce4361b951f0e33d8b6729eec44fe0
> prerequisite-patch-id: 4145960f8ce4d8ac3a2e6c1b983245a545e7fbd2
> prerequisite-patch-id: 567ab210666266b8566798223b6f86cf511981c7
> prerequisite-patch-id: 7039619fae616a73255d0e309b827795c8fcdae4
> prerequisite-patch-id: f802c83e5ca45642f1efb2ee24be6d69a831e654
> prerequisite-patch-id: 6fc655fd9dd8e6e33e1dae0b603d89b59d0889e3
> prerequisite-patch-id: 0f17452e2988269385fbec6a7c49eee867cf6995
> prerequisite-patch-id: e00b304f28d1b602bfb228a99c7338cd352577ce
> prerequisite-patch-id: d409d879b604278d1142da46257cdb506c6a327c
> prerequisite-patch-id: abbd22811acacdbcf1bc3e7aab071df609f4b5e7
> prerequisite-patch-id: 48154ad8726f2e7839ee5b7ab937527d2752f670
> prerequisite-patch-id: 693d29957da2544f09651bf63f2df66335c784af
> prerequisite-patch-id: d4fdd1f5cc03bf7544edc9ffab8eb6576053ec8d
> prerequisite-patch-id: ed843b76ff082fcd8fe2bb2098333e67049bb724
> prerequisite-patch-id: 7b1184dbb311148e8cd7638fac8ef0da5bcc7736
> prerequisite-patch-id: 463e54d104b912c7e5a889c472f108dada39935b
> prerequisite-patch-id: cdf627d2db55db40ff6135ccd1caef4ba040b65e
> prerequisite-patch-id: 505f3b47cf39030805b5bc55271222bd5269ecb7
> prerequisite-patch-id: afe92aa2cc06dd6c2ead36a8a75e5d4db3e39216
> prerequisite-patch-id: abcc6b86f2739c9bf24d8311ac1085d196ec86ae
> prerequisite-patch-id: bfe8fa1d7b92e78f868a1115fe34b0e7705e7270
> prerequisite-patch-id: f6c12160e8c18342e212913ab446abaf0c1fa4ef
> prerequisite-patch-id: 4f77617bceba99ff696734c39787cf13e637e4aa
> prerequisite-patch-id: 19fc9d4f0fb10ceb5f9b44cc06014de8ccbe9b08
> prerequisite-patch-id: 145fa70dbad65ce73309a9eb3b68698dd4d87bce
> prerequisite-patch-id: 632ece0cbce4600b2920d8692eb54f29cc3780f7
> prerequisite-patch-id: 492005e007760cb30207694288683d1afa2b8da2
> prerequisite-patch-id: aef029cdfe24b3eb7a6e57202d3937c0973b66a2
> prerequisite-patch-id: 2b1421f93dcaf115e7d95af6dbe1e187a4206224
> prerequisite-patch-id: f37070ef2ea84b81a3c0166b3e48e06f9c42b7fd
> prerequisite-patch-id: c491e100f717942929306a711fed782954f619f0
> prerequisite-patch-id: 709fe575604297bc824e269787d6214f271eab44
> prerequisite-patch-id: cb5b779a69c709562a00b20b5f17edcec759584b
> prerequisite-patch-id: 70e23e7d5d17b0a40b0222ed1e9f601de0641e43
> prerequisite-patch-id: 406a69894a1b8b1d8569eda0708c3c2ce8512ac7
> prerequisite-patch-id: 4aa66ad0c58e5db7d3006e5b01ec07cd334d194a
> prerequisite-patch-id: 46e029947ece9be7d6dd62c52777ff663f603513
> prerequisite-patch-id: f7e74cf4e29abbf89f757d0057de236be65ccbbb
> prerequisite-patch-id: e4cde74b826dfcda173263446482fa2279b22daf
> prerequisite-patch-id: 60df1811efdf86a39285528668c49064d5b635fc
> prerequisite-patch-id: 07ee1e34207e626feaa9383e95e46744946fcf7d
> prerequisite-patch-id: 1218747de1943091b3e2a2b7edba586204711b94
> prerequisite-patch-id: 0ea0a831ea8d77a44feed1312184292a1933b406
> prerequisite-patch-id: 2fb210c9318e3baa065ad0fe319cd0bf5500253c
> prerequisite-patch-id: afb12a368a0eab1bd0fc8ec8d2031d87b1400628
> prerequisite-patch-id: bcf116b7685ab717b31a73d6204e2c3396945a4d
> prerequisite-patch-id: ccf321d3d877ad63aafae004d61e677c3b9e0e4d
> prerequisite-patch-id: 785c406be98b9abcdf556f32b87bd3abc291b565
> prerequisite-patch-id: ce08905deae2f43f5785c7c3cefe6e2ff7202566
> prerequisite-patch-id: 67a48c522f6917694df79a7da2cacb124639b1fc
> prerequisite-patch-id: 28d33665f3057d9c568b1be55793186ee8bf2a3b
> prerequisite-patch-id: 141d789226542c2208b6cc93c8c730dae78f3206
> prerequisite-patch-id: 4cae9bcfc9da364a22443c0d3388d5656130f3a7
> prerequisite-patch-id: 9b0bcc495cd71e84c091144a946b3072fedd6cef
> prerequisite-patch-id: 378a972aa8c88bc4e13de334be029e28b7324390
> prerequisite-patch-id: 769159abc5615c77664ccf96cddb999d1615a60d
> prerequisite-patch-id: 241522598e54a7f2e5e447d4387c1d362c9f3db9
> prerequisite-patch-id: 886c2e24efa498a77d02df142ac1774de79d1bc2
> prerequisite-patch-id: c6baa832d495ea6b198f90da386f08ef3bd862b8
> prerequisite-patch-id: 68f5c44b96aaf718340edb9966c816a16dbb8592
> prerequisite-patch-id: 7ceb5858eb2bd5898d0c77e275df7cebdb4f7e3a
> prerequisite-patch-id: 1c91f782de2190318034e2d57e5c87e927fb50a6
> prerequisite-patch-id: 664634677db2ec49ce28bbf074f6c154ead544c7
> prerequisite-patch-id: 9556e94a915387c81566b0ca7c60d281e7375612
> prerequisite-patch-id: ea59a5c8d09896a094a82781da9139f6fcadea0e
> prerequisite-patch-id: efebadc8ba5f01a43e24516eb573ab9b98a12861
> prerequisite-patch-id: 2999043154c3940962c8f26f4cfe96757ea2b73c
> prerequisite-patch-id: f15fe9c0881b7f2c20dfbd9b0f07bdc6d62fbeb1
> prerequisite-patch-id: 412ef91a0859ccbe9ef069095c4b79f0437e160f
> prerequisite-patch-id: bbfdf78a5e6cfc46c23cc84a87d56db1b5f2e789
> prerequisite-patch-id: 3a2f80b2de6b840432e6733dc8da4cbecab59256
> prerequisite-patch-id: b6bb72fccc5289501d277b60650064cf6761a02b
> prerequisite-patch-id: dec5790d783ad8311ba2713c89c630311f7bfe5c
> prerequisite-patch-id: 2b165ec700bd20a09fed5981b9152f477fe1aaed
> prerequisite-patch-id: 13334380aba5ef0dbc9307ab77f741fe1bb263c6
> prerequisite-patch-id: 75aab74d9d395b905a5f18cee9e50eb57b931c54
> prerequisite-patch-id: e29d1706bdb9035f8d9b167ac961b843601da27d
> prerequisite-patch-id: 75dce6dc0a6974218a1ffe8012eea6c37d43c38a
> prerequisite-patch-id: 916fa02f68d06a204db518e0fc14655bda5b9f3e
> prerequisite-patch-id: 396c5598baa18a4ff1d387a720e90279d4e84b32
> prerequisite-patch-id: b1a6de346564ee6cd734026fc73886abf820ef27
> prerequisite-patch-id: d42fb9c343fd156d75ec9fc9385f3a102ad61e69
> prerequisite-patch-id: f58200f6a9646b8bcf3885df434eb01c0b72e463
> prerequisite-patch-id: dc1810c55e1873f4fed2d1ed46cd6293e77729ff
> prerequisite-patch-id: f31326c9f9aa5b1add2affc97552abcbb65b24da
> prerequisite-patch-id: c62adb1b1209a1b5b415f44f7cbcd6d3463378a7
> prerequisite-patch-id: 5d72b5607e606b78c17dc3225c401d74b126b1d4
> prerequisite-patch-id: 56ee1388e8241ea3f1eab4a90a8f0e59352e356f
> prerequisite-patch-id: e4ac9158310c24026fc3a0d1efca03263387bc91
> prerequisite-patch-id: 6c9af50cce8d8326a8b1dd936dc2e979e6acb105
> prerequisite-patch-id: 2e2ba3c600a7b17b0e3078594602599fbc9eb105
> prerequisite-patch-id: c652c2d7694e9739b273dbd77e36d44c50e06387
> prerequisite-patch-id: d06de4cdba195d531aeffdd69202f98e0c188351
> prerequisite-patch-id: 13bad1eb0834f971fac89a768978c98a6d910f64
> prerequisite-patch-id: d849f83159c17c55369be6ef85df1583ee419aab
> prerequisite-patch-id: d600006c8e584eeb61223733a016a8bc618627d9
> prerequisite-patch-id: 2ff12f99fbc489c5329f2fec1bbb26186aa8f9f6
> prerequisite-patch-id: 48c736893871c5d1157746f7ac9242ce0db11bb3
> prerequisite-patch-id: 3ad2087492141825c298e301deee1e4cfe624679
> prerequisite-patch-id: d971d4c794cddeac920f6f99a93af0e7bd99f245
> prerequisite-patch-id: 7812ab1b04f502164366797259a33cf7ee9a56b9
> prerequisite-patch-id: efcdab55b35dc1ae7a1bbef1119687667306cd58
> prerequisite-patch-id: c79dbe78ce5d943b7a02adf3777b9923107c27f4
> prerequisite-patch-id: 56125b0bbbcb907bfbd08e45c476d5bb067dbf02
> prerequisite-patch-id: 34e5515352cac12ddfbf031916d2e158cef8ee37
> prerequisite-patch-id: 2a3d9e59f9d314d6c3b40025f065b51401d94aaf
> prerequisite-patch-id: bea3bdd12c52a3b7f95b4aec4e29c201db932517
> prerequisite-patch-id: 301eb6c90f186e5752d5b81396052b049a9e0539
> prerequisite-patch-id: 8ddfb7aede4fd11bbd904450cf167e8812f19e36
> prerequisite-patch-id: aadeae7c6978d4b11c30944cbf71c85d17a1751c
> prerequisite-patch-id: 2befd9c5a50db6814a7d01010c655642ec914717
> prerequisite-patch-id: 47fbf48e91e065fec612dd5615f54e252b649b0c
> prerequisite-patch-id: 49c8f7619a807a1e02cec593973db1fb542bbe4d
> prerequisite-patch-id: 8fdb47e6c8d670c65f907a790abae32d39da780f
> prerequisite-patch-id: 4c1108ea87cba0bbaadaec52d4ecd25f87f5d206
> prerequisite-patch-id: 96c5da475e96cddb3a69a26d36240934446af4b5
> prerequisite-patch-id: 9d44defb04c763fe9039160dd51d63aee3aa6f32
> prerequisite-patch-id: fac5f84794e83cd4e64b452ec7a89e81a653f78b
> prerequisite-patch-id: 39a76f46b875f097bb4016be1423f981ffd7c803
> prerequisite-patch-id: ccf635b303726f328d5e314b3903aa516bc7088d
> prerequisite-patch-id: 853c7ce13865cbdeffb20619d2a90c3679eb7a80
> prerequisite-patch-id: 8038e519eed44408ad8712fc6c2b6529562c9efb
> prerequisite-patch-id: 45f5c900fa54c8a8854fde60a796b21182456d78
> prerequisite-patch-id: 0b3644404517e9a07e8ce562da4b9118e8c62114
> prerequisite-patch-id: 322059db28c778851ccc8be51f99307bc723550b
> prerequisite-patch-id: 3ca9206fa843f693765baa56509fc3827f37d63f
> prerequisite-patch-id: f16c111772453869e52e2136bcf579287e86a56d
> prerequisite-patch-id: 96a5aa53078ce3da06950e3c82dfd22abbe03e8b
> prerequisite-patch-id: 2cbe2ea6ef41c14a3fdf1db339d15b2515fa50da
> prerequisite-patch-id: 8879f4043b7f2905c37c5e7be16bbcb7c281a397
> prerequisite-patch-id: baa117d25b0e9680f39d4e2ff2416c5c3bcf4b40
> prerequisite-patch-id: 23310473e76a776b2b948fe35ee730bc64e88d88
> prerequisite-patch-id: 5d61126c42534dcaa2695ee379e6ce1f8cb99a28
> prerequisite-patch-id: c9d9c91a1b9c0261aab7acc79d73f46eeb1e01b8
> prerequisite-patch-id: 3cb48173784349c652c593ec294539f70195bf84
> prerequisite-patch-id: 0ffcc57ba70989723ded660482e9d323e8ab9374
> prerequisite-patch-id: df54768710d624bb304bfbdedd94c245934b44bc
> prerequisite-patch-id: c371905e5bc98c1f25d41af1b4568fc5de4a408d
> prerequisite-patch-id: 277ae9aea67428a1a2b822cbd79f860ec57d19d2
> prerequisite-patch-id: eb1c53b4be23b527c5e12a9d45862a94e874bb25
> prerequisite-patch-id: 147e16e6aa66276e1eaab589396a05f215a98d9f
> prerequisite-patch-id: de3e5c45e2f41caabdfe2924a6eea5403ba84005
> prerequisite-patch-id: 905379d7e277957b68bc8b1620e99d7d20eab2ab
> prerequisite-patch-id: d88c5521bfadda59ca49f7c851ce8c6b77066de9
> prerequisite-patch-id: 1ce55ae23b721b6f9dc7a9e71a436a54e124ab6c
> prerequisite-patch-id: 1ba305947ab60e8597af982ca908ca31bdd5219a
> prerequisite-patch-id: 1413d26013fb85655c26a79254087a124c0addfb
> prerequisite-patch-id: f41f1b8988b52ca3dd1ebc910475fca064650b98
> prerequisite-patch-id: 6e3cec0197efb5d9733758cb160b4dcff1bf0d11
> prerequisite-patch-id: 450de6916e7ea02414df2ffccb2df4444d520e67
> prerequisite-patch-id: 1b37abc1e237f22f80c0dda647764e754169752e
> prerequisite-patch-id: 04b3b9b48c01e32abfb6efd3bf9be5f02e9d1c53
> prerequisite-patch-id: e6c320c58fc1bf51d86e9e408c0f2882d9f3a2c9
> prerequisite-patch-id: 0032ecefff5b77534359c6cb0981fc9d1f80b7a7
> prerequisite-patch-id: 5812bf877034429957e4ad0181de2f7c98325d65
> prerequisite-patch-id: d99f11246891f6bc6569162e3a497e442d72af03
> prerequisite-patch-id: 77d87342c5fc325bcc9a233b144b577355f01d74
> prerequisite-patch-id: e15e24c79fa689fd86a3dee9656d9bf02dc28f51
> prerequisite-patch-id: 776174c7f7bea4443a51a654cd65f2f2d36eb8fc
> prerequisite-patch-id: c415036c8a2a0fb8a9d0f035eb8a696ab5b6b837
> prerequisite-patch-id: 01a13de44433e986d7f9f4486c88666a7b631454
> prerequisite-patch-id: 66438751ea421e920c7429ef69005722bc91f114
> prerequisite-patch-id: d506fe9a8feef7daa8fcf4e1b2e557431cc129b6
> prerequisite-patch-id: 74fc5477a7d9c6ab0a600d0738e21ead087808f9
> prerequisite-patch-id: 0a889b3f7a62226c4a8ee993163a240fc3bfcc91
> prerequisite-patch-id: 17f3dadad0df0802e251ef45fabbced1b0604a03

I messed up a little here. The base-commit should actually be:
base-commit: e9d2db818ff934afb366aea566d0b33acf7bced1


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 02/14] migration/multifd: Inline page_size and page_count
  2024-08-01 12:35 ` [PATCH v3 02/14] migration/multifd: Inline page_size and page_count Fabiano Rosas
@ 2024-08-21 20:25   ` Peter Xu
  0 siblings, 0 replies; 37+ messages in thread
From: Peter Xu @ 2024-08-21 20:25 UTC (permalink / raw)
  To: Fabiano Rosas; +Cc: qemu-devel, Maciej S . Szmigiero

On Thu, Aug 01, 2024 at 09:35:04AM -0300, Fabiano Rosas wrote:
> The MultiFD*Params structures are for per-channel data. Constant
> values should not be there because that needlessly wastes cycles and
> storage. The page_size and page_count fall into this category, so move
> them inline in multifd.h.
> 
> Signed-off-by: Fabiano Rosas <farosas@suse.de>
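
As a rough illustration of the idea, a self-contained sketch with
hypothetical names (target_page_size() and the 512 KiB packet size are
placeholder values, not necessarily the actual QEMU ones):

    #include <stdint.h>
    #include <stdio.h>

    /* stand-in for qemu_target_page_size(); the value is illustrative */
    static uint32_t target_page_size(void)
    {
        return 4096;
    }

    /* constants derived on demand instead of stored in every channel */
    static inline uint32_t ram_page_size(void)
    {
        return target_page_size();
    }

    static inline uint32_t ram_page_count(void)
    {
        /* assume a fixed 512 KiB packet size, as a placeholder */
        return (512 * 1024) / ram_page_size();
    }

    int main(void)
    {
        printf("%u pages of %u bytes per packet\n",
               ram_page_count(), ram_page_size());
        return 0;
    }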

Reviewed-by: Peter Xu <peterx@redhat.com>

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 03/14] migration/multifd: Remove pages->allocated
  2024-08-01 12:35 ` [PATCH v3 03/14] migration/multifd: Remove pages->allocated Fabiano Rosas
@ 2024-08-21 20:32   ` Peter Xu
  0 siblings, 0 replies; 37+ messages in thread
From: Peter Xu @ 2024-08-21 20:32 UTC (permalink / raw)
  To: Fabiano Rosas; +Cc: qemu-devel, Maciej S . Szmigiero

On Thu, Aug 01, 2024 at 09:35:05AM -0300, Fabiano Rosas wrote:
> This value never changes and is always the same as page_count. We
> don't need a copy of it per-channel plus one in the extra slot. Remove
> it.
> 
> Signed-off-by: Fabiano Rosas <farosas@suse.de>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 06/14] migration/multifd: Make MultiFDPages_t:offset a flexible array member
  2024-08-01 12:35 ` [PATCH v3 06/14] migration/multifd: Make MultiFDPages_t:offset a flexible array member Fabiano Rosas
@ 2024-08-21 20:38   ` Peter Xu
  0 siblings, 0 replies; 37+ messages in thread
From: Peter Xu @ 2024-08-21 20:38 UTC (permalink / raw)
  To: Fabiano Rosas; +Cc: qemu-devel, Maciej S . Szmigiero

On Thu, Aug 01, 2024 at 09:35:08AM -0300, Fabiano Rosas wrote:
> We're about to use MultiFDPages_t from inside the MultiFDSendData
> payload union, which means we cannot have pointers to allocated data
> inside the pages structure, otherwise we'd lose the reference to that
> memory once another payload type touches the union. Move the offset
> array into the end of the structure and turn it into a flexible array
> member, so it is allocated along with the rest of MultiFDSendData in
> the next patches.
> 
> Note that other pointers, such as the ramblock pointer are still fine
> as long as the storage for them is not owned by the migration code and
> can be correctly released at some point.
> 
> Signed-off-by: Fabiano Rosas <farosas@suse.de>
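
To make the hazard above concrete, here is a minimal standalone sketch
(hypothetical types, not the actual QEMU definitions) of how a pointer
member inside a union payload loses its referent:

  #include <stdlib.h>
  #include <string.h>

  typedef union {
      struct { long *offset; } pages;   /* owns a heap allocation */
      struct { char buf[64]; } device;  /* a different payload type */
  } Payload;

  int main(void)
  {
      Payload p;

      p.pages.offset = malloc(64 * sizeof(long));
      memset(p.device.buf, 0, sizeof(p.device.buf));
      /* p.pages.offset is now zeroed: the only reference to the
       * allocation is gone and the memory leaks */
      return 0;
  }

With a flexible array member the offsets live inside the same
allocation as the rest of the struct, so there is no separate pointer
to lose.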

Reviewed-by: Peter Xu <peterx@redhat.com>

Two nits below..

> ---
>  migration/multifd.c | 18 ++++++++++++------
>  migration/multifd.h |  4 ++--
>  2 files changed, 14 insertions(+), 8 deletions(-)
> 
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 44d4c3ca11..64503604cf 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -98,6 +98,17 @@ struct {
>      MultiFDMethods *ops;
>  } *multifd_recv_state;
>  
> +static size_t multifd_ram_payload_size(void)
> +{
> +    uint32_t n = multifd_ram_page_count();
> +
> +    /*
> +     * We keep an array of page offsets at the end of MultiFDPages_t,
> +     * add space for it in the allocation.
> +     */
> +    return sizeof(MultiFDPages_t) + n * sizeof(ram_addr_t);
> +}
> +
>  static bool multifd_use_packets(void)
>  {
>      return !migrate_mapped_ram();
> @@ -394,9 +405,7 @@ static int multifd_recv_initial_packet(QIOChannel *c, Error **errp)
>  
>  static MultiFDPages_t *multifd_pages_init(uint32_t n)
>  {
> -    MultiFDPages_t *pages = g_new0(MultiFDPages_t, 1);
> -
> -    pages->offset = g_new0(ram_addr_t, n);
> +    MultiFDPages_t *pages = g_malloc0(multifd_ram_payload_size());
>  
>      return pages;

Can drop the temp var now.

>  }
> @@ -404,8 +413,6 @@ static MultiFDPages_t *multifd_pages_init(uint32_t n)
>  static void multifd_pages_clear(MultiFDPages_t *pages)
>  {
>      multifd_pages_reset(pages);
> -    g_free(pages->offset);
> -    pages->offset = NULL;
>      g_free(pages);
>  }
>  
> @@ -1185,7 +1192,6 @@ bool multifd_send_setup(void)
>          qemu_sem_init(&p->sem_sync, 0);
>          p->id = i;
>          p->pages = multifd_pages_init(page_count);
> -

Unneeded removal.

>          if (use_packets) {
>              p->packet_len = sizeof(MultiFDPacket_t)
>                            + sizeof(uint64_t) * page_count;
> diff --git a/migration/multifd.h b/migration/multifd.h
> index 7bb4a2cbc4..a7fdd97f70 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -77,9 +77,9 @@ typedef struct {
>      uint32_t num;
>      /* number of normal pages */
>      uint32_t normal_num;
> +    RAMBlock *block;
>      /* offset of each page */
> -    ram_addr_t *offset;
> -    RAMBlock *block;
> +    ram_addr_t offset[];
>  } MultiFDPages_t;
>  
>  struct MultiFDRecvData {
> -- 
> 2.35.3
> 

-- 
Peter Xu



* Re: [PATCH v3 07/14] migration/multifd: Replace p->pages with an union pointer
  2024-08-01 12:35 ` [PATCH v3 07/14] migration/multifd: Replace p->pages with an union pointer Fabiano Rosas
@ 2024-08-21 21:27   ` Peter Xu
  0 siblings, 0 replies; 37+ messages in thread
From: Peter Xu @ 2024-08-21 21:27 UTC (permalink / raw)
  To: Fabiano Rosas; +Cc: qemu-devel, Maciej S . Szmigiero

On Thu, Aug 01, 2024 at 09:35:09AM -0300, Fabiano Rosas wrote:
> We want multifd to be able to handle more types of data than just ram
> pages. To start decoupling multifd from pages, replace p->pages
> (MultiFDPages_t) with the new type MultiFDSendData that hides the
> client payload inside a union.
> 
> The general idea here is to isolate functions that *need* to handle
> MultiFDPages_t and move them in the future to multifd-ram.c, while
> multifd.c will stay with only the core functions that handle
> MultiFDSendData/MultiFDRecvData.
> 
> Signed-off-by: Fabiano Rosas <farosas@suse.de>
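
The resulting type looks roughly like this (sketched from the
description above; treat it as an outline rather than the verbatim
code from the series):

  typedef enum {
      MULTIFD_PAYLOAD_NONE,
      MULTIFD_PAYLOAD_RAM,
  } MultiFDPayloadType;

  typedef union MultiFDPayload {
      MultiFDPages_t ram;
  } MultiFDPayload;

  struct MultiFDSendData {
      MultiFDPayloadType type;
      MultiFDPayload u;
  };

New payload kinds would add an enum value and a union member, leaving
the core scheduling code untouched.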

Reviewed-by: Peter Xu <peterx@redhat.com>

[...]

> +static MultiFDSendData *multifd_send_data_alloc(void)
> +{
> +    size_t max_payload_size, size_minus_payload;
> +
> +    /*
> +     * MultiFDPages_t has a flexible array at the end, account for it
> +     * when allocating MultiFDSendData. Use max() in case other types
> +     * added to the union in the future are larger than
> +     * (MultiFDPages_t + flex array).
> +     */
> +    max_payload_size = MAX(multifd_ram_payload_size(), sizeof(MultiFDPayload));
> +
> +    /*
> +     * Account for any holes the compiler might insert. We can't pack
> +     * the structure because that misaligns the members and triggers
> +     * Waddress-of-packed-member.
> +     */
> +    size_minus_payload = sizeof(MultiFDSendData) - sizeof(MultiFDPayload);
> +
> +    return g_malloc0(size_minus_payload + max_payload_size);
> +}

Hmm I didn't notice the hole issue for sure..

For the mid term we really should remove this in one way or another.. what
I was thinking is mentioned in the other thread:

https://lore.kernel.org/qemu-devel/ZsZZFwws5tlOMmZk@x1n/

I hope we can simply statically define offset[] to be the max.

I don't think we must stick with size-per-packet; in this case IMHO we
should choose whatever is easier for us, and I haven't worried about
regressions so far as long as the relevant n_pages stays relatively
large. Not to mention that AFAIU for production use, x86/s390 always
use a 4K page size, while arm64 doesn't yet have a stable kvm-avail
vcpu model, which might be the bigger issue to solve as of now..

Let's see how it goes..
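
A sketch of what that could look like, assuming a compile-time upper
bound on the page count (the bound below is illustrative, not a real
QEMU constant):

  #define MULTIFD_PAGES_MAX 512   /* hypothetical worst case */

  typedef struct {
      uint32_t num;
      uint32_t normal_num;
      RAMBlock *block;
      /* statically sized: no flex array, no runtime size math */
      ram_addr_t offset[MULTIFD_PAGES_MAX];
  } MultiFDPages_t;

The union member would then have a fixed sizeof(), and a plain
g_new0(MultiFDSendData, 1) would work again.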

-- 
Peter Xu



* Re: [PATCH v3 09/14] migration/multifd: Isolate ram pages packet data
  2024-08-01 12:35 ` [PATCH v3 09/14] migration/multifd: Isolate ram pages packet data Fabiano Rosas
@ 2024-08-21 21:38   ` Peter Xu
  2024-08-22 14:13     ` Fabiano Rosas
  0 siblings, 1 reply; 37+ messages in thread
From: Peter Xu @ 2024-08-21 21:38 UTC (permalink / raw)
  To: Fabiano Rosas; +Cc: qemu-devel, Maciej S . Szmigiero

On Thu, Aug 01, 2024 at 09:35:11AM -0300, Fabiano Rosas wrote:
> @@ -1554,7 +1577,6 @@ static void *multifd_recv_thread(void *opaque)
>                  qemu_sem_wait(&p->sem_sync);
>              }
>          } else {
> -            p->total_normal_pages += p->data->size / qemu_target_page_size();

Is this line dropped by accident?

>              p->data->size = 0;
>              /*
>               * Order data->size update before clearing

-- 
Peter Xu



* Re: [PATCH v3 09/14] migration/multifd: Isolate ram pages packet data
  2024-08-21 21:38   ` Peter Xu
@ 2024-08-22 14:13     ` Fabiano Rosas
  2024-08-22 14:30       ` Peter Xu
  0 siblings, 1 reply; 37+ messages in thread
From: Fabiano Rosas @ 2024-08-22 14:13 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-devel, Maciej S . Szmigiero

Peter Xu <peterx@redhat.com> writes:

> On Thu, Aug 01, 2024 at 09:35:11AM -0300, Fabiano Rosas wrote:
>> @@ -1554,7 +1577,6 @@ static void *multifd_recv_thread(void *opaque)
>>                  qemu_sem_wait(&p->sem_sync);
>>              }
>>          } else {
>> -            p->total_normal_pages += p->data->size / qemu_target_page_size();
>
> Is this line dropped by accident?
>

No, this was just used in the tracepoint below. I stopped including this
information there.

>>              p->data->size = 0;
>>              /*
>>               * Order data->size update before clearing


* Re: [PATCH v3 09/14] migration/multifd: Isolate ram pages packet data
  2024-08-22 14:13     ` Fabiano Rosas
@ 2024-08-22 14:30       ` Peter Xu
  2024-08-22 14:55         ` Fabiano Rosas
  0 siblings, 1 reply; 37+ messages in thread
From: Peter Xu @ 2024-08-22 14:30 UTC (permalink / raw)
  To: Fabiano Rosas; +Cc: qemu-devel, Maciej S . Szmigiero

On Thu, Aug 22, 2024 at 11:13:36AM -0300, Fabiano Rosas wrote:
> Peter Xu <peterx@redhat.com> writes:
> 
> > On Thu, Aug 01, 2024 at 09:35:11AM -0300, Fabiano Rosas wrote:
> >> @@ -1554,7 +1577,6 @@ static void *multifd_recv_thread(void *opaque)
> >>                  qemu_sem_wait(&p->sem_sync);
> >>              }
> >>          } else {
> >> -            p->total_normal_pages += p->data->size / qemu_target_page_size();
> >
> > Is this line dropped by accident?
> >
> 
> No, this was just used in the tracepoint below. I stopped including this
> information there.

But this will cause the socket / file paths to diverge, since this
counter is still incremented in the socket path (and this is the file
path).

Either we keep it the same as before, or.. if we want to drop it, shouldn't
we remove them all instead (along with the two variables "total_normal_pages /
total_zero_pages")?

-- 
Peter Xu



* Re: [PATCH v3 09/14] migration/multifd: Isolate ram pages packet data
  2024-08-22 14:30       ` Peter Xu
@ 2024-08-22 14:55         ` Fabiano Rosas
  0 siblings, 0 replies; 37+ messages in thread
From: Fabiano Rosas @ 2024-08-22 14:55 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-devel, Maciej S . Szmigiero

Peter Xu <peterx@redhat.com> writes:

> On Thu, Aug 22, 2024 at 11:13:36AM -0300, Fabiano Rosas wrote:
>> Peter Xu <peterx@redhat.com> writes:
>> 
>> > On Thu, Aug 01, 2024 at 09:35:11AM -0300, Fabiano Rosas wrote:
>> >> @@ -1554,7 +1577,6 @@ static void *multifd_recv_thread(void *opaque)
>> >>                  qemu_sem_wait(&p->sem_sync);
>> >>              }
>> >>          } else {
>> >> -            p->total_normal_pages += p->data->size / qemu_target_page_size();
>> >
>> > Is this line dropped by accident?
>> >
>> 
>> No, this was just used in the tracepoint below. I stopped including this
>> information there.
>
> But this will cause the socket / file paths to diverge, since this
> counter is still incremented in the socket path (and this is the file
> path).
>
> Either we keep it the same as before, or.. if we want to drop it, shouldn't
> we remove them all instead (along with the two variables "total_normal_pages /
> total_zero_pages")?

I'll just remove them. There's not much point in these anyway since
they are per-channel.


* Re: [PATCH v3 10/14] migration/multifd: Don't send ram data during SYNC
  2024-08-01 12:35 ` [PATCH v3 10/14] migration/multifd: Don't send ram data during SYNC Fabiano Rosas
@ 2024-08-22 15:50   ` Peter Xu
  0 siblings, 0 replies; 37+ messages in thread
From: Peter Xu @ 2024-08-22 15:50 UTC (permalink / raw)
  To: Fabiano Rosas; +Cc: qemu-devel, Maciej S . Szmigiero

On Thu, Aug 01, 2024 at 09:35:12AM -0300, Fabiano Rosas wrote:
> Skip saving and loading any ram data in the packet in the case of a
> SYNC. This fixes a shortcoming of the current code which requires a
> reset of the MultiFDPages_t fields right after the previous
> pending_job finishes, otherwise the very next job might be a SYNC and
> multifd_send_fill_packet() will put the stale values in the packet.
> 
> By not calling multifd_ram_fill_packet(), we can stop resetting
> MultiFDPages_t in the multifd core and leave that to the client code.
> 
> Actually moving the reset function is not yet done because
> pages->num==0 is used by the client code to determine whether the
> MultiFDPages_t needs to be flushed. The subsequent patches will
> replace that with a generic flag that is not dependent on
> MultiFDPages_t.
> 
> Signed-off-by: Fabiano Rosas <farosas@suse.de>
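
The core of the change is a guard along these lines (a sketch, not the
verbatim diff; MULTIFD_FLAG_SYNC and multifd_ram_fill_packet() are the
names used by the series):

  /* Only fill ram data into the packet for real data jobs */
  if (!(p->flags & MULTIFD_FLAG_SYNC)) {
      multifd_ram_fill_packet(p);
  }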

Reviewed-by: Peter Xu <peterx@redhat.com>

-- 
Peter Xu



* Re: [PATCH v3 11/14] migration/multifd: Replace multifd_send_state->pages with client data
  2024-08-01 12:35 ` [PATCH v3 11/14] migration/multifd: Replace multifd_send_state->pages with client data Fabiano Rosas
@ 2024-08-22 15:59   ` Peter Xu
  0 siblings, 0 replies; 37+ messages in thread
From: Peter Xu @ 2024-08-22 15:59 UTC (permalink / raw)
  To: Fabiano Rosas; +Cc: qemu-devel, Maciej S . Szmigiero

On Thu, Aug 01, 2024 at 09:35:13AM -0300, Fabiano Rosas wrote:
> Multifd currently has a simple scheduling mechanism that distributes
> work to the various channels by keeping storage space within each
> channel and an extra space that is given to the client. Each time the
> client fills the space with data and calls into multifd, that space is
> given to the next idle channel and a free storage space is taken from
> the channel and given to the client for the next iteration.
> 
> This means we always need (#multifd_channels + 1) memory slots to
> operate multifd.
> 
> This is fine, except that the presence of this one extra memory slot
> doesn't allow different types of payloads to be processed at the same
> time in different channels, i.e. the data type of
> multifd_send_state->pages needs to be the same as p->pages.
> 
> For each new data type different from MultiFDPage_t that is to be
> handled, this logic would need to be duplicated by adding new fields
> to multifd_send_state, to the channels and to multifd_send_pages().
> 
> Fix this situation by moving the extra slot into the client and using
> only the generic type MultiFDSendData in the multifd core.
> 
> Signed-off-by: Fabiano Rosas <farosas@suse.de>
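
The slot-swap at the heart of this scheme, as a standalone
illustration (simplified: the real code swaps MultiFDSendData pointers
between the client and the chosen idle channel, under the appropriate
locking):

  #include <stdio.h>

  typedef struct { int filled; } Slot;

  static void swap_slots(Slot **client, Slot **channel)
  {
      Slot *tmp = *channel;

      *channel = *client;  /* channel takes the filled slot */
      *client = tmp;       /* client continues with a free one */
  }

  int main(void)
  {
      Slot a = { .filled = 1 }, b = { .filled = 0 };
      Slot *client = &a, *channel = &b;

      swap_slots(&client, &channel);
      printf("client now holds filled=%d\n", client->filled);
      return 0;
  }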

Reviewed-by: Peter Xu <peterx@redhat.com>

-- 
Peter Xu



* Re: [PATCH v3 12/14] migration/multifd: Allow multifd sync without flush
  2024-08-01 12:35 ` [PATCH v3 12/14] migration/multifd: Allow multifd sync without flush Fabiano Rosas
@ 2024-08-22 16:03   ` Peter Xu
  2024-08-22 16:10     ` Peter Xu
  0 siblings, 1 reply; 37+ messages in thread
From: Peter Xu @ 2024-08-22 16:03 UTC (permalink / raw)
  To: Fabiano Rosas; +Cc: qemu-devel, Maciej S . Szmigiero

On Thu, Aug 01, 2024 at 09:35:14AM -0300, Fabiano Rosas wrote:
> Separate the multifd sync from flushing the client data to the
> channels. These two operations are closely related but not strictly
> necessary to be executed together.
> 
> The multifd sync is intrinsic to how multifd works. The multiple
> channels operate independently and may finish IO out of order in
> relation to each other. This applies also between the source and
> destination QEMU.
> 
> Flushing the data that is left in the client-owned data structures
> (e.g. MultiFDPages_t) prior to sync is usually the right thing to do,
> but that is particular to how the ram migration is implemented with
> several passes over dirty data.
> 
> Make these two routines separate, allowing future code to call the
> sync by itself if needed. This also allows the usage of
> multifd_ram_send to be isolated to ram code.

What's the usage of sync but not flush here?

-- 
Peter Xu



* Re: [PATCH v3 12/14] migration/multifd: Allow multifd sync without flush
  2024-08-22 16:03   ` Peter Xu
@ 2024-08-22 16:10     ` Peter Xu
  2024-08-22 17:05       ` Fabiano Rosas
  0 siblings, 1 reply; 37+ messages in thread
From: Peter Xu @ 2024-08-22 16:10 UTC (permalink / raw)
  To: Fabiano Rosas; +Cc: qemu-devel, Maciej S . Szmigiero

On Thu, Aug 22, 2024 at 12:03:47PM -0400, Peter Xu wrote:
> On Thu, Aug 01, 2024 at 09:35:14AM -0300, Fabiano Rosas wrote:
> > Separate the multifd sync from flushing the client data to the
> > channels. These two operations are closely related but not strictly
> > necessary to be executed together.
> > 
> > The multifd sync is intrinsic to how multifd works. The multiple
> > channels operate independently and may finish IO out of order in
> > relation to each other. This applies also between the source and
> > destination QEMU.
> > 
> > Flushing the data that is left in the client-owned data structures
> > (e.g. MultiFDPages_t) prior to sync is usually the right thing to do,
> > but that is particular to how the ram migration is implemented with
> > several passes over dirty data.
> > 
> > Make these two routines separate, allowing future code to call the
> > sync by itself if needed. This also allows the usage of
> > multifd_ram_send to be isolated to ram code.
> 
> What's the usage of sync but not flush here?

Oh I think I see your point.. I think flush+sync is always needed, it's
just that RAM may not always be the one to flush, but something else.
Makes sense then.

If you want, you may touch up the commit message to clarify that.  E.g. I
still don't see any use case that we want to sync without a flush, that
part might be a bit ambiguous.

If my understanding is correct, take this:

Reviewed-by: Peter Xu <peterx@redhat.com>

-- 
Peter Xu



* Re: [PATCH v3 13/14] migration/multifd: Register nocomp ops dynamically
  2024-08-01 12:35 ` [PATCH v3 13/14] migration/multifd: Register nocomp ops dynamically Fabiano Rosas
@ 2024-08-22 16:23   ` Peter Xu
  2024-08-22 17:20     ` Fabiano Rosas
  0 siblings, 1 reply; 37+ messages in thread
From: Peter Xu @ 2024-08-22 16:23 UTC (permalink / raw)
  To: Fabiano Rosas; +Cc: qemu-devel, Maciej S . Szmigiero

On Thu, Aug 01, 2024 at 09:35:15AM -0300, Fabiano Rosas wrote:
> Prior to moving the ram code into multifd-ram.c, change the code to
> register the nocomp ops dynamically so we don't need to have the ops
> structure defined in multifd.c.
> 
> While here, rename s/nocomp/ram/ and remove the docstrings which are
> mostly useless (if anything, it's the function pointers in multifd.h
> that should be documented like that).
> 
> Signed-off-by: Fabiano Rosas <farosas@suse.de>
> ---
>  migration/multifd.c | 101 ++++++++++++--------------------------------
>  1 file changed, 28 insertions(+), 73 deletions(-)
> 
> diff --git a/migration/multifd.c b/migration/multifd.c
> index c25ab4924c..d5be784b6f 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -167,15 +167,7 @@ static void multifd_set_file_bitmap(MultiFDSendParams *p)
>      }
>  }
>  
> -/* Multifd without compression */
> -
> -/**
> - * nocomp_send_setup: setup send side
> - *
> - * @p: Params for the channel that we are using
> - * @errp: pointer to an error
> - */
> -static int nocomp_send_setup(MultiFDSendParams *p, Error **errp)
> +static int ram_send_setup(MultiFDSendParams *p, Error **errp)

"ram" as a prefix sounds inaccurate to me.  Personally I even preferred the
old name "nocomp" because it says there's no compression.

Here "ram_send_setup" is at the same level against e.g. "zlib_send_setup".
It sounds like zlib isn't for ram, but it is..

Do you perhaps dislike the "nocomp" term?  How about:

  multifd_plain_send_setup()

Just to do s/nocomp/plain/?  Or "raw"?

We do have two flavours here at least:

*** migration/multifd-qpl.c:
<global>[755]                  .send_setup = multifd_qpl_send_setup,

*** migration/multifd-ram.c:
<global>[387]                  .send_setup = ram_send_setup,

*** migration/multifd-uadk.c:
<global>[364]                  .send_setup = multifd_uadk_send_setup,

*** migration/multifd-zlib.c:
<global>[338]                  .send_setup = zlib_send_setup,

*** migration/multifd-zstd.c:
<global>[326]                  .send_setup = zstd_send_setup,

It might make sense to prefix them all with "multifd_", just to follow
qpl/uadk?

>  {
>      uint32_t page_count = multifd_ram_page_count();
>  
> @@ -193,15 +185,7 @@ static int nocomp_send_setup(MultiFDSendParams *p, Error **errp)
>      return 0;
>  }
>  
> -/**
> - * nocomp_send_cleanup: cleanup send side
> - *
> - * For no compression this function does nothing.
> - *
> - * @p: Params for the channel that we are using
> - * @errp: pointer to an error
> - */
> -static void nocomp_send_cleanup(MultiFDSendParams *p, Error **errp)
> +static void ram_send_cleanup(MultiFDSendParams *p, Error **errp)
>  {
>      g_free(p->iov);
>      p->iov = NULL;
> @@ -222,18 +206,7 @@ static void multifd_send_prepare_iovs(MultiFDSendParams *p)
>      p->next_packet_size = pages->normal_num * page_size;
>  }
>  
> -/**
> - * nocomp_send_prepare: prepare date to be able to send
> - *
> - * For no compression we just have to calculate the size of the
> - * packet.
> - *
> - * Returns 0 for success or -1 for error
> - *
> - * @p: Params for the channel that we are using
> - * @errp: pointer to an error
> - */
> -static int nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
> +static int ram_send_prepare(MultiFDSendParams *p, Error **errp)
>  {
>      bool use_zero_copy_send = migrate_zero_copy_send();
>      int ret;
> @@ -272,46 +245,19 @@ static int nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
>      return 0;
>  }
>  
> -/**
> - * nocomp_recv_setup: setup receive side
> - *
> - * For no compression this function does nothing.
> - *
> - * Returns 0 for success or -1 for error
> - *
> - * @p: Params for the channel that we are using
> - * @errp: pointer to an error
> - */
> -static int nocomp_recv_setup(MultiFDRecvParams *p, Error **errp)
> +static int ram_recv_setup(MultiFDRecvParams *p, Error **errp)
>  {
>      p->iov = g_new0(struct iovec, multifd_ram_page_count());
>      return 0;
>  }
>  
> -/**
> - * nocomp_recv_cleanup: setup receive side
> - *
> - * For no compression this function does nothing.
> - *
> - * @p: Params for the channel that we are using
> - */
> -static void nocomp_recv_cleanup(MultiFDRecvParams *p)
> +static void ram_recv_cleanup(MultiFDRecvParams *p)
>  {
>      g_free(p->iov);
>      p->iov = NULL;
>  }
>  
> -/**
> - * nocomp_recv: read the data from the channel
> - *
> - * For no compression we just need to read things into the correct place.
> - *
> - * Returns 0 for success or -1 for error
> - *
> - * @p: Params for the channel that we are using
> - * @errp: pointer to an error
> - */
> -static int nocomp_recv(MultiFDRecvParams *p, Error **errp)
> +static int ram_recv(MultiFDRecvParams *p, Error **errp)
>  {
>      uint32_t flags;
>  
> @@ -341,22 +287,15 @@ static int nocomp_recv(MultiFDRecvParams *p, Error **errp)
>      return qio_channel_readv_all(p->c, p->iov, p->normal_num, errp);
>  }
>  
> -static MultiFDMethods multifd_nocomp_ops = {
> -    .send_setup = nocomp_send_setup,
> -    .send_cleanup = nocomp_send_cleanup,
> -    .send_prepare = nocomp_send_prepare,
> -    .recv_setup = nocomp_recv_setup,
> -    .recv_cleanup = nocomp_recv_cleanup,
> -    .recv = nocomp_recv
> -};
> -
> -static MultiFDMethods *multifd_ops[MULTIFD_COMPRESSION__MAX] = {
> -    [MULTIFD_COMPRESSION_NONE] = &multifd_nocomp_ops,
> -};
> +static MultiFDMethods *multifd_ops[MULTIFD_COMPRESSION__MAX] = {};
>  
>  void multifd_register_ops(int method, MultiFDMethods *ops)
>  {
> -    assert(0 < method && method < MULTIFD_COMPRESSION__MAX);
> +    if (method == MULTIFD_COMPRESSION_NONE) {
> +        assert(!multifd_ops[method]);
> +    } else {
> +        assert(0 < method && method < MULTIFD_COMPRESSION__MAX);
> +    }
>      multifd_ops[method] = ops;
>  }

The new assertion seems a bit paranoid to me.. and checking for a
duplicated assignment should at least apply to all methods if we add
it.  So.. how about:

  assert(method < MULTIFD_COMPRESSION__MAX);
  assert(!multifd_ops[method]);
  multifd_ops[method] = ops;

?

>  
> @@ -1755,3 +1694,19 @@ bool multifd_send_prepare_common(MultiFDSendParams *p)
>  
>      return true;
>  }
> +
> +static MultiFDMethods multifd_ram_ops = {
> +    .send_setup = ram_send_setup,
> +    .send_cleanup = ram_send_cleanup,
> +    .send_prepare = ram_send_prepare,
> +    .recv_setup = ram_recv_setup,
> +    .recv_cleanup = ram_recv_cleanup,
> +    .recv = ram_recv
> +};
> +
> +static void multifd_ram_register(void)
> +{
> +    multifd_register_ops(MULTIFD_COMPRESSION_NONE, &multifd_ram_ops);
> +}
> +
> +migration_init(multifd_ram_register);
> -- 
> 2.35.3
> 

-- 
Peter Xu



* Re: [PATCH v3 14/14] migration/multifd: Move ram code into multifd-ram.c
  2024-08-01 12:35 ` [PATCH v3 14/14] migration/multifd: Move ram code into multifd-ram.c Fabiano Rosas
@ 2024-08-22 16:25   ` Peter Xu
  2024-08-22 17:21     ` Fabiano Rosas
  0 siblings, 1 reply; 37+ messages in thread
From: Peter Xu @ 2024-08-22 16:25 UTC (permalink / raw)
  To: Fabiano Rosas; +Cc: qemu-devel, Maciej S . Szmigiero

On Thu, Aug 01, 2024 at 09:35:16AM -0300, Fabiano Rosas wrote:
> In preparation for adding new payload types to multifd, move most of
> the ram-related code into multifd-ram.c. Let's try to keep a semblance
> of layering by not mixing general multifd control flow with the
> details of transmitting pages of ram.
> 
> There are still some pieces leftover, namely the p->normal, p->zero,
> etc variables that we use for zero page tracking and the packet
> allocation which is heavily dependent on the ram code.
> 
> Signed-off-by: Fabiano Rosas <farosas@suse.de>

The movement makes sense to me in general, but let's discuss whether nocomp
may need a better name.  It could mean that we want two new files:
multifd-ram.c to keep the generic RAM stuff (which applies to
nocomp/zlib/...) and multifd-plain.c to contain the no-comp case, perhaps.

-- 
Peter Xu



* Re: [PATCH v3 12/14] migration/multifd: Allow multifd sync without flush
  2024-08-22 16:10     ` Peter Xu
@ 2024-08-22 17:05       ` Fabiano Rosas
  2024-08-22 17:36         ` Peter Xu
  0 siblings, 1 reply; 37+ messages in thread
From: Fabiano Rosas @ 2024-08-22 17:05 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-devel, Maciej S . Szmigiero

Peter Xu <peterx@redhat.com> writes:

> On Thu, Aug 22, 2024 at 12:03:47PM -0400, Peter Xu wrote:
>> On Thu, Aug 01, 2024 at 09:35:14AM -0300, Fabiano Rosas wrote:
>> > Separate the multifd sync from flushing the client data to the
>> > channels. These two operations are closely related but not strictly
>> > necessary to be executed together.
>> > 
>> > The multifd sync is intrinsic to how multifd works. The multiple
>> > channels operate independently and may finish IO out of order in
>> > relation to each other. This applies also between the source and
>> > destination QEMU.
>> > 
>> > Flushing the data that is left in the client-owned data structures
>> > (e.g. MultiFDPages_t) prior to sync is usually the right thing to do,
>> > but that is particular to how the ram migration is implemented with
>> > several passes over dirty data.
>> > 
>> > Make these two routines separate, allowing future code to call the
>> > sync by itself if needed. This also allows the usage of
>> > multifd_ram_send to be isolated to ram code.
>> 
>> What's the usage of sync but not flush here?
>
> Oh I think I see your point.. I think flush+sync is always needed, it's
> just that RAM may not always be the one to flush, but something else.
> Makes sense then.
>

I'm thinking of "flush" here as a last multifd_send() before sync. We
need multiple multifd_send() along the migration to send the data, but
we might not need this extra flush. It could be that there's nothing to
flush and the code guarantees it:

 <populate MultiFDSendData>
 multifd_send()
 sync

Where RAM currently does:

 multifd_queue_page()
 multifd_queue_page()
 multifd_queue_page()
 ...
 multifd_queue_page()
 multifd_send()
 sync

Today there is a multifd_send() inside multifd_queue_page() and the
amount sent depends on the ram.c code. At the time sync gets called,
there could be data queued but not yet sent. Another client (not ram)
could just produce data in a deterministic manner and match that with
calls to multifd_send().

> If you want, you may touch up the commit message to clarify that.  E.g. I
> still don't see any use case that we want to sync without a flush, that
> part might be a bit ambiguous.
>
> If my understanding is correct, take this:
>
> Reviewed-by: Peter Xu <peterx@redhat.com>


* Re: [PATCH v3 13/14] migration/multifd: Register nocomp ops dynamically
  2024-08-22 16:23   ` Peter Xu
@ 2024-08-22 17:20     ` Fabiano Rosas
  0 siblings, 0 replies; 37+ messages in thread
From: Fabiano Rosas @ 2024-08-22 17:20 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-devel, Maciej S . Szmigiero

Peter Xu <peterx@redhat.com> writes:

> On Thu, Aug 01, 2024 at 09:35:15AM -0300, Fabiano Rosas wrote:
>> Prior to moving the ram code into multifd-ram.c, change the code to
>> register the nocomp ops dynamically so we don't need to have the ops
>> structure defined in multifd.c.
>> 
>> While here, rename s/nocomp/ram/ and remove the docstrings which are
>> mostly useless (if anything, it's the function pointers in multifd.h
>> that should be documented like that).
>> 
>> Signed-off-by: Fabiano Rosas <farosas@suse.de>
>> ---
>>  migration/multifd.c | 101 ++++++++++++--------------------------------
>>  1 file changed, 28 insertions(+), 73 deletions(-)
>> 
>> diff --git a/migration/multifd.c b/migration/multifd.c
>> index c25ab4924c..d5be784b6f 100644
>> --- a/migration/multifd.c
>> +++ b/migration/multifd.c
>> @@ -167,15 +167,7 @@ static void multifd_set_file_bitmap(MultiFDSendParams *p)
>>      }
>>  }
>>  
>> -/* Multifd without compression */
>> -
>> -/**
>> - * nocomp_send_setup: setup send side
>> - *
>> - * @p: Params for the channel that we are using
>> - * @errp: pointer to an error
>> - */
>> -static int nocomp_send_setup(MultiFDSendParams *p, Error **errp)
>> +static int ram_send_setup(MultiFDSendParams *p, Error **errp)
>
> "ram" as a prefix sounds inaccurate to me.  Personally I even preferred the
> old name "nocomp" because it says there's no compression.
>
> Here "ram_send_setup" is at the same level against e.g. "zlib_send_setup".
> It sounds like zlib isn't for ram, but it is..
>
> Do you perhaps dislike the "nocomp" term?  How about:

I don't mind. I almost left nocomp intact, but thought it would be
better to match the new file name (multifd-ram.c).

>
>   multifd_plain_send_setup()
>
> Just to do s/nocomp/plain/?  Or "raw"?
>
> We do have two flavours here at least:
>
> *** migration/multifd-qpl.c:
> <global>[755]                  .send_setup = multifd_qpl_send_setup,
>
> *** migration/multifd-ram.c:
> <global>[387]                  .send_setup = ram_send_setup,
>
> *** migration/multifd-uadk.c:
> <global>[364]                  .send_setup = multifd_uadk_send_setup,
>
> *** migration/multifd-zlib.c:
> <global>[338]                  .send_setup = zlib_send_setup,
>
> *** migration/multifd-zstd.c:
> <global>[326]                  .send_setup = zstd_send_setup,
>
> It might make sense to prefix them all with "multifd_", just to follow
> qpl/uadk?

Yep.

>
>>  {
>>      uint32_t page_count = multifd_ram_page_count();
>>  
>> @@ -193,15 +185,7 @@ static int nocomp_send_setup(MultiFDSendParams *p, Error **errp)
>>      return 0;
>>  }
>>  
>> -/**
>> - * nocomp_send_cleanup: cleanup send side
>> - *
>> - * For no compression this function does nothing.
>> - *
>> - * @p: Params for the channel that we are using
>> - * @errp: pointer to an error
>> - */
>> -static void nocomp_send_cleanup(MultiFDSendParams *p, Error **errp)
>> +static void ram_send_cleanup(MultiFDSendParams *p, Error **errp)
>>  {
>>      g_free(p->iov);
>>      p->iov = NULL;
>> @@ -222,18 +206,7 @@ static void multifd_send_prepare_iovs(MultiFDSendParams *p)
>>      p->next_packet_size = pages->normal_num * page_size;
>>  }
>>  
>> -/**
>> - * nocomp_send_prepare: prepare date to be able to send
>> - *
>> - * For no compression we just have to calculate the size of the
>> - * packet.
>> - *
>> - * Returns 0 for success or -1 for error
>> - *
>> - * @p: Params for the channel that we are using
>> - * @errp: pointer to an error
>> - */
>> -static int nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
>> +static int ram_send_prepare(MultiFDSendParams *p, Error **errp)
>>  {
>>      bool use_zero_copy_send = migrate_zero_copy_send();
>>      int ret;
>> @@ -272,46 +245,19 @@ static int nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
>>      return 0;
>>  }
>>  
>> -/**
>> - * nocomp_recv_setup: setup receive side
>> - *
>> - * For no compression this function does nothing.
>> - *
>> - * Returns 0 for success or -1 for error
>> - *
>> - * @p: Params for the channel that we are using
>> - * @errp: pointer to an error
>> - */
>> -static int nocomp_recv_setup(MultiFDRecvParams *p, Error **errp)
>> +static int ram_recv_setup(MultiFDRecvParams *p, Error **errp)
>>  {
>>      p->iov = g_new0(struct iovec, multifd_ram_page_count());
>>      return 0;
>>  }
>>  
>> -/**
>> - * nocomp_recv_cleanup: setup receive side
>> - *
>> - * For no compression this function does nothing.
>> - *
>> - * @p: Params for the channel that we are using
>> - */
>> -static void nocomp_recv_cleanup(MultiFDRecvParams *p)
>> +static void ram_recv_cleanup(MultiFDRecvParams *p)
>>  {
>>      g_free(p->iov);
>>      p->iov = NULL;
>>  }
>>  
>> -/**
>> - * nocomp_recv: read the data from the channel
>> - *
>> - * For no compression we just need to read things into the correct place.
>> - *
>> - * Returns 0 for success or -1 for error
>> - *
>> - * @p: Params for the channel that we are using
>> - * @errp: pointer to an error
>> - */
>> -static int nocomp_recv(MultiFDRecvParams *p, Error **errp)
>> +static int ram_recv(MultiFDRecvParams *p, Error **errp)
>>  {
>>      uint32_t flags;
>>  
>> @@ -341,22 +287,15 @@ static int nocomp_recv(MultiFDRecvParams *p, Error **errp)
>>      return qio_channel_readv_all(p->c, p->iov, p->normal_num, errp);
>>  }
>>  
>> -static MultiFDMethods multifd_nocomp_ops = {
>> -    .send_setup = nocomp_send_setup,
>> -    .send_cleanup = nocomp_send_cleanup,
>> -    .send_prepare = nocomp_send_prepare,
>> -    .recv_setup = nocomp_recv_setup,
>> -    .recv_cleanup = nocomp_recv_cleanup,
>> -    .recv = nocomp_recv
>> -};
>> -
>> -static MultiFDMethods *multifd_ops[MULTIFD_COMPRESSION__MAX] = {
>> -    [MULTIFD_COMPRESSION_NONE] = &multifd_nocomp_ops,
>> -};
>> +static MultiFDMethods *multifd_ops[MULTIFD_COMPRESSION__MAX] = {};
>>  
>>  void multifd_register_ops(int method, MultiFDMethods *ops)
>>  {
>> -    assert(0 < method && method < MULTIFD_COMPRESSION__MAX);
>> +    if (method == MULTIFD_COMPRESSION_NONE) {
>> +        assert(!multifd_ops[method]);
>> +    } else {
>> +        assert(0 < method && method < MULTIFD_COMPRESSION__MAX);
>> +    }
>>      multifd_ops[method] = ops;
>>  }
>
> The new assertion seems a bit paranoid to me.. and checking for a
> duplicated assignment should at least apply to all methods if we add
> it.  So.. how about:
>
>   assert(method < MULTIFD_COMPRESSION__MAX);
>   assert(!multifd_ops[method]);
>   multifd_ops[method] = ops;
>
> ?

ok

>
>>  
>> @@ -1755,3 +1694,19 @@ bool multifd_send_prepare_common(MultiFDSendParams *p)
>>  
>>      return true;
>>  }
>> +
>> +static MultiFDMethods multifd_ram_ops = {
>> +    .send_setup = ram_send_setup,
>> +    .send_cleanup = ram_send_cleanup,
>> +    .send_prepare = ram_send_prepare,
>> +    .recv_setup = ram_recv_setup,
>> +    .recv_cleanup = ram_recv_cleanup,
>> +    .recv = ram_recv
>> +};
>> +
>> +static void multifd_ram_register(void)
>> +{
>> +    multifd_register_ops(MULTIFD_COMPRESSION_NONE, &multifd_ram_ops);
>> +}
>> +
>> +migration_init(multifd_ram_register);
>> -- 
>> 2.35.3
>> 


* Re: [PATCH v3 14/14] migration/multifd: Move ram code into multifd-ram.c
  2024-08-22 16:25   ` Peter Xu
@ 2024-08-22 17:21     ` Fabiano Rosas
  2024-08-22 17:27       ` Peter Xu
  0 siblings, 1 reply; 37+ messages in thread
From: Fabiano Rosas @ 2024-08-22 17:21 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-devel, Maciej S . Szmigiero

Peter Xu <peterx@redhat.com> writes:

> On Thu, Aug 01, 2024 at 09:35:16AM -0300, Fabiano Rosas wrote:
>> In preparation for adding new payload types to multifd, move most of
>> the ram-related code into multifd-ram.c. Let's try to keep a semblance
>> of layering by not mixing general multifd control flow with the
>> details of transmitting pages of ram.
>> 
>> There are still some pieces leftover, namely the p->normal, p->zero,
>> etc variables that we use for zero page tracking and the packet
>> allocation which is heavily dependent on the ram code.
>> 
>> Signed-off-by: Fabiano Rosas <farosas@suse.de>
>
> The movement makes sense to me in general, but let's discuss whether nocomp
> may need a better name.  It could mean that we want two new files:
> multifd-ram.c to keep the generic RAM stuff (which applies to
> nocomp/zlib/...) and multifd-plain.c to contain the no-comp case, perhaps.

I think 2 files would be too much. We'd just be jumping from one to
another while reading code. The nocomp code is intimately related to the
multifd-ram code. Should we just put it in a multifd-nocomp.c and that's
it?


* Re: [PATCH v3 14/14] migration/multifd: Move ram code into multifd-ram.c
  2024-08-22 17:21     ` Fabiano Rosas
@ 2024-08-22 17:27       ` Peter Xu
  0 siblings, 0 replies; 37+ messages in thread
From: Peter Xu @ 2024-08-22 17:27 UTC (permalink / raw)
  To: Fabiano Rosas; +Cc: qemu-devel, Maciej S . Szmigiero

On Thu, Aug 22, 2024 at 02:21:18PM -0300, Fabiano Rosas wrote:
> Peter Xu <peterx@redhat.com> writes:
> 
> > On Thu, Aug 01, 2024 at 09:35:16AM -0300, Fabiano Rosas wrote:
> >> In preparation for adding new payload types to multifd, move most of
> >> the ram-related code into multifd-ram.c. Let's try to keep a semblance
> >> of layering by not mixing general multifd control flow with the
> >> details of transmitting pages of ram.
> >> 
> >> There are still some pieces leftover, namely the p->normal, p->zero,
> >> etc variables that we use for zero page tracking and the packet
> >> allocation which is heavily dependent on the ram code.
> >> 
> >> Signed-off-by: Fabiano Rosas <farosas@suse.de>
> >
> > The movement makes sense to me in general, but let's discuss whether nocomp
> > may need a better name.  It could mean that we want two new files:
> > multifd-ram.c to keep the generic RAM stuff (which applies to
> > nocomp/zlib/...) and multifd-plain.c to contain the no-comp case, perhaps.
> 
> I think 2 files would be too much. We'd just be jumping from one to
> another while reading code. The nocomp code is intimately related to the
> multifd-ram code. Should we just put it in a multifd-nocomp.c and that's
> it?

Sure, as long as the nocomp helpers have a proper prefix (like
"plain"), it would be good.

-- 
Peter Xu



* Re: [PATCH v3 12/14] migration/multifd: Allow multifd sync without flush
  2024-08-22 17:05       ` Fabiano Rosas
@ 2024-08-22 17:36         ` Peter Xu
  2024-08-22 18:07           ` Fabiano Rosas
  0 siblings, 1 reply; 37+ messages in thread
From: Peter Xu @ 2024-08-22 17:36 UTC (permalink / raw)
  To: Fabiano Rosas; +Cc: qemu-devel, Maciej S . Szmigiero

On Thu, Aug 22, 2024 at 02:05:30PM -0300, Fabiano Rosas wrote:
> Peter Xu <peterx@redhat.com> writes:
> 
> > On Thu, Aug 22, 2024 at 12:03:47PM -0400, Peter Xu wrote:
> >> On Thu, Aug 01, 2024 at 09:35:14AM -0300, Fabiano Rosas wrote:
> >> > Separate the multifd sync from flushing the client data to the
> >> > channels. These two operations are closely related but not strictly
> >> > necessary to be executed together.
> >> > 
> >> > The multifd sync is intrinsic to how multifd works. The multiple
> >> > channels operate independently and may finish IO out of order in
> >> > relation to each other. This applies also between the source and
> >> > destination QEMU.
> >> > 
> >> > Flushing the data that is left in the client-owned data structures
> >> > (e.g. MultiFDPages_t) prior to sync is usually the right thing to do,
> >> > but that is particular to how the ram migration is implemented with
> >> > several passes over dirty data.
> >> > 
> >> > Make these two routines separate, allowing future code to call the
> >> > sync by itself if needed. This also allows the usage of
> >> > multifd_ram_send to be isolated to ram code.
> >> 
> >> What's the usage of sync but not flush here?
> >
> > Oh I think I see your point.. I think flush+sync is always needed, it's
> > just that RAM may not always be the one to flush, but something else.
> > Makes sense then.
> >
> 
> I'm thinking of "flush" here as a last multifd_send() before sync. We
> need multiple multifd_send() along the migration to send the data, but
> we might not need this extra flush. It could be that there's nothing to
> flush and the code guarantees it:
> 
>  <populate MultiFDSendData>
>  multifd_send()
>  sync
> 
> Where RAM currently does:
> 
>  multifd_queue_page()
>  multifd_queue_page()
>  multifd_queue_page()
>  ...
>  multifd_queue_page()
>  multifd_send()
>  sync
> 
> Today there is a multifd_send() inside multifd_queue_page() and the
> amount sent depends on the ram.c code. At the time sync gets called,
> there could be data queued but not yet sent. Another client (not ram)
> could just produce data in a deterministic manner and match that with
> calls to multifd_send().

I hope I read it alright.. I suppose you meant we have a chance to do:

  ram_send()
  vfio_send()
  flush()

Instead of:

  ram_send()
  flush()
  vfio_send()
  flush()

Am I right?

> 
> > If you want, you may touch up the commit message to clarify that.  E.g. I
> > still don't see any use case that we want to sync without a flush, that
> > part might be a bit ambiguous.
> >
> > If my understanding is correct, take this:
> >
> > Reviewed-by: Peter Xu <peterx@redhat.com>
> 

-- 
Peter Xu



* Re: [PATCH v3 12/14] migration/multifd: Allow multifd sync without flush
  2024-08-22 17:36         ` Peter Xu
@ 2024-08-22 18:07           ` Fabiano Rosas
  2024-08-22 19:11             ` Peter Xu
  0 siblings, 1 reply; 37+ messages in thread
From: Fabiano Rosas @ 2024-08-22 18:07 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-devel, Maciej S . Szmigiero

Peter Xu <peterx@redhat.com> writes:

> On Thu, Aug 22, 2024 at 02:05:30PM -0300, Fabiano Rosas wrote:
>> Peter Xu <peterx@redhat.com> writes:
>> 
>> > On Thu, Aug 22, 2024 at 12:03:47PM -0400, Peter Xu wrote:
>> >> On Thu, Aug 01, 2024 at 09:35:14AM -0300, Fabiano Rosas wrote:
>> >> > Separate the multifd sync from flushing the client data to the
>> >> > channels. These two operations are closely related but not strictly
>> >> > necessary to be executed together.
>> >> > 
>> >> > The multifd sync is intrinsic to how multifd works. The multiple
>> >> > channels operate independently and may finish IO out of order in
>> >> > relation to each other. This applies also between the source and
>> >> > destination QEMU.
>> >> > 
>> >> > Flushing the data that is left in the client-owned data structures
>> >> > (e.g. MultiFDPages_t) prior to sync is usually the right thing to do,
>> >> > but that is particular to how the ram migration is implemented with
>> >> > several passes over dirty data.
>> >> > 
>> >> > Make these two routines separate, allowing future code to call the
>> >> > sync by itself if needed. This also allows the usage of
>> >> > multifd_ram_send to be isolated to ram code.
>> >> 
>> >> What's the usage of sync but not flush here?
>> >
>> > Oh I think I see your point.. I think flush+sync is always needed, it's
>> > just that RAM may not always be the one to flush, but something else.
>> > Makes sense then.
>> >
>> 
>> I'm thinking of "flush" here as a last multifd_send() before sync. We
>> need multiple multifd_send() along the migration to send the data, but
>> we might not need this extra flush. It could be that there's nothing to
>> flush and the code guarantees it:
>> 
>>  <populate MultiFDSendData>
>>  multifd_send()
>>  sync
>> 
>> Where RAM currently does:
>> 
>>  multifd_queue_page()
>>  multifd_queue_page()
>>  multifd_queue_page()
>>  ...
>>  multifd_queue_page()
>>  multifd_send()
>>  sync
>> 
>> Today there is a multifd_send() inside multifd_queue_page() and the
>> amount sent depends on the ram.c code. At the time sync gets called,
>> there could be data queued but not yet sent. Another client (not ram)
>> could just produce data in a deterministic manner and match that with
>> calls to multifd_send().
>
> I hope I read it alright.. I suppose you meant we have a chance to do:
>
>   ram_send()
>   vfio_send()
>   flush()
>
> Instead of:
>
>   ram_send()
>   flush()
>   vfio_send()
>   flush()
>
> Am I right?

Not really. I'm saying that RAM doesn't always send the data, that's why
it needs a final flush before sync:

multifd_queue_page()
    if (multifd_queue_empty(pages)) {
        multifd_enqueue(pages, offset);
    }
    
    if (multifd_queue_full(pages)) {
        multifd_send_pages()   <-- this might not happen
    }
    multifd_enqueue()

multifd_send_sync_main()
    if (pages->num) { <-- data left unsent
       multifd_send() <-- flush
    }
    <sync routine>

>
>> 
>> > If you want, you may touch up the commit message to clarify that.  E.g. I
>> > still don't see any use case that we want to sync without a flush, that
>> > part might be a bit ambiguous.
>> >
>> > If my understanding is correct, take this:
>> >
>> > Reviewed-by: Peter Xu <peterx@redhat.com>
>> 


* Re: [PATCH v3 12/14] migration/multifd: Allow multifd sync without flush
  2024-08-22 18:07           ` Fabiano Rosas
@ 2024-08-22 19:11             ` Peter Xu
  0 siblings, 0 replies; 37+ messages in thread
From: Peter Xu @ 2024-08-22 19:11 UTC (permalink / raw)
  To: Fabiano Rosas; +Cc: qemu-devel, Maciej S . Szmigiero

On Thu, Aug 22, 2024 at 03:07:55PM -0300, Fabiano Rosas wrote:
> Peter Xu <peterx@redhat.com> writes:
> 
> > On Thu, Aug 22, 2024 at 02:05:30PM -0300, Fabiano Rosas wrote:
> >> Peter Xu <peterx@redhat.com> writes:
> >> 
> >> > On Thu, Aug 22, 2024 at 12:03:47PM -0400, Peter Xu wrote:
> >> >> On Thu, Aug 01, 2024 at 09:35:14AM -0300, Fabiano Rosas wrote:
> >> >> > Separate the multifd sync from flushing the client data to the
> >> >> > channels. These two operations are closely related but not strictly
> >> >> > necessary to be executed together.
> >> >> > 
> >> >> > The multifd sync is intrinsic to how multifd works. The multiple
> >> >> > channels operate independently and may finish IO out of order in
> >> >> > relation to each other. This applies also between the source and
> >> >> > destination QEMU.
> >> >> > 
> >> >> > Flushing the data that is left in the client-owned data structures
> >> >> > (e.g. MultiFDPages_t) prior to sync is usually the right thing to do,
> >> >> > but that is particular to how the ram migration is implemented with
> >> >> > several passes over dirty data.
> >> >> > 
> >> >> > Make these two routines separate, allowing future code to call the
> >> >> > sync by itself if needed. This also allows the usage of
> >> >> > multifd_ram_send to be isolated to ram code.
> >> >> 
> >> >> What's the usage of sync but not flush here?
> >> >
> >> > Oh I think I see your point.. I think flush+sync is always needed, it's
> >> > just that RAM may not always be the one to flush, but something else.
> >> > Makes sense then.
> >> >
> >> 
> >> I'm thinking of "flush" here as a last multifd_send() before sync. We
> >> need multiple multifd_send() along the migration to send the data, but
> >> we might not need this extra flush. It could be that there's nothing to
> >> flush and the code guarantees it:
> >> 
> >>  <populate MultiFDSendData>
> >>  multifd_send()
> >>  sync
> >> 
> >> Where RAM currently does:
> >> 
> >>  multifd_queue_page()
> >>  multifd_queue_page()
> >>  multifd_queue_page()
> >>  ...
> >>  multifd_queue_page()
> >>  multifd_send()
> >>  sync
> >> 
> >> Today there is a multifd_send() inside multifd_queue_page() and the
> >> amount sent depends on the ram.c code. At the time sync gets called,
> >> there could be data queued but not yet sent. Another client (not ram)
> >> could just produce data in a deterministic manner and match that with
> >> calls to multifd_send().
> >
> > I hope I read it alright.. I suppose you meant we have a chance to do:
> >
> >   ram_send()
> >   vfio_send()
> >   flush()
> >
> > Instead of:
> >
> >   ram_send()
> >   flush()
> >   vfio_send()
> >   flush()
> >
> > Am I right?
> 
> Not really. I'm saying that RAM doesn't always send the data, that's why
> it needs a final flush before sync:
> 
> multifd_queue_page()
>     if (multifd_queue_empty(pages)) {
>         multifd_enqueue(pages, offset);
>     }
>     
>     if (multifd_queue_full(pages)) {
>         multifd_send_pages()   <-- this might not happen
>     }
>     multifd_enqueue()
> 
> multifd_send_sync_main()
>     if (pages->num) { <-- data left unsent
>        multifd_send() <-- flush
>     }
>     <sync routine>

I see now.

At least for now I'd expect VFIO won't need to use sync at all, not
before the kernel ABI changes. It's just that when that comes we may
need to move that final sync() out of the ram code, so it runs before
a migration completes, to cover both.

It's ok then, my R-b holds.

-- 
Peter Xu



end of thread, other threads: [~2024-08-22 19:12 UTC | newest]

Thread overview: 37+ messages
2024-08-01 12:35 [PATCH v3 00/14] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
2024-08-01 12:35 ` [PATCH v3 01/14] migration/multifd: Reduce access to p->pages Fabiano Rosas
2024-08-01 12:35 ` [PATCH v3 02/14] migration/multifd: Inline page_size and page_count Fabiano Rosas
2024-08-21 20:25   ` Peter Xu
2024-08-01 12:35 ` [PATCH v3 03/14] migration/multifd: Remove pages->allocated Fabiano Rosas
2024-08-21 20:32   ` Peter Xu
2024-08-01 12:35 ` [PATCH v3 04/14] migration/multifd: Pass in MultiFDPages_t to file_write_ramblock_iov Fabiano Rosas
2024-08-01 12:35 ` [PATCH v3 05/14] migration/multifd: Introduce MultiFDSendData Fabiano Rosas
2024-08-01 12:35 ` [PATCH v3 06/14] migration/multifd: Make MultiFDPages_t:offset a flexible array member Fabiano Rosas
2024-08-21 20:38   ` Peter Xu
2024-08-01 12:35 ` [PATCH v3 07/14] migration/multifd: Replace p->pages with an union pointer Fabiano Rosas
2024-08-21 21:27   ` Peter Xu
2024-08-01 12:35 ` [PATCH v3 08/14] migration/multifd: Move pages accounting into multifd_send_zero_page_detect() Fabiano Rosas
2024-08-01 12:35 ` [PATCH v3 09/14] migration/multifd: Isolate ram pages packet data Fabiano Rosas
2024-08-21 21:38   ` Peter Xu
2024-08-22 14:13     ` Fabiano Rosas
2024-08-22 14:30       ` Peter Xu
2024-08-22 14:55         ` Fabiano Rosas
2024-08-01 12:35 ` [PATCH v3 10/14] migration/multifd: Don't send ram data during SYNC Fabiano Rosas
2024-08-22 15:50   ` Peter Xu
2024-08-01 12:35 ` [PATCH v3 11/14] migration/multifd: Replace multifd_send_state->pages with client data Fabiano Rosas
2024-08-22 15:59   ` Peter Xu
2024-08-01 12:35 ` [PATCH v3 12/14] migration/multifd: Allow multifd sync without flush Fabiano Rosas
2024-08-22 16:03   ` Peter Xu
2024-08-22 16:10     ` Peter Xu
2024-08-22 17:05       ` Fabiano Rosas
2024-08-22 17:36         ` Peter Xu
2024-08-22 18:07           ` Fabiano Rosas
2024-08-22 19:11             ` Peter Xu
2024-08-01 12:35 ` [PATCH v3 13/14] migration/multifd: Register nocomp ops dynamically Fabiano Rosas
2024-08-22 16:23   ` Peter Xu
2024-08-22 17:20     ` Fabiano Rosas
2024-08-01 12:35 ` [PATCH v3 14/14] migration/multifd: Move ram code into multifd-ram.c Fabiano Rosas
2024-08-22 16:25   ` Peter Xu
2024-08-22 17:21     ` Fabiano Rosas
2024-08-22 17:27       ` Peter Xu
2024-08-01 12:45 ` [PATCH v3 00/14] migration/multifd: Remove multifd_send_state->pages Fabiano Rosas
