qemu-devel.nongnu.org archive mirror
* [PATCH 00/10] docs/migration: Reorganize migration documentations
@ 2024-01-09  6:46 peterx
  2024-01-09  6:46 ` [PATCH 01/10] docs/migration: Create migration/ directory peterx
                   ` (11 more replies)
  0 siblings, 12 replies; 29+ messages in thread
From: peterx @ 2024-01-09  6:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cédric Le Goater, Michael S . Tsirkin, peterx,
	Alex Williamson, Jason Wang, Bandan Das, Prasad Pandit,
	Fabiano Rosas

From: Peter Xu <peterx@redhat.com>

The migration docs have grown larger and larger.  There are plenty of things
we can do here in the future, but to start we'd better reorganize the current
bloated doc files first and properly split them into separate files.
This series kicks that off.

This series mostly does the movement only, so please don't be scared of the
slightly large diff.  I did touch up things here and there, but I haven't
yet started writing much new content.  One thing I did is convert virtio.txt
to rST, but that's trivial and I didn't touch any real content.

I am copying both virtio and vfio people because I'm merging the two
separate files into the new docs/devel/migration/ folder.

Comments welcome.  Thanks,

Peter Xu (10):
  docs/migration: Create migration/ directory
  docs/migration: Create index page
  docs/migration: Convert virtio.txt into rST
  docs/migration: Split "Backwards compatibility" separately
  docs/migration: Split "Debugging" and "Firmware"
  docs/migration: Split "Postcopy"
  docs/migration: Split "dirty limit"
  docs/migration: Organize "Postcopy" page
  docs/migration: Further move vfio to be feature of migration
  docs/migration: Further move virtio to be feature of migration

 docs/devel/index-internals.rst                |    3 +-
 docs/devel/migration.rst                      | 1514 -----------------
 docs/devel/migration/best-practises.rst       |   48 +
 docs/devel/migration/compatibility.rst        |  517 ++++++
 docs/devel/migration/dirty-limit.rst          |   71 +
 docs/devel/migration/features.rst             |   12 +
 docs/devel/migration/index.rst                |   13 +
 docs/devel/migration/main.rst                 |  575 +++++++
 docs/devel/migration/postcopy.rst             |  313 ++++
 .../vfio.rst}                                 |    2 +-
 docs/devel/migration/virtio.rst               |  115 ++
 docs/devel/virtio-migration.txt               |  108 --
 12 files changed, 1666 insertions(+), 1625 deletions(-)
 delete mode 100644 docs/devel/migration.rst
 create mode 100644 docs/devel/migration/best-practises.rst
 create mode 100644 docs/devel/migration/compatibility.rst
 create mode 100644 docs/devel/migration/dirty-limit.rst
 create mode 100644 docs/devel/migration/features.rst
 create mode 100644 docs/devel/migration/index.rst
 create mode 100644 docs/devel/migration/main.rst
 create mode 100644 docs/devel/migration/postcopy.rst
 rename docs/devel/{vfio-migration.rst => migration/vfio.rst} (99%)
 create mode 100644 docs/devel/migration/virtio.rst
 delete mode 100644 docs/devel/virtio-migration.txt

-- 
2.41.0



^ permalink raw reply	[flat|nested] 29+ messages in thread

* [PATCH 01/10] docs/migration: Create migration/ directory
  2024-01-09  6:46 [PATCH 00/10] docs/migration: Reorganize migration documentations peterx
@ 2024-01-09  6:46 ` peterx
  2024-01-09  6:52   ` Cédric Le Goater
  2024-01-09  6:46 ` [PATCH 02/10] docs/migration: Create index page peterx
                   ` (10 subsequent siblings)
  11 siblings, 1 reply; 29+ messages in thread
From: peterx @ 2024-01-09  6:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cédric Le Goater, Michael S . Tsirkin, peterx,
	Alex Williamson, Jason Wang, Bandan Das, Prasad Pandit,
	Fabiano Rosas

From: Peter Xu <peterx@redhat.com>

The migration documentation is growing too large for a single file.  Create
a sub-directory for it in preparation for a split.

We also already have separate vfio/virtio documentation files; move them
all into the directory as well.

Note that the virtio one is not yet converted to rST.  That is a job
for later.

Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Cédric Le Goater <clg@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 docs/devel/index-internals.rst                            | 2 +-
 docs/devel/{migration.rst => migration/main.rst}          | 0
 docs/devel/{vfio-migration.rst => migration/vfio.rst}     | 0
 docs/devel/{virtio-migration.txt => migration/virtio.txt} | 0
 4 files changed, 1 insertion(+), 1 deletion(-)
 rename docs/devel/{migration.rst => migration/main.rst} (100%)
 rename docs/devel/{vfio-migration.rst => migration/vfio.rst} (100%)
 rename docs/devel/{virtio-migration.txt => migration/virtio.txt} (100%)

diff --git a/docs/devel/index-internals.rst b/docs/devel/index-internals.rst
index 3def4a138b..a41d62c1eb 100644
--- a/docs/devel/index-internals.rst
+++ b/docs/devel/index-internals.rst
@@ -11,7 +11,7 @@ Details about QEMU's various subsystems including how to add features to them.
    block-coroutine-wrapper
    clocks
    ebpf_rss
-   migration
+   migration/main
    multi-process
    reset
    s390-cpu-topology
diff --git a/docs/devel/migration.rst b/docs/devel/migration/main.rst
similarity index 100%
rename from docs/devel/migration.rst
rename to docs/devel/migration/main.rst
diff --git a/docs/devel/vfio-migration.rst b/docs/devel/migration/vfio.rst
similarity index 100%
rename from docs/devel/vfio-migration.rst
rename to docs/devel/migration/vfio.rst
diff --git a/docs/devel/virtio-migration.txt b/docs/devel/migration/virtio.txt
similarity index 100%
rename from docs/devel/virtio-migration.txt
rename to docs/devel/migration/virtio.txt
-- 
2.41.0




* [PATCH 02/10] docs/migration: Create index page
  2024-01-09  6:46 [PATCH 00/10] docs/migration: Reorganize migration documentations peterx
  2024-01-09  6:46 ` [PATCH 01/10] docs/migration: Create migration/ directory peterx
@ 2024-01-09  6:46 ` peterx
  2024-01-09  6:53   ` Cédric Le Goater
  2024-01-09  6:46 ` [PATCH 03/10] docs/migration: Convert virtio.txt into rST peterx
                   ` (9 subsequent siblings)
  11 siblings, 1 reply; 29+ messages in thread
From: peterx @ 2024-01-09  6:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cédric Le Goater, Michael S . Tsirkin, peterx,
	Alex Williamson, Jason Wang, Bandan Das, Prasad Pandit,
	Fabiano Rosas

From: Peter Xu <peterx@redhat.com>

Create an index page for the migration module, and move the VFIO migration
document there too, with a trivial touch-up of its title to use lower case.

Since "migration" will now be the top-level title, rename the main doc's
title to "migration framework".

Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Cédric Le Goater <clg@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 docs/devel/index-internals.rst |  3 +--
 docs/devel/migration/index.rst | 11 +++++++++++
 docs/devel/migration/main.rst  |  6 +++---
 docs/devel/migration/vfio.rst  |  2 +-
 4 files changed, 16 insertions(+), 6 deletions(-)
 create mode 100644 docs/devel/migration/index.rst

diff --git a/docs/devel/index-internals.rst b/docs/devel/index-internals.rst
index a41d62c1eb..5636e9cf1d 100644
--- a/docs/devel/index-internals.rst
+++ b/docs/devel/index-internals.rst
@@ -11,13 +11,12 @@ Details about QEMU's various subsystems including how to add features to them.
    block-coroutine-wrapper
    clocks
    ebpf_rss
-   migration/main
+   migration/index
    multi-process
    reset
    s390-cpu-topology
    s390-dasd-ipl
    tracing
-   vfio-migration
    vfio-iommufd
    writing-monitor-commands
    virtio-backends
diff --git a/docs/devel/migration/index.rst b/docs/devel/migration/index.rst
new file mode 100644
index 0000000000..02cfdcc969
--- /dev/null
+++ b/docs/devel/migration/index.rst
@@ -0,0 +1,11 @@
+Migration
+=========
+
+This is the main entry point for QEMU's migration documentation.  It explains how
+QEMU live migration works.
+
+.. toctree::
+   :maxdepth: 2
+
+   main
+   vfio
diff --git a/docs/devel/migration/main.rst b/docs/devel/migration/main.rst
index 95351ba51f..62bf027fb4 100644
--- a/docs/devel/migration/main.rst
+++ b/docs/devel/migration/main.rst
@@ -1,6 +1,6 @@
-=========
-Migration
-=========
+===================
+Migration framework
+===================
 
 QEMU has code to load/save the state of the guest that it is running.
 These are two complementary operations.  Saving the state just does
diff --git a/docs/devel/migration/vfio.rst b/docs/devel/migration/vfio.rst
index 605fe60e96..c49482eab6 100644
--- a/docs/devel/migration/vfio.rst
+++ b/docs/devel/migration/vfio.rst
@@ -1,5 +1,5 @@
 =====================
-VFIO device Migration
+VFIO device migration
 =====================
 
 Migration of virtual machine involves saving the state for each device that
-- 
2.41.0




* [PATCH 03/10] docs/migration: Convert virtio.txt into rST
  2024-01-09  6:46 [PATCH 00/10] docs/migration: Reorganize migration documentations peterx
  2024-01-09  6:46 ` [PATCH 01/10] docs/migration: Create migration/ directory peterx
  2024-01-09  6:46 ` [PATCH 02/10] docs/migration: Create index page peterx
@ 2024-01-09  6:46 ` peterx
  2024-01-09  7:02   ` Cédric Le Goater
  2024-01-09  6:46 ` [PATCH 04/10] docs/migration: Split "Backwards compatibility" separately peterx
                   ` (8 subsequent siblings)
  11 siblings, 1 reply; 29+ messages in thread
From: peterx @ 2024-01-09  6:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cédric Le Goater, Michael S . Tsirkin, peterx,
	Alex Williamson, Jason Wang, Bandan Das, Prasad Pandit,
	Fabiano Rosas

From: Peter Xu <peterx@redhat.com>

Convert the plain old .txt into .rst, and add it to migration/index.rst.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 docs/devel/migration/index.rst  |   1 +
 docs/devel/migration/virtio.rst | 115 ++++++++++++++++++++++++++++++++
 docs/devel/migration/virtio.txt | 108 ------------------------------
 3 files changed, 116 insertions(+), 108 deletions(-)
 create mode 100644 docs/devel/migration/virtio.rst
 delete mode 100644 docs/devel/migration/virtio.txt

diff --git a/docs/devel/migration/index.rst b/docs/devel/migration/index.rst
index 02cfdcc969..2cb701c77c 100644
--- a/docs/devel/migration/index.rst
+++ b/docs/devel/migration/index.rst
@@ -9,3 +9,4 @@ QEMU live migration works.
 
    main
    vfio
+   virtio
diff --git a/docs/devel/migration/virtio.rst b/docs/devel/migration/virtio.rst
new file mode 100644
index 0000000000..611a18b821
--- /dev/null
+++ b/docs/devel/migration/virtio.rst
@@ -0,0 +1,115 @@
+=======================
+Virtio device migration
+=======================
+
+Copyright 2015 IBM Corp.
+
+This work is licensed under the terms of the GNU GPL, version 2 or later.  See
+the COPYING file in the top-level directory.
+
+Saving and restoring the state of virtio devices is a bit of a twisty maze,
+for several reasons:
+
+- state is distributed between several parts:
+
+  - virtio core, for common fields like features, number of queues, ...
+
+  - virtio transport (pci, ccw, ...), for the different proxy devices and
+    transport specific state (msix vectors, indicators, ...)
+
+  - virtio device (net, blk, ...), for the different device types and their
+    state (mac address, request queue, ...)
+
+- most fields are saved via the stream interface; subsequently, subsections
+  have been added to make cross-version migration possible
+
+This file attempts to document the current procedure and point out some
+caveats.
+
+Save state procedure
+====================
+
+::
+
+  virtio core               virtio transport          virtio device
+  -----------               ----------------          -------------
+
+                                                      save() function registered
+                                                      via VMState wrapper on
+                                                      device class
+  virtio_save()                                       <----------
+               ------>      save_config()
+                            - save proxy device
+                            - save transport-specific
+                              device fields
+  - save common device
+    fields
+  - save common virtqueue
+    fields
+               ------>      save_queue()
+                            - save transport-specific
+                              virtqueue fields
+               ------>                               save_device()
+                                                     - save device-specific
+                                                       fields
+  - save subsections
+    - device endianness,
+      if changed from
+      default endianness
+    - 64 bit features, if
+      any high feature bit
+      is set
+    - virtio-1 virtqueue
+      fields, if VERSION_1
+      is set
+
+Load state procedure
+====================
+
+::
+
+  virtio core               virtio transport          virtio device
+  -----------               ----------------          -------------
+
+                                                      load() function registered
+                                                      via VMState wrapper on
+                                                      device class
+  virtio_load()                                       <----------
+               ------>      load_config()
+                            - load proxy device
+                            - load transport-specific
+                              device fields
+  - load common device
+    fields
+  - load common virtqueue
+    fields
+               ------>      load_queue()
+                            - load transport-specific
+                              virtqueue fields
+  - notify guest
+               ------>                               load_device()
+                                                     - load device-specific
+                                                       fields
+  - load subsections
+    - device endianness
+    - 64 bit features
+    - virtio-1 virtqueue
+      fields
+  - sanitize endianness
+  - sanitize features
+  - virtqueue index sanity
+    check
+                                                     - feature-dependent setup
+
+Implications of this setup
+==========================
+
+Devices need to be careful in their state processing during load: the
+load_device() procedure is invoked by the core before subsections have
+been loaded.  Any code that depends on information transmitted in subsections
+therefore has to be invoked in the device's load() function _after_
+virtio_load() has returned (e.g. code depending on features).
+
+Any extension of the state being migrated should be done in subsections
+added to the core, for compatibility reasons.  If transport- or device-specific
+state is added, the core needs to invoke a callback from the new subsection.
diff --git a/docs/devel/migration/virtio.txt b/docs/devel/migration/virtio.txt
deleted file mode 100644
index 98a6b0ffb5..0000000000
--- a/docs/devel/migration/virtio.txt
+++ /dev/null
@@ -1,108 +0,0 @@
-Virtio devices and migration
-============================
-
-Copyright 2015 IBM Corp.
-
-This work is licensed under the terms of the GNU GPL, version 2 or later.  See
-the COPYING file in the top-level directory.
-
-Saving and restoring the state of virtio devices is a bit of a twisty maze,
-for several reasons:
-- state is distributed between several parts:
-  - virtio core, for common fields like features, number of queues, ...
-  - virtio transport (pci, ccw, ...), for the different proxy devices and
-    transport specific state (msix vectors, indicators, ...)
-  - virtio device (net, blk, ...), for the different device types and their
-    state (mac address, request queue, ...)
-- most fields are saved via the stream interface; subsequently, subsections
-  have been added to make cross-version migration possible
-
-This file attempts to document the current procedure and point out some
-caveats.
-
-
-Save state procedure
-====================
-
-virtio core               virtio transport          virtio device
------------               ----------------          -------------
-
-                                                    save() function registered
-                                                    via VMState wrapper on
-                                                    device class
-virtio_save()                                       <----------
-             ------>      save_config()
-                          - save proxy device
-                          - save transport-specific
-                            device fields
-- save common device
-  fields
-- save common virtqueue
-  fields
-             ------>      save_queue()
-                          - save transport-specific
-                            virtqueue fields
-             ------>                               save_device()
-                                                   - save device-specific
-                                                     fields
-- save subsections
-  - device endianness,
-    if changed from
-    default endianness
-  - 64 bit features, if
-    any high feature bit
-    is set
-  - virtio-1 virtqueue
-    fields, if VERSION_1
-    is set
-
-
-Load state procedure
-====================
-
-virtio core               virtio transport          virtio device
------------               ----------------          -------------
-
-                                                    load() function registered
-                                                    via VMState wrapper on
-                                                    device class
-virtio_load()                                       <----------
-             ------>      load_config()
-                          - load proxy device
-                          - load transport-specific
-                            device fields
-- load common device
-  fields
-- load common virtqueue
-  fields
-             ------>      load_queue()
-                          - load transport-specific
-                            virtqueue fields
-- notify guest
-             ------>                               load_device()
-                                                   - load device-specific
-                                                     fields
-- load subsections
-  - device endianness
-  - 64 bit features
-  - virtio-1 virtqueue
-    fields
-- sanitize endianness
-- sanitize features
-- virtqueue index sanity
-  check
-                                                   - feature-dependent setup
-
-
-Implications of this setup
-==========================
-
-Devices need to be careful in their state processing during load: The
-load_device() procedure is invoked by the core before subsections have
-been loaded. Any code that depends on information transmitted in subsections
-therefore has to be invoked in the device's load() function _after_
-virtio_load() returned (like e.g. code depending on features).
-
-Any extension of the state being migrated should be done in subsections
-added to the core for compatibility reasons. If transport or device specific
-state is added, core needs to invoke a callback from the new subsection.
-- 
2.41.0




* [PATCH 04/10] docs/migration: Split "Backwards compatibility" separately
  2024-01-09  6:46 [PATCH 00/10] docs/migration: Reorganize migration documentations peterx
                   ` (2 preceding siblings ...)
  2024-01-09  6:46 ` [PATCH 03/10] docs/migration: Convert virtio.txt into rST peterx
@ 2024-01-09  6:46 ` peterx
  2024-01-09  7:03   ` Cédric Le Goater
  2024-01-09  6:46 ` [PATCH 05/10] docs/migration: Split "Debugging" and "Firmware" peterx
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 29+ messages in thread
From: peterx @ 2024-01-09  6:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cédric Le Goater, Michael S . Tsirkin, peterx,
	Alex Williamson, Jason Wang, Bandan Das, Prasad Pandit,
	Fabiano Rosas

From: Peter Xu <peterx@redhat.com>

Split the section out of main.rst into a separate file, and reference it
in index.rst.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 docs/devel/migration/compatibility.rst | 517 ++++++++++++++++++++++++
 docs/devel/migration/index.rst         |   1 +
 docs/devel/migration/main.rst          | 519 -------------------------
 3 files changed, 518 insertions(+), 519 deletions(-)
 create mode 100644 docs/devel/migration/compatibility.rst

diff --git a/docs/devel/migration/compatibility.rst b/docs/devel/migration/compatibility.rst
new file mode 100644
index 0000000000..5a5417ef06
--- /dev/null
+++ b/docs/devel/migration/compatibility.rst
@@ -0,0 +1,517 @@
+Backwards compatibility
+=======================
+
+How backwards compatibility works
+---------------------------------
+
+When we do migration, we have two QEMU processes: the source and the
+target.  There are two cases: either they are the same version, or they
+are different versions.  The easy case is when they are the same version.
+The difficult one is when they are different versions.
+
+There are two things that are different, but they have very similar
+names and sometimes get confused:
+
+- QEMU version
+- machine type version
+
+Let's start with a practical example, we start with:
+
+- qemu-system-x86_64 (v5.2), from now on qemu-5.2.
+- qemu-system-x86_64 (v5.1), from now on qemu-5.1.
+
+Related to this are the "latest" machine types defined on each of
+them:
+
+- pc-q35-5.2 (newer one in qemu-5.2) from now on pc-5.2
+- pc-q35-5.1 (newer one in qemu-5.1) from now on pc-5.1
+
+First of all, migration is only supposed to work if you use the same
+machine type on both the source and the destination.  The QEMU hardware
+configuration also needs to be the same on source and destination.
+Most aspects of the backend configuration can be changed at will,
+except for a few cases where the backend features influence frontend
+device feature exposure.  But that is not relevant for this section.
+
+I am going to list the combinations that we can have.  Let's
+start with the trivial ones, where QEMU is the same on source and
+destination:
+
+1 - qemu-5.2 -M pc-5.2  -> migrates to -> qemu-5.2 -M pc-5.2
+
+  This is the latest QEMU with the latest machine type.
+  This has to work, and if it doesn't work it is a bug.
+
+2 - qemu-5.1 -M pc-5.1  -> migrates to -> qemu-5.1 -M pc-5.1
+
+  Exactly the same case as the previous one, but for 5.1.
+  Nothing to see here either.
+
+These are the easiest ones; we will not talk about them further in this
+section.
+
+Now we start with the more interesting cases.  Consider the case where
+we have the same QEMU version on both sides (qemu-5.2), but instead of
+using the latest machine type for that version (pc-5.2) we are using one
+from an older QEMU version, in this case pc-5.1.
+
+3 - qemu-5.2 -M pc-5.1  -> migrates to -> qemu-5.2 -M pc-5.1
+
+  It needs to use the definition of pc-5.1 and the devices as they
+  were configured on 5.1, but this should be easy in the sense that
+  both sides are the same QEMU and both sides have exactly the same
+  idea of what the pc-5.1 machine is.
+
+4 - qemu-5.1 -M pc-5.2  -> migrates to -> qemu-5.1 -M pc-5.2
+
+  This combination is not possible, as qemu-5.1 doesn't understand the
+  pc-5.2 machine type.  So nothing to worry about here.
+
+Now come the interesting ones, where the two QEMU processes are of
+different versions.  Notice also that the machine type needs to be pc-5.1,
+because we have the limitation that qemu-5.1 doesn't know about pc-5.2.
+So the possible cases are:
+
+5 - qemu-5.2 -M pc-5.1  -> migrates to -> qemu-5.1 -M pc-5.1
+
+  This migration is known as newer to older.  When we are developing
+  5.2 we need to take care not to break migration to qemu-5.1.  Notice
+  that we can't make updates to qemu-5.1 to understand whatever qemu-5.2
+  decides to change, so it is on the qemu-5.2 side to make the relevant
+  changes.
+
+6 - qemu-5.1 -M pc-5.1  -> migrates to -> qemu-5.2 -M pc-5.1
+
+  This migration is known as older to newer.  We need to make sure
+  that we are able to receive migrations from qemu-5.1.  The problem is
+  similar to the previous one.
+
+If qemu-5.1 and qemu-5.2 were the same, there would not be any
+compatibility problems.  But the reason that we create qemu-5.2 is to
+get new features, devices, defaults, etc.
+
+If we get a device that has a new feature, or change a default value,
+we have a problem when we try to migrate between different QEMU
+versions.
+
+So we need a way to tell qemu-5.2 that when we are using machine type
+pc-5.1, it needs to **not** use the feature, to be able to migrate to
+real qemu-5.1.
+
+The equivalent applies when migrating from qemu-5.1 to qemu-5.2:
+qemu-5.2 has to expect that it is not going to get data for the new
+feature, because qemu-5.1 doesn't know about it.
+
+How do we tell QEMU about these device feature changes?  In
+hw/core/machine.c:hw_compat_X_Y arrays.
+
+If we change a default value, we need to put the old value back in
+that array.  And the device, during initialization, needs to look at
+that array to see what value it should use for that feature.  What we
+put in that array is the value of a property.
+
+To create a property for a device, we need to use one of the
+DEFINE_PROP_*() macros. See include/hw/qdev-properties.h to find the
+macros that exist.  With it, we set the default value for that
+property, and that is what it is going to get in the latest released
+version.  But if we want a different value for a previous version, we
+can change that in the hw_compat_X_Y arrays.
+
+hw_compat_X_Y is an array of entries that have the format:
+
+- name_device
+- name_property
+- value
+
+Let's see a practical example.
+
+In qemu-5.2 virtio-blk-device got multi queue support.  This is a
+change that is not backward compatible.  In qemu-5.1 it has one
+queue. In qemu-5.2 it has the same number of queues as the number of
+cpus in the system.
+
+When we are doing migration, if we migrate from a device that has 4
+queues to a device that has only one queue, we don't know where to
+put the extra information for the other 3 queues, and migration fails.
+
+There is a similar problem when we migrate from qemu-5.1, which has only
+one queue, to qemu-5.2: we only sent information for one queue, but the
+destination has 4, so there are 3 queues that are not properly
+initialized and anything can happen.
+
+So, how can we address this problem?  Easy: just convince qemu-5.2
+that when it is running pc-5.1, it needs to set the number of queues
+for virtio-blk devices to 1.
+
+That way we fix cases 5 and 6.
+
+5 - qemu-5.2 -M pc-5.1  -> migrates to -> qemu-5.1 -M pc-5.1
+
+    qemu-5.2 -M pc-5.1 sets number of queues to be 1.
+    qemu-5.1 -M pc-5.1 expects number of queues to be 1.
+
+    correct.  migration works.
+
+6 - qemu-5.1 -M pc-5.1  -> migrates to -> qemu-5.2 -M pc-5.1
+
+    qemu-5.1 -M pc-5.1 sets number of queues to be 1.
+    qemu-5.2 -M pc-5.1 expects number of queues to be 1.
+
+    correct.  migration works.
+
+And now the other interesting case, case 3.  In this case we have:
+
+3 - qemu-5.2 -M pc-5.1  -> migrates to -> qemu-5.2 -M pc-5.1
+
+    Here we have the same QEMU on both sides.  So it doesn't matter a
+    lot if we have set the number of queues to 1 or not, because
+    they are the same.
+
+    WRONG!
+
+    Think about what happens if we do one of these double migrations:
+
+    A -> migrates -> B -> migrates -> C
+
+    where:
+
+    A: qemu-5.1 -M pc-5.1
+    B: qemu-5.2 -M pc-5.1
+    C: qemu-5.2 -M pc-5.1
+
+    migration A -> B is case 6, so number of queues needs to be 1.
+
+    migration B -> C is case 3, so we don't care.  But actually we
+    care, because we haven't started the guest in qemu-5.2; it came
+    migrated from qemu-5.1.  So, to be on the safe side, we need to
+    always use a number of queues of 1 when we are using pc-5.1.
+
+Now, how was this done in reality?  The following commit shows how it
+was done::
+
+  commit 9445e1e15e66c19e42bea942ba810db28052cd05
+  Author: Stefan Hajnoczi <stefanha@redhat.com>
+  Date:   Tue Aug 18 15:33:47 2020 +0100
+
+  virtio-blk-pci: default num_queues to -smp N
+
+The relevant parts for migration are::
+
+    @@ -1281,7 +1284,8 @@ static Property virtio_blk_properties[] = {
+     #endif
+         DEFINE_PROP_BIT("request-merging", VirtIOBlock, conf.request_merging, 0,
+                         true),
+    -    DEFINE_PROP_UINT16("num-queues", VirtIOBlock, conf.num_queues, 1),
+    +    DEFINE_PROP_UINT16("num-queues", VirtIOBlock, conf.num_queues,
+    +                       VIRTIO_BLK_AUTO_NUM_QUEUES),
+         DEFINE_PROP_UINT16("queue-size", VirtIOBlock, conf.queue_size, 256),
+
+It changes the default value of num_queues.  But it fixes it up for old
+machine types to have the right value::
+
+    @@ -31,6 +31,7 @@
+     GlobalProperty hw_compat_5_1[] = {
+         ...
+    +    { "virtio-blk-device", "num-queues", "1"},
+         ...
+     };
+
+A device with different features on both sides
+----------------------------------------------
+
+Let's assume that we are using the same QEMU binary on both sides,
+just to make things easier.  But we have a device that has
+different features on the two sides of the migration.  That can be
+because the devices are different, or because the kernel drivers of the
+two devices have different features, or whatever.
+
+How can we get this to work with migration?  The way to do it is
+"theoretically" easy: you take the features that the device has on the
+source of the migration and the features that the device has on the
+target of the migration, compute the intersection of the features of
+both sides, and that is the configuration with which you should launch
+QEMU.
+
+Notice that this is not completely related to QEMU.  The most
+important thing here is that this should be handled by the managing
+application that launches QEMU.  If QEMU is configured correctly, the
+migration will succeed.
+
+That said, actually doing it is complicated.  Almost all devices are
+bad at being launched with only some features enabled.
+With one big exception: cpus.
+
+You can read the documentation for QEMU x86 cpu models here:
+
+https://qemu-project.gitlab.io/qemu/system/qemu-cpu-models.html
+
+Note that when they talk about migration, they recommend choosing the
+newest cpu model that is supported on all the hosts involved.
+
+Let's say that we have:
+
+Host A:
+
+Device X has the feature Y
+
+Host B:
+
+Device X does not have the feature Y
+
+If we try to migrate without any care from host A to host B, it will
+fail because when migration tries to load the feature Y on
+destination, it will find that the hardware is not there.
+
+Doing this would be the equivalent of doing the following with cpus:
+
+Host A:
+
+$ qemu-system-x86_64 -cpu host
+
+Host B:
+
+$ qemu-system-x86_64 -cpu host
+
+When the two hosts have different cpu features this is guaranteed to
+fail.  Especially if host B has fewer features than host A.  If host A
+has fewer features than host B, sometimes it works.  The important word
+in that last sentence is "sometimes".
+
+So, forgetting about cpu models and continuing with the -cpu host
+example, let's say that the difference between the cpus is that hosts A
+and B have the following features:
+
+Features:   'pcid'  'stibp' 'taa-no'
+Host A:        X       X
+Host B:                        X
+
+If we want to migrate between them, the way to configure both QEMU
+cpus will be:
+
+Host A:
+
+$ qemu-system-x86_64 -cpu host,pcid=off,stibp=off
+
+Host B:
+
+$ qemu-system-x86_64 -cpu host,taa-no=off
+
+And you would be able to migrate between them.  It is the responsibility
+of the management application or of the user to make sure that the
+configuration is correct.  QEMU doesn't know how to look at these kinds
+of features in general.
+
+Notice that we don't recommend using -cpu host for migration.  It is
+used here only because it makes the example simpler.
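The procedure described above (compute the intersection of the two hosts' feature sets and turn everything else off) can be sketched in a few lines.  The helper below is illustrative, not part of QEMU or any management application; a real tool would query each host for its features instead of hard-coding them:

```python
# Sketch: derive per-host -cpu arguments so that both hosts expose only
# the intersection of their cpu feature sets.  Feature names come from
# the example above.

def cpu_args(host_features, common):
    """Build a -cpu value that disables features beyond the common set."""
    extra = sorted(host_features - common)
    if not extra:
        return "host"
    return "host," + ",".join(f"{feat}=off" for feat in extra)

host_a = {"pcid", "stibp"}
host_b = {"taa-no"}
common = host_a & host_b  # empty: the hosts share none of these features

print(cpu_args(host_a, common))  # host,pcid=off,stibp=off
print(cpu_args(host_b, common))  # host,taa-no=off
```

In practice the management application would also have to verify that the guest actually works with the reduced feature set.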
+
+Other devices have worse control over individual features.  If they
+want to be able to migrate between hosts that show different features,
+the device needs a way to configure which ones it is going to use.
+
+In this section we have assumed that we are using the same QEMU
+binary on both sides of the migration.  If we use different QEMU
+versions, then we also need to take into account all the other
+differences and the examples become even more complicated.
+
+How to mitigate when we have a backward compatibility error
+-----------------------------------------------------------
+
+We continuously break migration for old machine types during
+development.  But as soon as we find that there is a problem, we fix
+it.  The real problem is what happens when we detect, after we have
+done a release, that something has gone wrong.
+
+Let's see how this worked with one example.
+
+After the release of qemu-8.0 we found a problem when doing migration
+of the machine type pc-7.2.
+
+- $ qemu-7.2 -M pc-7.2  ->  qemu-7.2 -M pc-7.2
+
+  This migration works
+
+- $ qemu-8.0 -M pc-7.2  ->  qemu-8.0 -M pc-7.2
+
+  This migration works
+
+- $ qemu-8.0 -M pc-7.2  ->  qemu-7.2 -M pc-7.2
+
+  This migration fails
+
+- $ qemu-7.2 -M pc-7.2  ->  qemu-8.0 -M pc-7.2
+
+  This migration fails
+
+So clearly something fails when migrating between qemu-7.2 and
+qemu-8.0 with machine type pc-7.2.  The error messages and a git
+bisect pointed to this commit.
+
+In qemu-8.0 we got this commit::
+
+    commit 010746ae1db7f52700cb2e2c46eb94f299cfa0d2
+    Author: Jonathan Cameron <Jonathan.Cameron@huawei.com>
+    Date:   Thu Mar 2 13:37:02 2023 +0000
+
+    hw/pci/aer: Implement PCI_ERR_UNCOR_MASK register
+
+
+The relevant bits of the commit for our example are these::
+
+    --- a/hw/pci/pcie_aer.c
+    +++ b/hw/pci/pcie_aer.c
+    @@ -112,6 +112,10 @@ int pcie_aer_init(PCIDevice *dev,
+
+         pci_set_long(dev->w1cmask + offset + PCI_ERR_UNCOR_STATUS,
+                      PCI_ERR_UNC_SUPPORTED);
+    +    pci_set_long(dev->config + offset + PCI_ERR_UNCOR_MASK,
+    +                 PCI_ERR_UNC_MASK_DEFAULT);
+    +    pci_set_long(dev->wmask + offset + PCI_ERR_UNCOR_MASK,
+    +                 PCI_ERR_UNC_SUPPORTED);
+
+         pci_set_long(dev->config + offset + PCI_ERR_UNCOR_SEVER,
+                      PCI_ERR_UNC_SEVERITY_DEFAULT);
+
+The patch changes how we configure PCI space for AER.  But QEMU fails
+when the PCI space configuration is different between source and
+destination.
+
+The following commit shows how this got fixed::
+
+    commit 5ed3dabe57dd9f4c007404345e5f5bf0e347317f
+    Author: Leonardo Bras <leobras@redhat.com>
+    Date:   Tue May 2 21:27:02 2023 -0300
+
+    hw/pci: Disable PCI_ERR_UNCOR_MASK register for machine type < 8.0
+
+    [...]
+
+The relevant parts of the fix in QEMU are as follows:
+
+First, we create a new property for the device to be able to configure
+the old behaviour or the new behaviour::
+
+    diff --git a/hw/pci/pci.c b/hw/pci/pci.c
+    index 8a87ccc8b0..5153ad63d6 100644
+    --- a/hw/pci/pci.c
+    +++ b/hw/pci/pci.c
+    @@ -79,6 +79,8 @@ static Property pci_props[] = {
+         DEFINE_PROP_STRING("failover_pair_id", PCIDevice,
+                            failover_pair_id),
+         DEFINE_PROP_UINT32("acpi-index",  PCIDevice, acpi_index, 0),
+    +    DEFINE_PROP_BIT("x-pcie-err-unc-mask", PCIDevice, cap_present,
+    +                    QEMU_PCIE_ERR_UNC_MASK_BITNR, true),
+         DEFINE_PROP_END_OF_LIST()
+     };
+
+Notice that we enable the feature for new machine types.
+
+Now we see how the fix is done.  This is going to depend on what kind
+of breakage happens, but in this case it is quite simple::
+
+    diff --git a/hw/pci/pcie_aer.c b/hw/pci/pcie_aer.c
+    index 103667c368..374d593ead 100644
+    --- a/hw/pci/pcie_aer.c
+    +++ b/hw/pci/pcie_aer.c
+    @@ -112,10 +112,13 @@ int pcie_aer_init(PCIDevice *dev, uint8_t cap_ver,
+    uint16_t offset,
+
+         pci_set_long(dev->w1cmask + offset + PCI_ERR_UNCOR_STATUS,
+                      PCI_ERR_UNC_SUPPORTED);
+    -    pci_set_long(dev->config + offset + PCI_ERR_UNCOR_MASK,
+    -                 PCI_ERR_UNC_MASK_DEFAULT);
+    -    pci_set_long(dev->wmask + offset + PCI_ERR_UNCOR_MASK,
+    -                 PCI_ERR_UNC_SUPPORTED);
+    +
+    +    if (dev->cap_present & QEMU_PCIE_ERR_UNC_MASK) {
+    +        pci_set_long(dev->config + offset + PCI_ERR_UNCOR_MASK,
+    +                     PCI_ERR_UNC_MASK_DEFAULT);
+    +        pci_set_long(dev->wmask + offset + PCI_ERR_UNCOR_MASK,
+    +                     PCI_ERR_UNC_SUPPORTED);
+    +    }
+
+         pci_set_long(dev->config + offset + PCI_ERR_UNCOR_SEVER,
+                      PCI_ERR_UNC_SEVERITY_DEFAULT);
+
+I.e. if the property bit is set, we configure the register as we did
+for qemu-8.0.  If the property bit is not set, we configure it as it
+was in qemu-7.2.
+
+All that is missing now is to disable the feature for old machine
+types::
+
+    diff --git a/hw/core/machine.c b/hw/core/machine.c
+    index 47a34841a5..07f763eb2e 100644
+    --- a/hw/core/machine.c
+    +++ b/hw/core/machine.c
+    @@ -48,6 +48,7 @@ GlobalProperty hw_compat_7_2[] = {
+         { "e1000e", "migrate-timadj", "off" },
+         { "virtio-mem", "x-early-migration", "false" },
+         { "migration", "x-preempt-pre-7-2", "true" },
+    +    { TYPE_PCI_DEVICE, "x-pcie-err-unc-mask", "off" },
+     };
+     const size_t hw_compat_7_2_len = G_N_ELEMENTS(hw_compat_7_2);
+
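The compat-property machinery used above can be modelled as an override table: every property defaults to its newest value, and the ``hw_compat_*`` arrays that apply to an old machine type override those defaults.  A rough sketch follows; the data structures are illustrative only, the real machinery lives in ``hw/core/machine.c`` (``GlobalProperty``):

```python
# Sketch of how hw_compat_* overrides a device property default.

DEFAULTS = {("PCIDevice", "x-pcie-err-unc-mask"): "on"}  # newest default

HW_COMPAT_7_2 = [  # applied when the machine type is pc-7.2 or older
    ("PCIDevice", "x-pcie-err-unc-mask", "off"),
]

def effective_value(driver, prop, compat_lists):
    """Resolve a property: start from the default, apply compat overrides."""
    value = DEFAULTS[(driver, prop)]
    for compat in compat_lists:  # older machine types stack more lists
        for d, p, v in compat:
            if (d, p) == (driver, prop):
                value = v
    return value

# pc-7.2 machine: the compat entry turns the feature off
print(effective_value("PCIDevice", "x-pcie-err-unc-mask", [HW_COMPAT_7_2]))
# pc-8.0 machine: no compat list applies, the new default wins
print(effective_value("PCIDevice", "x-pcie-err-unc-mask", []))
```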
+And now, when qemu-8.0.1 is released with this fix, all combinations
+are going to work as expected.
+
+- $ qemu-7.2 -M pc-7.2  ->  qemu-7.2 -M pc-7.2 (works)
+- $ qemu-8.0.1 -M pc-7.2  ->  qemu-8.0.1 -M pc-7.2 (works)
+- $ qemu-8.0.1 -M pc-7.2  ->  qemu-7.2 -M pc-7.2 (works)
+- $ qemu-7.2 -M pc-7.2  ->  qemu-8.0.1 -M pc-7.2 (works)
+
+So normality has been restored and everything is ok, no?
+
+Not really, now our matrix is much bigger.  We started with the easy
+cases, migration from the same version to the same version always
+works:
+
+- $ qemu-7.2 -M pc-7.2  ->  qemu-7.2 -M pc-7.2
+- $ qemu-8.0 -M pc-7.2  ->  qemu-8.0 -M pc-7.2
+- $ qemu-8.0.1 -M pc-7.2  ->  qemu-8.0.1 -M pc-7.2
+
+Now the interesting ones, when the QEMU versions are different.  The
+first pair of combinations fails and there is nothing we can do: both
+versions are already released and we can't change them.
+
+- $ qemu-7.2 -M pc-7.2  ->  qemu-8.0 -M pc-7.2
+- $ qemu-8.0 -M pc-7.2  ->  qemu-7.2 -M pc-7.2
+
+These two are the ones that work.  The whole point of making the
+change in the qemu-8.0.1 release was to fix this issue:
+
+- $ qemu-7.2 -M pc-7.2  ->  qemu-8.0.1 -M pc-7.2
+- $ qemu-8.0.1 -M pc-7.2  ->  qemu-7.2 -M pc-7.2
+
+But now we find that qemu-8.0 can migrate to neither qemu-7.2 nor
+qemu-8.0.1.
+
+- $ qemu-8.0 -M pc-7.2  ->  qemu-8.0.1 -M pc-7.2
+- $ qemu-8.0.1 -M pc-7.2  ->  qemu-8.0 -M pc-7.2
+
+So, if we start a pc-7.2 machine in qemu-8.0 we can't migrate it to
+anything except qemu-8.0.
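The whole matrix can be captured in a tiny model: what matters for this particular bug is whether a given binary writes the PCI_ERR_UNCOR_MASK registers for pc-7.2, and migration only loads when both sides agree.  This is a simplification for this one regression, not a general compatibility checker:

```python
# Whether each release configures PCI_ERR_UNCOR_MASK for pc-7.2:
# qemu-8.0 does (the regression); qemu-7.2 never did; qemu-8.0.1
# turns it off again via hw_compat_7_2.
WRITES_AER_MASK = {"qemu-7.2": False, "qemu-8.0": True, "qemu-8.0.1": False}

def migration_works(src, dst):
    # The stream loads only if both sides agree on the PCI config space.
    return WRITES_AER_MASK[src] == WRITES_AER_MASK[dst]

for src in WRITES_AER_MASK:
    for dst in WRITES_AER_MASK:
        status = "works" if migration_works(src, dst) else "fails"
        print(f"{src} -M pc-7.2 -> {dst} -M pc-7.2: {status}")
```

Running it reproduces the lists above: qemu-8.0 only interoperates with itself, while qemu-7.2 and qemu-8.0.1 interoperate in both directions.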
+
+Can we do better?
+
+Yes.  If we know that we are going to do this migration:
+
+- $ qemu-8.0 -M pc-7.2  ->  qemu-8.0.1 -M pc-7.2
+
+We can launch the appropriate devices with::
+
+  --device...,x-pcie-err-unc-mask=on
+
+And now we can receive a migration from 8.0.  And from then on, we
+can migrate to newer QEMU versions if we remember to enable that
+property for pc-7.2.  Notice that we really need to remember; it is
+not enough to know that the source of the migration is qemu-8.0.
+Think of this example::
+
+  $ qemu-8.0 -M pc-7.2 -> qemu-8.0.1 -M pc-7.2 -> qemu-8.2 -M pc-7.2
+
+In the second migration, the source is not qemu-8.0, but we still
+have that "problem" and need that property enabled.  Notice that we
+need to keep this mark/property until the machine is restarted.  A
+normal reboot is not enough (it doesn't reload QEMU); the machine
+needs to be powered off and started again on a fixed QEMU.  From then
+on we can use the proper machine configuration.
diff --git a/docs/devel/migration/index.rst b/docs/devel/migration/index.rst
index 2cb701c77c..7fc02b9520 100644
--- a/docs/devel/migration/index.rst
+++ b/docs/devel/migration/index.rst
@@ -8,5 +8,6 @@ QEMU live migration works.
    :maxdepth: 2
 
    main
+   compatibility
    vfio
    virtio
diff --git a/docs/devel/migration/main.rst b/docs/devel/migration/main.rst
index 62bf027fb4..b3e31bb52f 100644
--- a/docs/devel/migration/main.rst
+++ b/docs/devel/migration/main.rst
@@ -993,522 +993,3 @@ In some cases it may be best to tie specific firmware versions to specific
 versioned machine types to cut down on the combinations that will need
 support.  This is also useful when newer versions of firmware outgrow
 the padding.
-
-
-Backwards compatibility
-=======================
-
-How backwards compatibility works
----------------------------------
-
-When we do migration, we have two QEMU processes: the source and the
-target.  There are two cases, they are the same version or they are
-different versions.  The easy case is when they are the same version.
-The difficult one is when they are different versions.
-
-There are two things that are different, but they have very similar
-names and sometimes get confused:
-
-- QEMU version
-- machine type version
-
-Let's start with a practical example, we start with:
-
-- qemu-system-x86_64 (v5.2), from now on qemu-5.2.
-- qemu-system-x86_64 (v5.1), from now on qemu-5.1.
-
-Related to this are the "latest" machine types defined on each of
-them:
-
-- pc-q35-5.2 (newer one in qemu-5.2) from now on pc-5.2
-- pc-q35-5.1 (newer one in qemu-5.1) from now on pc-5.1
-
-First of all, migration is only supposed to work if you use the same
-machine type in both source and destination. The QEMU hardware
-configuration needs to be the same also on source and destination.
-Most aspects of the backend configuration can be changed at will,
-except for a few cases where the backend features influence frontend
-device feature exposure.  But that is not relevant for this section.
-
-I am going to list the number of combinations that we can have.  Let's
-start with the trivial ones, QEMU is the same on source and
-destination:
-
-1 - qemu-5.2 -M pc-5.2  -> migrates to -> qemu-5.2 -M pc-5.2
-
-  This is the latest QEMU with the latest machine type.
-  This have to work, and if it doesn't work it is a bug.
-
-2 - qemu-5.1 -M pc-5.1  -> migrates to -> qemu-5.1 -M pc-5.1
-
-  Exactly the same case than the previous one, but for 5.1.
-  Nothing to see here either.
-
-This are the easiest ones, we will not talk more about them in this
-section.
-
-Now we start with the more interesting cases.  Consider the case where
-we have the same QEMU version in both sides (qemu-5.2) but we are using
-the latest machine type for that version (pc-5.2) but one of an older
-QEMU version, in this case pc-5.1.
-
-3 - qemu-5.2 -M pc-5.1  -> migrates to -> qemu-5.2 -M pc-5.1
-
-  It needs to use the definition of pc-5.1 and the devices as they
-  were configured on 5.1, but this should be easy in the sense that
-  both sides are the same QEMU and both sides have exactly the same
-  idea of what the pc-5.1 machine is.
-
-4 - qemu-5.1 -M pc-5.2  -> migrates to -> qemu-5.1 -M pc-5.2
-
-  This combination is not possible as the qemu-5.1 doesn't understand
-  pc-5.2 machine type.  So nothing to worry here.
-
-Now it comes the interesting ones, when both QEMU processes are
-different.  Notice also that the machine type needs to be pc-5.1,
-because we have the limitation than qemu-5.1 doesn't know pc-5.2.  So
-the possible cases are:
-
-5 - qemu-5.2 -M pc-5.1  -> migrates to -> qemu-5.1 -M pc-5.1
-
-  This migration is known as newer to older.  We need to make sure
-  when we are developing 5.2 we need to take care about not to break
-  migration to qemu-5.1.  Notice that we can't make updates to
-  qemu-5.1 to understand whatever qemu-5.2 decides to change, so it is
-  in qemu-5.2 side to make the relevant changes.
-
-6 - qemu-5.1 -M pc-5.1  -> migrates to -> qemu-5.2 -M pc-5.1
-
-  This migration is known as older to newer.  We need to make sure
-  than we are able to receive migrations from qemu-5.1. The problem is
-  similar to the previous one.
-
-If qemu-5.1 and qemu-5.2 were the same, there will not be any
-compatibility problems.  But the reason that we create qemu-5.2 is to
-get new features, devices, defaults, etc.
-
-If we get a device that has a new feature, or change a default value,
-we have a problem when we try to migrate between different QEMU
-versions.
-
-So we need a way to tell qemu-5.2 that when we are using machine type
-pc-5.1, it needs to **not** use the feature, to be able to migrate to
-real qemu-5.1.
-
-And the equivalent part when migrating from qemu-5.1 to qemu-5.2.
-qemu-5.2 has to expect that it is not going to get data for the new
-feature, because qemu-5.1 doesn't know about it.
-
-How do we tell QEMU about these device feature changes?  In
-hw/core/machine.c:hw_compat_X_Y arrays.
-
-If we change a default value, we need to put back the old value on
-that array.  And the device, during initialization needs to look at
-that array to see what value it needs to get for that feature.  And
-what are we going to put in that array, the value of a property.
-
-To create a property for a device, we need to use one of the
-DEFINE_PROP_*() macros. See include/hw/qdev-properties.h to find the
-macros that exist.  With it, we set the default value for that
-property, and that is what it is going to get in the latest released
-version.  But if we want a different value for a previous version, we
-can change that in the hw_compat_X_Y arrays.
-
-hw_compat_X_Y is an array of registers that have the format:
-
-- name_device
-- name_property
-- value
-
-Let's see a practical example.
-
-In qemu-5.2 virtio-blk-device got multi queue support.  This is a
-change that is not backward compatible.  In qemu-5.1 it has one
-queue. In qemu-5.2 it has the same number of queues as the number of
-cpus in the system.
-
-When we are doing migration, if we migrate from a device that has 4
-queues to a device that have only one queue, we don't know where to
-put the extra information for the other 3 queues, and we fail
-migration.
-
-Similar problem when we migrate from qemu-5.1 that has only one queue
-to qemu-5.2, we only sent information for one queue, but destination
-has 4, and we have 3 queues that are not properly initialized and
-anything can happen.
-
-So, how can we address this problem.  Easy, just convince qemu-5.2
-that when it is running pc-5.1, it needs to set the number of queues
-for virtio-blk-devices to 1.
-
-That way we fix the cases 5 and 6.
-
-5 - qemu-5.2 -M pc-5.1  -> migrates to -> qemu-5.1 -M pc-5.1
-
-    qemu-5.2 -M pc-5.1 sets number of queues to be 1.
-    qemu-5.1 -M pc-5.1 expects number of queues to be 1.
-
-    correct.  migration works.
-
-6 - qemu-5.1 -M pc-5.1  -> migrates to -> qemu-5.2 -M pc-5.1
-
-    qemu-5.1 -M pc-5.1 sets number of queues to be 1.
-    qemu-5.2 -M pc-5.1 expects number of queues to be 1.
-
-    correct.  migration works.
-
-And now the other interesting case, case 3.  In this case we have:
-
-3 - qemu-5.2 -M pc-5.1  -> migrates to -> qemu-5.2 -M pc-5.1
-
-    Here we have the same QEMU in both sides.  So it doesn't matter a
-    lot if we have set the number of queues to 1 or not, because
-    they are the same.
-
-    WRONG!
-
-    Think what happens if we do one of this double migrations:
-
-    A -> migrates -> B -> migrates -> C
-
-    where:
-
-    A: qemu-5.1 -M pc-5.1
-    B: qemu-5.2 -M pc-5.1
-    C: qemu-5.2 -M pc-5.1
-
-    migration A -> B is case 6, so number of queues needs to be 1.
-
-    migration B -> C is case 3, so we don't care.  But actually we
-    care because we haven't started the guest in qemu-5.2, it came
-    migrated from qemu-5.1.  So to be in the safe place, we need to
-    always use number of queues 1 when we are using pc-5.1.
-
-Now, how was this done in reality?  The following commit shows how it
-was done::
-
-  commit 9445e1e15e66c19e42bea942ba810db28052cd05
-  Author: Stefan Hajnoczi <stefanha@redhat.com>
-  Date:   Tue Aug 18 15:33:47 2020 +0100
-
-  virtio-blk-pci: default num_queues to -smp N
-
-The relevant parts for migration are::
-
-    @@ -1281,7 +1284,8 @@ static Property virtio_blk_properties[] = {
-     #endif
-         DEFINE_PROP_BIT("request-merging", VirtIOBlock, conf.request_merging, 0,
-                         true),
-    -    DEFINE_PROP_UINT16("num-queues", VirtIOBlock, conf.num_queues, 1),
-    +    DEFINE_PROP_UINT16("num-queues", VirtIOBlock, conf.num_queues,
-    +                       VIRTIO_BLK_AUTO_NUM_QUEUES),
-         DEFINE_PROP_UINT16("queue-size", VirtIOBlock, conf.queue_size, 256),
-
-It changes the default value of num_queues.  But it fishes it for old
-machine types to have the right value::
-
-    @@ -31,6 +31,7 @@
-     GlobalProperty hw_compat_5_1[] = {
-         ...
-    +    { "virtio-blk-device", "num-queues", "1"},
-         ...
-     };
-
-A device with different features on both sides
-----------------------------------------------
-
-Let's assume that we are using the same QEMU binary on both sides,
-just to make the things easier.  But we have a device that has
-different features on both sides of the migration.  That can be
-because the devices are different, because the kernel driver of both
-devices have different features, whatever.
-
-How can we get this to work with migration.  The way to do that is
-"theoretically" easy.  You have to get the features that the device
-has in the source of the migration.  The features that the device has
-on the target of the migration, you get the intersection of the
-features of both sides, and that is the way that you should launch
-QEMU.
-
-Notice that this is not completely related to QEMU.  The most
-important thing here is that this should be handled by the managing
-application that launches QEMU.  If QEMU is configured correctly, the
-migration will succeed.
-
-That said, actually doing it is complicated.  Almost all devices are
-bad at being able to be launched with only some features enabled.
-With one big exception: cpus.
-
-You can read the documentation for QEMU x86 cpu models here:
-
-https://qemu-project.gitlab.io/qemu/system/qemu-cpu-models.html
-
-See when they talk about migration they recommend that one chooses the
-newest cpu model that is supported for all cpus.
-
-Let's say that we have:
-
-Host A:
-
-Device X has the feature Y
-
-Host B:
-
-Device X has not the feature Y
-
-If we try to migrate without any care from host A to host B, it will
-fail because when migration tries to load the feature Y on
-destination, it will find that the hardware is not there.
-
-Doing this would be the equivalent of doing with cpus:
-
-Host A:
-
-$ qemu-system-x86_64 -cpu host
-
-Host B:
-
-$ qemu-system-x86_64 -cpu host
-
-When both hosts have different cpu features this is guaranteed to
-fail.  Especially if Host B has less features than host A.  If host A
-has less features than host B, sometimes it works.  Important word of
-last sentence is "sometimes".
-
-So, forgetting about cpu models and continuing with the -cpu host
-example, let's see that the differences of the cpus is that Host A and
-B have the following features:
-
-Features:   'pcid'  'stibp' 'taa-no'
-Host A:        X       X
-Host B:                        X
-
-And we want to migrate between them, the way configure both QEMU cpu
-will be:
-
-Host A:
-
-$ qemu-system-x86_64 -cpu host,pcid=off,stibp=off
-
-Host B:
-
-$ qemu-system-x86_64 -cpu host,taa-no=off
-
-And you would be able to migrate between them.  It is responsibility
-of the management application or of the user to make sure that the
-configuration is correct.  QEMU doesn't know how to look at this kind
-of features in general.
-
-Notice that we don't recommend to use -cpu host for migration.  It is
-used in this example because it makes the example simpler.
-
-Other devices have worse control about individual features.  If they
-want to be able to migrate between hosts that show different features,
-the device needs a way to configure which ones it is going to use.
-
-In this section we have considered that we are using the same QEMU
-binary in both sides of the migration.  If we use different QEMU
-versions process, then we need to have into account all other
-differences and the examples become even more complicated.
-
-How to mitigate when we have a backward compatibility error
------------------------------------------------------------
-
-We broke migration for old machine types continuously during
-development.  But as soon as we find that there is a problem, we fix
-it.  The problem is what happens when we detect after we have done a
-release that something has gone wrong.
-
-Let see how it worked with one example.
-
-After the release of qemu-8.0 we found a problem when doing migration
-of the machine type pc-7.2.
-
-- $ qemu-7.2 -M pc-7.2  ->  qemu-7.2 -M pc-7.2
-
-  This migration works
-
-- $ qemu-8.0 -M pc-7.2  ->  qemu-8.0 -M pc-7.2
-
-  This migration works
-
-- $ qemu-8.0 -M pc-7.2  ->  qemu-7.2 -M pc-7.2
-
-  This migration fails
-
-- $ qemu-7.2 -M pc-7.2  ->  qemu-8.0 -M pc-7.2
-
-  This migration fails
-
-So clearly something fails when migration between qemu-7.2 and
-qemu-8.0 with machine type pc-7.2.  The error messages, and git bisect
-pointed to this commit.
-
-In qemu-8.0 we got this commit::
-
-    commit 010746ae1db7f52700cb2e2c46eb94f299cfa0d2
-    Author: Jonathan Cameron <Jonathan.Cameron@huawei.com>
-    Date:   Thu Mar 2 13:37:02 2023 +0000
-
-    hw/pci/aer: Implement PCI_ERR_UNCOR_MASK register
-
-
-The relevant bits of the commit for our example are this ones::
-
-    --- a/hw/pci/pcie_aer.c
-    +++ b/hw/pci/pcie_aer.c
-    @@ -112,6 +112,10 @@ int pcie_aer_init(PCIDevice *dev,
-
-         pci_set_long(dev->w1cmask + offset + PCI_ERR_UNCOR_STATUS,
-                      PCI_ERR_UNC_SUPPORTED);
-    +    pci_set_long(dev->config + offset + PCI_ERR_UNCOR_MASK,
-    +                 PCI_ERR_UNC_MASK_DEFAULT);
-    +    pci_set_long(dev->wmask + offset + PCI_ERR_UNCOR_MASK,
-    +                 PCI_ERR_UNC_SUPPORTED);
-
-         pci_set_long(dev->config + offset + PCI_ERR_UNCOR_SEVER,
-                     PCI_ERR_UNC_SEVERITY_DEFAULT);
-
-The patch changes how we configure PCI space for AER.  But QEMU fails
-when the PCI space configuration is different between source and
-destination.
-
-The following commit shows how this got fixed::
-
-    commit 5ed3dabe57dd9f4c007404345e5f5bf0e347317f
-    Author: Leonardo Bras <leobras@redhat.com>
-    Date:   Tue May 2 21:27:02 2023 -0300
-
-    hw/pci: Disable PCI_ERR_UNCOR_MASK register for machine type < 8.0
-
-    [...]
-
-The relevant parts of the fix in QEMU are as follow:
-
-First, we create a new property for the device to be able to configure
-the old behaviour or the new behaviour::
-
-    diff --git a/hw/pci/pci.c b/hw/pci/pci.c
-    index 8a87ccc8b0..5153ad63d6 100644
-    --- a/hw/pci/pci.c
-    +++ b/hw/pci/pci.c
-    @@ -79,6 +79,8 @@ static Property pci_props[] = {
-         DEFINE_PROP_STRING("failover_pair_id", PCIDevice,
-                            failover_pair_id),
-         DEFINE_PROP_UINT32("acpi-index",  PCIDevice, acpi_index, 0),
-    +    DEFINE_PROP_BIT("x-pcie-err-unc-mask", PCIDevice, cap_present,
-    +                    QEMU_PCIE_ERR_UNC_MASK_BITNR, true),
-         DEFINE_PROP_END_OF_LIST()
-     };
-
-Notice that we enable the feature for new machine types.
-
-Now we see how the fix is done.  This is going to depend on what kind
-of breakage happens, but in this case it is quite simple::
-
-    diff --git a/hw/pci/pcie_aer.c b/hw/pci/pcie_aer.c
-    index 103667c368..374d593ead 100644
-    --- a/hw/pci/pcie_aer.c
-    +++ b/hw/pci/pcie_aer.c
-    @@ -112,10 +112,13 @@ int pcie_aer_init(PCIDevice *dev, uint8_t cap_ver,
-    uint16_t offset,
-
-         pci_set_long(dev->w1cmask + offset + PCI_ERR_UNCOR_STATUS,
-                      PCI_ERR_UNC_SUPPORTED);
-    -    pci_set_long(dev->config + offset + PCI_ERR_UNCOR_MASK,
-    -                 PCI_ERR_UNC_MASK_DEFAULT);
-    -    pci_set_long(dev->wmask + offset + PCI_ERR_UNCOR_MASK,
-    -                 PCI_ERR_UNC_SUPPORTED);
-    +
-    +    if (dev->cap_present & QEMU_PCIE_ERR_UNC_MASK) {
-    +        pci_set_long(dev->config + offset + PCI_ERR_UNCOR_MASK,
-    +                     PCI_ERR_UNC_MASK_DEFAULT);
-    +        pci_set_long(dev->wmask + offset + PCI_ERR_UNCOR_MASK,
-    +                     PCI_ERR_UNC_SUPPORTED);
-    +    }
-
-         pci_set_long(dev->config + offset + PCI_ERR_UNCOR_SEVER,
-                      PCI_ERR_UNC_SEVERITY_DEFAULT);
-
-I.e. If the property bit is enabled, we configure it as we did for
-qemu-8.0.  If the property bit is not set, we configure it as it was in 7.2.
-
-And now, everything that is missing is disabling the feature for old
-machine types::
-
-    diff --git a/hw/core/machine.c b/hw/core/machine.c
-    index 47a34841a5..07f763eb2e 100644
-    --- a/hw/core/machine.c
-    +++ b/hw/core/machine.c
-    @@ -48,6 +48,7 @@ GlobalProperty hw_compat_7_2[] = {
-         { "e1000e", "migrate-timadj", "off" },
-         { "virtio-mem", "x-early-migration", "false" },
-         { "migration", "x-preempt-pre-7-2", "true" },
-    +    { TYPE_PCI_DEVICE, "x-pcie-err-unc-mask", "off" },
-     };
-     const size_t hw_compat_7_2_len = G_N_ELEMENTS(hw_compat_7_2);
-
-And now, when qemu-8.0.1 is released with this fix, all combinations
-are going to work as supposed.
-
-- $ qemu-7.2 -M pc-7.2  ->  qemu-7.2 -M pc-7.2 (works)
-- $ qemu-8.0.1 -M pc-7.2  ->  qemu-8.0.1 -M pc-7.2 (works)
-- $ qemu-8.0.1 -M pc-7.2  ->  qemu-7.2 -M pc-7.2 (works)
-- $ qemu-7.2 -M pc-7.2  ->  qemu-8.0.1 -M pc-7.2 (works)
-
-So the normality has been restored and everything is ok, no?
-
-Not really, now our matrix is much bigger.  We started with the easy
-cases, migration from the same version to the same version always
-works:
-
-- $ qemu-7.2 -M pc-7.2  ->  qemu-7.2 -M pc-7.2
-- $ qemu-8.0 -M pc-7.2  ->  qemu-8.0 -M pc-7.2
-- $ qemu-8.0.1 -M pc-7.2  ->  qemu-8.0.1 -M pc-7.2
-
-Now the interesting ones.  When the QEMU processes versions are
-different.  For the 1st set, their fail and we can do nothing, both
-versions are released and we can't change anything.
-
-- $ qemu-7.2 -M pc-7.2  ->  qemu-8.0 -M pc-7.2
-- $ qemu-8.0 -M pc-7.2  ->  qemu-7.2 -M pc-7.2
-
-This two are the ones that work. The whole point of making the
-change in qemu-8.0.1 release was to fix this issue:
-
-- $ qemu-7.2 -M pc-7.2  ->  qemu-8.0.1 -M pc-7.2
-- $ qemu-8.0.1 -M pc-7.2  ->  qemu-7.2 -M pc-7.2
-
-But now we found that qemu-8.0 neither can migrate to qemu-7.2 not
-qemu-8.0.1.
-
-- $ qemu-8.0 -M pc-7.2  ->  qemu-8.0.1 -M pc-7.2
-- $ qemu-8.0.1 -M pc-7.2  ->  qemu-8.0 -M pc-7.2
-
-So, if we start a pc-7.2 machine in qemu-8.0 we can't migrate it to
-anything except to qemu-8.0.
-
-Can we do better?
-
-Yeap.  If we know that we are going to do this migration:
-
-- $ qemu-8.0 -M pc-7.2  ->  qemu-8.0.1 -M pc-7.2
-
-We can launch the appropriate devices with::
-
-  --device...,x-pci-e-err-unc-mask=on
-
-And now we can receive a migration from 8.0.  And from now on, we can
-do that migration to new machine types if we remember to enable that
-property for pc-7.2.  Notice that we need to remember, it is not
-enough to know that the source of the migration is qemu-8.0.  Think of
-this example:
-
-$ qemu-8.0 -M pc-7.2 -> qemu-8.0.1 -M pc-7.2 -> qemu-8.2 -M pc-7.2
-
-In the second migration, the source is not qemu-8.0, but we still have
-that "problem" and have that property enabled.  Notice that we need to
-continue having this mark/property until we have this machine
-rebooted.  But it is not a normal reboot (that don't reload QEMU) we
-need the machine to poweroff/poweron on a fixed QEMU.  And from now
-on we can use the proper real machine.
-- 
2.41.0



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH 05/10] docs/migration: Split "Debugging" and "Firmware"
  2024-01-09  6:46 [PATCH 00/10] docs/migration: Reorganize migration documentations peterx
                   ` (3 preceding siblings ...)
  2024-01-09  6:46 ` [PATCH 04/10] docs/migration: Split "Backwards compatibility" separately peterx
@ 2024-01-09  6:46 ` peterx
  2024-01-09  7:04   ` Cédric Le Goater
  2024-01-09 17:03   ` Fabiano Rosas
  2024-01-09  6:46 ` [PATCH 06/10] docs/migration: Split "Postcopy" peterx
                   ` (6 subsequent siblings)
  11 siblings, 2 replies; 29+ messages in thread
From: peterx @ 2024-01-09  6:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cédric Le Goater, Michael S . Tsirkin, peterx,
	Alex Williamson, Jason Wang, Bandan Das, Prasad Pandit,
	Fabiano Rosas

From: Peter Xu <peterx@redhat.com>

Move the two sections into a separate file called "best-practises.rst".
Add the entry into index.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 docs/devel/migration/best-practises.rst | 48 +++++++++++++++++++++++++
 docs/devel/migration/index.rst          |  1 +
 docs/devel/migration/main.rst           | 44 -----------------------
 3 files changed, 49 insertions(+), 44 deletions(-)
 create mode 100644 docs/devel/migration/best-practises.rst

diff --git a/docs/devel/migration/best-practises.rst b/docs/devel/migration/best-practises.rst
new file mode 100644
index 0000000000..ba122ae417
--- /dev/null
+++ b/docs/devel/migration/best-practises.rst
@@ -0,0 +1,48 @@
+==============
+Best practises
+==============
+
+Debugging
+=========
+
+The migration stream can be analyzed thanks to ``scripts/analyze-migration.py``.
+
+Example usage:
+
+.. code-block:: shell
+
+  $ qemu-system-x86_64 -display none -monitor stdio
+  (qemu) migrate "exec:cat > mig"
+  (qemu) q
+  $ ./scripts/analyze-migration.py -f mig
+  {
+    "ram (3)": {
+        "section sizes": {
+            "pc.ram": "0x0000000008000000",
+  ...
+
+See also ``analyze-migration.py -h`` help for more options.
+
+Firmware
+========
+
+Migration migrates the copies of RAM and ROM, and thus when running
+on the destination it includes the firmware from the source. Even after
+resetting a VM, the old firmware is used.  Only once QEMU has been restarted
+is the new firmware in use.
+
+- Changes in firmware size can cause changes in the required RAMBlock size
+  to hold the firmware and thus migration can fail.  In practice it's best
+  to pad firmware images to convenient powers of 2 with plenty of space
+  for growth.
+
+- Care should be taken with device emulation code so that newer
+  emulation code can work with older firmware to allow forward migration.
+
+- Care should be taken with newer firmware so that backward migration
+  to older systems with older device emulation code will work.
+
+In some cases it may be best to tie specific firmware versions to specific
+versioned machine types to cut down on the combinations that will need
+support.  This is also useful when newer versions of firmware outgrow
+the padding.
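The padding advice above can be applied mechanically at firmware build time.  This helper is a sketch (the minimum size is an arbitrary choice for the example, not a QEMU requirement): it rounds an image size up to the next power of two so that small firmware changes don't change the RAMBlock size:

```python
def padded_size(image_len, minimum=64 * 1024):
    """Round a firmware image size up to the next power of two.

    Keeping the padded size stable across small firmware changes keeps
    the RAMBlock size stable, which is what migration cares about.
    """
    size = max(image_len, minimum)
    return 1 << (size - 1).bit_length()

print(padded_size(200_000))  # 262144, i.e. padded to 256 KiB
```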
diff --git a/docs/devel/migration/index.rst b/docs/devel/migration/index.rst
index 7fc02b9520..c09623b38f 100644
--- a/docs/devel/migration/index.rst
+++ b/docs/devel/migration/index.rst
@@ -11,3 +11,4 @@ QEMU live migration works.
    compatibility
    vfio
    virtio
+   best-practises
diff --git a/docs/devel/migration/main.rst b/docs/devel/migration/main.rst
index b3e31bb52f..97811ce371 100644
--- a/docs/devel/migration/main.rst
+++ b/docs/devel/migration/main.rst
@@ -52,27 +52,6 @@ All these migration protocols use the same infrastructure to
 save/restore state devices.  This infrastructure is shared with the
 savevm/loadvm functionality.
 
-Debugging
-=========
-
-The migration stream can be analyzed thanks to ``scripts/analyze-migration.py``.
-
-Example usage:
-
-.. code-block:: shell
-
-  $ qemu-system-x86_64 -display none -monitor stdio
-  (qemu) migrate "exec:cat > mig"
-  (qemu) q
-  $ ./scripts/analyze-migration.py -f mig
-  {
-    "ram (3)": {
-        "section sizes": {
-            "pc.ram": "0x0000000008000000",
-  ...
-
-See also ``analyze-migration.py -h`` help for more options.
-
 Common infrastructure
 =====================
 
@@ -970,26 +949,3 @@ the background migration channel.  Anyone who cares about latencies of page
 faults during a postcopy migration should enable this feature.  By default,
 it's not enabled.
 
-Firmware
-========
-
-Migration migrates the copies of RAM and ROM, and thus when running
-on the destination it includes the firmware from the source. Even after
-resetting a VM, the old firmware is used.  Only once QEMU has been restarted
-is the new firmware in use.
-
-- Changes in firmware size can cause changes in the required RAMBlock size
-  to hold the firmware and thus migration can fail.  In practice it's best
-  to pad firmware images to convenient powers of 2 with plenty of space
-  for growth.
-
-- Care should be taken with device emulation code so that newer
-  emulation code can work with older firmware to allow forward migration.
-
-- Care should be taken with newer firmware so that backward migration
-  to older systems with older device emulation code will work.
-
-In some cases it may be best to tie specific firmware versions to specific
-versioned machine types to cut down on the combinations that will need
-support.  This is also useful when newer versions of firmware outgrow
-the padding.
-- 
2.41.0



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH 06/10] docs/migration: Split "Postcopy"
  2024-01-09  6:46 [PATCH 00/10] docs/migration: Reorganize migration documentations peterx
                   ` (4 preceding siblings ...)
  2024-01-09  6:46 ` [PATCH 05/10] docs/migration: Split "Debugging" and "Firmware" peterx
@ 2024-01-09  6:46 ` peterx
  2024-01-09  7:05   ` Cédric Le Goater
  2024-01-09  6:46 ` [PATCH 07/10] docs/migration: Split "dirty limit" peterx
                   ` (5 subsequent siblings)
  11 siblings, 1 reply; 29+ messages in thread
From: peterx @ 2024-01-09  6:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cédric Le Goater, Michael S . Tsirkin, peterx,
	Alex Williamson, Jason Wang, Bandan Das, Prasad Pandit,
	Fabiano Rosas

From: Peter Xu <peterx@redhat.com>

Split postcopy into a separate file.  Introduce an index page "features.rst"
to collect all the features built on top of the migration framework.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 docs/devel/migration/features.rst |   9 +
 docs/devel/migration/index.rst    |   1 +
 docs/devel/migration/main.rst     | 305 ------------------------------
 docs/devel/migration/postcopy.rst | 304 +++++++++++++++++++++++++++++
 4 files changed, 314 insertions(+), 305 deletions(-)
 create mode 100644 docs/devel/migration/features.rst
 create mode 100644 docs/devel/migration/postcopy.rst

diff --git a/docs/devel/migration/features.rst b/docs/devel/migration/features.rst
new file mode 100644
index 0000000000..0054e0c900
--- /dev/null
+++ b/docs/devel/migration/features.rst
@@ -0,0 +1,9 @@
+Migration features
+==================
+
+Migration has plenty of features to support different use cases.
+
+.. toctree::
+   :maxdepth: 2
+
+   postcopy
diff --git a/docs/devel/migration/index.rst b/docs/devel/migration/index.rst
index c09623b38f..7cf62541b9 100644
--- a/docs/devel/migration/index.rst
+++ b/docs/devel/migration/index.rst
@@ -8,6 +8,7 @@ QEMU live migration works.
    :maxdepth: 2
 
    main
+   features
    compatibility
    vfio
    virtio
diff --git a/docs/devel/migration/main.rst b/docs/devel/migration/main.rst
index 97811ce371..051ea43f0e 100644
--- a/docs/devel/migration/main.rst
+++ b/docs/devel/migration/main.rst
@@ -644,308 +644,3 @@ algorithm will restrict virtual CPUs as needed to keep their dirty page
 rate inside the limit. This leads to more steady reading performance during
 live migration and can aid in improving large guest responsiveness.
 
-Postcopy
-========
-
-'Postcopy' migration is a way to deal with migrations that refuse to converge
-(or take too long to converge) its plus side is that there is an upper bound on
-the amount of migration traffic and time it takes, the down side is that during
-the postcopy phase, a failure of *either* side causes the guest to be lost.
-
-In postcopy the destination CPUs are started before all the memory has been
-transferred, and accesses to pages that are yet to be transferred cause
-a fault that's translated by QEMU into a request to the source QEMU.
-
-Postcopy can be combined with precopy (i.e. normal migration) so that if precopy
-doesn't finish in a given time the switch is made to postcopy.
-
-Enabling postcopy
------------------
-
-To enable postcopy, issue this command on the monitor (both source and
-destination) prior to the start of migration:
-
-``migrate_set_capability postcopy-ram on``
-
-The normal commands are then used to start a migration, which is still
-started in precopy mode.  Issuing:
-
-``migrate_start_postcopy``
-
-will now cause the transition from precopy to postcopy.
-It can be issued immediately after migration is started or any
-time later on.  Issuing it after the end of a migration is harmless.
-
-Blocktime is a postcopy live migration metric, intended to show how
-long the vCPU was in state of interruptible sleep due to pagefault.
-That metric is calculated both for all vCPUs as overlapped value, and
-separately for each vCPU. These values are calculated on destination
-side.  To enable postcopy blocktime calculation, enter following
-command on destination monitor:
-
-``migrate_set_capability postcopy-blocktime on``
-
-Postcopy blocktime can be retrieved by query-migrate qmp command.
-postcopy-blocktime value of qmp command will show overlapped blocking
-time for all vCPU, postcopy-vcpu-blocktime will show list of blocking
-time per vCPU.
-
-.. note::
-  During the postcopy phase, the bandwidth limits set using
-  ``migrate_set_parameter`` is ignored (to avoid delaying requested pages that
-  the destination is waiting for).
-
-Postcopy device transfer
-------------------------
-
-Loading of device data may cause the device emulation to access guest RAM
-that may trigger faults that have to be resolved by the source, as such
-the migration stream has to be able to respond with page data *during* the
-device load, and hence the device data has to be read from the stream completely
-before the device load begins to free the stream up.  This is achieved by
-'packaging' the device data into a blob that's read in one go.
-
-Source behaviour
-----------------
-
-Until postcopy is entered the migration stream is identical to normal
-precopy, except for the addition of a 'postcopy advise' command at
-the beginning, to tell the destination that postcopy might happen.
-When postcopy starts the source sends the page discard data and then
-forms the 'package' containing:
-
-   - Command: 'postcopy listen'
-   - The device state
-
-     A series of sections, identical to the precopy streams device state stream
-     containing everything except postcopiable devices (i.e. RAM)
-   - Command: 'postcopy run'
-
-The 'package' is sent as the data part of a Command: ``CMD_PACKAGED``, and the
-contents are formatted in the same way as the main migration stream.
-
-During postcopy the source scans the list of dirty pages and sends them
-to the destination without being requested (in much the same way as precopy),
-however when a page request is received from the destination, the dirty page
-scanning restarts from the requested location.  This causes requested pages
-to be sent quickly, and also causes pages directly after the requested page
-to be sent quickly in the hope that those pages are likely to be used
-by the destination soon.
-
-Destination behaviour
----------------------
-
-Initially the destination looks the same as precopy, with a single thread
-reading the migration stream; the 'postcopy advise' and 'discard' commands
-are processed to change the way RAM is managed, but don't affect the stream
-processing.
-
-::
-
-  ------------------------------------------------------------------------------
-                          1      2   3     4 5                      6   7
-  main -----DISCARD-CMD_PACKAGED ( LISTEN  DEVICE     DEVICE DEVICE RUN )
-  thread                             |       |
-                                     |     (page request)
-                                     |        \___
-                                     v            \
-  listen thread:                     --- page -- page -- page -- page -- page --
-
-                                     a   b        c
-  ------------------------------------------------------------------------------
-
-- On receipt of ``CMD_PACKAGED`` (1)
-
-   All the data associated with the package - the ( ... ) section in the diagram -
-   is read into memory, and the main thread recurses into qemu_loadvm_state_main
-   to process the contents of the package (2) which contains commands (3,6) and
-   devices (4...)
-
-- On receipt of 'postcopy listen' - 3 -(i.e. the 1st command in the package)
-
-   a new thread (a) is started that takes over servicing the migration stream,
-   while the main thread carries on loading the package.   It loads normal
-   background page data (b) but if during a device load a fault happens (5)
-   the returned page (c) is loaded by the listen thread allowing the main
-   threads device load to carry on.
-
-- The last thing in the ``CMD_PACKAGED`` is a 'RUN' command (6)
-
-   letting the destination CPUs start running.  At the end of the
-   ``CMD_PACKAGED`` (7) the main thread returns to normal running behaviour and
-   is no longer used by migration, while the listen thread carries on servicing
-   page data until the end of migration.
-
-Postcopy Recovery
------------------
-
-Comparing to precopy, postcopy is special on error handlings.  When any
-error happens (in this case, mostly network errors), QEMU cannot easily
-fail a migration because VM data resides in both source and destination
-QEMU instances.  On the other hand, when issue happens QEMU on both sides
-will go into a paused state.  It'll need a recovery phase to continue a
-paused postcopy migration.
-
-The recovery phase normally contains a few steps:
-
-  - When network issue occurs, both QEMU will go into PAUSED state
-
-  - When the network is recovered (or a new network is provided), the admin
-    can setup the new channel for migration using QMP command
-    'migrate-recover' on destination node, preparing for a resume.
-
-  - On source host, the admin can continue the interrupted postcopy
-    migration using QMP command 'migrate' with resume=true flag set.
-
-  - After the connection is re-established, QEMU will continue the postcopy
-    migration on both sides.
-
-During a paused postcopy migration, the VM can logically still continue
-running, and it will not be impacted from any page access to pages that
-were already migrated to destination VM before the interruption happens.
-However, if any of the missing pages got accessed on destination VM, the VM
-thread will be halted waiting for the page to be migrated, it means it can
-be halted until the recovery is complete.
-
-The impact of accessing missing pages can be relevant to different
-configurations of the guest.  For example, when with async page fault
-enabled, logically the guest can proactively schedule out the threads
-accessing missing pages.
-
-Postcopy states
----------------
-
-Postcopy moves through a series of states (see postcopy_state) from
-ADVISE->DISCARD->LISTEN->RUNNING->END
-
- - Advise
-
-    Set at the start of migration if postcopy is enabled, even
-    if it hasn't had the start command; here the destination
-    checks that its OS has the support needed for postcopy, and performs
-    setup to ensure the RAM mappings are suitable for later postcopy.
-    The destination will fail early in migration at this point if the
-    required OS support is not present.
-    (Triggered by reception of POSTCOPY_ADVISE command)
-
- - Discard
-
-    Entered on receipt of the first 'discard' command; prior to
-    the first Discard being performed, hugepages are switched off
-    (using madvise) to ensure that no new huge pages are created
-    during the postcopy phase, and to cause any huge pages that
-    have discards on them to be broken.
-
- - Listen
-
-    The first command in the package, POSTCOPY_LISTEN, switches
-    the destination state to Listen, and starts a new thread
-    (the 'listen thread') which takes over the job of receiving
-    pages off the migration stream, while the main thread carries
-    on processing the blob.  With this thread able to process page
-    reception, the destination now 'sensitises' the RAM to detect
-    any access to missing pages (on Linux using the 'userfault'
-    system).
-
- - Running
-
-    POSTCOPY_RUN causes the destination to synchronise all
-    state and start the CPUs and IO devices running.  The main
-    thread now finishes processing the migration package and
-    now carries on as it would for normal precopy migration
-    (although it can't do the cleanup it would do as it
-    finishes a normal migration).
-
- - Paused
-
-    Postcopy can run into a paused state (normally on both sides when
-    happens), where all threads will be temporarily halted mostly due to
-    network errors.  When reaching paused state, migration will make sure
-    the qemu binary on both sides maintain the data without corrupting
-    the VM.  To continue the migration, the admin needs to fix the
-    migration channel using the QMP command 'migrate-recover' on the
-    destination node, then resume the migration using QMP command 'migrate'
-    again on source node, with resume=true flag set.
-
- - End
-
-    The listen thread can now quit, and perform the cleanup of migration
-    state, the migration is now complete.
-
-Source side page map
---------------------
-
-The 'migration bitmap' in postcopy is basically the same as in the precopy,
-where each of the bit to indicate that page is 'dirty' - i.e. needs
-sending.  During the precopy phase this is updated as the CPU dirties
-pages, however during postcopy the CPUs are stopped and nothing should
-dirty anything any more. Instead, dirty bits are cleared when the relevant
-pages are sent during postcopy.
-
-Postcopy with hugepages
------------------------
-
-Postcopy now works with hugetlbfs backed memory:
-
-  a) The linux kernel on the destination must support userfault on hugepages.
-  b) The huge-page configuration on the source and destination VMs must be
-     identical; i.e. RAMBlocks on both sides must use the same page size.
-  c) Note that ``-mem-path /dev/hugepages``  will fall back to allocating normal
-     RAM if it doesn't have enough hugepages, triggering (b) to fail.
-     Using ``-mem-prealloc`` enforces the allocation using hugepages.
-  d) Care should be taken with the size of hugepage used; postcopy with 2MB
-     hugepages works well, however 1GB hugepages are likely to be problematic
-     since it takes ~1 second to transfer a 1GB hugepage across a 10Gbps link,
-     and until the full page is transferred the destination thread is blocked.
-
-Postcopy with shared memory
----------------------------
-
-Postcopy migration with shared memory needs explicit support from the other
-processes that share memory and from QEMU. There are restrictions on the type of
-memory that userfault can support shared.
-
-The Linux kernel userfault support works on ``/dev/shm`` memory and on ``hugetlbfs``
-(although the kernel doesn't provide an equivalent to ``madvise(MADV_DONTNEED)``
-for hugetlbfs which may be a problem in some configurations).
-
-The vhost-user code in QEMU supports clients that have Postcopy support,
-and the ``vhost-user-bridge`` (in ``tests/``) and the DPDK package have changes
-to support postcopy.
-
-The client needs to open a userfaultfd and register the areas
-of memory that it maps with userfault.  The client must then pass the
-userfaultfd back to QEMU together with a mapping table that allows
-fault addresses in the clients address space to be converted back to
-RAMBlock/offsets.  The client's userfaultfd is added to the postcopy
-fault-thread and page requests are made on behalf of the client by QEMU.
-QEMU performs 'wake' operations on the client's userfaultfd to allow it
-to continue after a page has arrived.
-
-.. note::
-  There are two future improvements that would be nice:
-    a) Some way to make QEMU ignorant of the addresses in the clients
-       address space
-    b) Avoiding the need for QEMU to perform ufd-wake calls after the
-       pages have arrived
-
-Retro-fitting postcopy to existing clients is possible:
-  a) A mechanism is needed for the registration with userfault as above,
-     and the registration needs to be coordinated with the phases of
-     postcopy.  In vhost-user extra messages are added to the existing
-     control channel.
-  b) Any thread that can block due to guest memory accesses must be
-     identified and the implication understood; for example if the
-     guest memory access is made while holding a lock then all other
-     threads waiting for that lock will also be blocked.
-
-Postcopy Preemption Mode
-------------------------
-
-Postcopy preempt is a new capability introduced in 8.0 QEMU release, it
-allows urgent pages (those got page fault requested from destination QEMU
-explicitly) to be sent in a separate preempt channel, rather than queued in
-the background migration channel.  Anyone who cares about latencies of page
-faults during a postcopy migration should enable this feature.  By default,
-it's not enabled.
-
diff --git a/docs/devel/migration/postcopy.rst b/docs/devel/migration/postcopy.rst
new file mode 100644
index 0000000000..d60eec06ab
--- /dev/null
+++ b/docs/devel/migration/postcopy.rst
@@ -0,0 +1,304 @@
+Postcopy
+========
+
+'Postcopy' migration is a way to deal with migrations that refuse to converge
+(or take too long to converge).  Its plus side is that there is an upper bound
+on the amount of migration traffic and time it takes; the down side is that
+during the postcopy phase, a failure of *either* side causes the guest to be
+lost.
+
+In postcopy the destination CPUs are started before all the memory has been
+transferred, and accesses to pages that are yet to be transferred cause
+a fault that's translated by QEMU into a request to the source QEMU.
+
+Postcopy can be combined with precopy (i.e. normal migration) so that if precopy
+doesn't finish in a given time the switch is made to postcopy.
+
+Enabling postcopy
+-----------------
+
+To enable postcopy, issue this command on the monitor (both source and
+destination) prior to the start of migration:
+
+``migrate_set_capability postcopy-ram on``
+
+The normal commands are then used to start a migration, which is still
+started in precopy mode.  Issuing:
+
+``migrate_start_postcopy``
+
+will now cause the transition from precopy to postcopy.
+It can be issued immediately after migration is started or any
+time later on.  Issuing it after the end of a migration is harmless.
+
+Blocktime is a postcopy live migration metric, intended to show how
+long a vCPU was in a state of interruptible sleep due to a page
+fault.  That metric is calculated both for all vCPUs as an overlapped
+value, and separately for each vCPU.  These values are calculated on
+the destination side.  To enable postcopy blocktime calculation, enter
+the following command on the destination monitor:
+
+``migrate_set_capability postcopy-blocktime on``
+
+Postcopy blocktime can be retrieved by the ``query-migrate`` QMP
+command.  The ``postcopy-blocktime`` value shows the overlapped blocking
+time for all vCPUs, while ``postcopy-vcpu-blocktime`` shows a list of
+blocking times per vCPU.
+
+.. note::
+  During the postcopy phase, the bandwidth limits set using
+  ``migrate_set_parameter`` are ignored (to avoid delaying requested pages that
+  the destination is waiting for).
+
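As an illustrative aside (not part of the patch above), the "overlapped" blocktime value can be sketched as the length of the union of per-vCPU blocking intervals, so concurrent faults are not double-counted; the helper name is hypothetical and the real accounting lives in QEMU's postcopy code:

```python
def overlapped_blocktime(intervals):
    """Total time covered by the union of (start, end) blocking intervals.

    Per-vCPU blocktime is simply the sum of that vCPU's own intervals;
    the 'overlapped' value counts wall-clock time during which at least
    one vCPU was blocked.
    """
    total = 0
    cur_start = cur_end = None
    for start, end in sorted(intervals):
        if cur_end is None or start > cur_end:   # disjoint interval
            if cur_end is not None:
                total += cur_end - cur_start
            cur_start, cur_end = start, end
        else:                                    # overlapping: extend
            cur_end = max(cur_end, end)
    if cur_end is not None:
        total += cur_end - cur_start
    return total

# Two vCPUs blocked on faults: [0, 5) and [3, 8) overlap, union is 8ms.
print(overlapped_blocktime([(0, 5), (3, 8)]))  # -> 8
```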
+Postcopy device transfer
+------------------------
+
+Loading of device data may cause the device emulation to access guest RAM,
+which may trigger faults that have to be resolved by the source.  As such,
+the migration stream has to be able to respond with page data *during* the
+device load, and hence the device data has to be read from the stream
+completely before the device load begins, to free the stream up.  This is
+achieved by 'packaging' the device data into a blob that's read in one go.
+
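As an illustrative aside (not part of the patch above), the 'packaging' idea can be sketched as length-prefixing the device-state bytes so the receiver reads the whole blob before parsing it; this is a simplified stand-in, not QEMU's actual ``CMD_PACKAGED`` wire format:

```python
import struct

def package(device_state: bytes) -> bytes:
    # Length-prefix the blob so the receiver knows how much to read
    # before it starts the device load.
    return struct.pack(">I", len(device_state)) + device_state

def unpackage(stream: bytes) -> bytes:
    # Read the whole package in one go; only then begin the device load,
    # leaving the stream free to carry page data during the load.
    (length,) = struct.unpack_from(">I", stream)
    return stream[4:4 + length]

blob = package(b"device-state")
print(unpackage(blob))  # -> b'device-state'
```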
+Source behaviour
+----------------
+
+Until postcopy is entered the migration stream is identical to normal
+precopy, except for the addition of a 'postcopy advise' command at
+the beginning, to tell the destination that postcopy might happen.
+When postcopy starts the source sends the page discard data and then
+forms the 'package' containing:
+
+   - Command: 'postcopy listen'
+   - The device state
+
+     A series of sections, identical to the precopy stream's device state
+     stream, containing everything except postcopiable devices (i.e. RAM)
+   - Command: 'postcopy run'
+
+The 'package' is sent as the data part of a Command: ``CMD_PACKAGED``, and the
+contents are formatted in the same way as the main migration stream.
+
+During postcopy the source scans the list of dirty pages and sends them
+to the destination without being requested (in much the same way as precopy),
+however when a page request is received from the destination, the dirty page
+scanning restarts from the requested location.  This causes requested pages
+to be sent quickly, and also causes pages directly after the requested page
+to be sent quickly in the hope that those pages are likely to be used
+by the destination soon.
+
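As an illustrative aside (not part of the patch above), the scan-restart behaviour can be sketched as a loop over the dirty pages that jumps to a requested page and continues linearly from there; the helper name and the `requests` map (scan step -> requested page) are hypothetical:

```python
def postcopy_send_order(dirty_pages, requests):
    """Yield dirty pages in send order, restarting the linear scan at
    each requested page so the faulting page (and its neighbours, which
    the destination is likely to touch next) go out first."""
    remaining = sorted(dirty_pages)
    pos = 0
    step = 0
    order = []
    while remaining:
        if step in requests and requests[step] in remaining:
            pos = remaining.index(requests[step])  # jump to the request
        if pos >= len(remaining):
            pos = 0                                # wrap around
        order.append(remaining.pop(pos))
        step += 1
    return order

# Pages 0..5 dirty; at step 2 the destination requests page 4, so page 4
# and its successor are sent before the scan returns to pages 2 and 3:
print(postcopy_send_order(range(6), {2: 4}))  # -> [0, 1, 4, 5, 2, 3]
```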
+Destination behaviour
+---------------------
+
+Initially the destination looks the same as precopy, with a single thread
+reading the migration stream; the 'postcopy advise' and 'discard' commands
+are processed to change the way RAM is managed, but don't affect the stream
+processing.
+
+::
+
+  ------------------------------------------------------------------------------
+                          1      2   3     4 5                      6   7
+  main -----DISCARD-CMD_PACKAGED ( LISTEN  DEVICE     DEVICE DEVICE RUN )
+  thread                             |       |
+                                     |     (page request)
+                                     |        \___
+                                     v            \
+  listen thread:                     --- page -- page -- page -- page -- page --
+
+                                     a   b        c
+  ------------------------------------------------------------------------------
+
+- On receipt of ``CMD_PACKAGED`` (1)
+
+   All the data associated with the package - the ( ... ) section in the diagram -
+   is read into memory, and the main thread recurses into qemu_loadvm_state_main
+   to process the contents of the package (2) which contains commands (3,6) and
+   devices (4...)
+
+- On receipt of 'postcopy listen' - 3 - (i.e. the first command in the package)
+
+   a new thread (a) is started that takes over servicing the migration stream,
+   while the main thread carries on loading the package.   It loads normal
+   background page data (b) but if during a device load a fault happens (5)
+   the returned page (c) is loaded by the listen thread, allowing the main
+   thread's device load to carry on.
+
+- The last thing in the ``CMD_PACKAGED`` is a 'RUN' command (6)
+
+   letting the destination CPUs start running.  At the end of the
+   ``CMD_PACKAGED`` (7) the main thread returns to normal running behaviour and
+   is no longer used by migration, while the listen thread carries on servicing
+   page data until the end of migration.
+
+Postcopy Recovery
+-----------------
+
+Compared to precopy, postcopy is special in its error handling.  When any
+error happens (in this case, mostly network errors), QEMU cannot easily
+fail a migration because VM data resides in both source and destination
+QEMU instances.  Instead, when an issue happens, QEMU on both sides
+goes into a paused state, and a recovery phase is needed to continue the
+paused postcopy migration.
+
+The recovery phase normally contains a few steps:
+
+  - When a network issue occurs, both QEMU instances go into the PAUSED state
+
+  - When the network is recovered (or a new network is provided), the admin
+    can set up a new channel for migration using the QMP command
+    'migrate-recover' on the destination node, preparing for a resume.
+
+  - On the source host, the admin can continue the interrupted postcopy
+    migration using the QMP command 'migrate' with the resume=true flag set.
+
+  - After the connection is re-established, QEMU will continue the postcopy
+    migration on both sides.
+
+During a paused postcopy migration, the VM can logically still continue
+running, and it will not be impacted by accesses to pages that were
+already migrated to the destination VM before the interruption happened.
+However, if any of the missing pages is accessed on the destination VM,
+the VM thread will be halted waiting for the page to be migrated, which
+means it can stay halted until the recovery is complete.
+
+The impact of accessing missing pages depends on the configuration of
+the guest.  For example, with async page fault enabled, the guest can
+proactively schedule out the threads accessing missing pages.
+
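As an illustrative aside (not part of the patch above), the recovery steps map to two QMP commands; a sketch of the messages an admin tool would send, where both URIs are placeholders:

```python
# On the destination: prepare a new channel for the resumed migration.
recover_cmd = {
    "execute": "migrate-recover",
    "arguments": {"uri": "tcp:0:4444"},   # placeholder listen URI
}

# On the source: restart the interrupted migration with resume=true.
resume_cmd = {
    "execute": "migrate",
    "arguments": {"uri": "tcp:dst-host:4444", "resume": True},
}

print(resume_cmd["arguments"]["resume"])  # -> True
```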
+Postcopy states
+---------------
+
+Postcopy moves through a series of states (see postcopy_state) from
+ADVISE->DISCARD->LISTEN->RUNNING->END
+
+ - Advise
+
+    Set at the start of migration if postcopy is enabled, even
+    if it hasn't had the start command; here the destination
+    checks that its OS has the support needed for postcopy, and performs
+    setup to ensure the RAM mappings are suitable for later postcopy.
+    The destination will fail early in migration at this point if the
+    required OS support is not present.
+    (Triggered by reception of POSTCOPY_ADVISE command)
+
+ - Discard
+
+    Entered on receipt of the first 'discard' command; prior to
+    the first Discard being performed, hugepages are switched off
+    (using madvise) to ensure that no new huge pages are created
+    during the postcopy phase, and to cause any huge pages that
+    have discards on them to be broken.
+
+ - Listen
+
+    The first command in the package, POSTCOPY_LISTEN, switches
+    the destination state to Listen, and starts a new thread
+    (the 'listen thread') which takes over the job of receiving
+    pages off the migration stream, while the main thread carries
+    on processing the blob.  With this thread able to process page
+    reception, the destination now 'sensitises' the RAM to detect
+    any access to missing pages (on Linux using the 'userfault'
+    system).
+
+ - Running
+
+    POSTCOPY_RUN causes the destination to synchronise all
+    state and start the CPUs and IO devices running.  The main
+    thread now finishes processing the migration package and
+    now carries on as it would for normal precopy migration
+    (although it can't do the cleanup it would do as it
+    finishes a normal migration).
+
+ - Paused
+
+    Postcopy can run into a paused state (normally on both sides when it
+    happens), where all threads will be temporarily halted, mostly due to
+    network errors.  When reaching the paused state, migration will make
+    sure the QEMU binaries on both sides maintain the data without
+    corrupting the VM.  To continue the migration, the admin needs to fix
+    the migration channel using the QMP command 'migrate-recover' on the
+    destination node, then resume the migration using the QMP command
+    'migrate' again on the source node, with the resume=true flag set.
+
+ - End
+
+    The listen thread can now quit and perform the cleanup of migration
+    state; the migration is now complete.
+
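As an illustrative aside (not part of the patch above), the state progression, including the Paused detour on network errors, can be sketched as a small state machine; this is an illustration only, the real states live in QEMU's ``postcopy_state``:

```python
# Forward transitions in the normal postcopy flow; PAUSED can be entered
# from RUNNING on a network error and left again via recovery.
TRANSITIONS = {
    "ADVISE":  {"DISCARD"},
    "DISCARD": {"LISTEN"},
    "LISTEN":  {"RUNNING"},
    "RUNNING": {"PAUSED", "END"},
    "PAUSED":  {"RUNNING"},
    "END":     set(),
}

def advance(state, new_state):
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal postcopy transition {state} -> {new_state}")
    return new_state

state = "ADVISE"
for nxt in ("DISCARD", "LISTEN", "RUNNING", "END"):
    state = advance(state, nxt)
print(state)  # -> END
```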
+Source side page map
+--------------------
+
+The 'migration bitmap' in postcopy is basically the same as in precopy,
+where each bit indicates that a page is 'dirty' - i.e. needs
+sending.  During the precopy phase this is updated as the CPU dirties
+pages, however during postcopy the CPUs are stopped and nothing should
+dirty anything any more.  Instead, dirty bits are cleared when the
+relevant pages are sent during postcopy.
+
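As an illustrative aside (not part of the patch above), a minimal sketch of the migration bitmap behaviour during postcopy; the class is hypothetical, and QEMU actually keeps per-RAMBlock bitmaps:

```python
class MigrationBitmap:
    """One bit per page; set means dirty, i.e. the page needs sending."""
    def __init__(self, nr_pages):
        self.bits = [False] * nr_pages

    def mark_dirty(self, page):          # precopy: CPU dirties a page
        self.bits[page] = True

    def send_page(self, page):           # postcopy: clear the bit on send
        self.bits[page] = False

    def pending(self):
        return [i for i, dirty in enumerate(self.bits) if dirty]

bm = MigrationBitmap(4)
for page in (0, 2):
    bm.mark_dirty(page)
bm.send_page(2)      # CPUs are stopped, so nothing re-dirties page 2
print(bm.pending())  # -> [0]
```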
+Postcopy with hugepages
+-----------------------
+
+Postcopy now works with hugetlbfs-backed memory:
+
+  a) The Linux kernel on the destination must support userfault on hugepages.
+  b) The huge-page configuration on the source and destination VMs must be
+     identical; i.e. RAMBlocks on both sides must use the same page size.
+  c) Note that ``-mem-path /dev/hugepages``  will fall back to allocating normal
+     RAM if it doesn't have enough hugepages, triggering (b) to fail.
+     Using ``-mem-prealloc`` enforces the allocation using hugepages.
+  d) Care should be taken with the size of hugepage used; postcopy with 2MB
+     hugepages works well, however 1GB hugepages are likely to be problematic
+     since it takes ~1 second to transfer a 1GB hugepage across a 10Gbps link,
+     and until the full page is transferred the destination thread is blocked.
+
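As an illustrative aside (not part of the patch above), the ~1 second figure for a 1GB hugepage over a 10Gbps link follows directly from the link arithmetic:

```python
def transfer_seconds(page_bytes, link_gbps):
    # Bytes -> bits, divided by the link rate in bits per second.
    return page_bytes * 8 / (link_gbps * 1e9)

# A 2MB hugepage moves quickly; a 1GB hugepage blocks the faulting
# destination thread for the better part of a second.
print(round(transfer_seconds(1 << 30, 10), 2))  # -> 0.86
print(round(transfer_seconds(2 << 20, 10), 4))  # -> 0.0017
```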
+Postcopy with shared memory
+---------------------------
+
+Postcopy migration with shared memory needs explicit support from the other
+processes that share memory and from QEMU.  There are restrictions on the
+types of shared memory that userfault can support.
+
+The Linux kernel userfault support works on ``/dev/shm`` memory and on ``hugetlbfs``
+(although the kernel doesn't provide an equivalent to ``madvise(MADV_DONTNEED)``
+for hugetlbfs which may be a problem in some configurations).
+
+The vhost-user code in QEMU supports clients that have Postcopy support,
+and the ``vhost-user-bridge`` (in ``tests/``) and the DPDK package have changes
+to support postcopy.
+
+The client needs to open a userfaultfd and register the areas
+of memory that it maps with userfault.  The client must then pass the
+userfaultfd back to QEMU together with a mapping table that allows
+fault addresses in the client's address space to be converted back to
+RAMBlock/offsets.  The client's userfaultfd is added to the postcopy
+fault-thread and page requests are made on behalf of the client by QEMU.
+QEMU performs 'wake' operations on the client's userfaultfd to allow it
+to continue after a page has arrived.
+
+.. note::
+  There are two future improvements that would be nice:
+    a) Some way to make QEMU ignorant of the addresses in the client's
+       address space
+    b) Avoiding the need for QEMU to perform ufd-wake calls after the
+       pages have arrived
+
+Retro-fitting postcopy to existing clients is possible:
+  a) A mechanism is needed for the registration with userfault as above,
+     and the registration needs to be coordinated with the phases of
+     postcopy.  In vhost-user extra messages are added to the existing
+     control channel.
+  b) Any thread that can block due to guest memory accesses must be
+     identified and the implication understood; for example if the
+     guest memory access is made while holding a lock then all other
+     threads waiting for that lock will also be blocked.
+
+Postcopy Preemption Mode
+------------------------
+
+Postcopy preempt is a capability introduced in the QEMU 8.0 release.  It
+allows urgent pages (those explicitly page-fault requested by the
+destination QEMU) to be sent in a separate preempt channel, rather than
+queued in the background migration channel.  Anyone who cares about the
+latency of page faults during a postcopy migration should enable this
+feature.  By default, it's not enabled.
-- 
2.41.0




* [PATCH 07/10] docs/migration: Split "dirty limit"
  2024-01-09  6:46 [PATCH 00/10] docs/migration: Reorganize migration documentations peterx
                   ` (5 preceding siblings ...)
  2024-01-09  6:46 ` [PATCH 06/10] docs/migration: Split "Postcopy" peterx
@ 2024-01-09  6:46 ` peterx
  2024-01-09  7:06   ` Cédric Le Goater
  2024-01-09  6:46 ` [PATCH 08/10] docs/migration: Organize "Postcopy" page peterx
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 29+ messages in thread
From: peterx @ 2024-01-09  6:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cédric Le Goater, Michael S . Tsirkin, peterx,
	Alex Williamson, Jason Wang, Bandan Das, Prasad Pandit,
	Fabiano Rosas, Yong Huang

From: Peter Xu <peterx@redhat.com>

Split that into a separate file, put under "features".

Cc: Yong Huang <yong.huang@smartx.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 docs/devel/migration/dirty-limit.rst | 71 ++++++++++++++++++++++++++++
 docs/devel/migration/features.rst    |  1 +
 docs/devel/migration/main.rst        | 71 ----------------------------
 3 files changed, 72 insertions(+), 71 deletions(-)
 create mode 100644 docs/devel/migration/dirty-limit.rst

diff --git a/docs/devel/migration/dirty-limit.rst b/docs/devel/migration/dirty-limit.rst
new file mode 100644
index 0000000000..8f32329d5f
--- /dev/null
+++ b/docs/devel/migration/dirty-limit.rst
@@ -0,0 +1,71 @@
+Dirty limit
+===========
+
+The dirty limit, short for dirty page rate upper limit, is a new capability
+introduced in the 8.1 QEMU release that uses a new algorithm based on the KVM
+dirty ring to throttle down the guest during live migration.
+
+The algorithm framework is as follows:
+
+::
+
+  ------------------------------------------------------------------------------
+  main   --------------> throttle thread ------------> PREPARE(1) <--------
+  thread  \                                                |              |
+           \                                               |              |
+            \                                              V              |
+             -\                                        CALCULATE(2)       |
+               \                                           |              |
+                \                                          |              |
+                 \                                         V              |
+                  \                                    SET PENALTY(3) -----
+                   -\                                      |
+                     \                                     |
+                      \                                    V
+                       -> virtual CPU thread -------> ACCEPT PENALTY(4)
+  ------------------------------------------------------------------------------
+
+When the QMP command qmp_set_vcpu_dirty_limit is called for the first time,
+the QEMU main thread starts the throttle thread. The throttle thread, once
+launched, executes the loop, which consists of three steps:
+
+  - PREPARE (1)
+
+     The entire work of PREPARE (1) is preparation for the second stage,
+     CALCULATE(2), as the name implies. It involves preparing the dirty
+     page rate value and the corresponding upper limit of the VM:
+     The dirty page rate is calculated via the KVM dirty ring mechanism,
+     which tells QEMU how many dirty pages a virtual CPU has had since the
+     last KVM_EXIT_DIRTY_RING_FULL exception; the dirty page rate upper
+     limit is specified by the caller, so it is fetched directly.
+
+  - CALCULATE (2)
+
+     Calculate a suitable sleep period for each virtual CPU, which will be
+     used to determine the penalty for the target virtual CPU. The
+     computation must be done carefully in order to reduce the dirty page
+     rate progressively down to the upper limit without oscillation. To
+     achieve this, two strategies are provided: the first is to add or
+     subtract sleep time based on the ratio of the current dirty page rate
+     to the limit, which is used when the current dirty page rate is far
+     from the limit; the second is to add or subtract a fixed time when
+     the current dirty page rate is close to the limit.
+
+  - SET PENALTY (3)
+
+     Set the sleep time for each virtual CPU that should be penalized based
+     on the results of the calculation supplied by step CALCULATE (2).
+
+After completing the three stages above, the throttle thread loops back
+to step PREPARE (1) until the dirty limit is reached.
+
+On the other hand, each virtual CPU thread reads its sleep duration and
+sleeps in the path of the KVM_EXIT_DIRTY_RING_FULL exception handler; that
+is ACCEPT PENALTY (4).  Virtual CPUs running write-heavy workloads will
+naturally exit on this path and get penalized, whereas virtual CPUs doing
+mostly reads will not.
+
+In summary, thanks to the KVM dirty ring technology, the dirty limit
+algorithm will restrict virtual CPUs as needed to keep their dirty page
+rate inside the limit.  This leads to steadier read performance during
+live migration and can help improve the responsiveness of large guests.
diff --git a/docs/devel/migration/features.rst b/docs/devel/migration/features.rst
index 0054e0c900..e257d0d100 100644
--- a/docs/devel/migration/features.rst
+++ b/docs/devel/migration/features.rst
@@ -7,3 +7,4 @@ Migration has plenty of features to support different use cases.
    :maxdepth: 2
 
    postcopy
+   dirty-limit
diff --git a/docs/devel/migration/main.rst b/docs/devel/migration/main.rst
index 051ea43f0e..00b9c3d32f 100644
--- a/docs/devel/migration/main.rst
+++ b/docs/devel/migration/main.rst
@@ -573,74 +573,3 @@ path.
      Return path  - opened by main thread, written by main thread AND postcopy
      thread (protected by rp_mutex)
 
-Dirty limit
-=====================
-The dirty limit, short for dirty page rate upper limit, is a new capability
-introduced in the 8.1 QEMU release that uses a new algorithm based on the KVM
-dirty ring to throttle down the guest during live migration.
-
-The algorithm framework is as follows:
-
-::
-
-  ------------------------------------------------------------------------------
-  main   --------------> throttle thread ------------> PREPARE(1) <--------
-  thread  \                                                |              |
-           \                                               |              |
-            \                                              V              |
-             -\                                        CALCULATE(2)       |
-               \                                           |              |
-                \                                          |              |
-                 \                                         V              |
-                  \                                    SET PENALTY(3) -----
-                   -\                                      |
-                     \                                     |
-                      \                                    V
-                       -> virtual CPU thread -------> ACCEPT PENALTY(4)
-  ------------------------------------------------------------------------------
-
-When the qmp command qmp_set_vcpu_dirty_limit is called for the first time,
-the QEMU main thread starts the throttle thread. The throttle thread, once
-launched, executes the loop, which consists of three steps:
-
-  - PREPARE (1)
-
-     The entire work of PREPARE (1) is preparation for the second stage,
-     CALCULATE(2), as the name implies. It involves preparing the dirty
-     page rate value and the corresponding upper limit of the VM:
-     The dirty page rate is calculated via the KVM dirty ring mechanism,
-     which tells QEMU how many dirty pages a virtual CPU has had since the
-     last KVM_EXIT_DIRTY_RING_FULL exception; The dirty page rate upper
-     limit is specified by caller, therefore fetch it directly.
-
-  - CALCULATE (2)
-
-     Calculate a suitable sleep period for each virtual CPU, which will be
-     used to determine the penalty for the target virtual CPU. The
-     computation must be done carefully in order to reduce the dirty page
-     rate progressively down to the upper limit without oscillation. To
-     achieve this, two strategies are provided: the first is to add or
-     subtract sleep time based on the ratio of the current dirty page rate
-     to the limit, which is used when the current dirty page rate is far
-     from the limit; the second is to add or subtract a fixed time when
-     the current dirty page rate is close to the limit.
-
-  - SET PENALTY (3)
-
-     Set the sleep time for each virtual CPU that should be penalized based
-     on the results of the calculation supplied by step CALCULATE (2).
-
-After completing the three above stages, the throttle thread loops back
-to step PREPARE (1) until the dirty limit is reached.
-
-On the other hand, each virtual CPU thread reads the sleep duration and
-sleeps in the path of the KVM_EXIT_DIRTY_RING_FULL exception handler, that
-is ACCEPT PENALTY (4). Virtual CPUs tied with writing processes will
-obviously exit to the path and get penalized, whereas virtual CPUs involved
-with read processes will not.
-
-In summary, thanks to the KVM dirty ring technology, the dirty limit
-algorithm will restrict virtual CPUs as needed to keep their dirty page
-rate inside the limit. This leads to more steady reading performance during
-live migration and can aid in improving large guest responsiveness.
-
-- 
2.41.0




* [PATCH 08/10] docs/migration: Organize "Postcopy" page
  2024-01-09  6:46 [PATCH 00/10] docs/migration: Reorganize migration documentations peterx
                   ` (6 preceding siblings ...)
  2024-01-09  6:46 ` [PATCH 07/10] docs/migration: Split "dirty limit" peterx
@ 2024-01-09  6:46 ` peterx
  2024-01-09  7:20   ` Cédric Le Goater
  2024-01-09  6:46 ` [PATCH 09/10] docs/migration: Further move vfio to be feature of migration peterx
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 29+ messages in thread
From: peterx @ 2024-01-09  6:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cédric Le Goater, Michael S . Tsirkin, peterx,
	Alex Williamson, Jason Wang, Bandan Das, Prasad Pandit,
	Fabiano Rosas

From: Peter Xu <peterx@redhat.com>

Reorganize the page, moving things around, and add a few
headlines ("Postcopy internals", "Postcopy features") to cover sub-areas.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 docs/devel/migration/postcopy.rst | 159 ++++++++++++++++--------------
 1 file changed, 84 insertions(+), 75 deletions(-)

diff --git a/docs/devel/migration/postcopy.rst b/docs/devel/migration/postcopy.rst
index d60eec06ab..6c51e96d79 100644
--- a/docs/devel/migration/postcopy.rst
+++ b/docs/devel/migration/postcopy.rst
@@ -1,6 +1,9 @@
+========
 Postcopy
 ========
 
+.. contents::
+
 'Postcopy' migration is a way to deal with migrations that refuse to converge
 (or take too long to converge).  Its plus side is that there is an upper bound
 on the amount of migration traffic and time it takes; the down side is that during
@@ -14,7 +17,7 @@ Postcopy can be combined with precopy (i.e. normal migration) so that if precopy
 doesn't finish in a given time the switch is made to postcopy.
 
 Enabling postcopy
------------------
+=================
 
 To enable postcopy, issue this command on the monitor (both source and
 destination) prior to the start of migration:
@@ -49,8 +52,71 @@ time per vCPU.
   ``migrate_set_parameter`` is ignored (to avoid delaying requested pages that
   the destination is waiting for).
 
-Postcopy device transfer
-------------------------
+Postcopy internals
+==================
+
+State machine
+-------------
+
+Postcopy moves through a series of states (see postcopy_state) from
+ADVISE->DISCARD->LISTEN->RUNNING->END
+
+ - Advise
+
+    Set at the start of migration if postcopy is enabled, even
+    if it hasn't had the start command; here the destination
+    checks that its OS has the support needed for postcopy, and performs
+    setup to ensure the RAM mappings are suitable for later postcopy.
+    The destination will fail early in migration at this point if the
+    required OS support is not present.
+    (Triggered by reception of POSTCOPY_ADVISE command)
+
+ - Discard
+
+    Entered on receipt of the first 'discard' command; prior to
+    the first Discard being performed, hugepages are switched off
+    (using madvise) to ensure that no new huge pages are created
+    during the postcopy phase, and to cause any huge pages that
+    have discards on them to be broken.
+
+ - Listen
+
+    The first command in the package, POSTCOPY_LISTEN, switches
+    the destination state to Listen, and starts a new thread
+    (the 'listen thread') which takes over the job of receiving
+    pages off the migration stream, while the main thread carries
+    on processing the blob.  With this thread able to process page
+    reception, the destination now 'sensitises' the RAM to detect
+    any access to missing pages (on Linux using the 'userfault'
+    system).
+
+ - Running
+
+    POSTCOPY_RUN causes the destination to synchronise all
+    state and start the CPUs and IO devices running.  The main
+    thread now finishes processing the migration package and
+    now carries on as it would for normal precopy migration
+    (although it can't do the cleanup it would do as it
+    finishes a normal migration).
+
+ - Paused
+
+    Postcopy can run into a paused state (normally on both sides at the
+    same time), where all threads are temporarily halted, mostly due to
+    network errors.  When the paused state is reached, migration makes sure
+    the QEMU binaries on both sides maintain the data without corrupting
+    the VM.  To continue the migration, the admin needs to fix the
+    migration channel using the QMP command 'migrate-recover' on the
+    destination node, then resume the migration using the QMP command
+    'migrate' again on the source node, with the resume=true flag set.
+
+ - End
+
+    The listen thread can now quit, and perform the cleanup of migration
+    state, the migration is now complete.
+
+Device transfer
+---------------
 
 Loading of device data may cause the device emulation to access guest RAM
 that may trigger faults that have to be resolved by the source, as such
@@ -130,7 +196,20 @@ processing.
    is no longer used by migration, while the listen thread carries on servicing
    page data until the end of migration.
 
-Postcopy Recovery
+Source side page bitmap
+-----------------------
+
+The 'migration bitmap' in postcopy is basically the same as in precopy:
+each bit indicates that a page is 'dirty' - i.e. needs sending.  During
+the precopy phase this is updated as the CPU dirties pages; during
+postcopy, however, the CPUs are stopped and nothing should dirty anything
+any more.  Instead, dirty bits are cleared when the relevant pages are
+sent during postcopy.
+
+Postcopy features
+=================
+
+Postcopy recovery
 -----------------
 
 Comparing to precopy, postcopy is special on error handlings.  When any
@@ -166,76 +245,6 @@ configurations of the guest.  For example, when with async page fault
 enabled, logically the guest can proactively schedule out the threads
 accessing missing pages.
 
-Postcopy states
----------------
-
-Postcopy moves through a series of states (see postcopy_state) from
-ADVISE->DISCARD->LISTEN->RUNNING->END
-
- - Advise
-
-    Set at the start of migration if postcopy is enabled, even
-    if it hasn't had the start command; here the destination
-    checks that its OS has the support needed for postcopy, and performs
-    setup to ensure the RAM mappings are suitable for later postcopy.
-    The destination will fail early in migration at this point if the
-    required OS support is not present.
-    (Triggered by reception of POSTCOPY_ADVISE command)
-
- - Discard
-
-    Entered on receipt of the first 'discard' command; prior to
-    the first Discard being performed, hugepages are switched off
-    (using madvise) to ensure that no new huge pages are created
-    during the postcopy phase, and to cause any huge pages that
-    have discards on them to be broken.
-
- - Listen
-
-    The first command in the package, POSTCOPY_LISTEN, switches
-    the destination state to Listen, and starts a new thread
-    (the 'listen thread') which takes over the job of receiving
-    pages off the migration stream, while the main thread carries
-    on processing the blob.  With this thread able to process page
-    reception, the destination now 'sensitises' the RAM to detect
-    any access to missing pages (on Linux using the 'userfault'
-    system).
-
- - Running
-
-    POSTCOPY_RUN causes the destination to synchronise all
-    state and start the CPUs and IO devices running.  The main
-    thread now finishes processing the migration package and
-    now carries on as it would for normal precopy migration
-    (although it can't do the cleanup it would do as it
-    finishes a normal migration).
-
- - Paused
-
-    Postcopy can run into a paused state (normally on both sides when
-    happens), where all threads will be temporarily halted mostly due to
-    network errors.  When reaching paused state, migration will make sure
-    the qemu binary on both sides maintain the data without corrupting
-    the VM.  To continue the migration, the admin needs to fix the
-    migration channel using the QMP command 'migrate-recover' on the
-    destination node, then resume the migration using QMP command 'migrate'
-    again on source node, with resume=true flag set.
-
- - End
-
-    The listen thread can now quit, and perform the cleanup of migration
-    state, the migration is now complete.
-
-Source side page map
---------------------
-
-The 'migration bitmap' in postcopy is basically the same as in the precopy,
-where each of the bit to indicate that page is 'dirty' - i.e. needs
-sending.  During the precopy phase this is updated as the CPU dirties
-pages, however during postcopy the CPUs are stopped and nothing should
-dirty anything any more. Instead, dirty bits are cleared when the relevant
-pages are sent during postcopy.
-
 Postcopy with hugepages
 -----------------------
 
@@ -293,7 +302,7 @@ Retro-fitting postcopy to existing clients is possible:
      guest memory access is made while holding a lock then all other
      threads waiting for that lock will also be blocked.
 
-Postcopy Preemption Mode
+Postcopy preemption mode
 ------------------------
 
 Postcopy preempt is a new capability introduced in 8.0 QEMU release, it
-- 
2.41.0




* [PATCH 09/10] docs/migration: Further move vfio to be feature of migration
  2024-01-09  6:46 [PATCH 00/10] docs/migration: Reorganize migration documentations peterx
                   ` (7 preceding siblings ...)
  2024-01-09  6:46 ` [PATCH 08/10] docs/migration: Organize "Postcopy" page peterx
@ 2024-01-09  6:46 ` peterx
  2024-01-09  7:20   ` Cédric Le Goater
  2024-01-09  6:46 ` [PATCH 10/10] docs/migration: Further move virtio " peterx
                   ` (2 subsequent siblings)
  11 siblings, 1 reply; 29+ messages in thread
From: peterx @ 2024-01-09  6:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cédric Le Goater, Michael S . Tsirkin, peterx,
	Alex Williamson, Jason Wang, Bandan Das, Prasad Pandit,
	Fabiano Rosas

From: Peter Xu <peterx@redhat.com>

Move it one layer down, so taking VFIO-migration as a feature for
migration.

Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Cédric Le Goater <clg@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 docs/devel/migration/features.rst | 1 +
 docs/devel/migration/index.rst    | 1 -
 2 files changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/devel/migration/features.rst b/docs/devel/migration/features.rst
index e257d0d100..dea016f707 100644
--- a/docs/devel/migration/features.rst
+++ b/docs/devel/migration/features.rst
@@ -8,3 +8,4 @@ Migration has plenty of features to support different use cases.
 
    postcopy
    dirty-limit
+   vfio
diff --git a/docs/devel/migration/index.rst b/docs/devel/migration/index.rst
index 7cf62541b9..2479e8ecb7 100644
--- a/docs/devel/migration/index.rst
+++ b/docs/devel/migration/index.rst
@@ -10,6 +10,5 @@ QEMU live migration works.
    main
    features
    compatibility
-   vfio
    virtio
    best-practises
-- 
2.41.0




* [PATCH 10/10] docs/migration: Further move virtio to be feature of migration
  2024-01-09  6:46 [PATCH 00/10] docs/migration: Reorganize migration documentations peterx
                   ` (8 preceding siblings ...)
  2024-01-09  6:46 ` [PATCH 09/10] docs/migration: Further move vfio to be feature of migration peterx
@ 2024-01-09  6:46 ` peterx
  2024-01-09  7:20   ` Cédric Le Goater
  2024-01-09 10:49 ` [PATCH 00/10] docs/migration: Reorganize migration documentations Peter Xu
  2024-01-11  6:20 ` Peter Xu
  11 siblings, 1 reply; 29+ messages in thread
From: peterx @ 2024-01-09  6:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cédric Le Goater, Michael S . Tsirkin, peterx,
	Alex Williamson, Jason Wang, Bandan Das, Prasad Pandit,
	Fabiano Rosas

From: Peter Xu <peterx@redhat.com>

Move it one layer down, so taking Virtio-migration as a feature for
migration.

Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 docs/devel/migration/features.rst | 1 +
 docs/devel/migration/index.rst    | 1 -
 2 files changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/devel/migration/features.rst b/docs/devel/migration/features.rst
index dea016f707..a9acaf618e 100644
--- a/docs/devel/migration/features.rst
+++ b/docs/devel/migration/features.rst
@@ -9,3 +9,4 @@ Migration has plenty of features to support different use cases.
    postcopy
    dirty-limit
    vfio
+   virtio
diff --git a/docs/devel/migration/index.rst b/docs/devel/migration/index.rst
index 2479e8ecb7..7b7a706e35 100644
--- a/docs/devel/migration/index.rst
+++ b/docs/devel/migration/index.rst
@@ -10,5 +10,4 @@ QEMU live migration works.
    main
    features
    compatibility
-   virtio
    best-practises
-- 
2.41.0




* Re: [PATCH 01/10] docs/migration: Create migration/ directory
  2024-01-09  6:46 ` [PATCH 01/10] docs/migration: Create migration/ directory peterx
@ 2024-01-09  6:52   ` Cédric Le Goater
  0 siblings, 0 replies; 29+ messages in thread
From: Cédric Le Goater @ 2024-01-09  6:52 UTC (permalink / raw)
  To: peterx, qemu-devel
  Cc: Michael S . Tsirkin, Alex Williamson, Jason Wang, Bandan Das,
	Prasad Pandit, Fabiano Rosas

On 1/9/24 07:46, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
> 
> Migration documentation is growing into a single file too large.  Create a
> sub-directory for it for a split.
> 
> We also already have separate vfio/virtio documentations, move it all over
> into the directory.
> 
> Note that the virtio one is still not yet converted to rST.  That is a job
> for later.
> 
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Cc: Jason Wang <jasowang@redhat.com>
> Cc: Alex Williamson <alex.williamson@redhat.com>
> Cc: Cédric Le Goater <clg@redhat.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>


Reviewed-by: Cédric Le Goater <clg@redhat.com>

Thanks,

C.


> ---
>   docs/devel/index-internals.rst                            | 2 +-
>   docs/devel/{migration.rst => migration/main.rst}          | 0
>   docs/devel/{vfio-migration.rst => migration/vfio.rst}     | 0
>   docs/devel/{virtio-migration.txt => migration/virtio.txt} | 0
>   4 files changed, 1 insertion(+), 1 deletion(-)
>   rename docs/devel/{migration.rst => migration/main.rst} (100%)
>   rename docs/devel/{vfio-migration.rst => migration/vfio.rst} (100%)
>   rename docs/devel/{virtio-migration.txt => migration/virtio.txt} (100%)
> 
> diff --git a/docs/devel/index-internals.rst b/docs/devel/index-internals.rst
> index 3def4a138b..a41d62c1eb 100644
> --- a/docs/devel/index-internals.rst
> +++ b/docs/devel/index-internals.rst
> @@ -11,7 +11,7 @@ Details about QEMU's various subsystems including how to add features to them.
>      block-coroutine-wrapper
>      clocks
>      ebpf_rss
> -   migration
> +   migration/main
>      multi-process
>      reset
>      s390-cpu-topology
> diff --git a/docs/devel/migration.rst b/docs/devel/migration/main.rst
> similarity index 100%
> rename from docs/devel/migration.rst
> rename to docs/devel/migration/main.rst
> diff --git a/docs/devel/vfio-migration.rst b/docs/devel/migration/vfio.rst
> similarity index 100%
> rename from docs/devel/vfio-migration.rst
> rename to docs/devel/migration/vfio.rst
> diff --git a/docs/devel/virtio-migration.txt b/docs/devel/migration/virtio.txt
> similarity index 100%
> rename from docs/devel/virtio-migration.txt
> rename to docs/devel/migration/virtio.txt




* Re: [PATCH 02/10] docs/migration: Create index page
  2024-01-09  6:46 ` [PATCH 02/10] docs/migration: Create index page peterx
@ 2024-01-09  6:53   ` Cédric Le Goater
  0 siblings, 0 replies; 29+ messages in thread
From: Cédric Le Goater @ 2024-01-09  6:53 UTC (permalink / raw)
  To: peterx, qemu-devel
  Cc: Michael S . Tsirkin, Alex Williamson, Jason Wang, Bandan Das,
	Prasad Pandit, Fabiano Rosas

On 1/9/24 07:46, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
> 
> Create an index page for migration module.  Move VFIO migration there too.
> A trivial touch-up on the title to use lower case there.
> 
> Since then we'll have "migration" as the top title, make the main doc file
> renamed to "migration framework".
> 
> Cc: Alex Williamson <alex.williamson@redhat.com>
> Cc: Cédric Le Goater <clg@redhat.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>


Reviewed-by: Cédric Le Goater <clg@redhat.com>

Thanks,

C.


> ---
>   docs/devel/index-internals.rst |  3 +--
>   docs/devel/migration/index.rst | 11 +++++++++++
>   docs/devel/migration/main.rst  |  6 +++---
>   docs/devel/migration/vfio.rst  |  2 +-
>   4 files changed, 16 insertions(+), 6 deletions(-)
>   create mode 100644 docs/devel/migration/index.rst
> 
> diff --git a/docs/devel/index-internals.rst b/docs/devel/index-internals.rst
> index a41d62c1eb..5636e9cf1d 100644
> --- a/docs/devel/index-internals.rst
> +++ b/docs/devel/index-internals.rst
> @@ -11,13 +11,12 @@ Details about QEMU's various subsystems including how to add features to them.
>      block-coroutine-wrapper
>      clocks
>      ebpf_rss
> -   migration/main
> +   migration/index
>      multi-process
>      reset
>      s390-cpu-topology
>      s390-dasd-ipl
>      tracing
> -   vfio-migration
>      vfio-iommufd
>      writing-monitor-commands
>      virtio-backends
> diff --git a/docs/devel/migration/index.rst b/docs/devel/migration/index.rst
> new file mode 100644
> index 0000000000..02cfdcc969
> --- /dev/null
> +++ b/docs/devel/migration/index.rst
> @@ -0,0 +1,11 @@
> +Migration
> +=========
> +
> +This is the main entry for QEMU migration documentations.  It explains how
> +QEMU live migration works.
> +
> +.. toctree::
> +   :maxdepth: 2
> +
> +   main
> +   vfio
> diff --git a/docs/devel/migration/main.rst b/docs/devel/migration/main.rst
> index 95351ba51f..62bf027fb4 100644
> --- a/docs/devel/migration/main.rst
> +++ b/docs/devel/migration/main.rst
> @@ -1,6 +1,6 @@
> -=========
> -Migration
> -=========
> +===================
> +Migration framework
> +===================
>   
>   QEMU has code to load/save the state of the guest that it is running.
>   These are two complementary operations.  Saving the state just does
> diff --git a/docs/devel/migration/vfio.rst b/docs/devel/migration/vfio.rst
> index 605fe60e96..c49482eab6 100644
> --- a/docs/devel/migration/vfio.rst
> +++ b/docs/devel/migration/vfio.rst
> @@ -1,5 +1,5 @@
>   =====================
> -VFIO device Migration
> +VFIO device migration
>   =====================
>   
>   Migration of virtual machine involves saving the state for each device that




* Re: [PATCH 03/10] docs/migration: Convert virtio.txt into rST
  2024-01-09  6:46 ` [PATCH 03/10] docs/migration: Convert virtio.txt into rST peterx
@ 2024-01-09  7:02   ` Cédric Le Goater
  0 siblings, 0 replies; 29+ messages in thread
From: Cédric Le Goater @ 2024-01-09  7:02 UTC (permalink / raw)
  To: peterx, qemu-devel
  Cc: Michael S . Tsirkin, Alex Williamson, Jason Wang, Bandan Das,
	Prasad Pandit, Fabiano Rosas

On 1/9/24 07:46, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
> 
> Convert the plain old .txt into .rst, add it into migration/index.rst.
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>


Reviewed-by: Cédric Le Goater <clg@redhat.com>

Thanks,

C.


> ---
>   docs/devel/migration/index.rst  |   1 +
>   docs/devel/migration/virtio.rst | 115 ++++++++++++++++++++++++++++++++
>   docs/devel/migration/virtio.txt | 108 ------------------------------
>   3 files changed, 116 insertions(+), 108 deletions(-)
>   create mode 100644 docs/devel/migration/virtio.rst
>   delete mode 100644 docs/devel/migration/virtio.txt
> 
> diff --git a/docs/devel/migration/index.rst b/docs/devel/migration/index.rst
> index 02cfdcc969..2cb701c77c 100644
> --- a/docs/devel/migration/index.rst
> +++ b/docs/devel/migration/index.rst
> @@ -9,3 +9,4 @@ QEMU live migration works.
>   
>      main
>      vfio
> +   virtio
> diff --git a/docs/devel/migration/virtio.rst b/docs/devel/migration/virtio.rst
> new file mode 100644
> index 0000000000..611a18b821
> --- /dev/null
> +++ b/docs/devel/migration/virtio.rst
> @@ -0,0 +1,115 @@
> +=======================
> +Virtio device migration
> +=======================
> +
> +Copyright 2015 IBM Corp.
> +
> +This work is licensed under the terms of the GNU GPL, version 2 or later.  See
> +the COPYING file in the top-level directory.
> +
> +Saving and restoring the state of virtio devices is a bit of a twisty maze,
> +for several reasons:
> +
> +- state is distributed between several parts:
> +
> +  - virtio core, for common fields like features, number of queues, ...
> +
> +  - virtio transport (pci, ccw, ...), for the different proxy devices and
> +    transport specific state (msix vectors, indicators, ...)
> +
> +  - virtio device (net, blk, ...), for the different device types and their
> +    state (mac address, request queue, ...)
> +
> +- most fields are saved via the stream interface; subsequently, subsections
> +  have been added to make cross-version migration possible
> +
> +This file attempts to document the current procedure and point out some
> +caveats.
> +
> +Save state procedure
> +====================
> +
> +::
> +
> +  virtio core               virtio transport          virtio device
> +  -----------               ----------------          -------------
> +
> +                                                      save() function registered
> +                                                      via VMState wrapper on
> +                                                      device class
> +  virtio_save()                                       <----------
> +               ------>      save_config()
> +                            - save proxy device
> +                            - save transport-specific
> +                              device fields
> +  - save common device
> +    fields
> +  - save common virtqueue
> +    fields
> +               ------>      save_queue()
> +                            - save transport-specific
> +                              virtqueue fields
> +               ------>                               save_device()
> +                                                     - save device-specific
> +                                                       fields
> +  - save subsections
> +    - device endianness,
> +      if changed from
> +      default endianness
> +    - 64 bit features, if
> +      any high feature bit
> +      is set
> +    - virtio-1 virtqueue
> +      fields, if VERSION_1
> +      is set
> +
> +Load state procedure
> +====================
> +
> +::
> +
> +  virtio core               virtio transport          virtio device
> +  -----------               ----------------          -------------
> +
> +                                                      load() function registered
> +                                                      via VMState wrapper on
> +                                                      device class
> +  virtio_load()                                       <----------
> +               ------>      load_config()
> +                            - load proxy device
> +                            - load transport-specific
> +                              device fields
> +  - load common device
> +    fields
> +  - load common virtqueue
> +    fields
> +               ------>      load_queue()
> +                            - load transport-specific
> +                              virtqueue fields
> +  - notify guest
> +               ------>                               load_device()
> +                                                     - load device-specific
> +                                                       fields
> +  - load subsections
> +    - device endianness
> +    - 64 bit features
> +    - virtio-1 virtqueue
> +      fields
> +  - sanitize endianness
> +  - sanitize features
> +  - virtqueue index sanity
> +    check
> +                                                     - feature-dependent setup
> +
> +Implications of this setup
> +==========================
> +
> +Devices need to be careful in their state processing during load: the
> +load_device() procedure is invoked by the core before subsections have
> +been loaded. Any code that depends on information transmitted in
> +subsections therefore has to be invoked in the device's load() function
> +_after_ virtio_load() has returned (e.g. code depending on features).
> +
> +Any extension of the state being migrated should be done in subsections
> +added to the core, for compatibility reasons. If transport or device
> +specific state is added, the core needs to invoke a callback from the
> +new subsection.
> diff --git a/docs/devel/migration/virtio.txt b/docs/devel/migration/virtio.txt
> deleted file mode 100644
> index 98a6b0ffb5..0000000000
> --- a/docs/devel/migration/virtio.txt
> +++ /dev/null
> @@ -1,108 +0,0 @@
> -Virtio devices and migration
> -============================
> -
> -Copyright 2015 IBM Corp.
> -
> -This work is licensed under the terms of the GNU GPL, version 2 or later.  See
> -the COPYING file in the top-level directory.
> -
> -Saving and restoring the state of virtio devices is a bit of a twisty maze,
> -for several reasons:
> -- state is distributed between several parts:
> -  - virtio core, for common fields like features, number of queues, ...
> -  - virtio transport (pci, ccw, ...), for the different proxy devices and
> -    transport specific state (msix vectors, indicators, ...)
> -  - virtio device (net, blk, ...), for the different device types and their
> -    state (mac address, request queue, ...)
> -- most fields are saved via the stream interface; subsequently, subsections
> -  have been added to make cross-version migration possible
> -
> -This file attempts to document the current procedure and point out some
> -caveats.
> -
> -
> -Save state procedure
> -====================
> -
> -virtio core               virtio transport          virtio device
> ------------               ----------------          -------------
> -
> -                                                    save() function registered
> -                                                    via VMState wrapper on
> -                                                    device class
> -virtio_save()                                       <----------
> -             ------>      save_config()
> -                          - save proxy device
> -                          - save transport-specific
> -                            device fields
> -- save common device
> -  fields
> -- save common virtqueue
> -  fields
> -             ------>      save_queue()
> -                          - save transport-specific
> -                            virtqueue fields
> -             ------>                               save_device()
> -                                                   - save device-specific
> -                                                     fields
> -- save subsections
> -  - device endianness,
> -    if changed from
> -    default endianness
> -  - 64 bit features, if
> -    any high feature bit
> -    is set
> -  - virtio-1 virtqueue
> -    fields, if VERSION_1
> -    is set
> -
> -
> -Load state procedure
> -====================
> -
> -virtio core               virtio transport          virtio device
> ------------               ----------------          -------------
> -
> -                                                    load() function registered
> -                                                    via VMState wrapper on
> -                                                    device class
> -virtio_load()                                       <----------
> -             ------>      load_config()
> -                          - load proxy device
> -                          - load transport-specific
> -                            device fields
> -- load common device
> -  fields
> -- load common virtqueue
> -  fields
> -             ------>      load_queue()
> -                          - load transport-specific
> -                            virtqueue fields
> -- notify guest
> -             ------>                               load_device()
> -                                                   - load device-specific
> -                                                     fields
> -- load subsections
> -  - device endianness
> -  - 64 bit features
> -  - virtio-1 virtqueue
> -    fields
> -- sanitize endianness
> -- sanitize features
> -- virtqueue index sanity
> -  check
> -                                                   - feature-dependent setup
> -
> -
> -Implications of this setup
> -==========================
> -
> -Devices need to be careful in their state processing during load: The
> -load_device() procedure is invoked by the core before subsections have
> -been loaded. Any code that depends on information transmitted in subsections
> -therefore has to be invoked in the device's load() function _after_
> -virtio_load() returned (like e.g. code depending on features).
> -
> -Any extension of the state being migrated should be done in subsections
> -added to the core for compatibility reasons. If transport or device specific
> -state is added, core needs to invoke a callback from the new subsection.
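The ordering constraint described under "Implications of this setup" can be illustrated with a small simulation (a hypothetical Python sketch, not QEMU code): load_device() runs before subsections are loaded, so feature-dependent setup must be deferred until after virtio_load() has returned.

```python
# Hypothetical sketch of the virtio load ordering described above; not
# QEMU code.  Names are illustrative only.

class VirtioDevice:
    def __init__(self):
        self.features = 0          # only valid after subsections are loaded
        self.device_fields = None
        self.setup_done = False

    def load_device(self, stream):
        # Called by the core *before* subsections: self.features is not
        # yet valid here, so no feature-dependent setup may happen.
        self.device_fields = stream["device"]

def virtio_load(dev, stream):
    dev.load_device(stream)            # device-specific fields first
    dev.features = stream["features"]  # subsections (e.g. 64 bit features) last

def device_load(dev, stream):
    """The device's own load() wrapper: feature-dependent setup goes
    *after* virtio_load() has returned."""
    virtio_load(dev, stream)
    dev.setup_done = (dev.features & 1) != 0  # now safe to look at features

dev = VirtioDevice()
device_load(dev, {"device": "mac=52:54:00:12:34:56", "features": 1})
print(dev.setup_done)  # prints: True
```

Had setup_done been computed inside load_device(), it would have observed the pre-subsection (stale) feature bits.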



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 04/10] docs/migration: Split "Backwards compatibility" separately
  2024-01-09  6:46 ` [PATCH 04/10] docs/migration: Split "Backwards compatibility" separately peterx
@ 2024-01-09  7:03   ` Cédric Le Goater
  0 siblings, 0 replies; 29+ messages in thread
From: Cédric Le Goater @ 2024-01-09  7:03 UTC (permalink / raw)
  To: peterx, qemu-devel
  Cc: Michael S . Tsirkin, Alex Williamson, Jason Wang, Bandan Das,
	Prasad Pandit, Fabiano Rosas

On 1/9/24 07:46, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
> 
> Split the section from main.rst into a separate file.  Reference it in the
> index.rst.
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>


Reviewed-by: Cédric Le Goater <clg@redhat.com>

Thanks,

C.





^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 05/10] docs/migration: Split "Debugging" and "Firmware"
  2024-01-09  6:46 ` [PATCH 05/10] docs/migration: Split "Debugging" and "Firmware" peterx
@ 2024-01-09  7:04   ` Cédric Le Goater
  2024-01-09 17:03   ` Fabiano Rosas
  1 sibling, 0 replies; 29+ messages in thread
From: Cédric Le Goater @ 2024-01-09  7:04 UTC (permalink / raw)
  To: peterx, qemu-devel
  Cc: Michael S . Tsirkin, Alex Williamson, Jason Wang, Bandan Das,
	Prasad Pandit, Fabiano Rosas

On 1/9/24 07:46, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
> 
> Move the two sections into a separate file called "best-practises.rst".
> Add the entry into index.
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>


Reviewed-by: Cédric Le Goater <clg@redhat.com>

Thanks,

C.


> ---
>   docs/devel/migration/best-practises.rst | 48 +++++++++++++++++++++++++
>   docs/devel/migration/index.rst          |  1 +
>   docs/devel/migration/main.rst           | 44 -----------------------
>   3 files changed, 49 insertions(+), 44 deletions(-)
>   create mode 100644 docs/devel/migration/best-practises.rst
> 
> diff --git a/docs/devel/migration/best-practises.rst b/docs/devel/migration/best-practises.rst
> new file mode 100644
> index 0000000000..ba122ae417
> --- /dev/null
> +++ b/docs/devel/migration/best-practises.rst
> @@ -0,0 +1,48 @@
> +==============
> +Best practises
> +==============
> +
> +Debugging
> +=========
> +
> +The migration stream can be analyzed thanks to ``scripts/analyze-migration.py``.
> +
> +Example usage:
> +
> +.. code-block:: shell
> +
> +  $ qemu-system-x86_64 -display none -monitor stdio
> +  (qemu) migrate "exec:cat > mig"
> +  (qemu) q
> +  $ ./scripts/analyze-migration.py -f mig
> +  {
> +    "ram (3)": {
> +        "section sizes": {
> +            "pc.ram": "0x0000000008000000",
> +  ...
> +
> +See also ``analyze-migration.py -h`` help for more options.
> +
> +Firmware
> +========
> +
> +Migration migrates the copies of RAM and ROM, and thus when running
> +on the destination it includes the firmware from the source. Even after
> +resetting a VM, the old firmware is used.  Only once QEMU has been restarted
> +is the new firmware in use.
> +
> +- Changes in firmware size can cause changes in the required RAMBlock size
> +  to hold the firmware and thus migration can fail.  In practice it's best
> +  to pad firmware images to convenient powers of 2 with plenty of space
> +  for growth.
> +
> +- Care should be taken with device emulation code so that newer
> +  emulation code can work with older firmware to allow forward migration.
> +
> +- Care should be taken with newer firmware so that backward migration
> +  to older systems with older device emulation code will work.
> +
> +In some cases it may be best to tie specific firmware versions to specific
> +versioned machine types to cut down on the combinations that will need
> +support.  This is also useful when newer versions of firmware outgrow
> +the padding.
> diff --git a/docs/devel/migration/index.rst b/docs/devel/migration/index.rst
> index 7fc02b9520..c09623b38f 100644
> --- a/docs/devel/migration/index.rst
> +++ b/docs/devel/migration/index.rst
> @@ -11,3 +11,4 @@ QEMU live migration works.
>      compatibility
>      vfio
>      virtio
> +   best-practises
> diff --git a/docs/devel/migration/main.rst b/docs/devel/migration/main.rst
> index b3e31bb52f..97811ce371 100644
> --- a/docs/devel/migration/main.rst
> +++ b/docs/devel/migration/main.rst
> @@ -52,27 +52,6 @@ All these migration protocols use the same infrastructure to
>   save/restore state devices.  This infrastructure is shared with the
>   savevm/loadvm functionality.
>   
> -Debugging
> -=========
> -
> -The migration stream can be analyzed thanks to ``scripts/analyze-migration.py``.
> -
> -Example usage:
> -
> -.. code-block:: shell
> -
> -  $ qemu-system-x86_64 -display none -monitor stdio
> -  (qemu) migrate "exec:cat > mig"
> -  (qemu) q
> -  $ ./scripts/analyze-migration.py -f mig
> -  {
> -    "ram (3)": {
> -        "section sizes": {
> -            "pc.ram": "0x0000000008000000",
> -  ...
> -
> -See also ``analyze-migration.py -h`` help for more options.
> -
>   Common infrastructure
>   =====================
>   
> @@ -970,26 +949,3 @@ the background migration channel.  Anyone who cares about latencies of page
>   faults during a postcopy migration should enable this feature.  By default,
>   it's not enabled.
>   
> -Firmware
> -========
> -
> -Migration migrates the copies of RAM and ROM, and thus when running
> -on the destination it includes the firmware from the source. Even after
> -resetting a VM, the old firmware is used.  Only once QEMU has been restarted
> -is the new firmware in use.
> -
> -- Changes in firmware size can cause changes in the required RAMBlock size
> -  to hold the firmware and thus migration can fail.  In practice it's best
> -  to pad firmware images to convenient powers of 2 with plenty of space
> -  for growth.
> -
> -- Care should be taken with device emulation code so that newer
> -  emulation code can work with older firmware to allow forward migration.
> -
> -- Care should be taken with newer firmware so that backward migration
> -  to older systems with older device emulation code will work.
> -
> -In some cases it may be best to tie specific firmware versions to specific
> -versioned machine types to cut down on the combinations that will need
> -support.  This is also useful when newer versions of firmware outgrow
> -the padding.
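The advice above about padding firmware images to powers of two can be sketched as a small helper (a hypothetical example, not part of QEMU):

```python
# Hypothetical helper illustrating the padding advice above: round a
# firmware image size up to the next power of two, so the RAMBlock size
# stays stable as the firmware grows.

def padded_firmware_size(size: int) -> int:
    if size <= 0:
        raise ValueError("firmware size must be positive")
    # Next power of two >= size (an exact power of two is returned unchanged).
    return 1 << (size - 1).bit_length()

print(padded_firmware_size(3 * 1024 * 1024))  # 3 MiB image -> prints: 4194304
```

A 3 MiB image padded to 4 MiB leaves 1 MiB of headroom before the RAMBlock size, and thus migration compatibility, is affected by firmware growth.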



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 06/10] docs/migration: Split "Postcopy"
  2024-01-09  6:46 ` [PATCH 06/10] docs/migration: Split "Postcopy" peterx
@ 2024-01-09  7:05   ` Cédric Le Goater
  0 siblings, 0 replies; 29+ messages in thread
From: Cédric Le Goater @ 2024-01-09  7:05 UTC (permalink / raw)
  To: peterx, qemu-devel
  Cc: Michael S . Tsirkin, Alex Williamson, Jason Wang, Bandan Das,
	Prasad Pandit, Fabiano Rosas

On 1/9/24 07:46, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
> 
> Split postcopy into a separate file.  Introduce a head page "features.rst"
> to keep all the features on top of migration framework.
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>


Reviewed-by: Cédric Le Goater <clg@redhat.com>

Thanks,

C.


> ---
>   docs/devel/migration/features.rst |   9 +
>   docs/devel/migration/index.rst    |   1 +
>   docs/devel/migration/main.rst     | 305 ------------------------------
>   docs/devel/migration/postcopy.rst | 304 +++++++++++++++++++++++++++++
>   4 files changed, 314 insertions(+), 305 deletions(-)
>   create mode 100644 docs/devel/migration/features.rst
>   create mode 100644 docs/devel/migration/postcopy.rst
> 
> diff --git a/docs/devel/migration/features.rst b/docs/devel/migration/features.rst
> new file mode 100644
> index 0000000000..0054e0c900
> --- /dev/null
> +++ b/docs/devel/migration/features.rst
> @@ -0,0 +1,9 @@
> +Migration features
> +==================
> +
> +Migration has plenty of features to support different use cases.
> +
> +.. toctree::
> +   :maxdepth: 2
> +
> +   postcopy
> diff --git a/docs/devel/migration/index.rst b/docs/devel/migration/index.rst
> index c09623b38f..7cf62541b9 100644
> --- a/docs/devel/migration/index.rst
> +++ b/docs/devel/migration/index.rst
> @@ -8,6 +8,7 @@ QEMU live migration works.
>      :maxdepth: 2
>   
>      main
> +   features
>      compatibility
>      vfio
>      virtio
> diff --git a/docs/devel/migration/main.rst b/docs/devel/migration/main.rst
> index 97811ce371..051ea43f0e 100644
> --- a/docs/devel/migration/main.rst
> +++ b/docs/devel/migration/main.rst
> @@ -644,308 +644,3 @@ algorithm will restrict virtual CPUs as needed to keep their dirty page
>   rate inside the limit. This leads to more steady reading performance during
>   live migration and can aid in improving large guest responsiveness.
>   
> -Postcopy
> -========
> -
> -'Postcopy' migration is a way to deal with migrations that refuse to converge
> -(or take too long to converge).  Its plus side is that there is an upper bound
> -on the amount of migration traffic and time it takes; the down side is that
> -during the postcopy phase, a failure of *either* side causes the guest to be
> -lost.
> -
> -In postcopy the destination CPUs are started before all the memory has been
> -transferred, and accesses to pages that are yet to be transferred cause
> -a fault that's translated by QEMU into a request to the source QEMU.
> -
> -Postcopy can be combined with precopy (i.e. normal migration) so that if precopy
> -doesn't finish in a given time the switch is made to postcopy.
> -
> -Enabling postcopy
> ------------------
> -
> -To enable postcopy, issue this command on the monitor (both source and
> -destination) prior to the start of migration:
> -
> -``migrate_set_capability postcopy-ram on``
> -
> -The normal commands are then used to start a migration, which is still
> -started in precopy mode.  Issuing:
> -
> -``migrate_start_postcopy``
> -
> -will now cause the transition from precopy to postcopy.
> -It can be issued immediately after migration is started or any
> -time later on.  Issuing it after the end of a migration is harmless.
> -
> -Blocktime is a postcopy live migration metric, intended to show how
> -long the vCPU was in a state of interruptible sleep due to a page
> -fault.  The metric is calculated both for all vCPUs as an overlapped
> -value, and separately for each vCPU.  These values are calculated on
> -the destination side.  To enable postcopy blocktime calculation, enter
> -the following command on the destination monitor:
> -
> -``migrate_set_capability postcopy-blocktime on``
> -
> -Postcopy blocktime can be retrieved via the query-migrate QMP command.
> -The postcopy-blocktime value shows the overlapped blocking time for all
> -vCPUs, while postcopy-vcpu-blocktime shows a list of blocking times per
> -vCPU.
> -
> -.. note::
> -  During the postcopy phase, the bandwidth limits set using
> -  ``migrate_set_parameter`` are ignored (to avoid delaying requested pages
> -  that the destination is waiting for).
> -
> -Postcopy device transfer
> -------------------------
> -
> -Loading of device data may cause the device emulation to access guest RAM,
> -which may trigger faults that have to be resolved by the source.  As such,
> -the migration stream has to be able to respond with page data *during* the
> -device load, and hence the device data has to be read from the stream
> -completely before the device load begins, to free the stream up.  This is
> -achieved by 'packaging' the device data into a blob that's read in one go.
> -
> -Source behaviour
> -----------------
> -
> -Until postcopy is entered the migration stream is identical to normal
> -precopy, except for the addition of a 'postcopy advise' command at
> -the beginning, to tell the destination that postcopy might happen.
> -When postcopy starts the source sends the page discard data and then
> -forms the 'package' containing:
> -
> -   - Command: 'postcopy listen'
> -   - The device state
> -
> -     A series of sections, identical to the precopy stream's device state
> -     stream, containing everything except postcopiable devices (i.e. RAM)
> -   - Command: 'postcopy run'
> -
> -The 'package' is sent as the data part of a Command: ``CMD_PACKAGED``, and the
> -contents are formatted in the same way as the main migration stream.
> -
> -During postcopy the source scans the list of dirty pages and sends them
> -to the destination without being requested (in much the same way as precopy),
> -however when a page request is received from the destination, the dirty page
> -scanning restarts from the requested location.  This causes requested pages
> -to be sent quickly, and also causes pages directly after the requested page
> -to be sent quickly in the hope that those pages are likely to be used
> -by the destination soon.
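The scan-restart behaviour described above can be modelled as follows (a hypothetical Python sketch, not QEMU's actual implementation): when a page request arrives, the scan cursor jumps to the requested page, so that page and its immediate successors are sent first.

```python
# Hypothetical model of postcopy dirty-page scanning: the scan cursor
# jumps to a requested page, so the requested page and the pages right
# after it are sent first, then the scan wraps around.

def scan_order(dirty_pages, start, request=None):
    """Return dirty page numbers in send order; `request` simulates a
    page request arriving before the scan begins."""
    dirty = sorted(set(dirty_pages))
    cursor = request if request is not None else start
    # Pages at/after the cursor first (requested page and its neighbours)...
    order = [p for p in dirty if p >= cursor]
    # ...then wrap around for the remaining background pages.
    order += [p for p in dirty if p < cursor]
    return order

print(scan_order([1, 2, 5, 6, 9], start=0, request=5))  # prints: [5, 6, 9, 1, 2]
```

The requested page (5) goes out first, and pages 6 and 9 follow immediately on the assumption that the destination is likely to need them soon.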
> -
> -Destination behaviour
> ----------------------
> -
> -Initially the destination looks the same as precopy, with a single thread
> -reading the migration stream; the 'postcopy advise' and 'discard' commands
> -are processed to change the way RAM is managed, but don't affect the stream
> -processing.
> -
> -::
> -
> -  ------------------------------------------------------------------------------
> -                          1      2   3     4 5                      6   7
> -  main -----DISCARD-CMD_PACKAGED ( LISTEN  DEVICE     DEVICE DEVICE RUN )
> -  thread                             |       |
> -                                     |     (page request)
> -                                     |        \___
> -                                     v            \
> -  listen thread:                     --- page -- page -- page -- page -- page --
> -
> -                                     a   b        c
> -  ------------------------------------------------------------------------------
> -
> -- On receipt of ``CMD_PACKAGED`` (1)
> -
> -   All the data associated with the package - the ( ... ) section in the diagram -
> -   is read into memory, and the main thread recurses into qemu_loadvm_state_main
> -   to process the contents of the package (2) which contains commands (3,6) and
> -   devices (4...)
> -
> -- On receipt of 'postcopy listen' - 3 -(i.e. the 1st command in the package)
> -
> -   a new thread (a) is started that takes over servicing the migration stream,
> -   while the main thread carries on loading the package.   It loads normal
> -   background page data (b), but if a fault happens during a device load (5),
> -   the returned page (c) is loaded by the listen thread, allowing the main
> -   thread's device load to carry on.
> -
> -- The last thing in the ``CMD_PACKAGED`` is a 'RUN' command (6)
> -
> -   letting the destination CPUs start running.  At the end of the
> -   ``CMD_PACKAGED`` (7) the main thread returns to normal running behaviour and
> -   is no longer used by migration, while the listen thread carries on servicing
> -   page data until the end of migration.
> -
> -Postcopy Recovery
> ------------------
> -
> -Compared to precopy, postcopy is special in its error handling.  When an
> -error happens (mostly network errors), QEMU cannot easily fail the
> -migration, because VM data resides in both the source and destination
> -QEMU instances.  Instead, when an issue happens, QEMU on both sides
> -goes into a paused state, and a recovery phase is needed to continue
> -the paused postcopy migration.
> -
> -The recovery phase normally contains a few steps:
> -
> -  - When a network issue occurs, both QEMU instances go into the PAUSED state
> -
> -  - When the network is recovered (or a new network is provided), the admin
> -    can setup the new channel for migration using QMP command
> -    'migrate-recover' on destination node, preparing for a resume.
> -
> -  - On source host, the admin can continue the interrupted postcopy
> -    migration using QMP command 'migrate' with resume=true flag set.
> -
> -  - After the connection is re-established, QEMU will continue the postcopy
> -    migration on both sides.
> -
> -During a paused postcopy migration, the VM can logically still continue
> -running, and it will not be impacted by any access to pages that were
> -already migrated to the destination VM before the interruption happened.
> -However, if any of the missing pages are accessed on the destination VM,
> -the VM thread will be halted waiting for the page to be migrated, which
> -means it can remain halted until the recovery is complete.
> -
> -The impact of accessing missing pages can vary with the configuration
> -of the guest.  For example, with async page fault enabled, the guest can
> -proactively schedule out the threads accessing missing pages.
> -
> -Postcopy states
> ----------------
> -
> -Postcopy moves through a series of states (see postcopy_state) from
> -ADVISE->DISCARD->LISTEN->RUNNING->END
> -
> - - Advise
> -
> -    Set at the start of migration if postcopy is enabled, even
> -    if it hasn't had the start command; here the destination
> -    checks that its OS has the support needed for postcopy, and performs
> -    setup to ensure the RAM mappings are suitable for later postcopy.
> -    The destination will fail early in migration at this point if the
> -    required OS support is not present.
> -    (Triggered by reception of POSTCOPY_ADVISE command)
> -
> - - Discard
> -
> -    Entered on receipt of the first 'discard' command; prior to
> -    the first Discard being performed, hugepages are switched off
> -    (using madvise) to ensure that no new huge pages are created
> -    during the postcopy phase, and to cause any huge pages that
> -    have discards on them to be broken.
> -
> - - Listen
> -
> -    The first command in the package, POSTCOPY_LISTEN, switches
> -    the destination state to Listen, and starts a new thread
> -    (the 'listen thread') which takes over the job of receiving
> -    pages off the migration stream, while the main thread carries
> -    on processing the blob.  With this thread able to process page
> -    reception, the destination now 'sensitises' the RAM to detect
> -    any access to missing pages (on Linux using the 'userfault'
> -    system).
> -
> - - Running
> -
> -    POSTCOPY_RUN causes the destination to synchronise all
> -    state and start the CPUs and IO devices running.  The main
> -    thread now finishes processing the migration package and
> -    carries on as it would for a normal precopy migration
> -    (although it can't do the cleanup it would do as it
> -    finishes a normal migration).
> -
> - - Paused
> -
> -    Postcopy can run into a paused state (normally on both sides when it
> -    happens), where all threads will be temporarily halted, mostly due to
> -    network errors.  When reaching the paused state, migration will make
> -    sure the qemu binary on both sides maintains the data without
> -    corrupting the VM.  To continue the migration, the admin needs to fix
> -    the migration channel using the QMP command 'migrate-recover' on the
> -    destination node, then resume the migration using the QMP command
> -    'migrate' again on the source node, with the resume=true flag set.
> -
> - - End
> -
> -    The listen thread can now quit and perform the cleanup of migration
> -    state; the migration is now complete.
> -
> -Source side page map
> ---------------------
> -
> -The 'migration bitmap' in postcopy is basically the same as in precopy,
> -where each bit indicates that a page is 'dirty' - i.e. needs
> -sending.  During the precopy phase this is updated as the CPU dirties
> -pages; during postcopy, however, the CPUs are stopped and nothing should
> -dirty anything any more.  Instead, dirty bits are cleared when the
> -relevant pages are sent during postcopy.
> -
> -Postcopy with hugepages
> ------------------------
> -
> -Postcopy now works with hugetlbfs backed memory:
> -
> -  a) The Linux kernel on the destination must support userfault on hugepages.
> -  b) The huge-page configuration on the source and destination VMs must be
> -     identical; i.e. RAMBlocks on both sides must use the same page size.
> -  c) Note that ``-mem-path /dev/hugepages``  will fall back to allocating normal
> -     RAM if it doesn't have enough hugepages, triggering (b) to fail.
> -     Using ``-mem-prealloc`` enforces the allocation using hugepages.
> -  d) Care should be taken with the size of hugepage used; postcopy with 2MB
> -     hugepages works well, however 1GB hugepages are likely to be problematic
> -     since it takes ~1 second to transfer a 1GB hugepage across a 10Gbps link,
> -     and until the full page is transferred the destination thread is blocked.
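The 1GB-hugepage concern in item (d) follows directly from link arithmetic; a quick back-of-the-envelope check (hypothetical helper, ignoring protocol overhead):

```python
# Back-of-the-envelope check for item (d): time to move one hugepage
# across a link, ignoring protocol overhead.

def hugepage_transfer_seconds(page_bytes: int, link_gbps: float) -> float:
    bits = page_bytes * 8
    return bits / (link_gbps * 1e9)

one_gb = 1024 ** 3
print(round(hugepage_transfer_seconds(one_gb, 10), 2))         # prints: 0.86
print(round(hugepage_transfer_seconds(2 * 1024 ** 2, 10), 5))  # prints: 0.00168
```

A destination thread blocked ~0.86s per faulting 1GB page is painful; the ~1.7ms for a 2MB page is why 2MB hugepages work well.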
> -
> -Postcopy with shared memory
> ----------------------------
> -
> -Postcopy migration with shared memory needs explicit support from the other
> -processes that share memory, and from QEMU.  There are restrictions on the
> -types of shared memory that userfault can support.
> -
> -The Linux kernel userfault support works on ``/dev/shm`` memory and on ``hugetlbfs``
> -(although the kernel doesn't provide an equivalent to ``madvise(MADV_DONTNEED)``
> -for hugetlbfs which may be a problem in some configurations).
> -
> -The vhost-user code in QEMU supports clients that have Postcopy support,
> -and the ``vhost-user-bridge`` (in ``tests/``) and the DPDK package have changes
> -to support postcopy.
> -
> -The client needs to open a userfaultfd and register the areas
> -of memory that it maps with userfault.  The client must then pass the
> -userfaultfd back to QEMU together with a mapping table that allows
> -fault addresses in the clients address space to be converted back to
> -RAMBlock/offsets.  The client's userfaultfd is added to the postcopy
> -fault-thread and page requests are made on behalf of the client by QEMU.
> -QEMU performs 'wake' operations on the client's userfaultfd to allow it
> -to continue after a page has arrived.
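The mapping table described above, translating a fault address in the client's address space back to a RAMBlock/offset, can be modelled as (a hypothetical sketch; addresses and block names are made up):

```python
# Hypothetical sketch of the vhost-user mapping table described above:
# translate a fault address in the client's address space back to a
# (RAMBlock name, offset) pair.  All values here are illustrative.

# (client_base, length, ramblock_name, ramblock_offset)
MAPPINGS = [
    (0x7f0000000000, 0x40000000, "pc.ram", 0x0),
    (0x7f8000000000, 0x10000000, "vga.vram", 0x0),
]

def translate(fault_addr):
    """Map a client-space fault address to (RAMBlock, offset)."""
    for base, length, block, block_off in MAPPINGS:
        if base <= fault_addr < base + length:
            return block, block_off + (fault_addr - base)
    raise KeyError(hex(fault_addr))

print(translate(0x7f0000001000))  # prints: ('pc.ram', 4096)
```

With such a table, QEMU can turn the client's userfaultfd events into ordinary page requests on the migration stream, then wake the client once the page arrives.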
> -
> -.. note::
> -  There are two future improvements that would be nice:
> -    a) Some way to make QEMU ignorant of the addresses in the clients
> -       address space
> -    b) Avoiding the need for QEMU to perform ufd-wake calls after the
> -       pages have arrived
> -
> -Retro-fitting postcopy to existing clients is possible:
> -  a) A mechanism is needed for the registration with userfault as above,
> -     and the registration needs to be coordinated with the phases of
> -     postcopy.  In vhost-user extra messages are added to the existing
> -     control channel.
> -  b) Any thread that can block due to guest memory accesses must be
> -     identified and the implication understood; for example if the
> -     guest memory access is made while holding a lock then all other
> -     threads waiting for that lock will also be blocked.
> -
> -Postcopy Preemption Mode
> -------------------------
> -
> -Postcopy preempt is a capability introduced in the QEMU 8.0 release.  It
> -allows urgent pages (those whose page faults were explicitly requested by
> -the destination QEMU) to be sent in a separate preempt channel, rather
> -than queued in the background migration channel.  Anyone who cares about
> -page fault latencies during a postcopy migration should enable this
> -feature.  By default, it's not enabled.
> -
> diff --git a/docs/devel/migration/postcopy.rst b/docs/devel/migration/postcopy.rst
> new file mode 100644
> index 0000000000..d60eec06ab
> --- /dev/null
> +++ b/docs/devel/migration/postcopy.rst
> @@ -0,0 +1,304 @@
> +Postcopy
> +========
> +
> +'Postcopy' migration is a way to deal with migrations that refuse to converge
> +(or take too long to converge).  Its plus side is that there is an upper bound
> +on the amount of migration traffic and time it takes; the down side is that
> +during the postcopy phase, a failure of *either* side causes the guest to be
> +lost.
> +
> +In postcopy the destination CPUs are started before all the memory has been
> +transferred, and accesses to pages that are yet to be transferred cause
> +a fault that's translated by QEMU into a request to the source QEMU.
> +
> +Postcopy can be combined with precopy (i.e. normal migration) so that if precopy
> +doesn't finish in a given time the switch is made to postcopy.
> +
> +Enabling postcopy
> +-----------------
> +
> +To enable postcopy, issue this command on the monitor (both source and
> +destination) prior to the start of migration:
> +
> +``migrate_set_capability postcopy-ram on``
> +
> +The normal commands are then used to start a migration, which is still
> +started in precopy mode.  Issuing:
> +
> +``migrate_start_postcopy``
> +
> +will now cause the transition from precopy to postcopy.
> +It can be issued immediately after migration is started or any
> +time later on.  Issuing it after the end of a migration is harmless.
> +
> +Blocktime is a postcopy live migration metric, intended to show how
> +long a vCPU was in a state of interruptible sleep due to a pagefault.
> +That metric is calculated both as an overlapped value for all vCPUs and
> +separately for each vCPU.  These values are calculated on the destination
> +side.  To enable postcopy blocktime calculation, enter the following
> +command on the destination monitor:
> +
> +``migrate_set_capability postcopy-blocktime on``
> +
> +Postcopy blocktime can be retrieved with the query-migrate QMP command.
> +Its postcopy-blocktime field shows the overlapped blocking time for all
> +vCPUs, while postcopy-vcpu-blocktime shows the list of blocking times
> +per vCPU.
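For illustration, a reply carrying these fields could be consumed like this (field names as documented above; the numeric values are invented for the example):

```python
def summarize_blocktime(reply):
    # 'reply' mimics the shape of a query-migrate response carrying the
    # blocktime fields; the values below are invented.
    total = reply["postcopy-blocktime"]          # overlapped, all vCPUs
    per_vcpu = reply["postcopy-vcpu-blocktime"]  # one entry per vCPU
    return total, max(per_vcpu)

reply = {"postcopy-blocktime": 347,
         "postcopy-vcpu-blocktime": [120, 347, 80, 15]}
print(summarize_blocktime(reply))  # (347, 347)
```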
> +
> +.. note::
> +  During the postcopy phase, the bandwidth limits set using
> +  ``migrate_set_parameter`` are ignored (to avoid delaying requested pages that
> +  the destination is waiting for).
> +
> +Postcopy device transfer
> +------------------------
> +
> +Loading of device data may cause the device emulation to access guest RAM,
> +which may trigger faults that have to be resolved by the source.  The
> +migration stream therefore has to be able to respond with page data *during*
> +the device load, and hence the device data has to be read from the stream
> +completely before the device load begins, to free the stream up.  This is
> +achieved by 'packaging' the device data into a blob that's read in one go.
> +
> +Source behaviour
> +----------------
> +
> +Until postcopy is entered the migration stream is identical to normal
> +precopy, except for the addition of a 'postcopy advise' command at
> +the beginning, to tell the destination that postcopy might happen.
> +When postcopy starts the source sends the page discard data and then
> +forms the 'package' containing:
> +
> +   - Command: 'postcopy listen'
> +   - The device state
> +
> +     A series of sections, identical to the precopy stream's device state
> +     stream, containing everything except postcopiable devices (i.e. RAM)
> +   - Command: 'postcopy run'
> +
> +The 'package' is sent as the data part of a Command: ``CMD_PACKAGED``, and the
> +contents are formatted in the same way as the main migration stream.
> +
> +During postcopy the source scans the list of dirty pages and sends them
> +to the destination without being requested (in much the same way as precopy),
> +however when a page request is received from the destination, the dirty page
> +scanning restarts from the requested location.  This causes requested pages
> +to be sent quickly, and also causes pages directly after the requested page
> +to be sent quickly in the hope that those pages are likely to be used
> +by the destination soon.
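A rough model of this scan policy, assuming pages are sent one per step and a page request can arrive at any point in the stream:

```python
def send_order(dirty, requests):
    # dirty: ordered list of dirty page numbers still to send.
    # requests: {position_in_stream: requested_page}; when a request
    # arrives, the scan restarts at the requested location, so the
    # requested page and its successors go out next.
    dirty = list(dirty)
    order = []
    cursor = 0
    while dirty:
        req = requests.get(len(order))
        if req in dirty:
            cursor = dirty.index(req)  # restart scan at the request
        if cursor >= len(dirty):
            cursor = 0                 # wrap around
        order.append(dirty.pop(cursor))
    return order

# A request for page 8 arrives after two pages were sent in the background.
print(send_order([1, 2, 3, 8, 9], {2: 8}))  # [1, 2, 8, 9, 3]
```

Note how page 9, directly after the requested page 8, is also sent early, matching the heuristic described above.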
> +
> +Destination behaviour
> +---------------------
> +
> +Initially the destination looks the same as precopy, with a single thread
> +reading the migration stream; the 'postcopy advise' and 'discard' commands
> +are processed to change the way RAM is managed, but don't affect the stream
> +processing.
> +
> +::
> +
> +  ------------------------------------------------------------------------------
> +                          1      2   3     4 5                      6   7
> +  main -----DISCARD-CMD_PACKAGED ( LISTEN  DEVICE     DEVICE DEVICE RUN )
> +  thread                             |       |
> +                                     |     (page request)
> +                                     |        \___
> +                                     v            \
> +  listen thread:                     --- page -- page -- page -- page -- page --
> +
> +                                     a   b        c
> +  ------------------------------------------------------------------------------
> +
> +- On receipt of ``CMD_PACKAGED`` (1)
> +
> +   All the data associated with the package - the ( ... ) section in the diagram -
> +   is read into memory, and the main thread recurses into qemu_loadvm_state_main
> +   to process the contents of the package (2) which contains commands (3,6) and
> +   devices (4...)
> +
> +- On receipt of 'postcopy listen' - 3 - (i.e. the first command in the package)
> +
> +   a new thread (a) is started that takes over servicing the migration stream,
> +   while the main thread carries on loading the package.   It loads normal
> +   background page data (b), but if a fault happens during a device load (5),
> +   the returned page (c) is loaded by the listen thread, allowing the main
> +   thread's device load to carry on.
> +
> +- The last thing in the ``CMD_PACKAGED`` is a 'RUN' command (6)
> +
> +   letting the destination CPUs start running.  At the end of the
> +   ``CMD_PACKAGED`` (7) the main thread returns to normal running behaviour and
> +   is no longer used by migration, while the listen thread carries on servicing
> +   page data until the end of migration.
> +
> +Postcopy Recovery
> +-----------------
> +
> +Compared to precopy, postcopy is special in its error handling.  When any
> +error happens (in this case, mostly network errors), QEMU cannot easily
> +fail the migration because VM data resides in both the source and destination
> +QEMU instances.  Instead, when an issue happens, QEMU on both sides goes
> +into a paused state, and a recovery phase is needed to continue the
> +paused postcopy migration.
> +
> +The recovery phase normally contains a few steps:
> +
> +  - When a network issue occurs, QEMU on both sides will go into the
> +    PAUSED state
> +
> +  - When the network is recovered (or a new network is provided), the admin
> +    can set up the new channel for migration using the QMP command
> +    'migrate-recover' on the destination node, preparing for a resume.
> +
> +  - On the source host, the admin can continue the interrupted postcopy
> +    migration using the QMP command 'migrate' with the resume=true flag set.
> +
> +  - After the connection is re-established, QEMU will continue the postcopy
> +    migration on both sides.
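Sketched as a QMP command sequence (host names, ports and URIs are invented placeholders):

```python
# Recovery steps as (which side, QMP command) pairs; the transport
# details are invented examples.
recovery_sequence = [
    # Destination side: rebuild the incoming channel.
    ("destination", {"execute": "migrate-recover",
                     "arguments": {"uri": "tcp:0.0.0.0:4444"}}),
    # Source side: resume the interrupted postcopy migration.
    ("source", {"execute": "migrate",
                "arguments": {"uri": "tcp:192.0.2.1:4444",
                              "resume": True}}),
]

for side, cmd in recovery_sequence:
    print(side, cmd["execute"])
```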
> +
> +During a paused postcopy migration, the VM can logically still continue
> +running: it is not impacted by accesses to pages that were already
> +migrated to the destination VM before the interruption happened.
> +However, if any of the missing pages is accessed on the destination VM,
> +the accessing VM thread will be halted waiting for the page to be
> +migrated, which means it can remain halted until the recovery is complete.
> +
> +The impact of accessing missing pages can depend on the configuration
> +of the guest.  For example, with async page fault enabled, the guest can
> +proactively schedule out the threads accessing missing pages.
> +
> +Postcopy states
> +---------------
> +
> +Postcopy moves through a series of states (see postcopy_state) from
> +ADVISE->DISCARD->LISTEN->RUNNING->END
> +
> + - Advise
> +
> +    Set at the start of migration if postcopy is enabled, even
> +    if it hasn't had the start command; here the destination
> +    checks that its OS has the support needed for postcopy, and performs
> +    setup to ensure the RAM mappings are suitable for later postcopy.
> +    The destination will fail early in migration at this point if the
> +    required OS support is not present.
> +    (Triggered by reception of POSTCOPY_ADVISE command)
> +
> + - Discard
> +
> +    Entered on receipt of the first 'discard' command; prior to
> +    the first Discard being performed, hugepages are switched off
> +    (using madvise) to ensure that no new huge pages are created
> +    during the postcopy phase, and to cause any huge pages that
> +    have discards on them to be broken.
> +
> + - Listen
> +
> +    The first command in the package, POSTCOPY_LISTEN, switches
> +    the destination state to Listen, and starts a new thread
> +    (the 'listen thread') which takes over the job of receiving
> +    pages off the migration stream, while the main thread carries
> +    on processing the blob.  With this thread able to process page
> +    reception, the destination now 'sensitises' the RAM to detect
> +    any access to missing pages (on Linux using the 'userfault'
> +    system).
> +
> + - Running
> +
> +    POSTCOPY_RUN causes the destination to synchronise all
> +    state and start the CPUs and IO devices running.  The main
> +    thread now finishes processing the migration package and
> +    now carries on as it would for normal precopy migration
> +    (although it can't do the cleanup it would do as it
> +    finishes a normal migration).
> +
> + - Paused
> +
> +    Postcopy can run into a paused state (normally on both sides when it
> +    happens), where all threads will be temporarily halted, mostly due to
> +    network errors.  When reaching the paused state, migration will make sure
> +    the QEMU binaries on both sides maintain the data without corrupting
> +    the VM.  To continue the migration, the admin needs to fix the
> +    migration channel using the QMP command 'migrate-recover' on the
> +    destination node, then resume the migration using the QMP command
> +    'migrate' again on the source node, with the resume=true flag set.
> +
> + - End
> +
> +    The listen thread can now quit and perform the cleanup of migration
> +    state; the migration is now complete.
> +
> +Source side page map
> +--------------------
> +
> +The 'migration bitmap' in postcopy is basically the same as in precopy,
> +where each bit indicates that a page is 'dirty' - i.e. needs
> +sending.  During the precopy phase this is updated as the CPU dirties
> +pages, however during postcopy the CPUs are stopped and nothing should
> +dirty anything any more.  Instead, dirty bits are cleared when the relevant
> +pages are sent during postcopy.
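A minimal sketch of this clear-on-send behaviour, using a plain Python list as the bitmap:

```python
def pop_next_dirty(bitmap, start):
    # Find, clear and return the next dirty page at or after 'start'.
    # In precopy the CPUs keep setting bits concurrently; in postcopy
    # they are stopped, so bits are only ever cleared here as the
    # corresponding pages are sent.
    for page in range(start, len(bitmap)):
        if bitmap[page]:
            bitmap[page] = 0
            return page
    return None
```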
> +
> +Postcopy with hugepages
> +-----------------------
> +
> +Postcopy now works with hugetlbfs-backed memory:
> +
> +  a) The linux kernel on the destination must support userfault on hugepages.
> +  b) The huge-page configuration on the source and destination VMs must be
> +     identical; i.e. RAMBlocks on both sides must use the same page size.
> +  c) Note that ``-mem-path /dev/hugepages``  will fall back to allocating normal
> +     RAM if it doesn't have enough hugepages, triggering (b) to fail.
> +     Using ``-mem-prealloc`` enforces the allocation using hugepages.
> +  d) Care should be taken with the size of hugepage used; postcopy with 2MB
> +     hugepages works well, however 1GB hugepages are likely to be problematic
> +     since it takes ~1 second to transfer a 1GB hugepage across a 10Gbps link,
> +     and until the full page is transferred the destination thread is blocked.
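The arithmetic behind (d) is straightforward; this sketch gives a lower bound on the stall, ignoring protocol overhead:

```python
def page_stall_seconds(page_bytes, link_gbps):
    # Lower bound on how long the destination thread is blocked: the
    # fault cannot complete before the whole page has arrived.
    return page_bytes * 8 / (link_gbps * 1e9)

print(round(page_stall_seconds(2 * 1024**2, 10), 4))  # 2MB hugepage: 0.0017
print(round(page_stall_seconds(1024**3, 10), 2))      # 1GB hugepage: 0.86
```

The ~0.86s figure for a 1GB hugepage on a 10Gbps link matches the "~1 second" estimate above once real-world overhead is added.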
> +
> +Postcopy with shared memory
> +---------------------------
> +
> +Postcopy migration with shared memory needs explicit support from the other
> +processes that share memory and from QEMU.  There are restrictions on the
> +types of shared memory that userfault can support.
> +
> +The Linux kernel userfault support works on ``/dev/shm`` memory and on ``hugetlbfs``
> +(although the kernel doesn't provide an equivalent to ``madvise(MADV_DONTNEED)``
> +for hugetlbfs which may be a problem in some configurations).
> +
> +The vhost-user code in QEMU supports clients that have Postcopy support,
> +and the ``vhost-user-bridge`` (in ``tests/``) and the DPDK package have changes
> +to support postcopy.
> +
> +The client needs to open a userfaultfd and register the areas
> +of memory that it maps with userfault.  The client must then pass the
> +userfaultfd back to QEMU together with a mapping table that allows
> +fault addresses in the clients address space to be converted back to
> +RAMBlock/offsets.  The client's userfaultfd is added to the postcopy
> +fault-thread and page requests are made on behalf of the client by QEMU.
> +QEMU performs 'wake' operations on the client's userfaultfd to allow it
> +to continue after a page has arrived.
> +
> +.. note::
> +  There are two future improvements that would be nice:
> +    a) Some way to make QEMU ignorant of the addresses in the clients
> +       address space
> +    b) Avoiding the need for QEMU to perform ufd-wake calls after the
> +       pages have arrived
> +
> +Retro-fitting postcopy to existing clients is possible:
> +  a) A mechanism is needed for the registration with userfault as above,
> +     and the registration needs to be coordinated with the phases of
> +     postcopy.  In vhost-user extra messages are added to the existing
> +     control channel.
> +  b) Any thread that can block due to guest memory accesses must be
> +     identified and the implication understood; for example if the
> +     guest memory access is made while holding a lock then all other
> +     threads waiting for that lock will also be blocked.
> +
> +Postcopy Preemption Mode
> +------------------------
> +
> +Postcopy preempt is a new capability introduced in the QEMU 8.0 release.  It
> +allows urgent pages (those whose page faults were explicitly requested by the
> +destination QEMU) to be sent in a separate preempt channel, rather than queued
> +in the background migration channel.  Anyone who cares about latencies of page
> +faults during a postcopy migration should enable this feature.  By default,
> +it's not enabled.



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 07/10] docs/migration: Split "dirty limit"
  2024-01-09  6:46 ` [PATCH 07/10] docs/migration: Split "dirty limit" peterx
@ 2024-01-09  7:06   ` Cédric Le Goater
  0 siblings, 0 replies; 29+ messages in thread
From: Cédric Le Goater @ 2024-01-09  7:06 UTC (permalink / raw)
  To: peterx, qemu-devel
  Cc: Michael S . Tsirkin, Alex Williamson, Jason Wang, Bandan Das,
	Prasad Pandit, Fabiano Rosas, Yong Huang

On 1/9/24 07:46, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
> 
> Split that into a separate file, put under "features".
> 
> Cc: Yong Huang <yong.huang@smartx.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>



Reviewed-by: Cédric Le Goater <clg@redhat.com>

Thanks,

C.


> ---
>   docs/devel/migration/dirty-limit.rst | 71 ++++++++++++++++++++++++++++
>   docs/devel/migration/features.rst    |  1 +
>   docs/devel/migration/main.rst        | 71 ----------------------------
>   3 files changed, 72 insertions(+), 71 deletions(-)
>   create mode 100644 docs/devel/migration/dirty-limit.rst
> 
> diff --git a/docs/devel/migration/dirty-limit.rst b/docs/devel/migration/dirty-limit.rst
> new file mode 100644
> index 0000000000..8f32329d5f
> --- /dev/null
> +++ b/docs/devel/migration/dirty-limit.rst
> @@ -0,0 +1,71 @@
> +Dirty limit
> +===========
> +
> +The dirty limit, short for dirty page rate upper limit, is a new capability
> +introduced in the 8.1 QEMU release that uses a new algorithm based on the KVM
> +dirty ring to throttle down the guest during live migration.
> +
> +The algorithm framework is as follows:
> +
> +::
> +
> +  ------------------------------------------------------------------------------
> +  main   --------------> throttle thread ------------> PREPARE(1) <--------
> +  thread  \                                                |              |
> +           \                                               |              |
> +            \                                              V              |
> +             -\                                        CALCULATE(2)       |
> +               \                                           |              |
> +                \                                          |              |
> +                 \                                         V              |
> +                  \                                    SET PENALTY(3) -----
> +                   -\                                      |
> +                     \                                     |
> +                      \                                    V
> +                       -> virtual CPU thread -------> ACCEPT PENALTY(4)
> +  ------------------------------------------------------------------------------
> +
> +When the QMP command qmp_set_vcpu_dirty_limit is called for the first time,
> +the QEMU main thread starts the throttle thread. The throttle thread, once
> +launched, executes the loop, which consists of three steps:
> +
> +  - PREPARE (1)
> +
> +     The entire work of PREPARE (1) is preparation for the second stage,
> +     CALCULATE(2), as the name implies. It involves preparing the dirty
> +     page rate value and the corresponding upper limit of the VM:
> +     The dirty page rate is calculated via the KVM dirty ring mechanism,
> +     which tells QEMU how many dirty pages a virtual CPU has had since the
> +     last KVM_EXIT_DIRTY_RING_FULL exception; the dirty page rate upper
> +     limit is specified by the caller, so it is fetched directly.
> +
> +  - CALCULATE (2)
> +
> +     Calculate a suitable sleep period for each virtual CPU, which will be
> +     used to determine the penalty for the target virtual CPU. The
> +     computation must be done carefully in order to reduce the dirty page
> +     rate progressively down to the upper limit without oscillation. To
> +     achieve this, two strategies are provided: the first is to add or
> +     subtract sleep time based on the ratio of the current dirty page rate
> +     to the limit, which is used when the current dirty page rate is far
> +     from the limit; the second is to add or subtract a fixed time when
> +     the current dirty page rate is close to the limit.
> +
> +  - SET PENALTY (3)
> +
> +     Set the sleep time for each virtual CPU that should be penalized based
> +     on the results of the calculation supplied by step CALCULATE (2).
> +
> +After completing the three above stages, the throttle thread loops back
> +to step PREPARE (1) until the dirty limit is reached.
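A hedged sketch of the two CALCULATE(2) strategies described above; the threshold and step values here are invented for illustration, not QEMU's actual tunables:

```python
def adjust_sleep(sleep_us, rate, limit, far_ratio=1.5, step_us=100):
    # rate/limit: current and target dirty page rates.  far_ratio and
    # step_us are invented example tunables.
    if rate > limit * far_ratio or rate < limit / far_ratio:
        # Far from the limit: scale sleep time by the rate/limit ratio.
        return max(step_us, int(sleep_us * rate / limit))
    # Close to the limit: nudge by a fixed step to avoid oscillation.
    return max(0, sleep_us + (step_us if rate > limit else -step_us))
```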
> +
> +On the other hand, each virtual CPU thread reads the sleep duration and
> +sleeps in the path of the KVM_EXIT_DIRTY_RING_FULL exception handler; that
> +is ACCEPT PENALTY (4).  Virtual CPUs tied to writing processes will
> +obviously exit to that path and get penalized, whereas virtual CPUs involved
> +only in read processes will not.
> +
> +In summary, thanks to the KVM dirty ring technology, the dirty limit
> +algorithm will restrict virtual CPUs as needed to keep their dirty page
> +rate inside the limit.  This leads to steadier read performance during
> +live migration and can aid in improving large guest responsiveness.
> diff --git a/docs/devel/migration/features.rst b/docs/devel/migration/features.rst
> index 0054e0c900..e257d0d100 100644
> --- a/docs/devel/migration/features.rst
> +++ b/docs/devel/migration/features.rst
> @@ -7,3 +7,4 @@ Migration has plenty of features to support different use cases.
>      :maxdepth: 2
>   
>      postcopy
> +   dirty-limit
> diff --git a/docs/devel/migration/main.rst b/docs/devel/migration/main.rst
> index 051ea43f0e..00b9c3d32f 100644
> --- a/docs/devel/migration/main.rst
> +++ b/docs/devel/migration/main.rst
> @@ -573,74 +573,3 @@ path.
>        Return path  - opened by main thread, written by main thread AND postcopy
>        thread (protected by rp_mutex)
>   
> -Dirty limit
> -=====================
> -The dirty limit, short for dirty page rate upper limit, is a new capability
> -introduced in the 8.1 QEMU release that uses a new algorithm based on the KVM
> -dirty ring to throttle down the guest during live migration.
> -
> -The algorithm framework is as follows:
> -
> -::
> -
> -  ------------------------------------------------------------------------------
> -  main   --------------> throttle thread ------------> PREPARE(1) <--------
> -  thread  \                                                |              |
> -           \                                               |              |
> -            \                                              V              |
> -             -\                                        CALCULATE(2)       |
> -               \                                           |              |
> -                \                                          |              |
> -                 \                                         V              |
> -                  \                                    SET PENALTY(3) -----
> -                   -\                                      |
> -                     \                                     |
> -                      \                                    V
> -                       -> virtual CPU thread -------> ACCEPT PENALTY(4)
> -  ------------------------------------------------------------------------------
> -
> -When the qmp command qmp_set_vcpu_dirty_limit is called for the first time,
> -the QEMU main thread starts the throttle thread. The throttle thread, once
> -launched, executes the loop, which consists of three steps:
> -
> -  - PREPARE (1)
> -
> -     The entire work of PREPARE (1) is preparation for the second stage,
> -     CALCULATE(2), as the name implies. It involves preparing the dirty
> -     page rate value and the corresponding upper limit of the VM:
> -     The dirty page rate is calculated via the KVM dirty ring mechanism,
> -     which tells QEMU how many dirty pages a virtual CPU has had since the
> -     last KVM_EXIT_DIRTY_RING_FULL exception; The dirty page rate upper
> -     limit is specified by caller, therefore fetch it directly.
> -
> -  - CALCULATE (2)
> -
> -     Calculate a suitable sleep period for each virtual CPU, which will be
> -     used to determine the penalty for the target virtual CPU. The
> -     computation must be done carefully in order to reduce the dirty page
> -     rate progressively down to the upper limit without oscillation. To
> -     achieve this, two strategies are provided: the first is to add or
> -     subtract sleep time based on the ratio of the current dirty page rate
> -     to the limit, which is used when the current dirty page rate is far
> -     from the limit; the second is to add or subtract a fixed time when
> -     the current dirty page rate is close to the limit.
> -
> -  - SET PENALTY (3)
> -
> -     Set the sleep time for each virtual CPU that should be penalized based
> -     on the results of the calculation supplied by step CALCULATE (2).
> -
> -After completing the three above stages, the throttle thread loops back
> -to step PREPARE (1) until the dirty limit is reached.
> -
> -On the other hand, each virtual CPU thread reads the sleep duration and
> -sleeps in the path of the KVM_EXIT_DIRTY_RING_FULL exception handler, that
> -is ACCEPT PENALTY (4). Virtual CPUs tied with writing processes will
> -obviously exit to the path and get penalized, whereas virtual CPUs involved
> -with read processes will not.
> -
> -In summary, thanks to the KVM dirty ring technology, the dirty limit
> -algorithm will restrict virtual CPUs as needed to keep their dirty page
> -rate inside the limit. This leads to more steady reading performance during
> -live migration and can aid in improving large guest responsiveness.
> -




* Re: [PATCH 08/10] docs/migration: Organize "Postcopy" page
  2024-01-09  6:46 ` [PATCH 08/10] docs/migration: Organize "Postcopy" page peterx
@ 2024-01-09  7:20   ` Cédric Le Goater
  0 siblings, 0 replies; 29+ messages in thread
From: Cédric Le Goater @ 2024-01-09  7:20 UTC (permalink / raw)
  To: peterx, qemu-devel
  Cc: Michael S . Tsirkin, Alex Williamson, Jason Wang, Bandan Das,
	Prasad Pandit, Fabiano Rosas

On 1/9/24 07:46, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
> 
> Reorganize the page, moving things around, and add a few
> headlines ("Postcopy internals", "Postcopy features") to cover sub-areas.
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>   docs/devel/migration/postcopy.rst | 159 ++++++++++++++++--------------
>   1 file changed, 84 insertions(+), 75 deletions(-)
> 
> diff --git a/docs/devel/migration/postcopy.rst b/docs/devel/migration/postcopy.rst
> index d60eec06ab..6c51e96d79 100644
> --- a/docs/devel/migration/postcopy.rst
> +++ b/docs/devel/migration/postcopy.rst
> @@ -1,6 +1,9 @@
> +========
>   Postcopy
>   ========
>   
> +.. contents::
> +
>   'Postcopy' migration is a way to deal with migrations that refuse to converge

The quote character is used in a few places to emphasize words
which should be reworked. The rest looks good, so


Reviewed-by: Cédric Le Goater <clg@redhat.com>

Thanks,

C.



>   (or take too long to converge) its plus side is that there is an upper bound on
>   the amount of migration traffic and time it takes, the down side is that during
> @@ -14,7 +17,7 @@ Postcopy can be combined with precopy (i.e. normal migration) so that if precopy
>   doesn't finish in a given time the switch is made to postcopy.
>   
>   Enabling postcopy
> ------------------
> +=================
>   
>   To enable postcopy, issue this command on the monitor (both source and
>   destination) prior to the start of migration:
> @@ -49,8 +52,71 @@ time per vCPU.
>     ``migrate_set_parameter`` is ignored (to avoid delaying requested pages that
>     the destination is waiting for).
>   
> -Postcopy device transfer
> -------------------------
> +Postcopy internals
> +==================
> +
> +State machine
> +-------------
> +
> +Postcopy moves through a series of states (see postcopy_state) from
> +ADVISE->DISCARD->LISTEN->RUNNING->END
> +
> + - Advise
> +
> +    Set at the start of migration if postcopy is enabled, even
> +    if it hasn't had the start command; here the destination
> +    checks that its OS has the support needed for postcopy, and performs
> +    setup to ensure the RAM mappings are suitable for later postcopy.
> +    The destination will fail early in migration at this point if the
> +    required OS support is not present.
> +    (Triggered by reception of POSTCOPY_ADVISE command)
> +
> + - Discard
> +
> +    Entered on receipt of the first 'discard' command; prior to
> +    the first Discard being performed, hugepages are switched off
> +    (using madvise) to ensure that no new huge pages are created
> +    during the postcopy phase, and to cause any huge pages that
> +    have discards on them to be broken.
> +
> + - Listen
> +
> +    The first command in the package, POSTCOPY_LISTEN, switches
> +    the destination state to Listen, and starts a new thread
> +    (the 'listen thread') which takes over the job of receiving
> +    pages off the migration stream, while the main thread carries
> +    on processing the blob.  With this thread able to process page
> +    reception, the destination now 'sensitises' the RAM to detect
> +    any access to missing pages (on Linux using the 'userfault'
> +    system).
> +
> + - Running
> +
> +    POSTCOPY_RUN causes the destination to synchronise all
> +    state and start the CPUs and IO devices running.  The main
> +    thread now finishes processing the migration package and
> +    now carries on as it would for normal precopy migration
> +    (although it can't do the cleanup it would do as it
> +    finishes a normal migration).
> +
> + - Paused
> +
> +    Postcopy can run into a paused state (normally on both sides when
> +    happens), where all threads will be temporarily halted mostly due to
> +    network errors.  When reaching paused state, migration will make sure
> +    the qemu binary on both sides maintain the data without corrupting
> +    the VM.  To continue the migration, the admin needs to fix the
> +    migration channel using the QMP command 'migrate-recover' on the
> +    destination node, then resume the migration using QMP command 'migrate'
> +    again on source node, with resume=true flag set.
> +
> + - End
> +
> +    The listen thread can now quit, and perform the cleanup of migration
> +    state, the migration is now complete.
> +
> +Device transfer
> +---------------
>   
>   Loading of device data may cause the device emulation to access guest RAM
>   that may trigger faults that have to be resolved by the source, as such
> @@ -130,7 +196,20 @@ processing.
>      is no longer used by migration, while the listen thread carries on servicing
>      page data until the end of migration.
>   
> -Postcopy Recovery
> +Source side page bitmap
> +-----------------------
> +
> +The 'migration bitmap' in postcopy is basically the same as in the precopy,
> +where each of the bit to indicate that page is 'dirty' - i.e. needs
> +sending.  During the precopy phase this is updated as the CPU dirties
> +pages, however during postcopy the CPUs are stopped and nothing should
> +dirty anything any more. Instead, dirty bits are cleared when the relevant
> +pages are sent during postcopy.
> +
> +Postcopy features
> +=================
> +
> +Postcopy recovery
>   -----------------
>   
>   Comparing to precopy, postcopy is special on error handlings.  When any
> @@ -166,76 +245,6 @@ configurations of the guest.  For example, when with async page fault
>   enabled, logically the guest can proactively schedule out the threads
>   accessing missing pages.
>   
> -Postcopy states
> ----------------
> -
> -Postcopy moves through a series of states (see postcopy_state) from
> -ADVISE->DISCARD->LISTEN->RUNNING->END
> -
> - - Advise
> -
> -    Set at the start of migration if postcopy is enabled, even
> -    if it hasn't had the start command; here the destination
> -    checks that its OS has the support needed for postcopy, and performs
> -    setup to ensure the RAM mappings are suitable for later postcopy.
> -    The destination will fail early in migration at this point if the
> -    required OS support is not present.
> -    (Triggered by reception of POSTCOPY_ADVISE command)
> -
> - - Discard
> -
> -    Entered on receipt of the first 'discard' command; prior to
> -    the first Discard being performed, hugepages are switched off
> -    (using madvise) to ensure that no new huge pages are created
> -    during the postcopy phase, and to cause any huge pages that
> -    have discards on them to be broken.
> -
> - - Listen
> -
> -    The first command in the package, POSTCOPY_LISTEN, switches
> -    the destination state to Listen, and starts a new thread
> -    (the 'listen thread') which takes over the job of receiving
> -    pages off the migration stream, while the main thread carries
> -    on processing the blob.  With this thread able to process page
> -    reception, the destination now 'sensitises' the RAM to detect
> -    any access to missing pages (on Linux using the 'userfault'
> -    system).
> -
> - - Running
> -
> -    POSTCOPY_RUN causes the destination to synchronise all
> -    state and start the CPUs and IO devices running.  The main
> -    thread now finishes processing the migration package and
> -    now carries on as it would for normal precopy migration
> -    (although it can't do the cleanup it would do as it
> -    finishes a normal migration).
> -
> - - Paused
> -
> -    Postcopy can run into a paused state (normally on both sides when
> -    happens), where all threads will be temporarily halted mostly due to
> -    network errors.  When reaching paused state, migration will make sure
> -    the qemu binary on both sides maintain the data without corrupting
> -    the VM.  To continue the migration, the admin needs to fix the
> -    migration channel using the QMP command 'migrate-recover' on the
> -    destination node, then resume the migration using QMP command 'migrate'
> -    again on source node, with resume=true flag set.
> -
> - - End
> -
> -    The listen thread can now quit, and perform the cleanup of migration
> -    state, the migration is now complete.
> -
> -Source side page map
> ---------------------
> -
> -The 'migration bitmap' in postcopy is basically the same as in the precopy,
> -where each of the bit to indicate that page is 'dirty' - i.e. needs
> -sending.  During the precopy phase this is updated as the CPU dirties
> -pages, however during postcopy the CPUs are stopped and nothing should
> -dirty anything any more. Instead, dirty bits are cleared when the relevant
> -pages are sent during postcopy.
> -
>   Postcopy with hugepages
>   -----------------------
>   
> @@ -293,7 +302,7 @@ Retro-fitting postcopy to existing clients is possible:
>        guest memory access is made while holding a lock then all other
>        threads waiting for that lock will also be blocked.
>   
> -Postcopy Preemption Mode
> +Postcopy preemption mode
>   ------------------------
>   
>   Postcopy preempt is a new capability introduced in 8.0 QEMU release, it

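The "source side page map" rule quoted in the hunk above (bits set while the
CPUs dirty pages during precopy; once postcopy stops the CPUs, bits are only
cleared as pages are sent) can be modelled with a small sketch.  This is purely
illustrative — the class and method names are made up for this example and do
not reflect QEMU's actual data structures:

```python
# Illustrative model of the "source side page map" semantics described
# above -- a simplified sketch, not QEMU's implementation.

class MigrationBitmap:
    def __init__(self, num_pages):
        # One dirty bit per guest page; initially every page needs sending.
        self.dirty = [True] * num_pages
        self.postcopy = False  # CPUs are stopped once postcopy starts

    def cpu_write(self, page):
        # During precopy, guest writes re-dirty pages.
        assert not self.postcopy, "CPUs are stopped during postcopy"
        self.dirty[page] = True

    def page_sent(self, page):
        # A sent page no longer needs (re)sending; during postcopy this
        # is the only way a dirty bit gets cleared.
        self.dirty[page] = False

    def start_postcopy(self):
        self.postcopy = True

    def pages_left(self):
        return sum(self.dirty)
```

Under this model, precopy can loop forever (pages keep getting re-dirtied),
while postcopy is guaranteed to converge because `pages_left()` only decreases.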


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 09/10] docs/migration: Further move vfio to be feature of migration
  2024-01-09  6:46 ` [PATCH 09/10] docs/migration: Further move vfio to be feature of migration peterx
@ 2024-01-09  7:20   ` Cédric Le Goater
  0 siblings, 0 replies; 29+ messages in thread
From: Cédric Le Goater @ 2024-01-09  7:20 UTC (permalink / raw)
  To: peterx, qemu-devel
  Cc: Michael S . Tsirkin, Alex Williamson, Jason Wang, Bandan Das,
	Prasad Pandit, Fabiano Rosas

On 1/9/24 07:46, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
> 
> Move it one layer down, so taking VFIO-migration as a feature for
> migration.
> 
> Cc: Alex Williamson <alex.williamson@redhat.com>
> Cc: Cédric Le Goater <clg@redhat.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>


Reviewed-by: Cédric Le Goater <clg@redhat.com>

Thanks,

C.


> ---
>   docs/devel/migration/features.rst | 1 +
>   docs/devel/migration/index.rst    | 1 -
>   2 files changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/docs/devel/migration/features.rst b/docs/devel/migration/features.rst
> index e257d0d100..dea016f707 100644
> --- a/docs/devel/migration/features.rst
> +++ b/docs/devel/migration/features.rst
> @@ -8,3 +8,4 @@ Migration has plenty of features to support different use cases.
>   
>      postcopy
>      dirty-limit
> +   vfio
> diff --git a/docs/devel/migration/index.rst b/docs/devel/migration/index.rst
> index 7cf62541b9..2479e8ecb7 100644
> --- a/docs/devel/migration/index.rst
> +++ b/docs/devel/migration/index.rst
> @@ -10,6 +10,5 @@ QEMU live migration works.
>      main
>      features
>      compatibility
> -   vfio
>      virtio
>      best-practises




* Re: [PATCH 10/10] docs/migration: Further move virtio to be feature of migration
  2024-01-09  6:46 ` [PATCH 10/10] docs/migration: Further move virtio " peterx
@ 2024-01-09  7:20   ` Cédric Le Goater
  0 siblings, 0 replies; 29+ messages in thread
From: Cédric Le Goater @ 2024-01-09  7:20 UTC (permalink / raw)
  To: peterx, qemu-devel
  Cc: Michael S . Tsirkin, Alex Williamson, Jason Wang, Bandan Das,
	Prasad Pandit, Fabiano Rosas

On 1/9/24 07:46, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
> 
> Move it one layer down, so taking Virtio-migration as a feature for
> migration.
> 
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Cc: Jason Wang <jasowang@redhat.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>


Reviewed-by: Cédric Le Goater <clg@redhat.com>

Thanks,

C.



> ---
>   docs/devel/migration/features.rst | 1 +
>   docs/devel/migration/index.rst    | 1 -
>   2 files changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/docs/devel/migration/features.rst b/docs/devel/migration/features.rst
> index dea016f707..a9acaf618e 100644
> --- a/docs/devel/migration/features.rst
> +++ b/docs/devel/migration/features.rst
> @@ -9,3 +9,4 @@ Migration has plenty of features to support different use cases.
>      postcopy
>      dirty-limit
>      vfio
> +   virtio
> diff --git a/docs/devel/migration/index.rst b/docs/devel/migration/index.rst
> index 2479e8ecb7..7b7a706e35 100644
> --- a/docs/devel/migration/index.rst
> +++ b/docs/devel/migration/index.rst
> @@ -10,5 +10,4 @@ QEMU live migration works.
>      main
>      features
>      compatibility
> -   virtio
>      best-practises

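For reference, after this patch the toctree in
docs/devel/migration/features.rst would look roughly as below, reconstructed
from the diff context above (the page title and the toctree options are
assumptions — they are not visible in the hunk):

```rst
Migration features
==================

Migration has plenty of features to support different use cases.

.. toctree::
   :maxdepth: 2

   postcopy
   dirty-limit
   vfio
   virtio
```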



* Re: [PATCH 00/10] docs/migration: Reorganize migration documentations
  2024-01-09  6:46 [PATCH 00/10] docs/migration: Reorganize migration documentations peterx
                   ` (9 preceding siblings ...)
  2024-01-09  6:46 ` [PATCH 10/10] docs/migration: Further move virtio " peterx
@ 2024-01-09 10:49 ` Peter Xu
  2024-01-09 13:21   ` Cédric Le Goater
  2024-01-11  6:20 ` Peter Xu
  11 siblings, 1 reply; 29+ messages in thread
From: Peter Xu @ 2024-01-09 10:49 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cédric Le Goater, Michael S . Tsirkin, Alex Williamson,
	Jason Wang, Bandan Das, Prasad Pandit, Fabiano Rosas

On Tue, Jan 09, 2024 at 02:46:18PM +0800, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
> 
> Migration docs grow larger and larger.  There are plenty of things we can
> do here in the future, but to start that we'd better reorganize the current
> bloated doc files first and properly organize them into separate files.
> This series kicks that off.
> 
> This series mostly does the movement only, so please don't be scared of the
> slightly large diff.  I did touch up things here and there, but I didn't
> yet started writting much.  One thing I did is I converted virtio.txt to
> rST, but that's trivial and no real content I touched.
> 
> I am copying both virtio and vfio people because I'm merging the two
> separate files into the new docs/devel/migration/ folder.

Firstly, thanks Cédric for the super fast review.

A few things I'd like to mention alongside, because they're documentation
relevant too, and I'd like to collect any comments.

I just mostly rewrote two wiki pages completely:

  https://wiki.qemu.org/ToDo/LiveMigration
  https://wiki.qemu.org/Features/Migration

I merged all the TODO items from Features/Migration into the ToDo page,
while keeping the 2nd page mostly clean, just routing to other places.

I had a plan to make:

  https://qemu.org/docs/master

The sole place for migration documentation (i.e., making the QEMU repo the
source of truth for migration docs, as the docs are periodically built
there), with all the other places pointing to it, as I already did in the
wiki pages.  Meanwhile I kept all the TODOs on the wiki (not
Features/Migration, but ToDo/LiveMigration).

Fabiano / anyone: feel free to add / update / correct any entries there
where applicable.  Also, if there are any thoughts on the above, feel free
to let me know too.

Thanks,

-- 
Peter Xu




* Re: [PATCH 00/10] docs/migration: Reorganize migration documentations
  2024-01-09 10:49 ` [PATCH 00/10] docs/migration: Reorganize migration documentations Peter Xu
@ 2024-01-09 13:21   ` Cédric Le Goater
  2024-01-10  2:37     ` Peter Xu
  0 siblings, 1 reply; 29+ messages in thread
From: Cédric Le Goater @ 2024-01-09 13:21 UTC (permalink / raw)
  To: Peter Xu, qemu-devel
  Cc: Michael S . Tsirkin, Alex Williamson, Jason Wang, Bandan Das,
	Prasad Pandit, Fabiano Rosas


> A few things I'd like to mention alongside, because it's documentation
> relevant too, and I'd like to collect if there's any comment.
> 
> I just mostly rewrote two wiki pages completely:
> 
>    https://wiki.qemu.org/ToDo/LiveMigration
>    https://wiki.qemu.org/Features/Migration>
> I merged all the TODO items from Features/Migration into the ToDo page,
> while kept the 2nd page mostly clean, just to route to other places.
> 
> I had a plan to make:
> 
>    https://qemu.org/docs/master
> 
> The solo place for migration documentations (aka, QEMU repo the source of
> truth for migration docs, as it's peroidically built there), making all the
> rest places pointing to that, as I already did in the wiki page.  While I
> kept all the TODOs on the wiki page (not Features/Migration, but
> ToDo/LiveMigration).> 
> Fabiano / anyone: feel free to add / update / correct any entries there
> where applicable.  Also if there's any thoughts on above feel free to let
> me know too.

The Wiki has some limited value, the changelog for instance, but the rest
is a bag of orphaned and obsolete pages doomed to bit-rot, since it is
slowly being replaced by the in-tree documentation.

The info in the Features/Migration page is redundant with what we have
in-tree, apart from the CREDITS.  The TODO list could be some file under:

	https://qemu.org/docs/master/devel/migration

It would be easier to find and it would keep the Wiki to a strict minimum.

Cheers,

C.





* Re: [PATCH 05/10] docs/migration: Split "Debugging" and "Firmware"
  2024-01-09  6:46 ` [PATCH 05/10] docs/migration: Split "Debugging" and "Firmware" peterx
  2024-01-09  7:04   ` Cédric Le Goater
@ 2024-01-09 17:03   ` Fabiano Rosas
  2024-01-10  2:10     ` Peter Xu
  1 sibling, 1 reply; 29+ messages in thread
From: Fabiano Rosas @ 2024-01-09 17:03 UTC (permalink / raw)
  To: peterx, qemu-devel
  Cc: Cédric Le Goater, Michael S . Tsirkin, peterx,
	Alex Williamson, Jason Wang, Bandan Das, Prasad Pandit

peterx@redhat.com writes:

> From: Peter Xu <peterx@redhat.com>
>
> Move the two sections into a separate file called "best-practises.rst".

s/practises/practices/




* Re: [PATCH 05/10] docs/migration: Split "Debugging" and "Firmware"
  2024-01-09 17:03   ` Fabiano Rosas
@ 2024-01-10  2:10     ` Peter Xu
  0 siblings, 0 replies; 29+ messages in thread
From: Peter Xu @ 2024-01-10  2:10 UTC (permalink / raw)
  To: Fabiano Rosas
  Cc: qemu-devel, Cédric Le Goater, Michael S . Tsirkin,
	Alex Williamson, Jason Wang, Bandan Das, Prasad Pandit

On Tue, Jan 09, 2024 at 02:03:04PM -0300, Fabiano Rosas wrote:
> peterx@redhat.com writes:
> 
> > From: Peter Xu <peterx@redhat.com>
> >
> > Move the two sections into a separate file called "best-practises.rst".
> 
> s/practises/practices/

Will fix, thanks.

-- 
Peter Xu




* Re: [PATCH 00/10] docs/migration: Reorganize migration documentations
  2024-01-09 13:21   ` Cédric Le Goater
@ 2024-01-10  2:37     ` Peter Xu
  2024-01-10 15:21       ` Cédric Le Goater
  0 siblings, 1 reply; 29+ messages in thread
From: Peter Xu @ 2024-01-10  2:37 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: qemu-devel, Michael S . Tsirkin, Alex Williamson, Jason Wang,
	Bandan Das, Prasad Pandit, Fabiano Rosas

On Tue, Jan 09, 2024 at 02:21:26PM +0100, Cédric Le Goater wrote:
> 
> > A few things I'd like to mention alongside, because it's documentation
> > relevant too, and I'd like to collect if there's any comment.
> > 
> > I just mostly rewrote two wiki pages completely:
> > 
> >    https://wiki.qemu.org/ToDo/LiveMigration
> >    https://wiki.qemu.org/Features/Migration>
> > I merged all the TODO items from Features/Migration into the ToDo page,
> > while kept the 2nd page mostly clean, just to route to other places.
> > 
> > I had a plan to make:
> > 
> >    https://qemu.org/docs/master
> > 
> > The solo place for migration documentations (aka, QEMU repo the source of
> > truth for migration docs, as it's peroidically built there), making all the
> > rest places pointing to that, as I already did in the wiki page.  While I
> > kept all the TODOs on the wiki page (not Features/Migration, but
> > ToDo/LiveMigration).> Fabiano / anyone: feel free to add / update /
> > correct any entries there
> > where applicable.  Also if there's any thoughts on above feel free to let
> > me know too.
> 
> The Wiki has some limited value, the changelog for instance, but the rest
> is a bag of orphan and obsolete pages doomed to bit-rot since it is slowly
> being replaced by the in-tree documentation.
> 
> The info in the Features/Migration page is redundant with what we have
> in-tree, a part from the CREDITS. The TODO list could be some file under :
> 
> 	https://qemu.org/docs/master/devel/migration
> 
> It would be easier to find and it would keep the Wiki to a strict minimum.

Thanks for the suggestions.  I agree that we should minimize the wiki use,
especially for docs.  It'll be nice to use a single source of truth for the
docs, always accessible via qemu.org/docs; that also makes it easier for us
to ask for docs as part of the patches when new features are merged.

I see that most of the ToDos for the other parts of QEMU still use wiki
pages, even though they're indeed mostly outdated, just like the migration
ToDo before I updated it.

IMHO one thing the wiki serves well for ToDos is that it allows easy &
frequent updates on the projects, without requiring a review process like
most of the patches posted on the list.  The wiki page still maintains a
diff, and IMHO even that may not be required, as a history record of a
ToDo list may not help much in most cases.

The other issue regarding ToDos is that some of the ToDo ideas (or the
details someone frequently updates for an ongoing project) may not be
mature enough to be mentioned in official documents.  So even if some of
them could be put together with the QEMU repo, there may always be some
that are not suitable, and then we will still need some place for those.
I still don't know the ideal way to do this.

Thanks,

-- 
Peter Xu




* Re: [PATCH 00/10] docs/migration: Reorganize migration documentations
  2024-01-10  2:37     ` Peter Xu
@ 2024-01-10 15:21       ` Cédric Le Goater
  2024-01-11  2:42         ` Peter Xu
  0 siblings, 1 reply; 29+ messages in thread
From: Cédric Le Goater @ 2024-01-10 15:21 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Michael S . Tsirkin, Alex Williamson, Jason Wang,
	Bandan Das, Prasad Pandit, Fabiano Rosas

On 1/10/24 03:37, Peter Xu wrote:
> On Tue, Jan 09, 2024 at 02:21:26PM +0100, Cédric Le Goater wrote:
>>
>>> A few things I'd like to mention alongside, because it's documentation
>>> relevant too, and I'd like to collect if there's any comment.
>>>
>>> I just mostly rewrote two wiki pages completely:
>>>
>>>     https://wiki.qemu.org/ToDo/LiveMigration
>>>     https://wiki.qemu.org/Features/Migration>
>>> I merged all the TODO items from Features/Migration into the ToDo page,
>>> while kept the 2nd page mostly clean, just to route to other places.
>>>
>>> I had a plan to make:
>>>
>>>     https://qemu.org/docs/master
>>>
>>> The solo place for migration documentations (aka, QEMU repo the source of
>>> truth for migration docs, as it's peroidically built there), making all the
>>> rest places pointing to that, as I already did in the wiki page.  While I
>>> kept all the TODOs on the wiki page (not Features/Migration, but
>>> ToDo/LiveMigration).> Fabiano / anyone: feel free to add / update /
>>> correct any entries there
>>> where applicable.  Also if there's any thoughts on above feel free to let
>>> me know too.
>>
>> The Wiki has some limited value, the changelog for instance, but the rest
>> is a bag of orphan and obsolete pages doomed to bit-rot since it is slowly
>> being replaced by the in-tree documentation.
>>
>> The info in the Features/Migration page is redundant with what we have
>> in-tree, a part from the CREDITS. The TODO list could be some file under :
>>
>> 	https://qemu.org/docs/master/devel/migration
>>
>> It would be easier to find and it would keep the Wiki to a strict minimum.
> 
> Thanks for the suggestions.  I agree that we should minimize the wiki use,
> especially on docs.  It'll be nice we use a solo source of truth for the
> docs, always accessable via qemu.org/docs, and also makes it easier for us
> to ask for docs altogether as patches when new features are merged.
> 
> I see that most of the ToDos for the other part of qemus still use the wiki
> page, even though they're indeed mostly outdated just like the migration
> ToDo before I updated it.
> 
> IMHO one thing that the wiki services well for ToDo is that it allows easy
> & frequent updates on the projects, without the need to require a review
> process like most of the patches being posted on the list.  The wiki page
> still maintains a diff, and IMHO that may not even be required, as a
> history record of a ToDo list may not help much in most cases.
> 
> The other issue regarding ToDo is that, some of the ToDo idea (or when
> someone frequently updates with details on a project of an ongoing item)
> may not be mature enough to be mentioned in an official documents. So even
> if some can be considered to be put together with the qemu repo, there may
> always be some that may not be suitable, then we will still need some place
> for those.  I still don't know what's the ideal way to do this.

OK.  I see how you want to use the wiki for the TODO ideas, and it makes
sense for mid/long term projects which are not mature enough to be
mentioned in the official docs.  That said, I think we should mention
unsupported features, incomplete modelling, etc. in the official docs,
which is a bit different from a TODO, I agree.

We also have a [feature request] label under gitlab, and some issues are
tagged with it.  I wonder how we can consolidate the 3 sources: wiki,
gitlab, and https://www.qemu.org/docs/master/

Thanks,

C.




* Re: [PATCH 00/10] docs/migration: Reorganize migration documentations
  2024-01-10 15:21       ` Cédric Le Goater
@ 2024-01-11  2:42         ` Peter Xu
  0 siblings, 0 replies; 29+ messages in thread
From: Peter Xu @ 2024-01-11  2:42 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: qemu-devel, Michael S . Tsirkin, Alex Williamson, Jason Wang,
	Bandan Das, Prasad Pandit, Fabiano Rosas

On Wed, Jan 10, 2024 at 04:21:12PM +0100, Cédric Le Goater wrote:
> We also have a [feature request] label under gitlab and some issues are
> tagged with it. I wonder how we can consolidate the 3 sources : wiki,
> gitlab, https://www.qemu.org/docs/master/

Thanks for mentioning the gitlab issues!  This reminded me that we used to
have Dave looking after that from time to time, but it's been totally
overlooked, at least by myself; we probably need to spend some time
tracking it.  On the documentation side, the ToDos could potentially be
merged into the gitlab issues already, so we'd merge the 3 sources into 2.
I'll think about it.

-- 
Peter Xu




* Re: [PATCH 00/10] docs/migration: Reorganize migration documentations
  2024-01-09  6:46 [PATCH 00/10] docs/migration: Reorganize migration documentations peterx
                   ` (10 preceding siblings ...)
  2024-01-09 10:49 ` [PATCH 00/10] docs/migration: Reorganize migration documentations Peter Xu
@ 2024-01-11  6:20 ` Peter Xu
  11 siblings, 0 replies; 29+ messages in thread
From: Peter Xu @ 2024-01-11  6:20 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cédric Le Goater, Michael S . Tsirkin, Alex Williamson,
	Jason Wang, Bandan Das, Prasad Pandit, Fabiano Rosas

On Tue, Jan 09, 2024 at 02:46:18PM +0800, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
> 
> Migration docs grow larger and larger.  There are plenty of things we can
> do here in the future, but to start that we'd better reorganize the current
> bloated doc files first and properly organize them into separate files.
> This series kicks that off.
> 
> This series mostly does the movement only, so please don't be scared of the
> slightly large diff.  I did touch up things here and there, but I didn't
> yet started writting much.  One thing I did is I converted virtio.txt to
> rST, but that's trivial and no real content I touched.
> 
> I am copying both virtio and vfio people because I'm merging the two
> separate files into the new docs/devel/migration/ folder.

I fixed all the spelling of "practice"s in patch 5, and queued it for now
into staging.

-- 
Peter Xu




end of thread, other threads:[~2024-01-11  6:21 UTC | newest]

Thread overview: 29+ messages
2024-01-09  6:46 [PATCH 00/10] docs/migration: Reorganize migration documentations peterx
2024-01-09  6:46 ` [PATCH 01/10] docs/migration: Create migration/ directory peterx
2024-01-09  6:52   ` Cédric Le Goater
2024-01-09  6:46 ` [PATCH 02/10] docs/migration: Create index page peterx
2024-01-09  6:53   ` Cédric Le Goater
2024-01-09  6:46 ` [PATCH 03/10] docs/migration: Convert virtio.txt into rST peterx
2024-01-09  7:02   ` Cédric Le Goater
2024-01-09  6:46 ` [PATCH 04/10] docs/migration: Split "Backwards compatibility" separately peterx
2024-01-09  7:03   ` Cédric Le Goater
2024-01-09  6:46 ` [PATCH 05/10] docs/migration: Split "Debugging" and "Firmware" peterx
2024-01-09  7:04   ` Cédric Le Goater
2024-01-09 17:03   ` Fabiano Rosas
2024-01-10  2:10     ` Peter Xu
2024-01-09  6:46 ` [PATCH 06/10] docs/migration: Split "Postcopy" peterx
2024-01-09  7:05   ` Cédric Le Goater
2024-01-09  6:46 ` [PATCH 07/10] docs/migration: Split "dirty limit" peterx
2024-01-09  7:06   ` Cédric Le Goater
2024-01-09  6:46 ` [PATCH 08/10] docs/migration: Organize "Postcopy" page peterx
2024-01-09  7:20   ` Cédric Le Goater
2024-01-09  6:46 ` [PATCH 09/10] docs/migration: Further move vfio to be feature of migration peterx
2024-01-09  7:20   ` Cédric Le Goater
2024-01-09  6:46 ` [PATCH 10/10] docs/migration: Further move virtio " peterx
2024-01-09  7:20   ` Cédric Le Goater
2024-01-09 10:49 ` [PATCH 00/10] docs/migration: Reorganize migration documentations Peter Xu
2024-01-09 13:21   ` Cédric Le Goater
2024-01-10  2:37     ` Peter Xu
2024-01-10 15:21       ` Cédric Le Goater
2024-01-11  2:42         ` Peter Xu
2024-01-11  6:20 ` Peter Xu
