public inbox for kvm@vger.kernel.org
* [RFC PATCH 0/5] Documentation: kvm: cleanup and introduce "VM planes"
@ 2024-10-23 12:45 Paolo Bonzini
  2024-10-23 12:45 ` [RFC PATCH 1/5] KVM: powerpc: remove remaining traces of KVM_CAP_PPC_RMA Paolo Bonzini
                   ` (5 more replies)
  0 siblings, 6 replies; 11+ messages in thread
From: Paolo Bonzini @ 2024-10-23 12:45 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: roy.hopkins, seanjc, michael.roth, ashish.kalra, jroedel,
	thomas.lendacky, nsaenz, anelkz, oliver.upton, isaku.yamahata,
	maz, steven.price, kai.huang, rick.p.edgecombe, James.Bottomley

As discussed at KVM Forum, this series introduces documentation for the
"VM planes" concept that can be used to implement at least AMD VMPLs
and Microsoft VTLs.

I didn't include Intel TDX and Arm CCA, because people expressed doubts
about whether KVM could deal with {firm,hard}ware that magically enters
the vCPU at one privilege level and leaves at another.  This may not be
a blocker, however, especially considering that we decided to have only
one mutex for all planes.

Compared to the notes from the KVM Forum BoF, the main change is in the
kvm_run fields.  The design at the BoF had fields masked_planes and
runnable_planes, and a userspace exit would happen if the value of
runnable_planes & ~masked_planes became nonzero.
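
As a minimal sketch of the BoF-era predicate described above (purely
illustrative; the field names come from the BoF notes, not from any
released uAPI):

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the BoF design: request a userspace exit whenever some
 * plane is runnable and not masked.  One bit per plane. */
static bool bof_should_exit(uint64_t runnable_planes, uint64_t masked_planes)
{
	return (runnable_planes & ~masked_planes) != 0;
}
```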

Here instead I have:

- req_exit_planes, which is similar to ~masked_planes.  The difference comes
  from the Hyper-V VINA feature ("Virtual Interrupt Notification Assist"),
  which requires userspace to know about _all_ interrupts, even those for
  lower VTLs.

- suspended_planes, which is not used yet but needs to be there for future
  in-kernel accelerations, because interrupts can "skip" VTLs/VMPLs and need
  to return to the last suspended level.

  I am not sure that this needs to be in kvm_run though.  It definitely has
  to be migrated once KVM supports in-kernel switch, but I am not sure that
  userspace needs it "enough" to put it in kvm_run.  It could be accessed
  with KVM_GET_ONE_REG/KVM_SET_ONE_REG or similar, perhaps.

- pending_event_planes (same as runnable_planes) is in the KVM_EXIT_PLANE_EVENT
  data, kvm_run->exit.plane.  It seems that userspace does not use it in
  any other case (KVM probably needs to keep it up to date at all times;
  but it cannot trust what is in kvm_run anyway, and needs to have its
  own copy).
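
To make the interplay of these fields concrete, here is a rough sketch
(entirely hypothetical; the field names follow this cover letter, but
the actual kvm_run layout is still undecided) of the exit decision and
of the "return to the last suspended level" lookup:

```c
#include <stdbool.h>
#include <stdint.h>

/* Exit to userspace when a pending event targets a plane that
 * userspace asked to be notified about (req_exit_planes). */
static bool plane_event_should_exit(uint64_t pending_event_planes,
				    uint64_t req_exit_planes)
{
	return (pending_event_planes & req_exit_planes) != 0;
}

/* Interrupts can "skip" VTLs/VMPLs; when the current plane is done,
 * control returns to the highest suspended plane below it.  Returns
 * -1 if no lower plane is suspended. */
static int plane_resume_target(uint64_t suspended_planes, int current_plane)
{
	for (int p = current_plane - 1; p >= 0; p--)
		if (suspended_planes & (1ull << p))
			return p;
	return -1;
}
```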


Another difference is in whether the FPU is shared.  We had it as shared,
but for SEV-ES the contents of the x87 and AVX registers are stored in
the VMSA, and therefore each VMPL has its own copy.  The solution I have
(KVM_CAP_PLANE_FPU) is a bit of a cop-out, though.

In order to add the relevant text, there are a few cleanups that can be applied
separately.


Paolo Bonzini (5):
  KVM: powerpc: remove remaining traces of KVM_CAP_PPC_RMA
  Documentation: kvm: fix a few mistakes
  Documentation: kvm: replace section numbers with links
  Documentation: kvm: reorganize introduction
  Documentation: kvm: introduce "VM plane" concept

 Documentation/virt/kvm/api.rst           | 357 ++++++++++++++++-------
 Documentation/virt/kvm/vcpu-requests.rst |   7 +
 arch/powerpc/kvm/powerpc.c               |   3 -
 3 files changed, 266 insertions(+), 101 deletions(-)

-- 
2.46.2



* [RFC PATCH 1/5] KVM: powerpc: remove remaining traces of KVM_CAP_PPC_RMA
  2024-10-23 12:45 [RFC PATCH 0/5] Documentation: kvm: cleanup and introduce "VM planes" Paolo Bonzini
@ 2024-10-23 12:45 ` Paolo Bonzini
  2024-10-23 12:45 ` [RFC PATCH 2/5] Documentation: kvm: fix a few mistakes Paolo Bonzini
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Paolo Bonzini @ 2024-10-23 12:45 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: roy.hopkins, seanjc, michael.roth, ashish.kalra, jroedel,
	thomas.lendacky, nsaenz, anelkz, oliver.upton, isaku.yamahata,
	maz, steven.price, kai.huang, rick.p.edgecombe, James.Bottomley

This was only needed for PPC970 support, which is long gone: the
implementation was removed in 2014.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 Documentation/virt/kvm/api.rst | 36 ----------------------------------
 arch/powerpc/kvm/powerpc.c     |  3 ---
 2 files changed, 39 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 8e5dad80b337..85dc04bfad3b 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -2170,42 +2170,6 @@ userspace update the TCE table directly which is useful in some
 circumstances.
 
 
-4.63 KVM_ALLOCATE_RMA
----------------------
-
-:Capability: KVM_CAP_PPC_RMA
-:Architectures: powerpc
-:Type: vm ioctl
-:Parameters: struct kvm_allocate_rma (out)
-:Returns: file descriptor for mapping the allocated RMA
-
-This allocates a Real Mode Area (RMA) from the pool allocated at boot
-time by the kernel.  An RMA is a physically-contiguous, aligned region
-of memory used on older POWER processors to provide the memory which
-will be accessed by real-mode (MMU off) accesses in a KVM guest.
-POWER processors support a set of sizes for the RMA that usually
-includes 64MB, 128MB, 256MB and some larger powers of two.
-
-::
-
-  /* for KVM_ALLOCATE_RMA */
-  struct kvm_allocate_rma {
-	__u64 rma_size;
-  };
-
-The return value is a file descriptor which can be passed to mmap(2)
-to map the allocated RMA into userspace.  The mapped area can then be
-passed to the KVM_SET_USER_MEMORY_REGION ioctl to establish it as the
-RMA for a virtual machine.  The size of the RMA in bytes (which is
-fixed at host kernel boot time) is returned in the rma_size field of
-the argument structure.
-
-The KVM_CAP_PPC_RMA capability is 1 or 2 if the KVM_ALLOCATE_RMA ioctl
-is supported; 2 if the processor requires all virtual machines to have
-an RMA, or 1 if the processor can use an RMA but doesn't require it,
-because it supports the Virtual RMA (VRMA) facility.
-
-
 4.64 KVM_NMI
 ------------
 
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 961aadc71de2..a7e5bc0d969f 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -612,9 +612,6 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 				r = 8 | 4 | 2 | 1;
 		}
 		break;
-	case KVM_CAP_PPC_RMA:
-		r = 0;
-		break;
 	case KVM_CAP_PPC_HWRNG:
 		r = kvmppc_hwrng_present();
 		break;
-- 
2.46.2



* [RFC PATCH 2/5] Documentation: kvm: fix a few mistakes
  2024-10-23 12:45 [RFC PATCH 0/5] Documentation: kvm: cleanup and introduce "VM planes" Paolo Bonzini
  2024-10-23 12:45 ` [RFC PATCH 1/5] KVM: powerpc: remove remaining traces of KVM_CAP_PPC_RMA Paolo Bonzini
@ 2024-10-23 12:45 ` Paolo Bonzini
  2024-10-23 12:45 ` [RFC PATCH 3/5] Documentation: kvm: replace section numbers with links Paolo Bonzini
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Paolo Bonzini @ 2024-10-23 12:45 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: roy.hopkins, seanjc, michael.roth, ashish.kalra, jroedel,
	thomas.lendacky, nsaenz, anelkz, oliver.upton, isaku.yamahata,
	maz, steven.price, kai.huang, rick.p.edgecombe, James.Bottomley

The only occurrence of "Capability: none" actually meant the same as "basic".
Fix that and a few more aesthetic or content issues in the document.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 Documentation/virt/kvm/api.rst | 27 ++++++++++++---------------
 1 file changed, 12 insertions(+), 15 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 85dc04bfad3b..480ab8174e56 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -96,12 +96,9 @@ description:
   Capability:
       which KVM extension provides this ioctl.  Can be 'basic',
       which means that is will be provided by any kernel that supports
-      API version 12 (see section 4.1), a KVM_CAP_xyz constant, which
+      API version 12 (see section 4.1), or a KVM_CAP_xyz constant, which
       means availability needs to be checked with KVM_CHECK_EXTENSION
-      (see section 4.4), or 'none' which means that while not all kernels
-      support this ioctl, there's no capability bit to check its
-      availability: for kernels that don't support the ioctl,
-      the ioctl returns -ENOTTY.
+      (see section 4.4).
 
   Architectures:
       which instruction set architectures provide this ioctl.
@@ -338,8 +335,8 @@ KVM_S390_SIE_PAGE_OFFSET in order to obtain a memory map of the virtual
 cpu's hardware control block.
 
 
-4.8 KVM_GET_DIRTY_LOG (vm ioctl)
---------------------------------
+4.8 KVM_GET_DIRTY_LOG
+---------------------
 
 :Capability: basic
 :Architectures: all
@@ -1298,7 +1295,7 @@ See KVM_GET_VCPU_EVENTS for the data structure.
 
 :Capability: KVM_CAP_DEBUGREGS
 :Architectures: x86
-:Type: vm ioctl
+:Type: vcpu ioctl
 :Parameters: struct kvm_debugregs (out)
 :Returns: 0 on success, -1 on error
 
@@ -1320,7 +1317,7 @@ Reads debug registers from the vcpu.
 
 :Capability: KVM_CAP_DEBUGREGS
 :Architectures: x86
-:Type: vm ioctl
+:Type: vcpu ioctl
 :Parameters: struct kvm_debugregs (in)
 :Returns: 0 on success, -1 on error
 
@@ -2116,8 +2113,8 @@ TLB, prior to calling KVM_RUN on the associated vcpu.
 
 The "bitmap" field is the userspace address of an array.  This array
 consists of a number of bits, equal to the total number of TLB entries as
-determined by the last successful call to KVM_CONFIG_TLB, rounded up to the
-nearest multiple of 64.
+determined by the last successful call to ``KVM_ENABLE_CAP(KVM_CAP_SW_TLB)``,
+rounded up to the nearest multiple of 64.
 
 Each bit corresponds to one TLB entry, ordered the same as in the shared TLB
 array.
@@ -3554,6 +3551,27 @@ Errors:
 This ioctl returns the guest registers that are supported for the
 KVM_GET_ONE_REG/KVM_SET_ONE_REG calls.
 
+Note that s390 does not support KVM_GET_REG_LIST for historical reasons
+(read: nobody cared).  The set of registers in kernels 4.x and newer is:
+
+- KVM_REG_S390_TODPR
+
+- KVM_REG_S390_EPOCHDIFF
+
+- KVM_REG_S390_CPU_TIMER
+
+- KVM_REG_S390_CLOCK_COMP
+
+- KVM_REG_S390_PFTOKEN
+
+- KVM_REG_S390_PFCOMPARE
+
+- KVM_REG_S390_PFSELECT
+
+- KVM_REG_S390_PP
+
+- KVM_REG_S390_GBEA
+
 
 4.85 KVM_ARM_SET_DEVICE_ADDR (deprecated)
 -----------------------------------------
@@ -4902,8 +4899,8 @@ Coalesced pio is based on coalesced mmio. There is little difference
 between coalesced mmio and pio except that coalesced pio records accesses
 to I/O ports.
 
-4.117 KVM_CLEAR_DIRTY_LOG (vm ioctl)
-------------------------------------
+4.117 KVM_CLEAR_DIRTY_LOG
+-------------------------
 
 :Capability: KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2
 :Architectures: x86, arm64, mips
@@ -5212,7 +5209,7 @@ the cpu reset definition in the POP (Principles Of Operation).
 4.123 KVM_S390_INITIAL_RESET
 ----------------------------
 
-:Capability: none
+:Capability: basic
 :Architectures: s390
 :Type: vcpu ioctl
 :Parameters: none
@@ -6151,7 +6148,7 @@ applied.
 .. _KVM_ARM_GET_REG_WRITABLE_MASKS:
 
 4.139 KVM_ARM_GET_REG_WRITABLE_MASKS
--------------------------------------------
+------------------------------------
 
 :Capability: KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES
 :Architectures: arm64
-- 
2.46.2



* [RFC PATCH 3/5] Documentation: kvm: replace section numbers with links
  2024-10-23 12:45 [RFC PATCH 0/5] Documentation: kvm: cleanup and introduce "VM planes" Paolo Bonzini
  2024-10-23 12:45 ` [RFC PATCH 1/5] KVM: powerpc: remove remaining traces of KVM_CAP_PPC_RMA Paolo Bonzini
  2024-10-23 12:45 ` [RFC PATCH 2/5] Documentation: kvm: fix a few mistakes Paolo Bonzini
@ 2024-10-23 12:45 ` Paolo Bonzini
  2024-10-23 12:45 ` [RFC PATCH 4/5] Documentation: kvm: reorganize introduction Paolo Bonzini
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Paolo Bonzini @ 2024-10-23 12:45 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: roy.hopkins, seanjc, michael.roth, ashish.kalra, jroedel,
	thomas.lendacky, nsaenz, anelkz, oliver.upton, isaku.yamahata,
	maz, steven.price, kai.huang, rick.p.edgecombe, James.Bottomley

In order to simplify further introduction of hyperlinks, replace explicit
section numbers with rST hyperlinks.  The section numbers could actually
be removed now, but I'm not going to do a huge change throughout the file
for an RFC...

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 Documentation/virt/kvm/api.rst | 40 ++++++++++++++++++++++++----------
 1 file changed, 28 insertions(+), 12 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 480ab8174e56..42030227dedd 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -96,9 +96,9 @@ description:
   Capability:
       which KVM extension provides this ioctl.  Can be 'basic',
       which means that is will be provided by any kernel that supports
-      API version 12 (see section 4.1), or a KVM_CAP_xyz constant, which
-      means availability needs to be checked with KVM_CHECK_EXTENSION
-      (see section 4.4).
+      API version 12 (see :ref:`KVM_GET_API_VERSION <KVM_GET_API_VERSION>`),
+      or a KVM_CAP_xyz constant that can be checked with
+      :ref:`KVM_CHECK_EXTENSION <KVM_CHECK_EXTENSION>`.
 
   Architectures:
       which instruction set architectures provide this ioctl.
@@ -115,6 +115,8 @@ description:
       are not detailed, but errors with specific meanings are.
 
 
+.. _KVM_GET_API_VERSION:
+
 4.1 KVM_GET_API_VERSION
 -----------------------
 
@@ -243,6 +245,8 @@ This list also varies by kvm version and host processor, but does not change
 otherwise.
 
 
+.. _KVM_CHECK_EXTENSION:
+
 4.4 KVM_CHECK_EXTENSION
 -----------------------
 
@@ -285,7 +289,7 @@ the VCPU file descriptor can be mmap-ed, including:
 
 - if KVM_CAP_DIRTY_LOG_RING is available, a number of pages at
   KVM_DIRTY_LOG_PAGE_OFFSET * PAGE_SIZE.  For more information on
-  KVM_CAP_DIRTY_LOG_RING, see section 8.3.
+  KVM_CAP_DIRTY_LOG_RING, see :ref:`KVM_CAP_DIRTY_LOG_RING`.
 
 
 4.7 KVM_CREATE_VCPU
@@ -1426,6 +1430,8 @@ because of a quirk in the virtualization implementation (see the internals
 documentation when it pops into existence).
 
 
+.. _KVM_ENABLE_CAP:
+
 4.37 KVM_ENABLE_CAP
 -------------------
 
@@ -2560,7 +2566,7 @@ Specifically:
 ======================= ========= ===== =======================================
 
 .. [1] These encodings are not accepted for SVE-enabled vcpus.  See
-       KVM_ARM_VCPU_INIT.
+       :ref:`KVM_ARM_VCPU_INIT`.
 
        The equivalent register content can be accessed via bits [127:0] of
        the corresponding SVE Zn registers instead for vcpus that have SVE
@@ -5036,8 +5042,8 @@ Recognised values for feature:
 Finalizes the configuration of the specified vcpu feature.
 
 The vcpu must already have been initialised, enabling the affected feature, by
-means of a successful KVM_ARM_VCPU_INIT call with the appropriate flag set in
-features[].
+means of a successful :ref:`KVM_ARM_VCPU_INIT <KVM_ARM_VCPU_INIT>` call with the
+appropriate flag set in features[].
 
 For affected vcpu features, this is a mandatory step that must be performed
 before the vcpu is fully usable.
@@ -6380,6 +6386,8 @@ the capability to be present.
 `flags` must currently be zero.
 
 
+.. _kvm_run:
+
 5. The kvm_run structure
 ========================
 
@@ -7099,11 +7107,15 @@ primary storage for certain register types. Therefore, the kernel may use the
 values in kvm_run even if the corresponding bit in kvm_dirty_regs is not set.
 
 
+.. _cap_enable:
+
 6. Capabilities that can be enabled on vCPUs
 ============================================
 
 There are certain capabilities that change the behavior of the virtual CPU or
-the virtual machine when enabled. To enable them, please see section 4.37.
+the virtual machine when enabled. To enable them, please see
+:ref:`KVM_ENABLE_CAP`.
+
 Below you can find a list of capabilities and what their effect on the vCPU or
 the virtual machine is when enabling them.
 
@@ -7312,7 +7324,7 @@ KVM API and also from the guest.
           sets are supported
           (bitfields defined in arch/x86/include/uapi/asm/kvm.h).
 
-As described above in the kvm_sync_regs struct info in section 5 (kvm_run):
+As described above in the kvm_sync_regs struct info in section :ref:`kvm_run`,
 KVM_CAP_SYNC_REGS "allow[s] userspace to access certain guest registers
 without having to call SET/GET_*REGS". This reduces overhead by eliminating
 repeated ioctl calls for setting and/or getting register values. This is
@@ -7358,13 +7370,15 @@ Unused bitfields in the bitarrays must be set to zero.
 
 This capability connects the vcpu to an in-kernel XIVE device.
 
+.. _cap_enable_vm:
+
 7. Capabilities that can be enabled on VMs
 ==========================================
 
 There are certain capabilities that change the behavior of the virtual
-machine when enabled. To enable them, please see section 4.37. Below
-you can find a list of capabilities and what their effect on the VM
-is when enabling them.
+machine when enabled. To enable them, please see section
+:ref:`KVM_ENABLE_CAP`. Below you can find a list of capabilities and
+what their effect on the VM is when enabling them.
 
 The following information is provided along with the description:
 
@@ -8515,6 +8529,8 @@ guest according to the bits in the KVM_CPUID_FEATURES CPUID leaf
 (0x40000001). Otherwise, a guest may use the paravirtual features
 regardless of what has actually been exposed through the CPUID leaf.
 
+.. _KVM_CAP_DIRTY_LOG_RING:
+
 8.29 KVM_CAP_DIRTY_LOG_RING/KVM_CAP_DIRTY_LOG_RING_ACQ_REL
 ----------------------------------------------------------
 
-- 
2.46.2



* [RFC PATCH 4/5] Documentation: kvm: reorganize introduction
  2024-10-23 12:45 [RFC PATCH 0/5] Documentation: kvm: cleanup and introduce "VM planes" Paolo Bonzini
                   ` (2 preceding siblings ...)
  2024-10-23 12:45 ` [RFC PATCH 3/5] Documentation: kvm: replace section numbers with links Paolo Bonzini
@ 2024-10-23 12:45 ` Paolo Bonzini
  2024-10-23 12:45 ` [PATCH 5/5] Documentation: kvm: introduce "VM plane" concept Paolo Bonzini
  2024-11-08  9:07 ` [RFC PATCH 0/5] Documentation: kvm: cleanup and introduce "VM planes" Paolo Bonzini
  5 siblings, 0 replies; 11+ messages in thread
From: Paolo Bonzini @ 2024-10-23 12:45 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: roy.hopkins, seanjc, michael.roth, ashish.kalra, jroedel,
	thomas.lendacky, nsaenz, anelkz, oliver.upton, isaku.yamahata,
	maz, steven.price, kai.huang, rick.p.edgecombe, James.Bottomley

Reorganize the text to mention file descriptors as early as possible.
Also mention capabilities early, as they are a central part of KVM's
API.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 Documentation/virt/kvm/api.rst | 38 ++++++++++++++++++++++------------
 1 file changed, 25 insertions(+), 13 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 42030227dedd..6619098a8054 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -7,8 +7,19 @@ The Definitive KVM (Kernel-based Virtual Machine) API Documentation
 1. General description
 ======================
 
-The kvm API is a set of ioctls that are issued to control various aspects
-of a virtual machine.  The ioctls belong to the following classes:
+The kvm API is centered around different kinds of file descriptors
+and ioctls that can be issued to these file descriptors.  An initial
+open("/dev/kvm") obtains a handle to the kvm subsystem; this handle
+can be used to issue system ioctls.  A KVM_CREATE_VM ioctl on this
+handle will create a VM file descriptor which can be used to issue VM
+ioctls.  A KVM_CREATE_VCPU or KVM_CREATE_DEVICE ioctl on a VM fd will
+create a virtual cpu or device and return a file descriptor pointing to
+the new resource.
+
+In other words, the kvm API is a set of ioctls that are issued to
+different kinds of file descriptor in order to control various aspects of
+a virtual machine.  Depending on the file descriptor that accepts them,
+ioctls belong to the following classes:
 
  - System ioctls: These query and set global attributes which affect the
    whole kvm subsystem.  In addition a system ioctl is used to create
@@ -35,18 +46,19 @@ of a virtual machine.  The ioctls belong to the following classes:
    device ioctls must be issued from the same process (address space) that
    was used to create the VM.
 
-2. File descriptors
-===================
+While most ioctls are specific to one kind of file descriptor, in some
+cases the same ioctl can belong to more than one class.
 
-The kvm API is centered around file descriptors.  An initial
-open("/dev/kvm") obtains a handle to the kvm subsystem; this handle
-can be used to issue system ioctls.  A KVM_CREATE_VM ioctl on this
-handle will create a VM file descriptor which can be used to issue VM
-ioctls.  A KVM_CREATE_VCPU or KVM_CREATE_DEVICE ioctl on a VM fd will
-create a virtual cpu or device and return a file descriptor pointing to
-the new resource.  Finally, ioctls on a vcpu or device fd can be used
-to control the vcpu or device.  For vcpus, this includes the important
-task of actually running guest code.
+The KVM API grew over time.  For this reason, KVM defines many constants
+of the form ``KVM_CAP_*``, each corresponding to a set of functionality
+provided by one or more ioctls.  Availability of these "capabilities" can
+be checked with :ref:`KVM_CHECK_EXTENSION <KVM_CHECK_EXTENSION>`.  Some
+capabilities also need to be enabled for VMs or VCPUs where their
+functionality is desired (see :ref:`cap_enable` and :ref:`cap_enable_vm`).
+
+
+2. Restrictions
+===============
 
 In general file descriptors can be migrated among processes by means
 of fork() and the SCM_RIGHTS facility of unix domain socket.  These
-- 
2.46.2



* [PATCH 5/5] Documentation: kvm: introduce "VM plane" concept
  2024-10-23 12:45 [RFC PATCH 0/5] Documentation: kvm: cleanup and introduce "VM planes" Paolo Bonzini
                   ` (3 preceding siblings ...)
  2024-10-23 12:45 ` [RFC PATCH 4/5] Documentation: kvm: reorganize introduction Paolo Bonzini
@ 2024-10-23 12:45 ` Paolo Bonzini
  2025-01-17 21:48   ` Sean Christopherson
  2024-11-08  9:07 ` [RFC PATCH 0/5] Documentation: kvm: cleanup and introduce "VM planes" Paolo Bonzini
  5 siblings, 1 reply; 11+ messages in thread
From: Paolo Bonzini @ 2024-10-23 12:45 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: roy.hopkins, seanjc, michael.roth, ashish.kalra, jroedel,
	thomas.lendacky, nsaenz, anelkz, oliver.upton, isaku.yamahata,
	maz, steven.price, kai.huang, rick.p.edgecombe, James.Bottomley

Several processors have introduced a virtual privilege level concept
for guests, where the hypervisor hosts multiple copies of a vCPU's
register state (or at least most of it) and provides hypercalls or
instructions to switch between them.  These include AMD VMPLs, Intel
TDX partitions, Microsoft Hyper-V VTLs, and Arm CCA planes.
Include documentation on how the feature will be exposed to userspace,
based on a draft made between Plumbers and KVM Forum.

In the past, two main solutions were attempted, mostly in the context
of Hyper-V VTLs and SEV-SNP VMPLs:

- use a single vCPU file descriptor, and store multiple copies of the state
  in a single struct kvm_vcpu.  This requires a lot of changes to provide
  multiple copies of affected fields, especially MMUs and APICs, as well
  as complex uAPI extensions to direct existing ioctls to a specific
  privilege level.  This solution looked marginally okay for SEV-SNP
  VMPLs, but only because the copies of the register state were hidden
  in the VMSA (KVM does not manage it); it showed all its problems when
  applied to Hyper-V VTLs.

- use multiple VM and vCPU file descriptors, and handle the switch entirely
  in userspace.  This got gnarly pretty fast, for even more reasons than
  the previous case: for example, VMs could no longer share memslots,
  including dirty bitmaps and private/shared attributes (a substantial
  problem for SEV-SNP, since VMPLs share their ASID).  Another problem
  was the need to share _some_ register state across VTLs and to ensure
  that vCPUs did not run in parallel; a lot of logic had to be added in
  userspace so that a higher-privileged VTL properly interrupted a
  lower-privileged one.

  This solution also complicates the in-kernel implementation of privilege
  level switches, or even makes it impossible, because the kernel has no
  knowledge of the relationship between vCPUs that have the same id but
  belong to different privilege levels.

Especially given the need to accelerate switches in the kernel, it is
clear that KVM needs some level of knowledge of the relationship between
vCPUs that have the same id but belong to different privilege levels.
For this reason, I proposed a design that gives only the initial set of
VM and vCPU file descriptors the full set of ioctls + struct kvm_run;
other privilege levels instead support only a small part of the KVM API.
In fact, for the vm file descriptor it is only three ioctls:
KVM_CHECK_EXTENSION, KVM_SIGNAL_MSI and KVM_SET_MEMORY_ATTRIBUTES.
For vCPUs it is basically KVM_GET/SET_*.
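
The restriction on non-plane-0 VM file descriptors could be pictured as
a simple whitelist check.  This is a hypothetical sketch: the request
codes below are placeholders standing in for the real ioctl numbers
defined in <linux/kvm.h>, and no such dispatch function exists yet.

```c
#include <stdbool.h>

/* Placeholder request codes; illustrative values only, the real
 * definitions live in <linux/kvm.h>. */
enum {
	KVM_CHECK_EXTENSION_REQ = 1,
	KVM_SIGNAL_MSI_REQ,
	KVM_SET_MEMORY_ATTRIBUTES_REQ,
	KVM_CREATE_VCPU_REQ,
};

/* Per the proposal above, VM file descriptors for planes other than
 * plane 0 accept only a small subset of the VM ioctls. */
static bool plane_vm_ioctl_allowed(int req)
{
	switch (req) {
	case KVM_CHECK_EXTENSION_REQ:
	case KVM_SIGNAL_MSI_REQ:
	case KVM_SET_MEMORY_ATTRIBUTES_REQ:
		return true;
	default:
		return false;
	}
}
```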

This solves a lot of the problems of the multiple-file-descriptor
solution; namely, it gets for free the ability to avoid parallel
execution of the same vCPU in different privilege levels.  Changes to
the userspace API of course exist, but they are relatively small and
more easily backwards compatible, because they boil down to the
introduction of new kinds of file descriptor instead of having to
change the inputs to all affected ioctls.

It does share some of the code-churn issues of the single-file-descriptor
solution; on the other hand, a prototype multi-fd VMPL implementation[1]
also needed large-scale changes, which therefore seem unavoidable when
privilege levels are provided by hardware, rather than being only a
software concept as is the case for VTLs.

   [1] https://lore.kernel.org/lkml/cover.1726506534.git.roy.hopkins@suse.com/

Acknowledgements: thanks to everyone who participated in the discussions;
you are too many to mention in a small margin.  Thanks to Roy Hopkins,
Tom Lendacky, Anel Orazgaliyeva, Nicolas Saenz-Julienne for experimenting
with implementations of VTLs and VMPLs.

Ah, and because x86 has three names for it and Arm has one, choose the
Arm name for all architectures to avoid bikeshedding and to displease
everyone---including the KVM/arm64 folks, probably.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 Documentation/virt/kvm/api.rst           | 224 ++++++++++++++++++++---
 Documentation/virt/kvm/vcpu-requests.rst |   7 +
 2 files changed, 205 insertions(+), 26 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 6619098a8054..6777c24dedde 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -56,6 +56,18 @@ be checked with :ref:`KVM_CHECK_EXTENSION <KVM_CHECK_EXTENSION>`.  Some
 capabilities also need to be enabled for VMs or VCPUs where their
 functionality is desired (see :ref:`cap_enable` and :ref:`cap_enable_vm`).
 
+On some architectures, a "virtual privilege level" concept may be present
+apart from the usual separation between user and supervisor mode, or
+between hypervisor and guest mode.  When this is the case, a single vCPU
+can have multiple copies of its register state (or at least most of it),
+and will switch between them through a special processor instruction,
+or through some kind of hypercall.
+
+KVM calls these privilege levels "planes".  Planes other than the
+initially-created one (called "plane 0") have a file descriptor each,
+and so do the planes of each vCPU.  Ioctls for vCPU planes should also
+be issued from a single thread, unless specially marked as asynchronous
+in the documentation.
 
 2. Restrictions
 ===============
@@ -119,6 +131,11 @@ description:
   Type:
       system, vm, or vcpu.
 
+      File descriptors for planes other than plane 0 provide a subset
+      of vm and vcpu ioctls.  Those that *are* supported in extra
+      planes are marked specially in the documentation (for example,
+      `vcpu (all planes)`).
+
   Parameters:
       what parameters are accepted by the ioctl.
 
@@ -281,7 +281,7 @@ otherwise.
 
 :Capability: basic, KVM_CAP_CHECK_EXTENSION_VM for vm ioctl
 :Architectures: all
-:Type: system ioctl, vm ioctl
+:Type: system ioctl, vm ioctl (all planes)
 :Parameters: extension identifier (KVM_CAP_*)
 :Returns: 0 if unsupported; 1 (or some other positive integer) if supported
 
@@ -421,7 +438,7 @@ kvm_run' (see below).
 
 :Capability: basic
 :Architectures: all except arm64
-:Type: vcpu ioctl
+:Type: vcpu ioctl (all planes)
 :Parameters: struct kvm_regs (out)
 :Returns: 0 on success, -1 on error
 
@@ -461,7 +478,7 @@ Reads the general purpose registers from the vcpu.
 
 :Capability: basic
 :Architectures: all except arm64
-:Type: vcpu ioctl
+:Type: vcpu ioctl (all planes)
 :Parameters: struct kvm_regs (in)
 :Returns: 0 on success, -1 on error
 
@@ -475,7 +492,7 @@ See KVM_GET_REGS for the data structure.
 
 :Capability: basic
 :Architectures: x86, ppc
-:Type: vcpu ioctl
+:Type: vcpu ioctl (all planes)
 :Parameters: struct kvm_sregs (out)
 :Returns: 0 on success, -1 on error
 
@@ -506,7 +523,7 @@ but not yet injected into the cpu core.
 
 :Capability: basic
 :Architectures: x86, ppc
-:Type: vcpu ioctl
+:Type: vcpu ioctl (all planes)
 :Parameters: struct kvm_sregs (in)
 :Returns: 0 on success, -1 on error
 
@@ -519,7 +536,7 @@ data structures.
 
 :Capability: basic
 :Architectures: x86
-:Type: vcpu ioctl
+:Type: vcpu ioctl (all planes)
 :Parameters: struct kvm_translation (in/out)
 :Returns: 0 on success, -1 on error
 
@@ -645,7 +662,7 @@ This is an asynchronous vcpu ioctl and can be invoked from any thread.
 
 :Capability: basic (vcpu), KVM_CAP_GET_MSR_FEATURES (system)
 :Architectures: x86
-:Type: system ioctl, vcpu ioctl
+:Type: system ioctl, vcpu ioctl (all planes)
 :Parameters: struct kvm_msrs (in/out)
 :Returns: number of msrs successfully returned;
           -1 on error
@@ -685,7 +702,7 @@ kvm will fill in the 'data' member.
 
 :Capability: basic
 :Architectures: x86
-:Type: vcpu ioctl
+:Type: vcpu ioctl (all planes)
 :Parameters: struct kvm_msrs (in)
 :Returns: number of msrs successfully set (see below), -1 on error
 
@@ -773,7 +790,7 @@ signal mask.
 
 :Capability: basic
 :Architectures: x86, loongarch
-:Type: vcpu ioctl
+:Type: vcpu ioctl (all planes)
 :Parameters: struct kvm_fpu (out)
 :Returns: 0 on success, -1 on error
 
@@ -811,7 +828,7 @@ Reads the floating point state from the vcpu.
 
 :Capability: basic
 :Architectures: x86, loongarch
-:Type: vcpu ioctl
+:Type: vcpu ioctl (all planes)
 :Parameters: struct kvm_fpu (in)
 :Returns: 0 on success, -1 on error
 
@@ -1122,7 +1139,7 @@ Other flags returned by ``KVM_GET_CLOCK`` are accepted but ignored.
 :Capability: KVM_CAP_VCPU_EVENTS
 :Extended by: KVM_CAP_INTR_SHADOW
 :Architectures: x86, arm64
-:Type: vcpu ioctl
+:Type: vcpu ioctl (all planes)
 :Parameters: struct kvm_vcpu_events (out)
 :Returns: 0 on success, -1 on error
 
@@ -1245,7 +1262,7 @@ directly to the virtual CPU).
 :Capability: KVM_CAP_VCPU_EVENTS
 :Extended by: KVM_CAP_INTR_SHADOW
 :Architectures: x86, arm64
-:Type: vcpu ioctl
+:Type: vcpu ioctl (all planes)
 :Parameters: struct kvm_vcpu_events (in)
 :Returns: 0 on success, -1 on error
 
@@ -1311,7 +1328,7 @@ See KVM_GET_VCPU_EVENTS for the data structure.
 
 :Capability: KVM_CAP_DEBUGREGS
 :Architectures: x86
-:Type: vcpu ioctl
+:Type: vcpu ioctl (all planes)
 :Parameters: struct kvm_debugregs (out)
 :Returns: 0 on success, -1 on error
 
@@ -1333,7 +1350,7 @@ Reads debug registers from the vcpu.
 
 :Capability: KVM_CAP_DEBUGREGS
 :Architectures: x86
-:Type: vcpu ioctl
+:Type: vcpu ioctl (all planes)
 :Parameters: struct kvm_debugregs (in)
 :Returns: 0 on success, -1 on error
 
@@ -1649,7 +1666,7 @@ otherwise it will return EBUSY error.
 
 :Capability: KVM_CAP_XSAVE
 :Architectures: x86
-:Type: vcpu ioctl
+:Type: vcpu ioctl (all planes)
 :Parameters: struct kvm_xsave (out)
 :Returns: 0 on success, -1 on error
 
@@ -1669,7 +1686,7 @@ This ioctl would copy current vcpu's xsave struct to the userspace.
 
 :Capability: KVM_CAP_XSAVE and KVM_CAP_XSAVE2
 :Architectures: x86
-:Type: vcpu ioctl
+:Type: vcpu ioctl (all planes)
 :Parameters: struct kvm_xsave (in)
 :Returns: 0 on success, -1 on error
 
@@ -1697,7 +1714,7 @@ contents of CPUID leaf 0xD on the host.
 
 :Capability: KVM_CAP_XCRS
 :Architectures: x86
-:Type: vcpu ioctl
+:Type: vcpu ioctl (all planes)
 :Parameters: struct kvm_xcrs (out)
 :Returns: 0 on success, -1 on error
 
@@ -1724,7 +1741,7 @@ This ioctl would copy current vcpu's xcrs to the userspace.
 
 :Capability: KVM_CAP_XCRS
 :Architectures: x86
-:Type: vcpu ioctl
+:Type: vcpu ioctl (all planes)
 :Parameters: struct kvm_xcrs (in)
 :Returns: 0 on success, -1 on error
 
@@ -2014,7 +2031,7 @@ error.
 
 :Capability: KVM_CAP_IRQCHIP
 :Architectures: x86
-:Type: vcpu ioctl
+:Type: vcpu ioctl (all planes)
 :Parameters: struct kvm_lapic_state (out)
 :Returns: 0 on success, -1 on error
 
@@ -2045,7 +2062,7 @@ always uses xAPIC format.
 
 :Capability: KVM_CAP_IRQCHIP
 :Architectures: x86
-:Type: vcpu ioctl
+:Type: vcpu ioctl (all planes)
 :Parameters: struct kvm_lapic_state (in)
 :Returns: 0 on success, -1 on error
 
@@ -2296,7 +2296,7 @@ prior to calling the KVM_RUN ioctl.
 
 :Capability: KVM_CAP_ONE_REG
 :Architectures: all
-:Type: vcpu ioctl
+:Type: vcpu ioctl (all planes)
 :Parameters: struct kvm_one_reg (in)
 :Returns: 0 on success, negative value on failure
 
@@ -2908,7 +2908,7 @@ such as set vcpu counter or reset vcpu, and they have the following id bit patte
 
 :Capability: KVM_CAP_ONE_REG
 :Architectures: all
-:Type: vcpu ioctl
+:Type: vcpu ioctl (all planes)
 :Parameters: struct kvm_one_reg (in and out)
 :Returns: 0 on success, negative value on failure
 
@@ -2962,7 +2962,7 @@ after pausing the vcpu, but before it is resumed.
 
 :Capability: KVM_CAP_SIGNAL_MSI
 :Architectures: x86 arm64
-:Type: vm ioctl
+:Type: vm ioctl (all planes)
 :Parameters: struct kvm_msi (in)
 :Returns: >0 on delivery, 0 if guest blocked the MSI, and -1 on error
 
@@ -3565,7 +3565,7 @@ VCPU matching underlying host.
 
 :Capability: basic
 :Architectures: arm64, mips, riscv
-:Type: vcpu ioctl
+:Type: vcpu ioctl (all planes)
 :Parameters: struct kvm_reg_list (in/out)
 :Returns: 0 on success; -1 on error
 
@@ -4807,7 +4824,7 @@ The acceptable values for the flags field are::
 
 :Capability: KVM_CAP_NESTED_STATE
 :Architectures: x86
-:Type: vcpu ioctl
+:Type: vcpu ioctl (all planes)
 :Parameters: struct kvm_nested_state (in/out)
 :Returns: 0 on success, -1 on error
 
@@ -4881,7 +4898,7 @@ to the KVM_CHECK_EXTENSION ioctl().
 
 :Capability: KVM_CAP_NESTED_STATE
 :Architectures: x86
-:Type: vcpu ioctl
+:Type: vcpu ioctl (all planes)
 :Parameters: struct kvm_nested_state (in)
 :Returns: 0 on success, -1 on error
 
@@ -5762,7 +5779,7 @@ then ``length`` is returned.
 
 :Capability: KVM_CAP_SREGS2
 :Architectures: x86
-:Type: vcpu ioctl
+:Type: vcpu ioctl (all planes)
 :Parameters: struct kvm_sregs2 (out)
 :Returns: 0 on success, -1 on error
 
@@ -5795,7 +5812,7 @@ flags values for ``kvm_sregs2``:
 
 :Capability: KVM_CAP_SREGS2
 :Architectures: x86
-:Type: vcpu ioctl
+:Type: vcpu ioctl (all planes)
 :Parameters: struct kvm_sregs2 (in)
 :Returns: 0 on success, -1 on error
 
@@ -6011,7 +6028,7 @@ as the descriptors in Descriptors block.
 
 :Capability: KVM_CAP_XSAVE2
 :Architectures: x86
-:Type: vcpu ioctl
+:Type: vcpu ioctl (all planes)
 :Parameters: struct kvm_xsave (out)
 :Returns: 0 on success, -1 on error
 
@@ -6269,7 +6286,7 @@ Returns -EINVAL if called on a protected VM.
 
 :Capability: KVM_CAP_MEMORY_ATTRIBUTES
 :Architectures: x86
-:Type: vm ioctl
+:Type: vm ioctl (all planes)
 :Parameters: struct kvm_memory_attributes (in)
 :Returns: 0 on success, <0 on error
 
@@ -6398,6 +6415,46 @@ the capability to be present.
 `flags` must currently be zero.
 
 
+.. _KVM_CREATE_PLANE:
+
+4.144 KVM_CREATE_PLANE
+----------------------
+
+:Capability: KVM_CAP_PLANE
+:Architectures: none
+:Type: vm ioctl
+:Parameters: plane id
+:Returns: a VM fd that can be used to control the new plane.
+
+Creates a new *plane*, i.e. a separate privilege level for the
+virtual machine.  Each plane has its own memory attributes,
+which can be used to enable more restricted permissions than
+what is allowed with ``KVM_SET_USER_MEMORY_REGION``.
+
+Each plane has a numeric id that is used when communicating
+with KVM through the :ref:`kvm_run <kvm_run>` struct.  While
+KVM is currently agnostic to whether low ids are more or less
+privileged, it is expected that this will not always be the
+case in the future.  For example, KVM may use the plane id
+when planes are supported by hardware (as is the case for
+AMD VMPLs), or if KVM supports accelerated plane
+switch operations (as might be the case for Hyper-V VTLs).
+
+4.145 KVM_CREATE_VCPU_PLANE
+---------------------------
+
+:Capability: KVM_CAP_PLANE
+:Architectures: none
+:Type: vm ioctl (non default plane)
+:Parameters: vcpu file descriptor for the default plane
+:Returns: a vCPU fd that can be used to control the new plane
+          for the vCPU.
+
+Adds a vCPU to a plane; the new vCPU's id comes from the vCPU
+file descriptor that is passed as the argument.  Note that,
+because of how the API is defined, planes other than plane 0
+can only have a subset of the ids that are available in plane 0.
+
 .. _kvm_run:
 
 5. The kvm_run structure
@@ -6433,7 +6490,50 @@ This field is ignored if KVM_CAP_IMMEDIATE_EXIT is not available.
 
 ::
 
-	__u8 padding1[6];
+	/* in/out */
+	__u8 plane;
+
+The plane that will be run (usually 0).
+
+While this is not yet supported, in the future KVM may handle plane
+switches in the kernel.  In that case, the output value of this field
+may differ from the input value.  However, automatic switching will
+have to be :ref:`explicitly enabled <KVM_ENABLE_CAP>`.
+
+For backwards compatibility, this field is ignored unless a plane
+other than plane 0 has been created.
+
+::
+
+	/* in/out */
+	__u16 suspended_planes;
+
+A bitmap of planes whose execution was suspended to run a
+higher-privileged plane, usually via a hypercall or due to
+an interrupt in the higher-privileged plane.
+
+KVM does not currently use this field; it may be used in the future
+once KVM implements in-kernel plane switch mechanisms.  Until then,
+userspace can leave it as zero.
+
+::
+
+	/* in */
+	__u16 req_exit_planes;
+
+A bitmap of planes for which KVM should exit when they have a pending
+interrupt.  In general, userspace should set bits corresponding to
+planes that are more privileged than ``plane``; because KVM is agnostic
+to whether low ids are more or less privileged, these could be the bits
+*above* or *below* ``plane``.  In some cases it may make sense to request
+an exit for all planes---for example, if the higher-priority plane
+wants to be informed about interrupts pending in lower-priority planes,
+userspace may need to learn about those as well.
+
+The bit at position ``plane`` is ignored; interrupts for the current
+plane are never delivered to userspace.
+
+::
 
 	/* out */
 	__u32 exit_reason;
@@ -7086,6 +7186,44 @@ The valid value for 'flags' is:
   - KVM_NOTIFY_CONTEXT_INVALID -- the VM context is corrupted and not valid
     in VMCS. It would run into unknown result if resume the target VM.
 
+::
+
+    /* KVM_EXIT_PLANE_EVENT */
+    struct {
+  #define KVM_PLANE_EVENT_INTERRUPT	0
+      __u16 pending_event_planes;
+      __u8 cause;
+      __u8 target;
+      __u32 flags;
+      __u64 extra;
+    } plane;
+
+Inform userspace of an event that affects a different plane than the
+currently executing one.
+
+On a ``KVM_EXIT_PLANE_EVENT`` exit, ``pending_event_planes`` is always
+set to the set of planes that have a pending interrupt.
+
+``cause`` reports the event that caused the exit, and the meaning of
+``target`` depends on the cause as well.
+
+Right now the only defined cause is ``KVM_PLANE_EVENT_INTERRUPT``, i.e.
+an interrupt was received by a plane whose id is set in the
+``req_exit_planes`` bitmap.  In this case, ``target`` is the id of the
+plane that received an interrupt, and its bit is always set in both
+``req_exit_planes`` and ``pending_event_planes``.
+
+``flags`` and ``extra`` are currently always 0.
+
+If userspace wants to switch to the target plane, it should move any
+shared state from the current plane to ``target``, and then invoke
+``KVM_RUN`` with ``kvm_run->plane`` set to ``target`` (and
+``req_exit_planes`` initialized accordingly).  Note that it's also
+valid to switch planes in response to other userspace exit codes, for
+example ``KVM_EXIT_X86_WRMSR`` or ``KVM_EXIT_HYPERCALL``.  Immediately
+after ``KVM_RUN`` is entered, KVM will check ``req_exit_planes`` and
+trigger a ``KVM_EXIT_PLANE_EVENT`` userspace exit if needed.
+
 ::
 
 		/* Fix the size of the union. */
@@ -8930,6 +9068,40 @@ Do not use KVM_X86_SW_PROTECTED_VM for "real" VMs, and especially not in
 production.  The behavior and effective ABI for software-protected VMs is
 unstable.
 
+8.42 KVM_CAP_PLANE
+------------------
+
+:Capability: KVM_CAP_PLANE
+:Architectures: x86
+:Type: system, vm
+
+The capability returns the maximum plane id that can be passed to
+:ref:`KVM_CREATE_PLANE <KVM_CREATE_PLANE>`.  Because the maximum
+id can vary according to the machine type, it is recommended to
+check for this capability on the VM file descriptor.
+
+When called on the system file descriptor, KVM returns the highest
+value supported on any machine type.
+
+
+8.43 KVM_CAP_PLANE_FPU
+----------------------
+
+:Capability: KVM_CAP_PLANE_FPU
+:Architectures: x86
+:Type: system, vm
+
+The capability returns 1 if each vCPU plane has its own FPU state.
+If the capability is absent, the FPU is shared by all vCPU planes.
+
+Note that ioctls such as KVM_SET_XSAVE or KVM_SET_FPU *are* available
+even if this capability is absent.  However, they will overwrite the
+registers presented to other planes.
+
+Also note that KVM_GET/SET_XSAVE also allows access to some registers
+that are *not* part of FPU state, notably PKRU.  Those are never shared.
+
+
 9. Known KVM API problems
 =========================
 
diff --git a/Documentation/virt/kvm/vcpu-requests.rst b/Documentation/virt/kvm/vcpu-requests.rst
index 06718b9bc959..86ac67b98a74 100644
--- a/Documentation/virt/kvm/vcpu-requests.rst
+++ b/Documentation/virt/kvm/vcpu-requests.rst
@@ -286,6 +286,13 @@ architecture dependent.  kvm_vcpu_block() calls kvm_arch_vcpu_runnable()
 to check if it should awaken.  One reason to do so is to provide
 architectures a function where requests may be checked if necessary.
 
+VM planes
+---------
+
+Each plane has its own set of requests.  Processing requests from
+another plane needs to go through a plane switch, for example via a
+``KVM_EXIT_PLANE_EVENT`` userspace exit.
+
 References
 ==========
 
-- 
2.46.2


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [RFC PATCH 0/5] Documentation: kvm: cleanup and introduce "VM planes"
  2024-10-23 12:45 [RFC PATCH 0/5] Documentation: kvm: cleanup and introduce "VM planes" Paolo Bonzini
                   ` (4 preceding siblings ...)
  2024-10-23 12:45 ` [PATCH 5/5] Documentation: kvm: introduce "VM plane" concept Paolo Bonzini
@ 2024-11-08  9:07 ` Paolo Bonzini
  5 siblings, 0 replies; 11+ messages in thread
From: Paolo Bonzini @ 2024-11-08  9:07 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: roy.hopkins, seanjc, michael.roth, ashish.kalra, jroedel,
	thomas.lendacky, nsaenz, anelkz, oliver.upton, isaku.yamahata,
	maz, steven.price, kai.huang, rick.p.edgecombe, James.Bottomley

On 10/23/24 14:45, Paolo Bonzini wrote:
> In order to add the relevant text, there are a few cleanups that can be applied
> separately.

Applied patches 1-4.

Paolo



* Re: [PATCH 5/5] Documentation: kvm: introduce "VM plane" concept
  2024-10-23 12:45 ` [PATCH 5/5] Documentation: kvm: introduce "VM plane" concept Paolo Bonzini
@ 2025-01-17 21:48   ` Sean Christopherson
  2025-01-21 10:50     ` Nicolas Saenz Julienne
  0 siblings, 1 reply; 11+ messages in thread
From: Sean Christopherson @ 2025-01-17 21:48 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: linux-kernel, kvm, roy.hopkins, michael.roth, ashish.kalra,
	jroedel, thomas.lendacky, nsaenz, anelkz, oliver.upton,
	isaku.yamahata, maz, steven.price, kai.huang, rick.p.edgecombe,
	James.Bottomley

On Wed, Oct 23, 2024, Paolo Bonzini wrote:
> @@ -6398,6 +6415,46 @@ the capability to be present.
>  `flags` must currently be zero.
>  
>  
> +.. _KVM_CREATE_PLANE:
> +
> +4.144 KVM_CREATE_PLANE
> +----------------------
> +
> +:Capability: KVM_CAP_PLANE
> +:Architectures: none
> +:Type: vm ioctl
> +:Parameters: plane id
> +:Returns: a VM fd that can be used to control the new plane.
> +
> +Creates a new *plane*, i.e. a separate privilege level for the
> +virtual machine.  Each plane has its own memory attributes,
> +which can be used to enable more restricted permissions than
> +what is allowed with ``KVM_SET_USER_MEMORY_REGION``.
> +
> +Each plane has a numeric id that is used when communicating
> +with KVM through the :ref:`kvm_run <kvm_run>` struct.  While
> +KVM is currently agnostic to whether low ids are more or less
> +privileged, it is expected that this will not always be the
> +case in the future.  For example KVM in the future may use
> +the plane id when planes are supported by hardware (as is the
> +case for VMPLs in AMD), or if KVM supports accelerated plane
> +switch operations (as might be the case for Hyper-V VTLs).
> +
> +4.145 KVM_CREATE_VCPU_PLANE
> +---------------------------
> +
> +:Capability: KVM_CAP_PLANE
> +:Architectures: none
> +:Type: vm ioctl (non default plane)
> +:Parameters: vcpu file descriptor for the default plane
> +:Returns: a vCPU fd that can be used to control the new plane
> +          for the vCPU.
> +
> +Adds a vCPU to a plane; the new vCPU's id comes from the vCPU
> +file descriptor that is passed in the argument.  Note that
> + because of how the API is defined, planes other than plane 0
> +can only have a subset of the ids that are available in plane 0.

Hmm, was there a reason why we decided to add KVM_CREATE_VCPU_PLANE, as opposed
to having KVM_CREATE_PLANE create vCPUs?  IIRC, we talked about being able to
provide the new FD, but that would be easy enough to handle in KVM_CREATE_PLANE,
e.g. with an array of fds.

E.g. is the expectation that userspace will create all planes before creating
any vCPUs?

My concern with relying on userspace to create vCPUs is that it will mean KVM
will need to support, or at least not blow up on, VMs with multiple planes, but
only a subset of vCPUs at planes > 0.  Given the snafus with vcpus_array, it's
not at all hard to imagine scenarios where KVM tries to access a NULL vCPU in
a different plane.


* Re: [PATCH 5/5] Documentation: kvm: introduce "VM plane" concept
  2025-01-17 21:48   ` Sean Christopherson
@ 2025-01-21 10:50     ` Nicolas Saenz Julienne
  2025-01-21 17:20       ` Sean Christopherson
  0 siblings, 1 reply; 11+ messages in thread
From: Nicolas Saenz Julienne @ 2025-01-21 10:50 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini
  Cc: linux-kernel, kvm, roy.hopkins, michael.roth, ashish.kalra,
	jroedel, thomas.lendacky, anelkz, oliver.upton, isaku.yamahata,
	maz, steven.price, kai.huang, rick.p.edgecombe, James.Bottomley

Hi Sean,

On Fri Jan 17, 2025 at 9:48 PM UTC, Sean Christopherson wrote:
> On Wed, Oct 23, 2024, Paolo Bonzini wrote:
>> @@ -6398,6 +6415,46 @@ the capability to be present.
>>  `flags` must currently be zero.
>>
>>
>> +.. _KVM_CREATE_PLANE:
>> +
>> +4.144 KVM_CREATE_PLANE
>> +----------------------
>> +
>> +:Capability: KVM_CAP_PLANE
>> +:Architectures: none
>> +:Type: vm ioctl
>> +:Parameters: plane id
>> +:Returns: a VM fd that can be used to control the new plane.
>> +
>> +Creates a new *plane*, i.e. a separate privilege level for the
>> +virtual machine.  Each plane has its own memory attributes,
>> +which can be used to enable more restricted permissions than
>> +what is allowed with ``KVM_SET_USER_MEMORY_REGION``.
>> +
>> +Each plane has a numeric id that is used when communicating
>> +with KVM through the :ref:`kvm_run <kvm_run>` struct.  While
>> +KVM is currently agnostic to whether low ids are more or less
>> +privileged, it is expected that this will not always be the
>> +case in the future.  For example KVM in the future may use
>> +the plane id when planes are supported by hardware (as is the
>> +case for VMPLs in AMD), or if KVM supports accelerated plane
>> +switch operations (as might be the case for Hyper-V VTLs).
>> +
>> +4.145 KVM_CREATE_VCPU_PLANE
>> +---------------------------
>> +
>> +:Capability: KVM_CAP_PLANE
>> +:Architectures: none
>> +:Type: vm ioctl (non default plane)
>> +:Parameters: vcpu file descriptor for the default plane
>> +:Returns: a vCPU fd that can be used to control the new plane
>> +          for the vCPU.
>> +
>> +Adds a vCPU to a plane; the new vCPU's id comes from the vCPU
>> +file descriptor that is passed in the argument.  Note that
>> + because of how the API is defined, planes other than plane 0
>> +can only have a subset of the ids that are available in plane 0.
>
> Hmm, was there a reason why we decided to add KVM_CREATE_VCPU_PLANE, as opposed
> to having KVM_CREATE_PLANE create vCPUs?  IIRC, we talked about being able to
> provide the new FD, but that would be easy enough to handle in KVM_CREATE_PLANE,
> e.g. with an array of fds.

IIRC we mentioned that there is nothing in the VSM spec preventing
higher VTLs from enabling a subset of vCPUs. That said, even the TLFS
mentions that doing so is not such a great idea (15.4 VTL Enablement):

"Enable the target VTL on one or more virtual processors. [...] It is
 recommended that all VPs have the same enabled VTLs. Having a VTL
 enabled on some VPs (but not all) can lead to unexpected behavior."

One thing I've been meaning to research is moving device emulation into
guest execution context by using VTLs. In that context, it might make
sense to only enable VTLs on specific vCPUs. But I'm only speculating.

Otherwise, I cannot think of real world scenarios where this property is
needed.

> E.g. is the expectation that userspace will create all planes before creating
> any vCPUs?

The opposite really, VTLs can be initiated anytime during runtime.

> My concern with relying on userspace to create vCPUs is that it will mean KVM
> will need to support, or at least not blow up on, VMs with multiple planes, but
> only a subset of vCPUs at planes > 0.  Given the snafus with vcpus_array, it's
> not at all hard to imagine scenarios where KVM tries to access a NULL vCPU in
> a different plane.


* Re: [PATCH 5/5] Documentation: kvm: introduce "VM plane" concept
  2025-01-21 10:50     ` Nicolas Saenz Julienne
@ 2025-01-21 17:20       ` Sean Christopherson
  2025-01-21 18:34         ` Nicolas Saenz Julienne
  0 siblings, 1 reply; 11+ messages in thread
From: Sean Christopherson @ 2025-01-21 17:20 UTC (permalink / raw)
  To: Nicolas Saenz Julienne
  Cc: Paolo Bonzini, linux-kernel, kvm, roy.hopkins, michael.roth,
	ashish.kalra, jroedel, thomas.lendacky, anelkz, oliver.upton,
	isaku.yamahata, maz, steven.price, kai.huang, rick.p.edgecombe,
	James.Bottomley

On Tue, Jan 21, 2025, Nicolas Saenz Julienne wrote:
> Hi Sean,
> 
> On Fri Jan 17, 2025 at 9:48 PM UTC, Sean Christopherson wrote:
> > On Wed, Oct 23, 2024, Paolo Bonzini wrote:
> >> @@ -6398,6 +6415,46 @@ the capability to be present.
> >>  `flags` must currently be zero.
> >>
> >>
> >> +.. _KVM_CREATE_PLANE:
> >> +
> >> +4.144 KVM_CREATE_PLANE
> >> +----------------------
> >> +
> >> +:Capability: KVM_CAP_PLANE
> >> +:Architectures: none
> >> +:Type: vm ioctl
> >> +:Parameters: plane id
> >> +:Returns: a VM fd that can be used to control the new plane.
> >> +
> >> +Creates a new *plane*, i.e. a separate privilege level for the
> >> +virtual machine.  Each plane has its own memory attributes,
> >> +which can be used to enable more restricted permissions than
> >> +what is allowed with ``KVM_SET_USER_MEMORY_REGION``.
> >> +
> >> +Each plane has a numeric id that is used when communicating
> >> +with KVM through the :ref:`kvm_run <kvm_run>` struct.  While
> >> +KVM is currently agnostic to whether low ids are more or less
> >> +privileged, it is expected that this will not always be the
> >> +case in the future.  For example KVM in the future may use
> >> +the plane id when planes are supported by hardware (as is the
> >> +case for VMPLs in AMD), or if KVM supports accelerated plane
> >> +switch operations (as might be the case for Hyper-V VTLs).
> >> +
> >> +4.145 KVM_CREATE_VCPU_PLANE
> >> +---------------------------
> >> +
> >> +:Capability: KVM_CAP_PLANE
> >> +:Architectures: none
> >> +:Type: vm ioctl (non default plane)
> >> +:Parameters: vcpu file descriptor for the default plane
> >> +:Returns: a vCPU fd that can be used to control the new plane
> >> +          for the vCPU.
> >> +
> >> +Adds a vCPU to a plane; the new vCPU's id comes from the vCPU
> >> +file descriptor that is passed in the argument.  Note that
> >> + because of how the API is defined, planes other than plane 0
> >> +can only have a subset of the ids that are available in plane 0.
> >
> > Hmm, was there a reason why we decided to add KVM_CREATE_VCPU_PLANE, as opposed
> > to having KVM_CREATE_PLANE create vCPUs?  IIRC, we talked about being able to
> > provide the new FD, but that would be easy enough to handle in KVM_CREATE_PLANE,
> > e.g. with an array of fds.
> 
> IIRC we mentioned that there is nothing in the VSM spec preventing
> higher VTLs from enabling a subset of vCPUs. That said, even the TLFS
> mentions that doing so is not such a great idea (15.4 VTL Enablement):
> 
> "Enable the target VTL on one or more virtual processors. [...] It is
>  recommended that all VPs have the same enabled VTLs. Having a VTL
>  enabled on some VPs (but not all) can lead to unexpected behavior."
> 
> One thing I've been meaning to research is moving device emulation into
> guest execution context by using VTLs. In that context, it might make
> sense to only enable VTLs on specific vCPUs. But I'm only speculating.

Creating vCPUs for a VTL in KVM doesn't need to _enable_ that VTL, and AIUI
shouldn't enable the VTL, because HvCallEnablePartitionVtl "only" enables the VTL
for the VM; HvCallEnableVpVtl is what fully enables the VTL for a given vCPU.

What I am proposing is to create the KVM vCPU object(s) at KVM_CREATE_PLANE,
purely to help avoid NULL pointer dereferences.  Actually, since KVM will likely
need uAPI to let userspace enable a VTL for a vCPU even if the vCPU object is
auto-created, we could have KVM auto-create the objects transparently, i.e. still
provide KVM_CREATE_VCPU_PLANE, but under the hood it would simply enable a flag
and install the vCPU's file descriptor.
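In toy-model form (userspace C, all names made up), the invariant I'm after is:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/*
 * Toy model of the proposal: creating a plane allocates a vCPU object
 * for every plane-0 vCPU up front, so in-KVM lookups never see NULL;
 * KVM_CREATE_VCPU_PLANE then only flips an "enabled" flag and installs
 * the file descriptor.  Purely illustrative, not kernel code.
 */
#define TOY_MAX_VCPUS 8

struct toy_vcpu  { bool enabled; };
struct toy_plane { struct toy_vcpu *vcpus[TOY_MAX_VCPUS]; int nr_vcpus; };

static struct toy_plane *toy_create_plane(int nr_plane0_vcpus)
{
	struct toy_plane *p = calloc(1, sizeof(*p));

	p->nr_vcpus = nr_plane0_vcpus;
	for (int i = 0; i < nr_plane0_vcpus; i++)
		p->vcpus[i] = calloc(1, sizeof(struct toy_vcpu));
	return p;
}

/* Stand-in for KVM_CREATE_VCPU_PLANE: mark enabled, return a fake fd. */
static int toy_create_vcpu_plane(struct toy_plane *p, int vcpu_id)
{
	p->vcpus[vcpu_id]->enabled = true;
	return 100 + vcpu_id;
}
```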

> Otherwise, I cannot think of real world scenarios where this property is
> needed.
> 
> > E.g. is the expectation that userspace will create all planes before creating
> > any vCPUs?
> 
> The opposite really, VTLs can be initiated anytime during runtime.

Oh, right.

> > My concern with relying on userspace to create vCPUs is that it will mean KVM
> > will need to support, or at least not blow up on, VMs with multiple planes, but
> > only a subset of vCPUs at planes > 0.  Given the snafus with vcpus_array, it's
> > not at all hard to imagine scenarios where KVM tries to access a NULL vCPU in
> > a different plane.


* Re: [PATCH 5/5] Documentation: kvm: introduce "VM plane" concept
  2025-01-21 17:20       ` Sean Christopherson
@ 2025-01-21 18:34         ` Nicolas Saenz Julienne
  0 siblings, 0 replies; 11+ messages in thread
From: Nicolas Saenz Julienne @ 2025-01-21 18:34 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, linux-kernel, kvm, roy.hopkins, michael.roth,
	ashish.kalra, jroedel, thomas.lendacky, anelkz, oliver.upton,
	isaku.yamahata, maz, steven.price, kai.huang, rick.p.edgecombe,
	James.Bottomley

On Tue Jan 21, 2025 at 5:20 PM UTC, Sean Christopherson wrote:
> On Tue, Jan 21, 2025, Nicolas Saenz Julienne wrote:
>> On Fri Jan 17, 2025 at 9:48 PM UTC, Sean Christopherson wrote:
>> > On Wed, Oct 23, 2024, Paolo Bonzini wrote:
>> >> @@ -6398,6 +6415,46 @@ the capability to be present.
>> >>  `flags` must currently be zero.
>> >> +.. _KVM_CREATE_PLANE:
>> >> +
>> >> +4.144 KVM_CREATE_PLANE
>> >> +----------------------
>> >> +
>> >> +:Capability: KVM_CAP_PLANE
>> >> +:Architectures: none
>> >> +:Type: vm ioctl
>> >> +:Parameters: plane id
>> >> +:Returns: a VM fd that can be used to control the new plane.
>> >> +
>> >> +Creates a new *plane*, i.e. a separate privilege level for the
>> >> +virtual machine.  Each plane has its own memory attributes,
>> >> +which can be used to enable more restricted permissions than
>> >> +what is allowed with ``KVM_SET_USER_MEMORY_REGION``.
>> >> +
>> >> +Each plane has a numeric id that is used when communicating
>> >> +with KVM through the :ref:`kvm_run <kvm_run>` struct.  While
>> >> +KVM is currently agnostic to whether low ids are more or less
>> >> +privileged, it is expected that this will not always be the
>> >> +case in the future.  For example KVM in the future may use
>> >> +the plane id when planes are supported by hardware (as is the
>> >> +case for VMPLs in AMD), or if KVM supports accelerated plane
>> >> +switch operations (as might be the case for Hyper-V VTLs).
>> >> +
>> >> +4.145 KVM_CREATE_VCPU_PLANE
>> >> +---------------------------
>> >> +
>> >> +:Capability: KVM_CAP_PLANE
>> >> +:Architectures: none
>> >> +:Type: vm ioctl (non default plane)
>> >> +:Parameters: vcpu file descriptor for the default plane
>> >> +:Returns: a vCPU fd that can be used to control the new plane
>> >> +          for the vCPU.
>> >> +
>> >> +Adds a vCPU to a plane; the new vCPU's id comes from the vCPU
>> >> +file descriptor that is passed in the argument.  Note that
>> >> + because of how the API is defined, planes other than plane 0
>> >> +can only have a subset of the ids that are available in plane 0.
>> >
>> > Hmm, was there a reason why we decided to add KVM_CREATE_VCPU_PLANE, as opposed
>> > to having KVM_CREATE_PLANE create vCPUs?  IIRC, we talked about being able to
>> > provide the new FD, but that would be easy enough to handle in KVM_CREATE_PLANE,
>> > e.g. with an array of fds.
>>
>> IIRC we mentioned that there is nothing in the VSM spec preventing
>> higher VTLs from enabling a subset of vCPUs. That said, even the TLFS
>> mentions that doing so is not such a great idea (15.4 VTL Enablement):
>>
>> "Enable the target VTL on one or more virtual processors. [...] It is
>>  recommended that all VPs have the same enabled VTLs. Having a VTL
>>  enabled on some VPs (but not all) can lead to unexpected behavior."
>>
>> One thing I've been meaning to research is moving device emulation into
>> guest execution context by using VTLs. In that context, it might make
>> sense to only enable VTLs on specific vCPUs. But I'm only speculating.
>
> Creating vCPUs for a VTL in KVM doesn't need to _enable_ that VTL, and AIUI
> shouldn't enable the VTL, because HvCallEnablePartitionVtl "only" enables the VTL
> for the VM, HvCallEnableVpVtl is what fully enables the VTL for a given vCPU.

Yes.

> What I am proposing is to create the KVM vCPU object(s) at KVM_CREATE_PLANE,
> purely to help avoid NULL pointer dereferences.  Actually, since KVM will likely
> need uAPI to let userspace enable a VTL for a vCPU even if the vCPU object is
> auto-created, we could have KVM auto-create the objects transparently, i.e. still
> provide KVM_CREATE_VCPU_PLANE, but under the hood it would simply enable a flag
> and install the vCPU's file descriptor.

Sounds good. I like the idea of keeping KVM_CREATE_VCPU_PLANE around. It
also leaves the door open to creating the objects at that stage if it
ever becomes necessary.

Nicolas


end of thread, other threads:[~2025-01-21 18:35 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2024-10-23 12:45 [RFC PATCH 0/5] Documentation: kvm: cleanup and introduce "VM planes" Paolo Bonzini
2024-10-23 12:45 ` [RFC PATCH 1/5] KVM: powerpc: remove remaining traces of KVM_CAP_PPC_RMA Paolo Bonzini
2024-10-23 12:45 ` [RFC PATCH 2/5] Documentation: kvm: fix a few mistakes Paolo Bonzini
2024-10-23 12:45 ` [RFC PATCH 3/5] Documentation: kvm: replace section numbers with links Paolo Bonzini
2024-10-23 12:45 ` [RFC PATCH 4/5] Documentation: kvm: reorganize introduction Paolo Bonzini
2024-10-23 12:45 ` [PATCH 5/5] Documentation: kvm: introduce "VM plane" concept Paolo Bonzini
2025-01-17 21:48   ` Sean Christopherson
2025-01-21 10:50     ` Nicolas Saenz Julienne
2025-01-21 17:20       ` Sean Christopherson
2025-01-21 18:34         ` Nicolas Saenz Julienne
2024-11-08  9:07 ` [RFC PATCH 0/5] Documentation: kvm: cleanup and introduce "VM planes" Paolo Bonzini
