* [PATCH 0/3] KVM: VCPU state extensions
From: Jan Kiszka @ 2010-02-15 9:45 UTC (permalink / raw)
To: Avi Kivity, Marcelo Tosatti; +Cc: kvm
These patches do not technically depend on each other, but they overlap,
so I'm posting them as a series.
Patch 1 is a repost. Patch 2 is reworked and comes with the following
changes:
- expose only a boolean to user space, mapping it to
  X86_SHADOW_INT_MOV_SS on write
- do not move the X86_SHADOW_INT_* flags around
- signal the capability via KVM_CAP_INTR_SHADOW and manage the new
  kvm_vcpu_events field via KVM_VCPUEVENT_VALID_SHADOW
- update the docs
Finally, patch 3 is new, plugging the debug register migration (and
reset) hole.
You can also pull from
git://git.kiszka.org/linux-kvm vcpu-state
Jan Kiszka (3):
KVM: x86: Do not return soft events in vcpu_events
KVM: x86: Save&restore interrupt shadow mask
KVM: x86: Add support for saving&restoring debug registers
Documentation/kvm/api.txt | 42 ++++++++++++++++++++++++-
arch/x86/include/asm/kvm.h | 13 +++++++-
arch/x86/kvm/vmx.c | 2 +-
arch/x86/kvm/x86.c | 75 +++++++++++++++++++++++++++++++++++++++++---
include/linux/kvm.h | 7 ++++
5 files changed, 131 insertions(+), 8 deletions(-)
* [PATCH 1/3] KVM: x86: Do not return soft events in vcpu_events
From: Jan Kiszka @ 2010-02-15 9:45 UTC (permalink / raw)
To: Avi Kivity, Marcelo Tosatti; +Cc: kvm
To prevent user space from migrating a pending software exception or
interrupt, mask them out in KVM_GET_VCPU_EVENTS. Without this, user
space would try to reinject them, and we would have to reconstruct the
proper instruction length for VMX event injection. Instead, the pending
event will now be reinjected by executing the triggering instruction
again.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
arch/x86/kvm/x86.c | 9 ++++++---
1 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 86b739f..50d1d2a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2121,14 +2121,17 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
{
vcpu_load(vcpu);
- events->exception.injected = vcpu->arch.exception.pending;
+ events->exception.injected =
+ vcpu->arch.exception.pending &&
+ !kvm_exception_is_soft(vcpu->arch.exception.nr);
events->exception.nr = vcpu->arch.exception.nr;
events->exception.has_error_code = vcpu->arch.exception.has_error_code;
events->exception.error_code = vcpu->arch.exception.error_code;
- events->interrupt.injected = vcpu->arch.interrupt.pending;
+ events->interrupt.injected =
+ vcpu->arch.interrupt.pending && !vcpu->arch.interrupt.soft;
events->interrupt.nr = vcpu->arch.interrupt.nr;
- events->interrupt.soft = vcpu->arch.interrupt.soft;
+ events->interrupt.soft = 0;
events->nmi.injected = vcpu->arch.nmi_injected;
events->nmi.pending = vcpu->arch.nmi_pending;
--
1.6.0.2
* [PATCH 2/3] KVM: x86: Save&restore interrupt shadow mask
From: Jan Kiszka @ 2010-02-15 9:45 UTC (permalink / raw)
To: Avi Kivity, Marcelo Tosatti; +Cc: kvm
The interrupt shadow created by STI or MOV-SS-like operations is part of
the VCPU state and must be preserved across migration. Transfer it in
the spare padding field of kvm_vcpu_events.interrupt.
As a side effect we now have to make vmx_set_interrupt_shadow robust
against both shadow types being set. Give MOV SS a higher priority and
skip STI in that case to avoid VMX raising a fault on the next entry.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
Documentation/kvm/api.txt | 11 ++++++++++-
arch/x86/include/asm/kvm.h | 3 ++-
arch/x86/kvm/vmx.c | 2 +-
arch/x86/kvm/x86.c | 12 ++++++++++--
include/linux/kvm.h | 1 +
5 files changed, 24 insertions(+), 5 deletions(-)
diff --git a/Documentation/kvm/api.txt b/Documentation/kvm/api.txt
index c6416a3..8770b67 100644
--- a/Documentation/kvm/api.txt
+++ b/Documentation/kvm/api.txt
@@ -656,6 +656,7 @@ struct kvm_clock_data {
4.29 KVM_GET_VCPU_EVENTS
Capability: KVM_CAP_VCPU_EVENTS
+Extended by: KVM_CAP_INTR_SHADOW
Architectures: x86
Type: vm ioctl
Parameters: struct kvm_vcpu_event (out)
@@ -676,7 +677,7 @@ struct kvm_vcpu_events {
__u8 injected;
__u8 nr;
__u8 soft;
- __u8 pad;
+ __u8 shadow;
} interrupt;
struct {
__u8 injected;
@@ -688,9 +689,13 @@ struct kvm_vcpu_events {
__u32 flags;
};
+KVM_VCPUEVENT_VALID_SHADOW may be set in the flags field to signal that
+interrupt.shadow contains a valid state. Otherwise, this field is undefined.
+
4.30 KVM_SET_VCPU_EVENTS
Capability: KVM_CAP_VCPU_EVENTS
+Extended by: KVM_CAP_INTR_SHADOW
Architectures: x86
Type: vm ioctl
Parameters: struct kvm_vcpu_event (in)
@@ -709,6 +714,10 @@ current in-kernel state. The bits are:
KVM_VCPUEVENT_VALID_NMI_PENDING - transfer nmi.pending to the kernel
KVM_VCPUEVENT_VALID_SIPI_VECTOR - transfer sipi_vector
+If KVM_CAP_INTR_SHADOW is available, KVM_VCPUEVENT_VALID_SHADOW can be set in
+the flags field to signal that interrupt.shadow contains a valid state and
+shall be written into the VCPU.
+
5. The kvm_run structure
diff --git a/arch/x86/include/asm/kvm.h b/arch/x86/include/asm/kvm.h
index f46b79f..dc6cd24 100644
--- a/arch/x86/include/asm/kvm.h
+++ b/arch/x86/include/asm/kvm.h
@@ -257,6 +257,7 @@ struct kvm_reinject_control {
/* When set in flags, include corresponding fields on KVM_SET_VCPU_EVENTS */
#define KVM_VCPUEVENT_VALID_NMI_PENDING 0x00000001
#define KVM_VCPUEVENT_VALID_SIPI_VECTOR 0x00000002
+#define KVM_VCPUEVENT_VALID_SHADOW 0x00000004
/* for KVM_GET/SET_VCPU_EVENTS */
struct kvm_vcpu_events {
@@ -271,7 +272,7 @@ struct kvm_vcpu_events {
__u8 injected;
__u8 nr;
__u8 soft;
- __u8 pad;
+ __u8 shadow;
} interrupt;
struct {
__u8 injected;
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index f82b072..0fa74d0 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -854,7 +854,7 @@ static void vmx_set_interrupt_shadow(struct kvm_vcpu *vcpu, int mask)
if (mask & X86_SHADOW_INT_MOV_SS)
interruptibility |= GUEST_INTR_STATE_MOV_SS;
- if (mask & X86_SHADOW_INT_STI)
+ else if (mask & X86_SHADOW_INT_STI)
interruptibility |= GUEST_INTR_STATE_STI;
if ((interruptibility != interruptibility_old))
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 50d1d2a..60e6341 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2132,6 +2132,9 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
vcpu->arch.interrupt.pending && !vcpu->arch.interrupt.soft;
events->interrupt.nr = vcpu->arch.interrupt.nr;
events->interrupt.soft = 0;
+ events->interrupt.shadow =
+ !!kvm_x86_ops->get_interrupt_shadow(vcpu,
+ X86_SHADOW_INT_MOV_SS | X86_SHADOW_INT_STI);
events->nmi.injected = vcpu->arch.nmi_injected;
events->nmi.pending = vcpu->arch.nmi_pending;
@@ -2140,7 +2143,8 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
events->sipi_vector = vcpu->arch.sipi_vector;
events->flags = (KVM_VCPUEVENT_VALID_NMI_PENDING
- | KVM_VCPUEVENT_VALID_SIPI_VECTOR);
+ | KVM_VCPUEVENT_VALID_SIPI_VECTOR
+ | KVM_VCPUEVENT_VALID_SHADOW);
vcpu_put(vcpu);
}
@@ -2149,7 +2153,8 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
struct kvm_vcpu_events *events)
{
if (events->flags & ~(KVM_VCPUEVENT_VALID_NMI_PENDING
- | KVM_VCPUEVENT_VALID_SIPI_VECTOR))
+ | KVM_VCPUEVENT_VALID_SIPI_VECTOR
+ | KVM_VCPUEVENT_VALID_SHADOW))
return -EINVAL;
vcpu_load(vcpu);
@@ -2164,6 +2169,9 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
vcpu->arch.interrupt.soft = events->interrupt.soft;
if (vcpu->arch.interrupt.pending && irqchip_in_kernel(vcpu->kvm))
kvm_pic_clear_isr_ack(vcpu->kvm);
+ if (events->flags & KVM_VCPUEVENT_VALID_SHADOW)
+ kvm_x86_ops->set_interrupt_shadow(vcpu,
+ events->interrupt.shadow ? X86_SHADOW_INT_MOV_SS : 0);
vcpu->arch.nmi_injected = events->nmi.injected;
if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING)
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index dfa54be..46fb860 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -501,6 +501,7 @@ struct kvm_ioeventfd {
#define KVM_CAP_HYPERV_VAPIC 45
#define KVM_CAP_HYPERV_SPIN 46
#define KVM_CAP_PCI_SEGMENT 47
+#define KVM_CAP_INTR_SHADOW 48
#ifdef KVM_CAP_IRQ_ROUTING
--
1.6.0.2
* [PATCH 3/3] KVM: x86: Add support for saving&restoring debug registers
From: Jan Kiszka @ 2010-02-15 9:45 UTC (permalink / raw)
To: Avi Kivity, Marcelo Tosatti; +Cc: kvm
So far, user space has not been able to save and restore debug registers
for migration or after a reset. Plug this hole.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
Documentation/kvm/api.txt | 31 +++++++++++++++++++++++++
arch/x86/include/asm/kvm.h | 10 ++++++++
arch/x86/kvm/x86.c | 54 ++++++++++++++++++++++++++++++++++++++++++++
include/linux/kvm.h | 6 +++++
4 files changed, 101 insertions(+), 0 deletions(-)
diff --git a/Documentation/kvm/api.txt b/Documentation/kvm/api.txt
index 8770b67..6753158 100644
--- a/Documentation/kvm/api.txt
+++ b/Documentation/kvm/api.txt
@@ -718,6 +718,37 @@ If KVM_CAP_INTR_SHADOW is available, KVM_VCPUEVENT_VALID_SHADOW can be set in
the flags field to signal that interrupt.shadow contains a valid state and
shall be written into the VCPU.
+4.32 KVM_GET_DEBUGREGS
+
+Capability: KVM_CAP_DEBUGREGS
+Architectures: x86
+Type: vm ioctl
+Parameters: struct kvm_debugregs (out)
+Returns: 0 on success, -1 on error
+
+Reads debug registers from the vcpu.
+
+struct kvm_debugregs {
+ __u64 db[4];
+ __u64 dr6;
+ __u64 dr7;
+ __u64 flags;
+ __u64 reserved[9];
+};
+
+4.33 KVM_SET_DEBUGREGS
+
+Capability: KVM_CAP_DEBUGREGS
+Architectures: x86
+Type: vm ioctl
+Parameters: struct kvm_debugregs (in)
+Returns: 0 on success, -1 on error
+
+Writes debug registers into the vcpu.
+
+See KVM_GET_DEBUGREGS for the data structure. The flags field is not
+used yet and must be cleared on entry.
+
5. The kvm_run structure
diff --git a/arch/x86/include/asm/kvm.h b/arch/x86/include/asm/kvm.h
index dc6cd24..a81920b 100644
--- a/arch/x86/include/asm/kvm.h
+++ b/arch/x86/include/asm/kvm.h
@@ -21,6 +21,7 @@
#define __KVM_HAVE_PIT_STATE2
#define __KVM_HAVE_XEN_HVM
#define __KVM_HAVE_VCPU_EVENTS
+#define __KVM_HAVE_DEBUGREGS
/* Architectural interrupt line count. */
#define KVM_NR_INTERRUPTS 256
@@ -285,4 +286,13 @@ struct kvm_vcpu_events {
__u32 reserved[10];
};
+/* for KVM_GET/SET_DEBUGREGS */
+struct kvm_debugregs {
+ __u64 db[4];
+ __u64 dr6;
+ __u64 dr7;
+ __u64 flags;
+ __u64 reserved[9];
+};
+
#endif /* _ASM_X86_KVM_H */
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 60e6341..61dfbf1 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1570,6 +1570,7 @@ int kvm_dev_ioctl_check_extension(long ext)
case KVM_CAP_HYPERV_VAPIC:
case KVM_CAP_HYPERV_SPIN:
case KVM_CAP_PCI_SEGMENT:
+ case KVM_CAP_DEBUGREGS:
r = 1;
break;
case KVM_CAP_COALESCED_MMIO:
@@ -2186,6 +2187,36 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
return 0;
}
+static void kvm_vcpu_ioctl_x86_get_debugregs(struct kvm_vcpu *vcpu,
+ struct kvm_debugregs *dbgregs)
+{
+ vcpu_load(vcpu);
+
+ memcpy(dbgregs->db, vcpu->arch.db, sizeof(vcpu->arch.db));
+ dbgregs->dr6 = vcpu->arch.dr6;
+ dbgregs->dr7 = vcpu->arch.dr7;
+ dbgregs->flags = 0;
+
+ vcpu_put(vcpu);
+}
+
+static int kvm_vcpu_ioctl_x86_set_debugregs(struct kvm_vcpu *vcpu,
+ struct kvm_debugregs *dbgregs)
+{
+ if (dbgregs->flags)
+ return -EINVAL;
+
+ vcpu_load(vcpu);
+
+ memcpy(vcpu->arch.db, dbgregs->db, sizeof(vcpu->arch.db));
+ vcpu->arch.dr6 = dbgregs->dr6;
+ vcpu->arch.dr7 = dbgregs->dr7;
+
+ vcpu_put(vcpu);
+
+ return 0;
+}
+
long kvm_arch_vcpu_ioctl(struct file *filp,
unsigned int ioctl, unsigned long arg)
{
@@ -2364,6 +2395,29 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
r = kvm_vcpu_ioctl_x86_set_vcpu_events(vcpu, &events);
break;
}
+ case KVM_GET_DEBUGREGS: {
+ struct kvm_debugregs dbgregs;
+
+ kvm_vcpu_ioctl_x86_get_debugregs(vcpu, &dbgregs);
+
+ r = -EFAULT;
+ if (copy_to_user(argp, &dbgregs,
+ sizeof(struct kvm_debugregs)))
+ break;
+ r = 0;
+ break;
+ }
+ case KVM_SET_DEBUGREGS: {
+ struct kvm_debugregs dbgregs;
+
+ r = -EFAULT;
+ if (copy_from_user(&dbgregs, argp,
+ sizeof(struct kvm_debugregs)))
+ break;
+
+ r = kvm_vcpu_ioctl_x86_set_debugregs(vcpu, &dbgregs);
+ break;
+ }
default:
r = -EINVAL;
}
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index 46fb860..667aec5 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -502,6 +502,9 @@ struct kvm_ioeventfd {
#define KVM_CAP_HYPERV_SPIN 46
#define KVM_CAP_PCI_SEGMENT 47
#define KVM_CAP_INTR_SHADOW 48
+#ifdef __KVM_HAVE_DEBUGREGS
+#define KVM_CAP_DEBUGREGS 49
+#endif
#ifdef KVM_CAP_IRQ_ROUTING
@@ -688,6 +691,9 @@ struct kvm_clock_data {
/* Available with KVM_CAP_VCPU_EVENTS */
#define KVM_GET_VCPU_EVENTS _IOR(KVMIO, 0x9f, struct kvm_vcpu_events)
#define KVM_SET_VCPU_EVENTS _IOW(KVMIO, 0xa0, struct kvm_vcpu_events)
+/* Available with KVM_CAP_DEBUGREGS */
+#define KVM_GET_DEBUGREGS _IOR(KVMIO, 0xa1, struct kvm_debugregs)
+#define KVM_SET_DEBUGREGS _IOW(KVMIO, 0xa2, struct kvm_debugregs)
#define KVM_DEV_ASSIGN_ENABLE_IOMMU (1 << 0)
--
1.6.0.2
* Re: [PATCH 2/3] KVM: x86: Save&restore interrupt shadow mask
From: Marcelo Tosatti @ 2010-02-17 0:39 UTC (permalink / raw)
To: Jan Kiszka; +Cc: Avi Kivity, kvm
On Mon, Feb 15, 2010 at 10:45:42AM +0100, Jan Kiszka wrote:
> The interrupt shadow created by STI or MOV-SS-like operations is part of
> the VCPU state and must be preserved across migration. Transfer it in
> the spare padding field of kvm_vcpu_events.interrupt.
>
> As a side effect we now have to make vmx_set_interrupt_shadow robust
> against both shadow types being set. Give MOV SS a higher priority and
> skip STI in that case to avoid VMX raising a fault on the next entry.
>
> Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
> [...]
> @@ -2164,6 +2169,9 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
> vcpu->arch.interrupt.soft = events->interrupt.soft;
> if (vcpu->arch.interrupt.pending && irqchip_in_kernel(vcpu->kvm))
> kvm_pic_clear_isr_ack(vcpu->kvm);
> + if (events->flags & KVM_VCPUEVENT_VALID_SHADOW)
> + kvm_x86_ops->set_interrupt_shadow(vcpu,
> + events->interrupt.shadow ? X86_SHADOW_INT_MOV_SS : 0);
It's hackish to transform blocking-by-STI into blocking-by-MOV-SS (STI
does not block debug exceptions for the next instruction).
Any special reason you are doing this?
Also, as Avi mentioned, it would be better to avoid this. Is it not
possible to disallow migration while an interrupt shadow is present?
* Re: [PATCH 2/3] KVM: x86: Save&restore interrupt shadow mask
From: Zachary Amsden @ 2010-02-17 8:06 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: Jan Kiszka, Avi Kivity, kvm
On 02/16/2010 02:39 PM, Marcelo Tosatti wrote:
> On Mon, Feb 15, 2010 at 10:45:42AM +0100, Jan Kiszka wrote:
>
>> The interrupt shadow created by STI or MOV-SS-like operations is part of
>> the VCPU state and must be preserved across migration. Transfer it in
>> the spare padding field of kvm_vcpu_events.interrupt.
The STI and MOV-SS interrupt shadows are treated differently by
hardware. Any attempt to unify them into a single field is wrong,
especially so in a hardware virtualization context, where they are
actually represented by distinct fields in the undocumented but
nevertheless extant formats that can be inferred from the
vendor-specific hardware virtualization contexts.
Zach
* Re: [PATCH 2/3] KVM: x86: Save&restore interrupt shadow mask
From: Jan Kiszka @ 2010-02-17 9:03 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: Avi Kivity, kvm
Marcelo Tosatti wrote:
> On Mon, Feb 15, 2010 at 10:45:42AM +0100, Jan Kiszka wrote:
>> The interrupt shadow created by STI or MOV-SS-like operations is part of
>> the VCPU state and must be preserved across migration. Transfer it in
>> the spare padding field of kvm_vcpu_events.interrupt.
>> [...]
>> @@ -2164,6 +2169,9 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
>> vcpu->arch.interrupt.soft = events->interrupt.soft;
>> if (vcpu->arch.interrupt.pending && irqchip_in_kernel(vcpu->kvm))
>> kvm_pic_clear_isr_ack(vcpu->kvm);
>> + if (events->flags & KVM_VCPUEVENT_VALID_SHADOW)
>> + kvm_x86_ops->set_interrupt_shadow(vcpu,
>> + events->interrupt.shadow ? X86_SHADOW_INT_MOV_SS : 0);
>
> Its hackish to transform blocking-by-sti into blocking-by-mov-ss (sti
> does not block debug exceptions for the next instruction).
>
> Any special reason you are doing this?
AMD makes no distinction between the two, so they would be unified
automatically during cross-vendor migration anyway.
>
> Also, as Avi mentioned it would be better to avoid this. Is it not
> possible to disallow migration while interrupt shadow is present?
Which means disallowing user space exits while the shadow is set? Or
should we introduce some flag for user space that tells it "do not
migrate now, resume the guest until the next exit"?
Jan
--
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux
* Re: [PATCH 2/3] KVM: x86: Save&restore interrupt shadow mask
From: Gleb Natapov @ 2010-02-17 9:05 UTC (permalink / raw)
To: Zachary Amsden; +Cc: Marcelo Tosatti, Jan Kiszka, Avi Kivity, kvm
On Tue, Feb 16, 2010 at 10:06:12PM -1000, Zachary Amsden wrote:
> On 02/16/2010 02:39 PM, Marcelo Tosatti wrote:
> >On Mon, Feb 15, 2010 at 10:45:42AM +0100, Jan Kiszka wrote:
> >>The interrupt shadow created by STI or MOV-SS-like operations is part of
> >>the VCPU state and must be preserved across migration. Transfer it in
> >>the spare padding field of kvm_vcpu_events.interrupt.
>
> STI and MOV-SS interrupt shadow are both treated differently by
> hardware. Any attempt to unify them into a single field is wrong,
> especially so in a hardware virtualization context, where they are
> actually represented by different fields in the undocumented but
> nevertheless extant format that can be inferred from the hardware
> virtualization context used by specific vendors.
>
The problem is that SVM doesn't distinguish between those two. But we
shouldn't design our interfaces based on SVM's brokenness.
--
Gleb.
* Re: [PATCH 2/3] KVM: x86: Save&restore interrupt shadow mask
From: Jan Kiszka @ 2010-02-17 9:07 UTC (permalink / raw)
To: Zachary Amsden; +Cc: Marcelo Tosatti, Avi Kivity, kvm
Zachary Amsden wrote:
> On 02/16/2010 02:39 PM, Marcelo Tosatti wrote:
>> On Mon, Feb 15, 2010 at 10:45:42AM +0100, Jan Kiszka wrote:
>>
>>> The interrupt shadow created by STI or MOV-SS-like operations is part of
>>> the VCPU state and must be preserved across migration. Transfer it in
>>> the spare padding field of kvm_vcpu_events.interrupt.
>
> STI and MOV-SS interrupt shadow are both treated differently by
> hardware. Any attempt to unify them into a single field is wrong,
> especially so in a hardware virtualization context, where they are
> actually represented by different fields in the undocumented but
> nevertheless extant format that can be inferred from the hardware
> virtualization context used by specific vendors.
Someone should ask AMD why they thought differently about this while
designing SVM...
Jan
--
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux
* Re: [PATCH 2/3] KVM: x86: Save&restore interrupt shadow mask
From: Gleb Natapov @ 2010-02-17 9:10 UTC (permalink / raw)
To: Jan Kiszka; +Cc: Marcelo Tosatti, Avi Kivity, kvm
On Wed, Feb 17, 2010 at 10:03:58AM +0100, Jan Kiszka wrote:
> >
> > Also, as Avi mentioned it would be better to avoid this. Is it not
> > possible to disallow migration while interrupt shadow is present?
>
> Which means disallowing user space exits while the shadow is set? Or
> should we introduce some flag for user space that tells it "do not
> migrate now, resume the guest until the next exit"?
>
I think disabling migration is a slippery slope; a guest may abuse it.
Maybe it will be hard to do with the interrupt shadow, but the
mechanism would be used for other cases too. I remember there was an
argument that we should not migrate while a vcpu is in nested guest
mode.
--
Gleb.
* Re: [PATCH 2/3] KVM: x86: Save&restore interrupt shadow mask
From: Marcelo Tosatti @ 2010-02-17 14:54 UTC (permalink / raw)
To: Gleb Natapov; +Cc: Jan Kiszka, Avi Kivity, kvm
On Wed, Feb 17, 2010 at 11:10:07AM +0200, Gleb Natapov wrote:
> On Wed, Feb 17, 2010 at 10:03:58AM +0100, Jan Kiszka wrote:
> > >
> > > Also, as Avi mentioned it would be better to avoid this. Is it not
> > > possible to disallow migration while interrupt shadow is present?
> >
> > Which means disallowing user space exits while the shadow is set? Or
> > should we introduce some flag for user space that tells it "do not
> > migrate now, resume the guest until the next exit"?
> >
> I think disabling migration is a slippery slope, as a guest may abuse it.
> Maybe that would be hard to do with the interrupt shadow, but the mechanism
> would be used for other cases too. I remember there was an argument that we
> should not migrate while a vcpu is in nested guest mode.
Agreed that a guest may abuse it. Better to save/restore
blocking-by-sti/by-mov-ss individually.
I was thinking the writeback of the interrupt shadow / interruptibility
state would be too complicated (e.g., the need to care about ordering), but
now I see it's handled in the kernel (inject_pending_event and friends).
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH 2/3 v3] KVM: x86: Save&restore interrupt shadow mask
2010-02-15 9:45 ` [PATCH 2/3] KVM: x86: Save&restore interrupt shadow mask Jan Kiszka
2010-02-17 0:39 ` Marcelo Tosatti
@ 2010-02-19 18:38 ` Jan Kiszka
1 sibling, 0 replies; 15+ messages in thread
From: Jan Kiszka @ 2010-02-19 18:38 UTC (permalink / raw)
To: Avi Kivity, Marcelo Tosatti; +Cc: kvm, Gleb Natapov
The interrupt shadow created by STI or MOV-SS-like operations is part of
the VCPU state and must be preserved across migration. Transfer it in
the spare padding field of kvm_vcpu_events.interrupt.
As a side effect, we now have to make vmx_set_interrupt_shadow robust
against both shadow types being set. Give MOV SS the higher priority and
skip STI in that case, so that VMX does not throw a fault on the next entry.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
Changes in v3 (aka "back to square #1"):
- export both MOV SS and STI shadows, do not collapse them
Documentation/kvm/api.txt | 11 ++++++++++-
arch/x86/include/asm/kvm.h | 7 ++++++-
arch/x86/include/asm/kvm_emulate.h | 3 ---
arch/x86/kvm/emulate.c | 5 +++--
arch/x86/kvm/svm.c | 2 +-
arch/x86/kvm/vmx.c | 8 ++++----
arch/x86/kvm/x86.c | 12 ++++++++++--
include/linux/kvm.h | 1 +
8 files changed, 35 insertions(+), 14 deletions(-)
diff --git a/Documentation/kvm/api.txt b/Documentation/kvm/api.txt
index beb444a..9e5de5a 100644
--- a/Documentation/kvm/api.txt
+++ b/Documentation/kvm/api.txt
@@ -656,6 +656,7 @@ struct kvm_clock_data {
4.29 KVM_GET_VCPU_EVENTS
Capability: KVM_CAP_VCPU_EVENTS
+Extended by: KVM_CAP_INTR_SHADOW
Architectures: x86
Type: vm ioctl
Parameters: struct kvm_vcpu_event (out)
@@ -676,7 +677,7 @@ struct kvm_vcpu_events {
__u8 injected;
__u8 nr;
__u8 soft;
- __u8 pad;
+ __u8 shadow;
} interrupt;
struct {
__u8 injected;
@@ -688,9 +689,13 @@ struct kvm_vcpu_events {
__u32 flags;
};
+KVM_VCPUEVENT_VALID_SHADOW may be set in the flags field to signal that
+interrupt.shadow contains a valid state. Otherwise, this field is undefined.
+
4.30 KVM_SET_VCPU_EVENTS
Capability: KVM_CAP_VCPU_EVENTS
+Extended by: KVM_CAP_INTR_SHADOW
Architectures: x86
Type: vm ioctl
Parameters: struct kvm_vcpu_event (in)
@@ -709,6 +714,10 @@ current in-kernel state. The bits are:
KVM_VCPUEVENT_VALID_NMI_PENDING - transfer nmi.pending to the kernel
KVM_VCPUEVENT_VALID_SIPI_VECTOR - transfer sipi_vector
+If KVM_CAP_INTR_SHADOW is available, KVM_VCPUEVENT_VALID_SHADOW can be set in
+the flags field to signal that interrupt.shadow contains a valid state and
+shall be written into the VCPU.
+
5. The kvm_run structure
diff --git a/arch/x86/include/asm/kvm.h b/arch/x86/include/asm/kvm.h
index f46b79f..fb61170 100644
--- a/arch/x86/include/asm/kvm.h
+++ b/arch/x86/include/asm/kvm.h
@@ -257,6 +257,11 @@ struct kvm_reinject_control {
/* When set in flags, include corresponding fields on KVM_SET_VCPU_EVENTS */
#define KVM_VCPUEVENT_VALID_NMI_PENDING 0x00000001
#define KVM_VCPUEVENT_VALID_SIPI_VECTOR 0x00000002
+#define KVM_VCPUEVENT_VALID_SHADOW 0x00000004
+
+/* Interrupt shadow states */
+#define KVM_X86_SHADOW_INT_MOV_SS 0x01
+#define KVM_X86_SHADOW_INT_STI 0x02
/* for KVM_GET/SET_VCPU_EVENTS */
struct kvm_vcpu_events {
@@ -271,7 +276,7 @@ struct kvm_vcpu_events {
__u8 injected;
__u8 nr;
__u8 soft;
- __u8 pad;
+ __u8 shadow;
} interrupt;
struct {
__u8 injected;
diff --git a/arch/x86/include/asm/kvm_emulate.h b/arch/x86/include/asm/kvm_emulate.h
index 7a6f54f..2666d7a 100644
--- a/arch/x86/include/asm/kvm_emulate.h
+++ b/arch/x86/include/asm/kvm_emulate.h
@@ -153,9 +153,6 @@ struct decode_cache {
struct fetch_cache fetch;
};
-#define X86_SHADOW_INT_MOV_SS 1
-#define X86_SHADOW_INT_STI 2
-
struct x86_emulate_ctxt {
/* Register state before/after emulation. */
struct kvm_vcpu *vcpu;
diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 9beda8e..135bc56 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -2123,7 +2123,8 @@ special_insn:
sel = c->src.val;
if (c->modrm_reg == VCPU_SREG_SS)
- toggle_interruptibility(ctxt, X86_SHADOW_INT_MOV_SS);
+ toggle_interruptibility(ctxt,
+ KVM_X86_SHADOW_INT_MOV_SS);
if (c->modrm_reg <= 5) {
type_bits = (c->modrm_reg == 1) ? 9 : 1;
@@ -2374,7 +2375,7 @@ special_insn:
if (emulator_bad_iopl(ctxt))
kvm_inject_gp(ctxt->vcpu, 0);
else {
- toggle_interruptibility(ctxt, X86_SHADOW_INT_STI);
+ toggle_interruptibility(ctxt, KVM_X86_SHADOW_INT_STI);
ctxt->eflags |= X86_EFLAGS_IF;
c->dst.type = OP_NONE; /* Disable writeback. */
}
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 2e1e8d6..5f88a73 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -263,7 +263,7 @@ static u32 svm_get_interrupt_shadow(struct kvm_vcpu *vcpu, int mask)
u32 ret = 0;
if (svm->vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK)
- ret |= X86_SHADOW_INT_STI | X86_SHADOW_INT_MOV_SS;
+ ret |= KVM_X86_SHADOW_INT_STI | KVM_X86_SHADOW_INT_MOV_SS;
return ret & mask;
}
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index f7c815b..ce5ec41 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -838,9 +838,9 @@ static u32 vmx_get_interrupt_shadow(struct kvm_vcpu *vcpu, int mask)
int ret = 0;
if (interruptibility & GUEST_INTR_STATE_STI)
- ret |= X86_SHADOW_INT_STI;
+ ret |= KVM_X86_SHADOW_INT_STI;
if (interruptibility & GUEST_INTR_STATE_MOV_SS)
- ret |= X86_SHADOW_INT_MOV_SS;
+ ret |= KVM_X86_SHADOW_INT_MOV_SS;
return ret & mask;
}
@@ -852,9 +852,9 @@ static void vmx_set_interrupt_shadow(struct kvm_vcpu *vcpu, int mask)
interruptibility &= ~(GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS);
- if (mask & X86_SHADOW_INT_MOV_SS)
+ if (mask & KVM_X86_SHADOW_INT_MOV_SS)
interruptibility |= GUEST_INTR_STATE_MOV_SS;
- if (mask & X86_SHADOW_INT_STI)
+ else if (mask & KVM_X86_SHADOW_INT_STI)
interruptibility |= GUEST_INTR_STATE_STI;
if ((interruptibility != interruptibility_old))
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d1b5024..9b13e15 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2132,6 +2132,9 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
vcpu->arch.interrupt.pending && !vcpu->arch.interrupt.soft;
events->interrupt.nr = vcpu->arch.interrupt.nr;
events->interrupt.soft = 0;
+ events->interrupt.shadow =
+ kvm_x86_ops->get_interrupt_shadow(vcpu,
+ KVM_X86_SHADOW_INT_MOV_SS | KVM_X86_SHADOW_INT_STI);
events->nmi.injected = vcpu->arch.nmi_injected;
events->nmi.pending = vcpu->arch.nmi_pending;
@@ -2140,7 +2143,8 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
events->sipi_vector = vcpu->arch.sipi_vector;
events->flags = (KVM_VCPUEVENT_VALID_NMI_PENDING
- | KVM_VCPUEVENT_VALID_SIPI_VECTOR);
+ | KVM_VCPUEVENT_VALID_SIPI_VECTOR
+ | KVM_VCPUEVENT_VALID_SHADOW);
vcpu_put(vcpu);
}
@@ -2149,7 +2153,8 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
struct kvm_vcpu_events *events)
{
if (events->flags & ~(KVM_VCPUEVENT_VALID_NMI_PENDING
- | KVM_VCPUEVENT_VALID_SIPI_VECTOR))
+ | KVM_VCPUEVENT_VALID_SIPI_VECTOR
+ | KVM_VCPUEVENT_VALID_SHADOW))
return -EINVAL;
vcpu_load(vcpu);
@@ -2164,6 +2169,9 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
vcpu->arch.interrupt.soft = events->interrupt.soft;
if (vcpu->arch.interrupt.pending && irqchip_in_kernel(vcpu->kvm))
kvm_pic_clear_isr_ack(vcpu->kvm);
+ if (events->flags & KVM_VCPUEVENT_VALID_SHADOW)
+ kvm_x86_ops->set_interrupt_shadow(vcpu,
+ events->interrupt.shadow);
vcpu->arch.nmi_injected = events->nmi.injected;
if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING)
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index dfa54be..46fb860 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -501,6 +501,7 @@ struct kvm_ioeventfd {
#define KVM_CAP_HYPERV_VAPIC 45
#define KVM_CAP_HYPERV_SPIN 46
#define KVM_CAP_PCI_SEGMENT 47
+#define KVM_CAP_INTR_SHADOW 48
#ifdef KVM_CAP_IRQ_ROUTING
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [PATCH 0/3] KVM: VCPU state extensions
2010-02-15 9:45 [PATCH 0/3] KVM: VCPU state extensions Jan Kiszka
` (2 preceding siblings ...)
2010-02-15 9:45 ` [PATCH 3/3] KVM: x86: Add support for saving&restoring debug registers Jan Kiszka
@ 2010-02-22 12:34 ` Jan Kiszka
2010-02-22 12:45 ` Avi Kivity
3 siblings, 1 reply; 15+ messages in thread
From: Jan Kiszka @ 2010-02-22 12:34 UTC (permalink / raw)
To: Avi Kivity; +Cc: Marcelo Tosatti, kvm
Jan Kiszka wrote:
> These patches do not technically depend on each other but overlap, so
> I'm pushing them now in a series.
>
> Patch 1 is a repost. Patch 2 is reworked and comes with the following
> changes:
>
> - expose only a boolean to user space, mapping it on
> X86_SHADOW_INT_MOV_SS during write
> - do not move X86_SHADOW_INT_* flags around
> - Signal capability via KVM_CAP_INTR_SHADOW and manage the new
> kvm_vcpu_events field via KVM_VCPUEVENT_VALID_SHADOW
> - Update docs
>
> Finally, patch 3 is new, plugging the debug register migration (and
> reset) hole.
>
> You can also pull from
>
> git://git.kiszka.org/linux-kvm vcpu-state
>
> Jan Kiszka (3):
> KVM: x86: Do not return soft events in vcpu_events
> KVM: x86: Save&restore interrupt shadow mask
> KVM: x86: Add support for saving&restoring debug registers
>
> Documentation/kvm/api.txt | 42 ++++++++++++++++++++++++-
> arch/x86/include/asm/kvm.h | 13 +++++++-
> arch/x86/kvm/vmx.c | 2 +-
> arch/x86/kvm/x86.c | 75 +++++++++++++++++++++++++++++++++++++++++---
> include/linux/kvm.h | 7 ++++
> 5 files changed, 131 insertions(+), 8 deletions(-)
>
[Trying to sort my patch queues]
Ping on this series with updated patch 2. Any open issues remaining?
Jan
--
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 0/3] KVM: VCPU state extensions
2010-02-22 12:34 ` [PATCH 0/3] KVM: VCPU state extensions Jan Kiszka
@ 2010-02-22 12:45 ` Avi Kivity
2010-02-22 12:54 ` Avi Kivity
0 siblings, 1 reply; 15+ messages in thread
From: Avi Kivity @ 2010-02-22 12:45 UTC (permalink / raw)
To: Jan Kiszka; +Cc: Marcelo Tosatti, kvm
On 02/22/2010 02:34 PM, Jan Kiszka wrote:
>
> [Trying to sort my patch queues]
>
> Ping on this series with updated patch 2. Any open issues remaining?
>
My default algorithm on patches with a lot of noise is to wait until the
dust settles. I'll look at it now.
--
error compiling committee.c: too many arguments to function
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 0/3] KVM: VCPU state extensions
2010-02-22 12:45 ` Avi Kivity
@ 2010-02-22 12:54 ` Avi Kivity
0 siblings, 0 replies; 15+ messages in thread
From: Avi Kivity @ 2010-02-22 12:54 UTC (permalink / raw)
To: Jan Kiszka; +Cc: Marcelo Tosatti, kvm
On 02/22/2010 02:45 PM, Avi Kivity wrote:
> On 02/22/2010 02:34 PM, Jan Kiszka wrote:
>>
>> [Trying to sort my patch queues]
>>
>> Ping on this series with updated patch 2. Any open issues remaining?
>
> My default algorithm on patches with a lot of noise is to wait until
> the dust settles. I'll look at it now.
>
All applied.
--
error compiling committee.c: too many arguments to function
^ permalink raw reply [flat|nested] 15+ messages in thread
end of thread, other threads:[~2010-02-22 12:54 UTC | newest]
Thread overview: 15+ messages
2010-02-15 9:45 [PATCH 0/3] KVM: VCPU state extensions Jan Kiszka
2010-02-15 9:45 ` [PATCH 1/3] KVM: x86: Do not return soft events in vcpu_events Jan Kiszka
2010-02-15 9:45 ` [PATCH 2/3] KVM: x86: Save&restore interrupt shadow mask Jan Kiszka
2010-02-17 0:39 ` Marcelo Tosatti
2010-02-17 8:06 ` Zachary Amsden
2010-02-17 9:05 ` Gleb Natapov
2010-02-17 9:07 ` Jan Kiszka
2010-02-17 9:03 ` Jan Kiszka
2010-02-17 9:10 ` Gleb Natapov
2010-02-17 14:54 ` Marcelo Tosatti
2010-02-19 18:38 ` [PATCH 2/3 v3] " Jan Kiszka
2010-02-15 9:45 ` [PATCH 3/3] KVM: x86: Add support for saving&restoring debug registers Jan Kiszka
2010-02-22 12:34 ` [PATCH 0/3] KVM: VCPU state extensions Jan Kiszka
2010-02-22 12:45 ` Avi Kivity
2010-02-22 12:54 ` Avi Kivity