* [patch 0/5] unify remote request and kvm_vcpu_kick IPI mechanism
From: Marcelo Tosatti @ 2009-08-27 1:20 UTC (permalink / raw)
To: kvm
Unify remote requests (TLB_FLUSH and MMU_RELOAD) with the kvm_vcpu_kick mechanism.
The new wait_on_bit-based scheme also allows finer optimization of
different request types, and makes future enhancements easier (such as
a request to a single vcpu, or to a subset of vcpus).
Kernel compilation is 2.5-3% faster with shadow paging.
See individual patches for details.
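For quick reference, the core of the scheme, condensed from patch 2 (this is
only a sketch of the flow, not complete code; the bit and function names are
the ones that patch introduces):

    /* requester side: post a request, kick the vcpu, wait for it to leave guest mode */
    set_bit(req, &vcpu->requests);
    barrier();
    kvm_vcpu_ipi(vcpu);
    wait_on_bit(&vcpu->vcpu_state, KVM_VCPU_GUEST_MODE, kvm_req_wait,
                TASK_UNINTERRUPTIBLE);

    /* vcpu side, on every exit from guest mode */
    clear_bit(KVM_VCPU_KICKED, &vcpu->vcpu_state);
    clear_bit(KVM_VCPU_GUEST_MODE, &vcpu->vcpu_state);
    smp_mb__after_clear_bit();
    wake_up_bit(&vcpu->vcpu_state, KVM_VCPU_GUEST_MODE);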
* [patch 1/5] KVM: move kvm_vcpu_kick to virt/kvm/kvm_main.c
From: Marcelo Tosatti @ 2009-08-27 1:20 UTC (permalink / raw)
To: kvm; +Cc: Marcelo Tosatti
Avoids code duplication.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Index: kvm/arch/ia64/kvm/kvm-ia64.c
===================================================================
--- kvm.orig/arch/ia64/kvm/kvm-ia64.c
+++ kvm/arch/ia64/kvm/kvm-ia64.c
@@ -1857,21 +1857,6 @@ void kvm_arch_hardware_unsetup(void)
{
}
-void kvm_vcpu_kick(struct kvm_vcpu *vcpu)
-{
- int me;
- int cpu = vcpu->cpu;
-
- if (waitqueue_active(&vcpu->wq))
- wake_up_interruptible(&vcpu->wq);
-
- me = get_cpu();
- if (cpu != me && (unsigned) cpu < nr_cpu_ids && cpu_online(cpu))
- if (!test_and_set_bit(KVM_REQ_KICK, &vcpu->requests))
- smp_send_reschedule(cpu);
- put_cpu();
-}
-
int kvm_apic_set_irq(struct kvm_vcpu *vcpu, struct kvm_lapic_irq *irq)
{
return __apic_accept_irq(vcpu, irq->vector);
Index: kvm/arch/x86/kvm/x86.c
===================================================================
--- kvm.orig/arch/x86/kvm/x86.c
+++ kvm/arch/x86/kvm/x86.c
@@ -4914,23 +4914,6 @@ int kvm_arch_vcpu_runnable(struct kvm_vc
kvm_cpu_has_interrupt(vcpu));
}
-void kvm_vcpu_kick(struct kvm_vcpu *vcpu)
-{
- int me;
- int cpu = vcpu->cpu;
-
- if (waitqueue_active(&vcpu->wq)) {
- wake_up_interruptible(&vcpu->wq);
- ++vcpu->stat.halt_wakeup;
- }
-
- me = get_cpu();
- if (cpu != me && (unsigned)cpu < nr_cpu_ids && cpu_online(cpu))
- if (!test_and_set_bit(KVM_REQ_KICK, &vcpu->requests))
- smp_send_reschedule(cpu);
- put_cpu();
-}
-
int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu)
{
return kvm_x86_ops->interrupt_allowed(vcpu);
Index: kvm/virt/kvm/kvm_main.c
===================================================================
--- kvm.orig/virt/kvm/kvm_main.c
+++ kvm/virt/kvm/kvm_main.c
@@ -117,6 +117,23 @@ void vcpu_put(struct kvm_vcpu *vcpu)
mutex_unlock(&vcpu->mutex);
}
+void kvm_vcpu_kick(struct kvm_vcpu *vcpu)
+{
+ int me;
+ int cpu = vcpu->cpu;
+
+ if (waitqueue_active(&vcpu->wq)) {
+ wake_up_interruptible(&vcpu->wq);
+ ++vcpu->stat.halt_wakeup;
+ }
+
+ me = get_cpu();
+ if (cpu != me && (unsigned)cpu < nr_cpu_ids && cpu_online(cpu))
+ if (!test_and_set_bit(KVM_REQ_KICK, &vcpu->requests))
+ smp_send_reschedule(cpu);
+ put_cpu();
+}
+
static void ack_flush(void *_completed)
{
}
* [patch 2/5] KVM: reintroduce guest mode bit and unify remote request code
From: Marcelo Tosatti @ 2009-08-27 1:20 UTC (permalink / raw)
To: kvm; +Cc: Marcelo Tosatti
Split KVM_REQ_KICK into two bits: KVM_VCPU_KICKED, to indicate
whether a vcpu has been IPI'ed, and KVM_VCPU_GUEST_MODE, both kept in a
separate vcpu_state variable.
Unify remote requests with kvm_vcpu_kick.
Synchronous requests wait on KVM_VCPU_GUEST_MODE, via wait_on_bit/wake_up_bit.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Index: kvm/arch/x86/kvm/x86.c
===================================================================
--- kvm.orig/arch/x86/kvm/x86.c
+++ kvm/arch/x86/kvm/x86.c
@@ -3586,11 +3586,14 @@ static int vcpu_enter_guest(struct kvm_v
local_irq_disable();
- clear_bit(KVM_REQ_KICK, &vcpu->requests);
- smp_mb__after_clear_bit();
+ set_bit(KVM_VCPU_GUEST_MODE, &vcpu->vcpu_state);
+ barrier();
if (vcpu->requests || need_resched() || signal_pending(current)) {
- set_bit(KVM_REQ_KICK, &vcpu->requests);
+ clear_bit(KVM_VCPU_KICKED, &vcpu->vcpu_state);
+ clear_bit(KVM_VCPU_GUEST_MODE, &vcpu->vcpu_state);
+ smp_mb__after_clear_bit();
+ wake_up_bit(&vcpu->vcpu_state, KVM_VCPU_GUEST_MODE);
local_irq_enable();
preempt_enable();
r = 1;
@@ -3642,7 +3645,10 @@ static int vcpu_enter_guest(struct kvm_v
set_debugreg(vcpu->arch.host_dr6, 6);
set_debugreg(vcpu->arch.host_dr7, 7);
- set_bit(KVM_REQ_KICK, &vcpu->requests);
+ clear_bit(KVM_VCPU_KICKED, &vcpu->vcpu_state);
+ clear_bit(KVM_VCPU_GUEST_MODE, &vcpu->vcpu_state);
+ smp_mb__after_clear_bit();
+ wake_up_bit(&vcpu->vcpu_state, KVM_VCPU_GUEST_MODE);
local_irq_enable();
++vcpu->stat.exits;
Index: kvm/include/linux/kvm_host.h
===================================================================
--- kvm.orig/include/linux/kvm_host.h
+++ kvm/include/linux/kvm_host.h
@@ -42,6 +42,9 @@
#define KVM_USERSPACE_IRQ_SOURCE_ID 0
+#define KVM_VCPU_GUEST_MODE 0
+#define KVM_VCPU_KICKED 1
+
struct kvm;
struct kvm_vcpu;
extern struct kmem_cache *kvm_vcpu_cache;
@@ -83,6 +86,7 @@ struct kvm_vcpu {
int cpu;
struct kvm_run *run;
unsigned long requests;
+ unsigned long vcpu_state;
unsigned long guest_debug;
int fpu_active;
int guest_fpu_loaded;
@@ -362,6 +366,7 @@ void kvm_arch_sync_events(struct kvm *kv
int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu);
void kvm_vcpu_kick(struct kvm_vcpu *vcpu);
+void kvm_vcpu_ipi(struct kvm_vcpu *vcpu);
int kvm_is_mmio_pfn(pfn_t pfn);
Index: kvm/virt/kvm/kvm_main.c
===================================================================
--- kvm.orig/virt/kvm/kvm_main.c
+++ kvm/virt/kvm/kvm_main.c
@@ -119,18 +119,26 @@ void vcpu_put(struct kvm_vcpu *vcpu)
void kvm_vcpu_kick(struct kvm_vcpu *vcpu)
{
- int me;
- int cpu = vcpu->cpu;
-
if (waitqueue_active(&vcpu->wq)) {
wake_up_interruptible(&vcpu->wq);
++vcpu->stat.halt_wakeup;
}
+ kvm_vcpu_ipi(vcpu);
+}
+
+void kvm_vcpu_ipi(struct kvm_vcpu *vcpu)
+{
+ int me;
+ int cpu = vcpu->cpu;
me = get_cpu();
- if (cpu != me && (unsigned)cpu < nr_cpu_ids && cpu_online(cpu))
- if (!test_and_set_bit(KVM_REQ_KICK, &vcpu->requests))
- smp_send_reschedule(cpu);
+ if (cpu != me && (unsigned)cpu < nr_cpu_ids && cpu_online(cpu)) {
+ if (test_bit(KVM_VCPU_GUEST_MODE, &vcpu->vcpu_state)) {
+ if (!test_and_set_bit(KVM_VCPU_KICKED,
+ &vcpu->vcpu_state))
+ smp_send_reschedule(cpu);
+ }
+ }
put_cpu();
}
@@ -168,6 +176,30 @@ static bool make_all_cpus_request(struct
return called;
}
+static int kvm_req_wait(void *unused)
+{
+ cpu_relax();
+ return 0;
+}
+
+static void kvm_vcpu_request(struct kvm_vcpu *vcpu, unsigned int req)
+{
+ set_bit(req, &vcpu->requests);
+ barrier();
+ kvm_vcpu_ipi(vcpu);
+ wait_on_bit(&vcpu->vcpu_state, KVM_VCPU_GUEST_MODE, kvm_req_wait,
+ TASK_UNINTERRUPTIBLE);
+}
+
+static void kvm_vcpus_request(struct kvm *kvm, unsigned int req)
+{
+ int i;
+ struct kvm_vcpu *vcpu;
+
+ kvm_for_each_vcpu(i, vcpu, kvm)
+ kvm_vcpu_request(vcpu, req);
+}
+
void kvm_flush_remote_tlbs(struct kvm *kvm)
{
if (make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
Index: kvm/arch/ia64/kvm/kvm-ia64.c
===================================================================
--- kvm.orig/arch/ia64/kvm/kvm-ia64.c
+++ kvm/arch/ia64/kvm/kvm-ia64.c
@@ -655,12 +655,13 @@ again:
host_ctx = kvm_get_host_context(vcpu);
guest_ctx = kvm_get_guest_context(vcpu);
- clear_bit(KVM_REQ_KICK, &vcpu->requests);
-
r = kvm_vcpu_pre_transition(vcpu);
if (r < 0)
goto vcpu_run_fail;
+ set_bit(KVM_VCPU_GUEST_MODE, &vcpu->vcpu_state);
+ barrier();
+
up_read(&vcpu->kvm->slots_lock);
kvm_guest_enter();
@@ -672,7 +673,10 @@ again:
kvm_vcpu_post_transition(vcpu);
vcpu->arch.launched = 1;
- set_bit(KVM_REQ_KICK, &vcpu->requests);
+ clear_bit(KVM_VCPU_KICKED, &vcpu->vcpu_state);
+ clear_bit(KVM_VCPU_GUEST_MODE, &vcpu->vcpu_state);
+ smp_mb__after_clear_bit();
+ wake_up_bit(&vcpu->vcpu_state, KVM_VCPU_GUEST_MODE);
local_irq_enable();
/*
* [patch 3/5] KVM: switch REQ_TLB_FLUSH/REQ_MMU_RELOAD to kvm_vcpus_request
From: Marcelo Tosatti @ 2009-08-27 1:20 UTC (permalink / raw)
To: kvm; +Cc: Marcelo Tosatti
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Index: kvm/virt/kvm/kvm_main.c
===================================================================
--- kvm.orig/virt/kvm/kvm_main.c
+++ kvm/virt/kvm/kvm_main.c
@@ -202,13 +202,13 @@ static void kvm_vcpus_request(struct kvm
void kvm_flush_remote_tlbs(struct kvm *kvm)
{
- if (make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
- ++kvm->stat.remote_tlb_flush;
+ kvm_vcpus_request(kvm, KVM_REQ_TLB_FLUSH);
+ ++kvm->stat.remote_tlb_flush;
}
void kvm_reload_remote_mmus(struct kvm *kvm)
{
- make_all_cpus_request(kvm, KVM_REQ_MMU_RELOAD);
+ kvm_vcpus_request(kvm, KVM_REQ_MMU_RELOAD);
}
int kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
* [patch 4/5] KVM: remove make_all_cpus_request
From: Marcelo Tosatti @ 2009-08-27 1:20 UTC (permalink / raw)
To: kvm; +Cc: Marcelo Tosatti
Obsoleted by kvm_vcpus_request
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Index: kvm/virt/kvm/kvm_main.c
===================================================================
--- kvm.orig/virt/kvm/kvm_main.c
+++ kvm/virt/kvm/kvm_main.c
@@ -142,40 +142,6 @@ void kvm_vcpu_ipi(struct kvm_vcpu *vcpu)
put_cpu();
}
-static void ack_flush(void *_completed)
-{
-}
-
-static bool make_all_cpus_request(struct kvm *kvm, unsigned int req)
-{
- int i, cpu, me;
- cpumask_var_t cpus;
- bool called = true;
- struct kvm_vcpu *vcpu;
-
- if (alloc_cpumask_var(&cpus, GFP_ATOMIC))
- cpumask_clear(cpus);
-
- spin_lock(&kvm->requests_lock);
- me = smp_processor_id();
- kvm_for_each_vcpu(i, vcpu, kvm) {
- if (test_and_set_bit(req, &vcpu->requests))
- continue;
- cpu = vcpu->cpu;
- if (cpus != NULL && cpu != -1 && cpu != me)
- cpumask_set_cpu(cpu, cpus);
- }
- if (unlikely(cpus == NULL))
- smp_call_function_many(cpu_online_mask, ack_flush, NULL, 1);
- else if (!cpumask_empty(cpus))
- smp_call_function_many(cpus, ack_flush, NULL, 1);
- else
- called = false;
- spin_unlock(&kvm->requests_lock);
- free_cpumask_var(cpus);
- return called;
-}
-
static int kvm_req_wait(void *unused)
{
cpu_relax();
@@ -415,7 +381,6 @@ static struct kvm *kvm_create_vm(void)
kvm->mm = current->mm;
atomic_inc(&kvm->mm->mm_count);
spin_lock_init(&kvm->mmu_lock);
- spin_lock_init(&kvm->requests_lock);
kvm_io_bus_init(&kvm->pio_bus);
kvm_eventfd_init(kvm);
mutex_init(&kvm->lock);
Index: kvm/include/linux/kvm_host.h
===================================================================
--- kvm.orig/include/linux/kvm_host.h
+++ kvm/include/linux/kvm_host.h
@@ -157,7 +157,6 @@ struct kvm_irq_routing_table {};
struct kvm {
spinlock_t mmu_lock;
- spinlock_t requests_lock;
struct rw_semaphore slots_lock;
struct mm_struct *mm; /* userspace tied to this vm */
int nmemslots;
* [patch 5/5] KVM: x86: drop duplicate kvm_flush_remote_tlbs
From: Marcelo Tosatti @ 2009-08-27 1:20 UTC (permalink / raw)
To: kvm; +Cc: Marcelo Tosatti
kvm_mmu_slot_remove_write_access already calls it.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Index: kvm-requests/arch/x86/kvm/x86.c
===================================================================
--- kvm-requests.orig/arch/x86/kvm/x86.c
+++ kvm-requests/arch/x86/kvm/x86.c
@@ -2148,7 +2148,6 @@ int kvm_vm_ioctl_get_dirty_log(struct kv
spin_lock(&kvm->mmu_lock);
kvm_mmu_slot_remove_write_access(kvm, log->slot);
spin_unlock(&kvm->mmu_lock);
- kvm_flush_remote_tlbs(kvm);
memslot = &kvm->memslots[log->slot];
n = ALIGN(memslot->npages, BITS_PER_LONG) / 8;
memset(memslot->dirty_bitmap, 0, n);
@@ -4904,7 +4903,6 @@ int kvm_arch_set_memory_region(struct kv
kvm_mmu_slot_remove_write_access(kvm, mem->slot);
spin_unlock(&kvm->mmu_lock);
- kvm_flush_remote_tlbs(kvm);
return 0;
}
--
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [patch 2/5] KVM: reintroduce guest mode bit and unify remote request code
2009-08-27 1:20 ` [patch 2/5] KVM: reintroduce guest mode bit and unify remote request code Marcelo Tosatti
@ 2009-08-27 8:15 ` Avi Kivity
2009-08-27 12:45 ` Marcelo Tosatti
2009-08-27 8:25 ` Avi Kivity
1 sibling, 1 reply; 24+ messages in thread
From: Avi Kivity @ 2009-08-27 8:15 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: kvm
On 08/27/2009 04:20 AM, Marcelo Tosatti wrote:
> Split KVM_REQ_KICK into two bits: KVM_VCPU_KICKED, to indicate
> whether a vcpu has been IPI'ed, and KVM_VCPU_GUEST_MODE, both kept in a
> separate vcpu_state variable.
>
> Unify remote requests with kvm_vcpu_kick.
>
> Synchronous requests wait on KVM_VCPU_GUEST_MODE, via wait_on_bit/wake_up_bit.
>
>
I did miss guest_mode.
> + unsigned long vcpu_state;
>
Why not bool guest_mode? Saves two atomics per exit.
--
error compiling committee.c: too many arguments to function
* Re: [patch 2/5] KVM: reintroduce guest mode bit and unify remote request code
From: Avi Kivity @ 2009-08-27 8:25 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: kvm
On 08/27/2009 04:20 AM, Marcelo Tosatti wrote:
> +}
> +
> +void kvm_vcpu_ipi(struct kvm_vcpu *vcpu)
> +{
> + int me;
> + int cpu = vcpu->cpu;
>
> me = get_cpu();
> - if (cpu != me && (unsigned)cpu < nr_cpu_ids && cpu_online(cpu))
> - if (!test_and_set_bit(KVM_REQ_KICK, &vcpu->requests))
> - smp_send_reschedule(cpu);
> + if (cpu != me && (unsigned)cpu < nr_cpu_ids && cpu_online(cpu)) {
> + if (test_bit(KVM_VCPU_GUEST_MODE, &vcpu->vcpu_state)) {
> + if (!test_and_set_bit(KVM_VCPU_KICKED,
> + &vcpu->vcpu_state))
> + smp_send_reschedule(cpu);
> + }
> + }
> put_cpu();
> }
>
> @@ -168,6 +176,30 @@ static bool make_all_cpus_request(struct
> return called;
> }
>
> +static int kvm_req_wait(void *unused)
> +{
> + cpu_relax();
> + return 0;
> +}
> +
> +static void kvm_vcpu_request(struct kvm_vcpu *vcpu, unsigned int req)
> +{
> + set_bit(req,&vcpu->requests);
> + barrier();
> + kvm_vcpu_ipi(vcpu);
> + wait_on_bit(&vcpu->vcpu_state, KVM_VCPU_GUEST_MODE, kvm_req_wait,
> + TASK_UNINTERRUPTIBLE);
> +}
> +
> +static void kvm_vcpus_request(struct kvm *kvm, unsigned int req)
> +{
> + int i;
> + struct kvm_vcpu *vcpu;
> +
> + kvm_for_each_vcpu(i, vcpu, kvm)
> + kvm_vcpu_request(vcpu, req);
> +}
>
Gleb notes there are two problems here: instead of using a multicast
IPI, you're sending multiple unicast IPIs. Second, you're serializing
the waiting. It would be better to batch-send the IPIs, then batch-wait
for results.
--
error compiling committee.c: too many arguments to function
* Re: [patch 2/5] KVM: reintroduce guest mode bit and unify remote request code
From: Marcelo Tosatti @ 2009-08-27 12:45 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm
On Thu, Aug 27, 2009 at 11:15:23AM +0300, Avi Kivity wrote:
> On 08/27/2009 04:20 AM, Marcelo Tosatti wrote:
>> Split KVM_REQ_KICK into two bits: KVM_VCPU_KICKED, to indicate
>> whether a vcpu has been IPI'ed, and KVM_VCPU_GUEST_MODE, both kept in a
>> separate vcpu_state variable.
>>
>> Unify remote requests with kvm_vcpu_kick.
>>
>> Synchronous requests wait on KVM_VCPU_GUEST_MODE, via wait_on_bit/wake_up_bit.
>>
>>
>
> I did miss guest_mode.
>
>> + unsigned long vcpu_state;
>>
>
> Why not bool guest_mode? Saves two atomics per exit.
It must be atomic since the GUEST_MODE / VCPU_KICKED bits are manipulated by
multiple CPUs.
* Re: [patch 2/5] KVM: reintroduce guest mode bit and unify remote request code
From: Marcelo Tosatti @ 2009-08-27 12:58 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm
On Thu, Aug 27, 2009 at 11:25:17AM +0300, Avi Kivity wrote:
> On 08/27/2009 04:20 AM, Marcelo Tosatti wrote:
>
>> +}
>> +
>> +void kvm_vcpu_ipi(struct kvm_vcpu *vcpu)
>> +{
>> + int me;
>> + int cpu = vcpu->cpu;
>>
>> me = get_cpu();
>> - if (cpu != me && (unsigned)cpu < nr_cpu_ids && cpu_online(cpu))
>> - if (!test_and_set_bit(KVM_REQ_KICK, &vcpu->requests))
>> - smp_send_reschedule(cpu);
>> + if (cpu != me && (unsigned)cpu < nr_cpu_ids && cpu_online(cpu)) {
>> + if (test_bit(KVM_VCPU_GUEST_MODE, &vcpu->vcpu_state)) {
>> + if (!test_and_set_bit(KVM_VCPU_KICKED,
>> + &vcpu->vcpu_state))
>> + smp_send_reschedule(cpu);
>> + }
>> + }
>> put_cpu();
>> }
>>
>> @@ -168,6 +176,30 @@ static bool make_all_cpus_request(struct
>> return called;
>> }
>>
>> +static int kvm_req_wait(void *unused)
>> +{
>> + cpu_relax();
>> + return 0;
>> +}
>> +
>> +static void kvm_vcpu_request(struct kvm_vcpu *vcpu, unsigned int req)
>> +{
>> + set_bit(req,&vcpu->requests);
>> + barrier();
>> + kvm_vcpu_ipi(vcpu);
>> + wait_on_bit(&vcpu->vcpu_state, KVM_VCPU_GUEST_MODE, kvm_req_wait,
>> + TASK_UNINTERRUPTIBLE);
>> +}
>> +
>> +static void kvm_vcpus_request(struct kvm *kvm, unsigned int req)
>> +{
>> + int i;
>> + struct kvm_vcpu *vcpu;
>> +
>> + kvm_for_each_vcpu(i, vcpu, kvm)
>> + kvm_vcpu_request(vcpu, req);
>> +}
>>
>
> Gleb notes there are two problems here: instead of using a multicast
> IPI, you're sending multiple unicast IPIs. Second, you're serializing
> the waiting. It would be better to batch-send the IPIs, then batch-wait
> for results.
Right. I have been playing with multiple variants of batched send/wait, but
so far haven't been able to see significant improvements for
REQ_FLUSH/REQ_RELOAD.
Batched send will probably be more visible in guest IPI emulation.
Note, however, that even with multiple unicast IPIs this change collapses
kvm_vcpu_kick with the remote requests, so you decrease the number of
IPIs.
I was hoping to include these changes incrementally?
void kvm_vcpus_request(struct kvm *kvm, unsigned int req)
{
- int i;
+ int i, me, cpu;
struct kvm_vcpu *vcpu;
+ cpumask_var_t wait_cpus, kick_cpus;
+
+ if (alloc_cpumask_var(&wait_cpus, GFP_ATOMIC))
+ cpumask_clear(wait_cpus);
+
+ if (alloc_cpumask_var(&kick_cpus, GFP_ATOMIC))
+ cpumask_clear(kick_cpus);
+
+ me = get_cpu();
+ kvm_for_each_vcpu(i, vcpu, kvm) {
+ set_bit(req, &vcpu->requests);
+ barrier();
+ cpu = vcpu->cpu;
+ if (test_bit(KVM_VCPU_GUEST_MODE, &vcpu->vcpu_state)) {
+ if (cpu != -1 && cpu != me) {
+ if (wait_cpus != NULL)
+ cpumask_set_cpu(cpu, wait_cpus);
+ if (kick_cpus != NULL)
+ if (!test_and_set_bit(KVM_VCPU_KICKED,
+ &vcpu->vcpu_state))
+ cpumask_set_cpu(cpu, kick_cpus);
+ }
+ }
+ }
+ if (unlikely(kick_cpus == NULL))
+ smp_call_function_many(cpu_online_mask, ack_flush,
+ NULL, 1);
+ else if (!cpumask_empty(kick_cpus))
+ smp_send_reschedule_many(kick_cpus);
kvm_for_each_vcpu(i, vcpu, kvm)
- kvm_vcpu_request(vcpu, req);
+ if (cpumask_test_cpu(vcpu->cpu, wait_cpus))
+ if (test_bit(req, &vcpu->requests))
+ wait_on_bit(&vcpu->vcpu_state, KVM_VCPU_GUEST_MODE,
+ kvm_req_wait, TASK_UNINTERRUPTIBLE);
+ put_cpu();
+
+ free_cpumask_var(wait_cpus);
+ free_cpumask_var(kick_cpus);
}
* Re: [patch 2/5] KVM: reintroduce guest mode bit and unify remote request code
From: Avi Kivity @ 2009-08-27 13:24 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: kvm
On 08/27/2009 03:45 PM, Marcelo Tosatti wrote:
>> Why not bool guest_mode? Saves two atomics per exit.
>>
> It must be atomic since GUEST_MODE / VCPU_KICKED bits are manipulated by
> multiple CPU's.
>
bools are atomic.
--
error compiling committee.c: too many arguments to function
* Re: [patch 2/5] KVM: reintroduce guest mode bit and unify remote request code
From: Marcelo Tosatti @ 2009-08-27 14:07 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm, Yang, Sheng
On Thu, Aug 27, 2009 at 04:24:58PM +0300, Avi Kivity wrote:
> On 08/27/2009 03:45 PM, Marcelo Tosatti wrote:
>>> Why not bool guest_mode? Saves two atomics per exit.
>>>
>> It must be atomic since GUEST_MODE / VCPU_KICKED bits are manipulated by
>> multiple CPU's.
>>
>
> bools are atomic.
OK.
- VCPU_KICKED requires test_and_set. GUEST_MODE/VCPU_KICKED accesses
must not be reordered.
(OK, GUEST_MODE could live in a bool even so, but it's easier to read
by keeping them together, at least to me).
- It's easier to cacheline-align with longs rather than bools?
- From testing it seems the LOCK prefix is not heavy, as long as it's
CPU local (probably due to 7.1.4 Effects of a LOCK Operation on
Internal Processor Caches?).
BTW,
7.1.2.2 Software Controlled Bus Locking
Software should access semaphores (shared memory used for signalling
between multiple processors) using identical addresses and operand
lengths. For example, if one processor accesses a semaphore using a word
access, other processors should not access the semaphore using a byte
access.
The bit operations use 32-bit access, but the vcpu->requests check in
vcpu_enter_guest uses 64-bit access.
Is that safe?
* [RFC] KVM: x86: conditionally acquire/release slots_lock on entry/exit
From: Marcelo Tosatti @ 2009-08-27 15:54 UTC (permalink / raw)
To: kvm; +Cc: Avi Kivity, Gleb Natapov
perf report shows heavy overhead from down/up of slots_lock.
Attempted to remove slots_lock by having vcpus stop on a synchronization
point, but this introduced further complexity (a vcpu can be scheduled
out before reaching the synchronization point, and can be scheduled back in
at points which are slots_lock protected, etc).
This patch changes vcpu_enter_guest to conditionally release/acquire
slots_lock in case a vcpu state bit is set.
vmexit performance improves by 5-10% on a UP guest.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Index: kvm-requests/arch/x86/kvm/vmx.c
===================================================================
--- kvm-requests.orig/arch/x86/kvm/vmx.c
+++ kvm-requests/arch/x86/kvm/vmx.c
@@ -2169,7 +2169,7 @@ static int alloc_apic_access_page(struct
struct kvm_userspace_memory_region kvm_userspace_mem;
int r = 0;
- down_write(&kvm->slots_lock);
+ kvm_grab_global_lock(kvm);
if (kvm->arch.apic_access_page)
goto out;
kvm_userspace_mem.slot = APIC_ACCESS_PAGE_PRIVATE_MEMSLOT;
@@ -2191,7 +2191,7 @@ static int alloc_identity_pagetable(stru
struct kvm_userspace_memory_region kvm_userspace_mem;
int r = 0;
- down_write(&kvm->slots_lock);
+ kvm_grab_global_lock(kvm);
if (kvm->arch.ept_identity_pagetable)
goto out;
kvm_userspace_mem.slot = IDENTITY_PAGETABLE_PRIVATE_MEMSLOT;
Index: kvm-requests/arch/x86/kvm/x86.c
===================================================================
--- kvm-requests.orig/arch/x86/kvm/x86.c
+++ kvm-requests/arch/x86/kvm/x86.c
@@ -1926,7 +1926,7 @@ static int kvm_vm_ioctl_set_nr_mmu_pages
if (kvm_nr_mmu_pages < KVM_MIN_ALLOC_MMU_PAGES)
return -EINVAL;
- down_write(&kvm->slots_lock);
+ kvm_grab_global_lock(kvm);
spin_lock(&kvm->mmu_lock);
kvm_mmu_change_mmu_pages(kvm, kvm_nr_mmu_pages);
@@ -1982,7 +1982,7 @@ static int kvm_vm_ioctl_set_memory_alias
< alias->target_phys_addr)
goto out;
- down_write(&kvm->slots_lock);
+ kvm_grab_global_lock(kvm);
spin_lock(&kvm->mmu_lock);
p = &kvm->arch.aliases[alias->slot];
@@ -2137,7 +2137,7 @@ int kvm_vm_ioctl_get_dirty_log(struct kv
struct kvm_memory_slot *memslot;
int is_dirty = 0;
- down_write(&kvm->slots_lock);
+ kvm_grab_global_lock(kvm);
r = kvm_get_dirty_log(kvm, log, &is_dirty);
if (r)
@@ -2253,7 +2253,7 @@ long kvm_arch_vm_ioctl(struct file *filp
sizeof(struct kvm_pit_config)))
goto out;
create_pit:
- down_write(&kvm->slots_lock);
+ kvm_grab_global_lock(kvm);
r = -EEXIST;
if (kvm->arch.vpit)
goto create_pit_unlock;
@@ -3548,7 +3548,7 @@ static void inject_pending_event(struct
static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
{
- int r;
+ int r, dropped_slots_lock = 0;
bool req_int_win = !irqchip_in_kernel(vcpu->kvm) &&
vcpu->run->request_interrupt_window;
@@ -3616,7 +3616,10 @@ static int vcpu_enter_guest(struct kvm_v
kvm_lapic_sync_to_vapic(vcpu);
}
- up_read(&vcpu->kvm->slots_lock);
+ if (unlikely(test_bit(KVM_VCPU_DROP_LOCK, &vcpu->vcpu_state))) {
+ dropped_slots_lock = 1;
+ up_read(&vcpu->kvm->slots_lock);
+ }
kvm_guest_enter();
@@ -3668,8 +3671,8 @@ static int vcpu_enter_guest(struct kvm_v
preempt_enable();
- down_read(&vcpu->kvm->slots_lock);
-
+ if (dropped_slots_lock)
+ down_read(&vcpu->kvm->slots_lock);
/*
* Profile KVM exit RIPs:
*/
Index: kvm-requests/include/linux/kvm_host.h
===================================================================
--- kvm-requests.orig/include/linux/kvm_host.h
+++ kvm-requests/include/linux/kvm_host.h
@@ -44,6 +44,7 @@
#define KVM_VCPU_GUEST_MODE 0
#define KVM_VCPU_KICKED 1
+#define KVM_VCPU_DROP_LOCK 2
struct kvm;
struct kvm_vcpu;
@@ -408,6 +409,7 @@ void kvm_unregister_irq_ack_notifier(str
struct kvm_irq_ack_notifier *kian);
int kvm_request_irq_source_id(struct kvm *kvm);
void kvm_free_irq_source_id(struct kvm *kvm, int irq_source_id);
+void kvm_grab_global_lock(struct kvm *kvm);
/* For vcpu->arch.iommu_flags */
#define KVM_IOMMU_CACHE_COHERENCY 0x1
Index: kvm-requests/virt/kvm/coalesced_mmio.c
===================================================================
--- kvm-requests.orig/virt/kvm/coalesced_mmio.c
+++ kvm-requests/virt/kvm/coalesced_mmio.c
@@ -117,7 +117,7 @@ int kvm_vm_ioctl_register_coalesced_mmio
if (dev == NULL)
return -EINVAL;
- down_write(&kvm->slots_lock);
+ kvm_grab_global_lock(kvm);
if (dev->nb_zones >= KVM_COALESCED_MMIO_ZONE_MAX) {
up_write(&kvm->slots_lock);
return -ENOBUFS;
@@ -140,7 +140,7 @@ int kvm_vm_ioctl_unregister_coalesced_mm
if (dev == NULL)
return -EINVAL;
- down_write(&kvm->slots_lock);
+ kvm_grab_global_lock(kvm);
i = dev->nb_zones;
while(i) {
Index: kvm-requests/virt/kvm/eventfd.c
===================================================================
--- kvm-requests.orig/virt/kvm/eventfd.c
+++ kvm-requests/virt/kvm/eventfd.c
@@ -498,7 +498,7 @@ kvm_assign_ioeventfd(struct kvm *kvm, st
else
p->wildcard = true;
- down_write(&kvm->slots_lock);
+ kvm_grab_global_lock(kvm);
/* Verify that there isnt a match already */
if (ioeventfd_check_collision(kvm, p)) {
@@ -541,7 +541,7 @@ kvm_deassign_ioeventfd(struct kvm *kvm,
if (IS_ERR(eventfd))
return PTR_ERR(eventfd);
- down_write(&kvm->slots_lock);
+ kvm_grab_global_lock(kvm);
list_for_each_entry_safe(p, tmp, &kvm->ioeventfds, list) {
bool wildcard = !(args->flags & KVM_IOEVENTFD_FLAG_DATAMATCH);
Index: kvm-requests/virt/kvm/kvm_main.c
===================================================================
--- kvm-requests.orig/virt/kvm/kvm_main.c
+++ kvm-requests/virt/kvm/kvm_main.c
@@ -787,6 +787,22 @@ void kvm_reload_remote_mmus(struct kvm *
kvm_vcpus_request(kvm, KVM_REQ_MMU_RELOAD);
}
+void kvm_grab_global_lock(struct kvm *kvm)
+{
+ int i;
+ struct kvm_vcpu *vcpu;
+
+ kvm_for_each_vcpu(i, vcpu, kvm) {
+ set_bit(KVM_VCPU_DROP_LOCK, &vcpu->vcpu_state);
+ barrier();
+ kvm_vcpu_ipi(vcpu);
+ }
+ down_write(&kvm->slots_lock);
+ kvm_for_each_vcpu(i, vcpu, kvm)
+ clear_bit(KVM_VCPU_DROP_LOCK, &vcpu->vcpu_state);
+}
+EXPORT_SYMBOL_GPL(kvm_grab_global_lock);
+
int kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
{
struct page *page;
@@ -1286,7 +1302,7 @@ int kvm_set_memory_region(struct kvm *kv
{
int r;
- down_write(&kvm->slots_lock);
+ kvm_grab_global_lock(kvm);
r = __kvm_set_memory_region(kvm, mem, user_alloc);
up_write(&kvm->slots_lock);
return r;
@@ -2556,7 +2572,7 @@ int kvm_io_bus_register_dev(struct kvm *
{
int ret;
- down_write(&kvm->slots_lock);
+ kvm_grab_global_lock(kvm);
ret = __kvm_io_bus_register_dev(bus, dev);
up_write(&kvm->slots_lock);
@@ -2579,7 +2595,7 @@ void kvm_io_bus_unregister_dev(struct kv
struct kvm_io_bus *bus,
struct kvm_io_device *dev)
{
- down_write(&kvm->slots_lock);
+ kvm_grab_global_lock(kvm);
__kvm_io_bus_unregister_dev(bus, dev);
up_write(&kvm->slots_lock);
}
* Re: [RFC] KVM: x86: conditionally acquire/release slots_lock on entry/exit
From: Avi Kivity @ 2009-08-27 16:27 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: kvm, Gleb Natapov
On 08/27/2009 06:54 PM, Marcelo Tosatti wrote:
> perf report shows heavy overhead from down/up of slots_lock.
>
> Attempted to remove slots_lock by having vcpus stop on a synchronization
> point, but this introduced further complexity (a vcpu can be scheduled
> out before reaching the synchronization point, and can sched back in at
> points which are slots_lock protected, etc).
>
> This patch changes vcpu_enter_guest to conditionally release/acquire
> slots_lock in case a vcpu state bit is set.
>
> vmexit performance improves by 5-10% on UP guest.
>
Sorry, it looks pretty complex. Have you considered using srcu? It
seems to me down/up_read() can be replaced by srcu_read_lock/unlock(),
and after proper conversion of memslots and io_bus to
rcu_assign_pointer(), we can just add synchronize_srcu() immediately
after changing stuff (of course mmu_lock still needs to be held when
updating slots).
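Roughly what I have in mind (only a sketch; kvm->srcu is a hypothetical
per-VM srcu_struct, and kvm->memslots would have to become an RCU-managed
pointer):

    /* vcpu side: replaces down_read()/up_read() of slots_lock */
    idx = srcu_read_lock(&kvm->srcu);
    r = vcpu_enter_guest(vcpu);
    srcu_read_unlock(&kvm->srcu, idx);

The memslot update path would then publish the new table with
rcu_assign_pointer() and call synchronize_srcu() instead of taking
slots_lock for write.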
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
* Re: [RFC] KVM: x86: conditionally acquire/release slots_lock on entry/exit
From: Marcelo Tosatti @ 2009-08-27 22:59 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm, Gleb Natapov
On Thu, Aug 27, 2009 at 07:27:48PM +0300, Avi Kivity wrote:
> On 08/27/2009 06:54 PM, Marcelo Tosatti wrote:
>> perf report shows heavy overhead from down/up of slots_lock.
>>
>> Attempted to remove slots_lock by having vcpus stop on a synchronization
>> point, but this introduced further complexity (a vcpu can be scheduled
>> out before reaching the synchronization point, and can sched back in at
>> points which are slots_lock protected, etc).
>>
>> This patch changes vcpu_enter_guest to conditionally release/acquire
>> slots_lock in case a vcpu state bit is set.
>>
>> vmexit performance improves by 5-10% on UP guest.
>>
>
> Sorry, it looks pretty complex.
Why?
> Have you considered using srcu? It seems to me down/up_read() can
> be replaced by srcu_read_lock/unlock(), and after proper conversion
> of memslots and io_bus to rcu_assign_pointer(), we can just add
> synchronize_srcu() immediately after changing stuff (of course
> mmu_lock still needs to be held when updating slots).
I don't see RCU as being suitable because in certain operations you
want to stop writers (on behalf of vcpus), do something, and let them
continue afterwards. The dirty log, for example. Or any operation that
wants to modify lockless vcpu specific data.
Also, synchronize_srcu() is limited to preemptible sections.
io_bus could use RCU, but I think being able to stop vcpus is also a
different requirement. Do you have any suggestion on how to do it in a
better way?
* Re: [RFC] KVM: x86: conditionally acquire/release slots_lock on entry/exit
From: Avi Kivity @ 2009-08-28 6:50 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: kvm, Gleb Natapov
On 08/28/2009 01:59 AM, Marcelo Tosatti wrote:
> On Thu, Aug 27, 2009 at 07:27:48PM +0300, Avi Kivity wrote:
>
>> On 08/27/2009 06:54 PM, Marcelo Tosatti wrote:
>>
>>> perf report shows heavy overhead from down/up of slots_lock.
>>>
>>> Attempted to remove slots_lock by having vcpus stop on a synchronization
>>> point, but this introduced further complexity (a vcpu can be scheduled
>>> out before reaching the synchronization point, and can sched back in at
>>> points which are slots_lock protected, etc).
>>>
>>> This patch changes vcpu_enter_guest to conditionally release/acquire
>>> slots_lock in case a vcpu state bit is set.
>>>
>>> vmexit performance improves by 5-10% on UP guest.
>>>
>>>
>> Sorry, it looks pretty complex.
>>
> Why?
>
There's a new locking protocol in there. I prefer sticking with the
existing kernel plumbing, or it gets more and more difficult knowing who
protects what and in what order you can do things.
>> Have you considered using srcu? It seems to me down/up_read() can
>> be replaced by srcu_read_lock/unlock(), and after proper conversion
>> of memslots and io_bus to rcu_assign_pointer(), we can just add
>> synchronize_srcu() immediately after changing stuff (of course
>> mmu_lock still needs to be held when updating slots).
>>
> I don't see RCU as being suitable because in certain operations you
> want to stop writers (on behalf of vcpus), do something, and let them
> continue afterwards. The dirty log, for example. Or any operation that
> wants to modify lockless vcpu specific data.
>
kvm_flush_remote_tlbs() (which you'd call after mmu operations), will
get cpus out of guest mode, and synchronize_srcu() will wait for them to
drop the srcu "read lock". So it really happens naturally: do an RCU
update, send some request to all vcpus, synchronize_srcu(), done.
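In other words, the update side would look something like this (a sketch
only; it assumes the memslots/srcu conversion described above, and
new_slots/old_slots are illustrative names):

    /* publish the updated slot array */
    rcu_assign_pointer(kvm->memslots, new_slots);

    /* kick vcpus out of guest mode so they notice the change */
    kvm_flush_remote_tlbs(kvm);

    /* wait until no vcpu can still be using old_slots */
    synchronize_srcu(&kvm->srcu);
    kfree(old_slots);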
> Also, synchronize_srcu() is limited to preemptible sections.
>
> io_bus could use RCU, but I think being able to stop vcpus is also a
> different requirement. Do you have any suggestion on how to do it in a
> better way?
>
We don't need to stop vcpus, just kick them out of guest mode to let
them notice the new data. SRCU does that well.
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
* Re: [patch 2/5] KVM: reintroduce guest mode bit and unify remote request code
From: Avi Kivity @ 2009-08-28 7:06 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: kvm, Yang, Sheng
On 08/27/2009 05:07 PM, Marcelo Tosatti wrote:
> On Thu, Aug 27, 2009 at 04:24:58PM +0300, Avi Kivity wrote:
>
>> On 08/27/2009 03:45 PM, Marcelo Tosatti wrote:
>>
>>>> Why not bool guest_mode? Saves two atomics per exit.
>>>>
>>>>
>>> It must be atomic since GUEST_MODE / VCPU_KICKED bits are manipulated by
>>> multiple CPU's.
>>>
>>>
>> bools are atomic.
>>
> OK.
>
> - VCPU_KICKED requires test_and_set. GUEST_MODE/VCPU_KICKED accesses
> must not be reordered.
>
Why do we need both, btw? Set your vcpu->requests bit; if guest_mode is
true, clear it and IPI. So guest_mode=false means we might be in guest
mode, but if so we're due for a kick anyway.
> (OK, could have GUEST_MODE in a bool even so, but its easier to read
> by keeping them together, at least to me).
>
> - Its easier to cacheline align with longs rather than bools?
>
To cacheline align we need to pack everything important at the front of
the structure.
> - From testing it seems the LOCK prefix is not heavy, as long as its
> cpu local (probably due to 7.1.4 Effects of a LOCK Operation on
> Internal Processor Caches?).
>
Yes, in newer processors atomics are not nearly as expensive as they
used to be.
> BTW,
>
> 7.1.2.2 Software Controlled Bus Locking
>
> Software should access semaphores (shared memory used for signalling
> between multiple processors) using identical addresses and operand
> lengths. For example, if one processor accesses a semaphore using a word
> access, other processors should not access the semaphore using a byte
> access.
>
> The bit operations use 32-bit access, but the vcpu->requests check in
> vcpu_enter_guest uses 64-bit access.
>
That's true, and bitops sometimes even uses byte operations.
> Is that safe?
>
My guess yes, but not efficient.
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
* Re: [patch 2/5] KVM: reintroduce guest mode bit and unify remote request code
From: Avi Kivity @ 2009-08-28 7:22 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: kvm, Yang, Sheng
On 08/28/2009 10:06 AM, Avi Kivity wrote:
>
> Why do we need both, btw? Set your vcpu->requests bit, if guest_mode
> is true, clear it and IPI. So guest_mode=false means, we might be in
> guest mode but if so we're due for a kick anyway.
No, this introduces a race.
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
* Re: [RFC] KVM: x86: conditionally acquire/release slots_lock on entry/exit
From: Marcelo Tosatti @ 2009-09-10 22:30 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm, Gleb Natapov
On Fri, Aug 28, 2009 at 09:50:36AM +0300, Avi Kivity wrote:
> On 08/28/2009 01:59 AM, Marcelo Tosatti wrote:
>> On Thu, Aug 27, 2009 at 07:27:48PM +0300, Avi Kivity wrote:
>>
>>> On 08/27/2009 06:54 PM, Marcelo Tosatti wrote:
>>>
>>>> perf report shows heavy overhead from down/up of slots_lock.
>>>>
>>>> Attempted to remove slots_lock by having vcpus stop on a synchronization
>>>> point, but this introduced further complexity (a vcpu can be scheduled
>>>> out before reaching the synchronization point, and can sched back in at
>>>> points which are slots_lock protected, etc).
>>>>
>>>> This patch changes vcpu_enter_guest to conditionally release/acquire
>>>> slots_lock in case a vcpu state bit is set.
>>>>
>>>> vmexit performance improves by 5-10% on UP guest.
>>>>
>>>>
>>> Sorry, it looks pretty complex.
>>>
>> Why?
>>
>
> There's a new locking protocol in there. I prefer sticking with the
> existing kernel plumbing, or it gets more and more difficult knowing who
> protects what and in what order you can do things.
>
>>> Have you considered using srcu? It seems to me down/up_read() can
>>> be replaced by srcu_read_lock/unlock(), and after proper conversion
>>> of memslots and io_bus to rcu_assign_pointer(), we can just add
>>> synchronize_srcu() immediately after changing stuff (of course
>>> mmu_lock still needs to be held when updating slots).
>>>
>> I don't see RCU as being suitable because in certain operations you
>> want to stop writers (on behalf of vcpus), do something, and let them
>> continue afterwards. The dirty log, for example. Or any operation that
>> wants to modify lockless vcpu specific data.
>>
>
> kvm_flush_remote_tlbs() (which you'd call after mmu operations), will
> get cpus out of guest mode, and synchronize_srcu() will wait for them to
> drop the srcu "read lock". So it really happens naturally: do an RCU
> update, send some request to all vcpus, synchronize_srcu(), done.
>
>> Also, synchronize_srcu() is limited to preemptible sections.
>>
>> io_bus could use RCU, but I think being able to stop vcpus is also a
>> different requirement. Do you have any suggestion on how to do it in a
>> better way?
>>
>
> We don't need to stop vcpus, just kick them out of guest mode to let
> them notice the new data. SRCU does that well.
Two problems:
1. The removal of memslots/aliases and zapping of mmu (to remove any
shadow pages with stale sp->gfn) must be atomic from the POV of the
pagefault path. Otherwise something like this can happen:
    fault path                              set_memory_region

    walk_addr fetches and validates
    table_gfns
                                            delete memslot
                                            synchronize_srcu

    fetch creates shadow
    page with nonexistent sp->gfn
OR
    mmu_alloc_roots path                    set_memory_region

                                            delete memslot
    root_gfn = vcpu->arch.cr3 << PAGE_SHIFT
    mmu_check_root(root_gfn)                synchronize_rcu
    kvm_mmu_get_page()
                                            kvm_mmu_zap_all
Accesses in the window between kvm_mmu_get_page and kvm_mmu_zap_all can see
shadow pages with a stale gfn.
But, if you still think it's worthwhile to use RCU, at least handling
gfn_to_memslot / unalias_gfn errors _and_ adding mmu_notifier_retry
invalidation to the set_memory_region path will be necessary (so that
gfn_to_pfn validation, in the fault path, is discarded in case
of a memslot/alias update).
2. Another complication is that memslot creation and kvm_iommu_map_pages
are not atomic.
    create memslot
    synchronize_srcu
                          <----- vcpu grabs gfn reference without
                                 iommu mapping
    kvm_iommu_map_pages
This can be solved by changing kvm_iommu_map_pages (and a new gfn_to_pfn
helper) to use base_gfn, npages and hva information from somewhere other
than the visible kvm->memslots (so that when the slot becomes visible it is
already iommu mapped).
So it appears to me that using RCU introduces more complications / subtle
details than mutual exclusion here. The new request bit which the
original patch introduces is limited to enabling/disabling conditional
acquisition of slots_lock (calling it a "new locking protocol" is unfair)
to improve write acquisition latency.
Better ideas/directions welcome.
* Re: [RFC] KVM: x86: conditionally acquire/release slots_lock on entry/exit
From: Avi Kivity @ 2009-09-13 15:42 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: kvm, Gleb Natapov, Paul E. McKenney
On 09/11/2009 01:30 AM, Marcelo Tosatti wrote:
>
>> We don't need to stop vcpus, just kick them out of guest mode to let
>> them notice the new data. SRCU does that well.
>>
> Two problems:
>
> 1. The removal of memslots/aliases and zapping of mmu (to remove any
> shadow pages with stale sp->gfn) must be atomic from the POV of the
> pagefault path. Otherwise something like this can happen:
>
>     fault path                              set_memory_region
>
  srcu_read_lock()
>     walk_addr fetches and validates
>     table_gfns
>                                             delete memslot
>                                             synchronize_srcu
>
>     fetch creates shadow
>
  srcu_read_unlock()
>     page with nonexistent sp->gfn
>
I think synchronize_srcu() will be deferred until the fault path is
complete (and srcu_read_unlock() runs). Copying someone who knows for sure.
> OR
>
>     mmu_alloc_roots path                    set_memory_region
>
  srcu_read_lock()
>
>                                             delete memslot
>     root_gfn = vcpu->arch.cr3 << PAGE_SHIFT
>     mmu_check_root(root_gfn)                synchronize_rcu
>     kvm_mmu_get_page()
>
  srcu_read_unlock()
>                                             kvm_mmu_zap_all
>
Ditto, srcu_read_lock() protects us.
> Accesses between kvm_mmu_get_page and kvm_mmu_zap_all window can see
> shadow pages with stale gfn.
>
> But, if you still think its worthwhile to use RCU, at least handling
> gfn_to_memslot / unalias_gfn errors _and_ adding mmu_notifier_retry
> invalidation to set_memory_region path will be necessary (so that
> gfn_to_pfn validation, in the fault path, is discarded in case
> of memslot/alias update).
>
It really is worthwhile to reuse complex infrastructure instead of
writing new infrastructure.
> 2. Another complication is that memslot creation and kvm_iommu_map_pages
> are not atomic.
>
> create memslot
> synchronize_srcu
> <----- vcpu grabs gfn reference without
> iommu mapping.
> kvm_iommu_map_pages
>
> Which can be solved by changing kvm_iommu_map_pages (and new gfn_to_pfn
> helper) to use base_gfn,npages,hva information from somewhere else other
> than visible kvm->memslots (so that when the slot becomes visible its
> already iommu mapped).
>
Yes. It can accept a memslots structure instead of deriving it from
kvm->memslots. Then we do a rcu_assign_pointer() to switch the tables.
> So it appears to me using RCU introduces more complications / subtle
> details than mutual exclusion here. The new request bit which the
> original patch introduces is limited to enabling/disabling conditional
> acquision of slots_lock (calling it a "new locking protocol" is unfair)
> to improve write acquision latency.
>
It's true that it is not a new locking protocol. But I feel it is
worthwhile to try to use rcu for this; at least it will make it easier
for newcomers (provided they understand rcu).
--
error compiling committee.c: too many arguments to function
* Re: [RFC] KVM: x86: conditionally acquire/release slots_lock on entry/exit
From: Paul E. McKenney @ 2009-09-13 16:26 UTC (permalink / raw)
To: Avi Kivity; +Cc: Marcelo Tosatti, kvm, Gleb Natapov
On Sun, Sep 13, 2009 at 06:42:49PM +0300, Avi Kivity wrote:
> On 09/11/2009 01:30 AM, Marcelo Tosatti wrote:
>>
>>> We don't need to stop vcpus, just kick them out of guest mode to let
>>> them notice the new data. SRCU does that well.
>>>
>> Two problems:
>>
>> 1. The removal of memslots/aliases and zapping of mmu (to remove any
>> shadow pages with stale sp->gfn) must be atomic from the POV of the
>> pagefault path. Otherwise something like this can happen:
>>
>> fault path set_memory_region
>>
>
> srcu_read_lock()
>
>> walk_addr fetches and validates
>> table_gfns
>> delete memslot
>> synchronize_srcu
>>
>> fetch creates shadow
>>
>
> srcu_read_unlock()
>
>> page with nonexistant sp->gfn
>>
>
> I think synchronize_srcu() will be deferred until the fault path is
> complete (and srcu_read_unlock() runs). Copying someone who knows for
> sure.
Yes, synchronize_srcu() will block until srcu_read_unlock() in this
scenario, assuming that the same srcu_struct is used by both.
>> OR
>>
>> mmu_alloc_roots path set_memory_region
>>
>
> srcu_read_lock()
>
>>
>> delete memslot
>> root_gfn = vcpu->arch.cr3<< PAGE_SHIFT
>> mmu_check_root(root_gfn) synchronize_rcu
>> kvm_mmu_get_page()
>>
>
> srcu_read_unlock()
>
>> kvm_mmu_zap_all
>>
>
> Ditto, srcu_read_lock() protects us.
Yep!
>> Accesses between kvm_mmu_get_page and kvm_mmu_zap_all window can see
>> shadow pages with stale gfn.
>>
>> But, if you still think its worthwhile to use RCU, at least handling
>> gfn_to_memslot / unalias_gfn errors _and_ adding mmu_notifier_retry
>> invalidation to set_memory_region path will be necessary (so that
>> gfn_to_pfn validation, in the fault path, is discarded in case
>> of memslot/alias update).
>
> It really is worthwhile to reuse complex infrastructure instead of writing
> new infrastructure.
Marcelo, in your first example, is your concern that the fault path
needs to detect the memslot deletion? Or that the use of sp->gfn "leaks"
out of the SRCU read-side critical section?
Thanx, Paul
>> 2. Another complication is that memslot creation and kvm_iommu_map_pages
>> are not atomic.
>>
>> create memslot
>> synchronize_srcu
>> <----- vcpu grabs gfn reference without
>> iommu mapping.
>> kvm_iommu_map_pages
>>
>> Which can be solved by changing kvm_iommu_map_pages (and new gfn_to_pfn
>> helper) to use base_gfn,npages,hva information from somewhere else other
>> than visible kvm->memslots (so that when the slot becomes visible its
>> already iommu mapped).
>
> Yes. It can accept a memslots structure instead of deriving it from
> kvm->memslots. Then we do a rcu_assign_pointer() to switch the tables.
>
>> So it appears to me using RCU introduces more complications / subtle
>> details than mutual exclusion here. The new request bit which the
>> original patch introduces is limited to enabling/disabling conditional
>> acquision of slots_lock (calling it a "new locking protocol" is unfair)
>> to improve write acquision latency.
>>
>
> It's true that it is not a new locking protocol. But I feel it is
> worthwhile to try to use rcu for this; at least it will make it easier for
> newcomers (provided they understand rcu).
>
>
> --
> error compiling committee.c: too many arguments to function
>
* Re: [RFC] KVM: x86: conditionally acquire/release slots_lock on entry/exit
From: Marcelo Tosatti @ 2009-09-13 22:49 UTC (permalink / raw)
To: Paul E. McKenney; +Cc: Avi Kivity, kvm, Gleb Natapov
On Sun, Sep 13, 2009 at 09:26:52AM -0700, Paul E. McKenney wrote:
> On Sun, Sep 13, 2009 at 06:42:49PM +0300, Avi Kivity wrote:
> > On 09/11/2009 01:30 AM, Marcelo Tosatti wrote:
> >>
> >>> We don't need to stop vcpus, just kick them out of guest mode to let
> >>> them notice the new data. SRCU does that well.
> >>>
> >> Two problems:
> >>
> >> 1. The removal of memslots/aliases and zapping of mmu (to remove any
> >> shadow pages with stale sp->gfn) must be atomic from the POV of the
> >> pagefault path. Otherwise something like this can happen:
> >>
> >> fault path set_memory_region
> >>
> >
> > srcu_read_lock()
> >
> >> walk_addr fetches and validates
> >> table_gfns
> >> delete memslot
> >> synchronize_srcu
> >>
> >> fetch creates shadow
> >>
> >
> > srcu_read_unlock()
> >
> >> page with nonexistant sp->gfn
> >>
> >
> > I think synchronize_srcu() will be deferred until the fault path is
> > complete (and srcu_read_unlock() runs). Copying someone who knows for
> > sure.
>
> Yes, synchronize_srcu() will block until srcu_read_unlock() in this
> scenario, assuming that the same srcu_struct is used by both.
Right it will. But this does not stop the fault path from creating
shadow pages with stale sp->gfn (the only way to do that would be mutual
exclusion AFAICS).
> >> OR
> >>
> >> mmu_alloc_roots path set_memory_region
> >>
> >
> > srcu_read_lock()
> >
> >>
> >> delete memslot
> >> root_gfn = vcpu->arch.cr3<< PAGE_SHIFT
> >> mmu_check_root(root_gfn) synchronize_rcu
> >> kvm_mmu_get_page()
> >>
> >
> > srcu_read_unlock()
> >
> >> kvm_mmu_zap_all
> >>
> >
> > Ditto, srcu_read_lock() protects us.
>
> Yep!
The RCU read-protected side does not stop a new memslots pointer from
being assigned (with rcu_assign_pointer), does it?
> >> Accesses between kvm_mmu_get_page and kvm_mmu_zap_all window can see
> >> shadow pages with stale gfn.
> >>
> >> But, if you still think its worthwhile to use RCU, at least handling
> >> gfn_to_memslot / unalias_gfn errors _and_ adding mmu_notifier_retry
> >> invalidation to set_memory_region path will be necessary (so that
> >> gfn_to_pfn validation, in the fault path, is discarded in case
> >> of memslot/alias update).
> >
> > It really is worthwhile to reuse complex infrastructure instead of writing
> > new infrastructure.
>
> Marcelo, in your first example, is your concern that the fault path
> needs to detect the memslot deletion?
Yes, it needs to invalidate the leakage, which in this case is a shadow
page data structure which was created containing information from a now
deleted memslot.
> Or that the use of sp->gfn "leaks" out of the SRCU read-side critical
> section?
Yes, use of a stale sp->gfn leaks outside of the SRCU read side critical
section and currently the rest of the code is not ready to deal with
that... but it will have to.
> Thanx, Paul
>
> >> 2. Another complication is that memslot creation and kvm_iommu_map_pages
> >> are not atomic.
> >>
> >> create memslot
> >> synchronize_srcu
> >> <----- vcpu grabs gfn reference without
> >> iommu mapping.
> >> kvm_iommu_map_pages
> >>
> >> Which can be solved by changing kvm_iommu_map_pages (and new gfn_to_pfn
> >> helper) to use base_gfn,npages,hva information from somewhere else other
> >> than visible kvm->memslots (so that when the slot becomes visible its
> >> already iommu mapped).
> >
> > Yes. It can accept a memslots structure instead of deriving it from
> > kvm->memslots. Then we do a rcu_assign_pointer() to switch the tables.
Alright.
> >> So it appears to me using RCU introduces more complications / subtle
> >> details than mutual exclusion here. The new request bit which the
> >> original patch introduces is limited to enabling/disabling conditional
> >> acquision of slots_lock (calling it a "new locking protocol" is unfair)
> >> to improve write acquision latency.
> >>
> >
> > It's true that it is not a new locking protocol. But I feel it is
> > worthwhile to try to use rcu for this; at least it will make it easier for
> > newcomers (provided they understand rcu).
Sure.
* Re: [RFC] KVM: x86: conditionally acquire/release slots_lock on entry/exit
From: Avi Kivity @ 2009-09-14 5:03 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: Paul E. McKenney, kvm, Gleb Natapov
On 09/14/2009 01:49 AM, Marcelo Tosatti wrote:
>
>>> I think synchronize_srcu() will be deferred until the fault path is
>>> complete (and srcu_read_unlock() runs). Copying someone who knows for
>>> sure.
>>>
>> Yes, synchronize_srcu() will block until srcu_read_unlock() in this
>> scenario, assuming that the same srcu_struct is used by both.
>>
> Right it will. But this does not stop the fault path from creating
> shadow pages with stale sp->gfn (the only way to do that would be mutual
> exclusion AFAICS).
>
So we put the kvm_mmu_zap_pages() call as part of the synchronize_srcu()
callback to take advantage of the srcu guarantees. We know that when
the callback is called, all new reads see the new slots and all old
readers have completed.
> The RCU read-protected side does not stop a new memslots pointer from
> being assigned (with rcu_assign_pointer), does it?
>
>
It doesn't. It only gives you a point in time where you know no one is
using the old pointer, but before it has been deleted.
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
* Re: [RFC] KVM: x86: conditionally acquire/release slots_lock on entry/exit
From: Avi Kivity @ 2009-09-14 7:17 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: Paul E. McKenney, kvm, Gleb Natapov
On 09/14/2009 08:03 AM, Avi Kivity wrote:
>> Right it will. But this does not stop the fault path from creating
>> shadow pages with stale sp->gfn (the only way to do that would be mutual
>> exclusion AFAICS).
>
> So we put the kvm_mmu_zap_pages() call as part of the
> synchronize_srcu() callback to take advantage of the srcu guarantees.
> We know that when when the callback is called all new reads see the
> new slots and all old readers have completed.
I think I see your concern - assigning sp->gfn leaks information out of
the srcu critical section.
Two ways out:
1) copy kvm->slots into sp->slots and use it when dropping the shadow
page. Intrusive and increases shadow footprint.
1b) Instead of sp->slots, use a 1-bit generation counter. Even uglier
but reduces the shadow footprint.
2) instead of removing the slot in rcu_assign_pointer(), mark it
invalid. gfn_to_page() will fail on such slots but the teardown paths
(like unaccount_shadow) continue to work. Once we've zapped the mmu we
drop the slot completely (can do in place, no need to rcu_assign_pointer).
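A rough sketch of (2), assuming the rcu-managed memslots pointer discussed
earlier in the thread (the KVM_MEMSLOT_INVALID flag name, the kvm->srcu
field and the surrounding details are illustrative only, not existing code):

    /* publish a copy of the slots with the victim slot marked invalid */
    slots = kmemdup(kvm->memslots, sizeof(*slots), GFP_KERNEL);
    slots->memslots[mem->slot].flags |= KVM_MEMSLOT_INVALID;
    rcu_assign_pointer(kvm->memslots, slots);
    synchronize_srcu(&kvm->srcu);

    /*
     * gfn_to_page()/gfn_to_pfn() now fail on the slot, but teardown
     * paths that only look at base_gfn/npages still work.
     */
    kvm_mmu_zap_all(kvm);

    /* finally drop the slot for real; no reader can pick it up anymore */
    slots->memslots[mem->slot].npages = 0;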
--
error compiling committee.c: too many arguments to function