* [PATCH v2 0/3] Minor vcpu->requests improvements
@ 2012-05-20 13:49 Avi Kivity
2012-05-20 13:49 ` [PATCH v2 1/3] KVM: Simplify KVM_REQ_EVENT/req_int_win handling Avi Kivity
` (2 more replies)
0 siblings, 3 replies; 7+ messages in thread
From: Avi Kivity @ 2012-05-20 13:49 UTC (permalink / raw)
To: Marcelo Tosatti, kvm
Nothing spectacular, just regularization of the code.
v2:
fix endless loop in patch 3 where a reload would set a bit in
vcpu->requests and abort the entry
Avi Kivity (3):
KVM: Simplify KVM_REQ_EVENT/req_int_win handling
KVM: Optimize vcpu->requests slow path slightly
KVM: Move mmu reload out of line
arch/x86/kvm/mmu.c | 4 ++-
arch/x86/kvm/svm.c | 1 +
arch/x86/kvm/x86.c | 73 ++++++++++++++++++++++++++++------------------------
3 files changed, 43 insertions(+), 35 deletions(-)
--
1.7.10.1
^ permalink raw reply [flat|nested] 7+ messages in thread
* [PATCH v2 1/3] KVM: Simplify KVM_REQ_EVENT/req_int_win handling
2012-05-20 13:49 [PATCH v2 0/3] Minor vcpu->requests improvements Avi Kivity
@ 2012-05-20 13:49 ` Avi Kivity
2012-05-20 13:49 ` [PATCH v2 2/3] KVM: Optimize vcpu->requests slow path slightly Avi Kivity
2012-05-20 13:49 ` [PATCH v2 3/3] KVM: Move mmu reload out of line Avi Kivity
2 siblings, 0 replies; 7+ messages in thread
From: Avi Kivity @ 2012-05-20 13:49 UTC (permalink / raw)
To: Marcelo Tosatti, kvm
Put the KVM_REQ_EVENT block in the regular vcpu->requests if (),
instead of its own little check.
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/x86.c | 30 ++++++++++++++++--------------
1 file changed, 16 insertions(+), 14 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b78f89d..953e692 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5233,6 +5233,9 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
vcpu->run->request_interrupt_window;
bool req_immediate_exit = 0;
+ if (unlikely(req_int_win))
+ kvm_make_request(KVM_REQ_EVENT, vcpu);
+
if (vcpu->requests) {
if (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu))
kvm_mmu_unload(vcpu);
@@ -5277,20 +5280,19 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
kvm_handle_pmu_event(vcpu);
if (kvm_check_request(KVM_REQ_PMI, vcpu))
kvm_deliver_pmi(vcpu);
- }
-
- if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win) {
- inject_pending_event(vcpu);
-
- /* enable NMI/IRQ window open exits if needed */
- if (vcpu->arch.nmi_pending)
- kvm_x86_ops->enable_nmi_window(vcpu);
- else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
- kvm_x86_ops->enable_irq_window(vcpu);
-
- if (kvm_lapic_enabled(vcpu)) {
- update_cr8_intercept(vcpu);
- kvm_lapic_sync_to_vapic(vcpu);
+ if (kvm_check_request(KVM_REQ_EVENT, vcpu)) {
+ inject_pending_event(vcpu);
+
+ /* enable NMI/IRQ window open exits if needed */
+ if (vcpu->arch.nmi_pending)
+ kvm_x86_ops->enable_nmi_window(vcpu);
+ else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
+ kvm_x86_ops->enable_irq_window(vcpu);
+
+ if (kvm_lapic_enabled(vcpu)) {
+ update_cr8_intercept(vcpu);
+ kvm_lapic_sync_to_vapic(vcpu);
+ }
}
}
--
1.7.10.1
* [PATCH v2 2/3] KVM: Optimize vcpu->requests slow path slightly
2012-05-20 13:49 [PATCH v2 0/3] Minor vcpu->requests improvements Avi Kivity
2012-05-20 13:49 ` [PATCH v2 1/3] KVM: Simplify KVM_REQ_EVENT/req_int_win handling Avi Kivity
@ 2012-05-20 13:49 ` Avi Kivity
2012-05-30 20:03 ` Marcelo Tosatti
2012-05-20 13:49 ` [PATCH v2 3/3] KVM: Move mmu reload out of line Avi Kivity
2 siblings, 1 reply; 7+ messages in thread
From: Avi Kivity @ 2012-05-20 13:49 UTC (permalink / raw)
To: Marcelo Tosatti, kvm
Instead of using an atomic operation per active request, use just one
to get all requests at once, then check them with local ops. This
probably isn't any faster, since simultaneous requests are rare, but
it does reduce code size.
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/x86.c | 33 ++++++++++++++++++---------------
1 file changed, 18 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 953e692..c0209eb 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5232,55 +5232,58 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
bool req_int_win = !irqchip_in_kernel(vcpu->kvm) &&
vcpu->run->request_interrupt_window;
bool req_immediate_exit = 0;
+ ulong reqs;
if (unlikely(req_int_win))
kvm_make_request(KVM_REQ_EVENT, vcpu);
if (vcpu->requests) {
- if (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu))
+ reqs = xchg(&vcpu->requests, 0UL);
+
+ if (test_bit(KVM_REQ_MMU_RELOAD, &reqs))
kvm_mmu_unload(vcpu);
- if (kvm_check_request(KVM_REQ_MIGRATE_TIMER, vcpu))
+ if (test_bit(KVM_REQ_MIGRATE_TIMER, &reqs))
__kvm_migrate_timers(vcpu);
- if (kvm_check_request(KVM_REQ_CLOCK_UPDATE, vcpu)) {
+ if (test_bit(KVM_REQ_CLOCK_UPDATE, &reqs)) {
r = kvm_guest_time_update(vcpu);
if (unlikely(r))
goto out;
}
- if (kvm_check_request(KVM_REQ_MMU_SYNC, vcpu))
+ if (test_bit(KVM_REQ_MMU_SYNC, &reqs))
kvm_mmu_sync_roots(vcpu);
- if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu))
+ if (test_bit(KVM_REQ_TLB_FLUSH, &reqs))
kvm_x86_ops->tlb_flush(vcpu);
- if (kvm_check_request(KVM_REQ_REPORT_TPR_ACCESS, vcpu)) {
+ if (test_bit(KVM_REQ_REPORT_TPR_ACCESS, &reqs)) {
vcpu->run->exit_reason = KVM_EXIT_TPR_ACCESS;
r = 0;
goto out;
}
- if (kvm_check_request(KVM_REQ_TRIPLE_FAULT, vcpu)) {
+ if (test_bit(KVM_REQ_TRIPLE_FAULT, &reqs)) {
vcpu->run->exit_reason = KVM_EXIT_SHUTDOWN;
r = 0;
goto out;
}
- if (kvm_check_request(KVM_REQ_DEACTIVATE_FPU, vcpu)) {
+ if (test_bit(KVM_REQ_DEACTIVATE_FPU, &reqs)) {
vcpu->fpu_active = 0;
kvm_x86_ops->fpu_deactivate(vcpu);
}
- if (kvm_check_request(KVM_REQ_APF_HALT, vcpu)) {
+ if (test_bit(KVM_REQ_APF_HALT, &reqs)) {
/* Page is swapped out. Do synthetic halt */
vcpu->arch.apf.halted = true;
r = 1;
goto out;
}
- if (kvm_check_request(KVM_REQ_STEAL_UPDATE, vcpu))
+ if (test_bit(KVM_REQ_STEAL_UPDATE, &reqs))
record_steal_time(vcpu);
- if (kvm_check_request(KVM_REQ_NMI, vcpu))
+ if (test_bit(KVM_REQ_NMI, &reqs))
process_nmi(vcpu);
req_immediate_exit =
- kvm_check_request(KVM_REQ_IMMEDIATE_EXIT, vcpu);
- if (kvm_check_request(KVM_REQ_PMU, vcpu))
+ test_bit(KVM_REQ_IMMEDIATE_EXIT, &reqs);
+ if (test_bit(KVM_REQ_PMU, &reqs))
kvm_handle_pmu_event(vcpu);
- if (kvm_check_request(KVM_REQ_PMI, vcpu))
+ if (test_bit(KVM_REQ_PMI, &reqs))
kvm_deliver_pmi(vcpu);
- if (kvm_check_request(KVM_REQ_EVENT, vcpu)) {
+ if (test_bit(KVM_REQ_EVENT, &reqs)) {
inject_pending_event(vcpu);
/* enable NMI/IRQ window open exits if needed */
--
1.7.10.1
* [PATCH v2 3/3] KVM: Move mmu reload out of line
2012-05-20 13:49 [PATCH v2 0/3] Minor vcpu->requests improvements Avi Kivity
2012-05-20 13:49 ` [PATCH v2 1/3] KVM: Simplify KVM_REQ_EVENT/req_int_win handling Avi Kivity
2012-05-20 13:49 ` [PATCH v2 2/3] KVM: Optimize vcpu->requests slow path slightly Avi Kivity
@ 2012-05-20 13:49 ` Avi Kivity
2 siblings, 0 replies; 7+ messages in thread
From: Avi Kivity @ 2012-05-20 13:49 UTC (permalink / raw)
To: Marcelo Tosatti, kvm
Currently we check that the mmu root exists before every entry. Use the
existing KVM_REQ_MMU_RELOAD mechanism instead, by making it actually reload
the mmu, and by raising the request from the mmu initialization code.
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/mmu.c | 4 +++-
arch/x86/kvm/svm.c | 1 +
arch/x86/kvm/x86.c | 14 +++++++-------
3 files changed, 11 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 72102e0..589fdaa 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3181,7 +3181,8 @@ void kvm_mmu_flush_tlb(struct kvm_vcpu *vcpu)
static void paging_new_cr3(struct kvm_vcpu *vcpu)
{
pgprintk("%s: cr3 %lx\n", __func__, kvm_read_cr3(vcpu));
- mmu_free_roots(vcpu);
+ kvm_mmu_unload(vcpu);
+ kvm_mmu_load(vcpu);
}
static unsigned long get_cr3(struct kvm_vcpu *vcpu)
@@ -3470,6 +3471,7 @@ static int init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
static int init_kvm_mmu(struct kvm_vcpu *vcpu)
{
+ kvm_make_request(KVM_REQ_MMU_RELOAD, vcpu);
if (mmu_is_nested(vcpu))
return init_kvm_nested_mmu(vcpu);
else if (tdp_enabled)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index f75af40..98f13d7 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -2523,6 +2523,7 @@ static bool nested_svm_vmrun(struct vcpu_svm *svm)
if (nested_vmcb->control.nested_ctl) {
kvm_mmu_unload(&svm->vcpu);
+ kvm_make_request(KVM_REQ_MMU_RELOAD, &svm->vcpu);
svm->nested.nested_cr3 = nested_vmcb->control.nested_cr3;
nested_svm_init_mmu_context(&svm->vcpu);
}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c0209eb..946933a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5240,8 +5240,14 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
if (vcpu->requests) {
reqs = xchg(&vcpu->requests, 0UL);
- if (test_bit(KVM_REQ_MMU_RELOAD, &reqs))
+ if (test_bit(KVM_REQ_MMU_RELOAD, &reqs)) {
kvm_mmu_unload(vcpu);
+ r = kvm_mmu_reload(vcpu);
+ if (unlikely(r)) {
+ kvm_make_request(KVM_REQ_MMU_RELOAD, vcpu);
+ goto out;
+ }
+ }
if (test_bit(KVM_REQ_MIGRATE_TIMER, &reqs))
__kvm_migrate_timers(vcpu);
if (test_bit(KVM_REQ_CLOCK_UPDATE, &reqs)) {
@@ -5299,12 +5305,6 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
}
}
- r = kvm_mmu_reload(vcpu);
- if (unlikely(r)) {
- kvm_x86_ops->cancel_injection(vcpu);
- goto out;
- }
-
preempt_disable();
kvm_x86_ops->prepare_guest_switch(vcpu);
--
1.7.10.1
* Re: [PATCH v2 2/3] KVM: Optimize vcpu->requests slow path slightly
2012-05-20 13:49 ` [PATCH v2 2/3] KVM: Optimize vcpu->requests slow path slightly Avi Kivity
@ 2012-05-30 20:03 ` Marcelo Tosatti
2012-05-31 9:27 ` Avi Kivity
0 siblings, 1 reply; 7+ messages in thread
From: Marcelo Tosatti @ 2012-05-30 20:03 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm
On Sun, May 20, 2012 at 04:49:27PM +0300, Avi Kivity wrote:
> Instead of using an atomic operation per active request, use just one
> to get all requests at once, then check them with local ops. This
> probably isn't any faster, since simultaneous requests are rare, but
> it does reduce code size.
>
> Signed-off-by: Avi Kivity <avi@redhat.com>
> ---
> arch/x86/kvm/x86.c | 33 ++++++++++++++++++---------------
> 1 file changed, 18 insertions(+), 15 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 953e692..c0209eb 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5232,55 +5232,58 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
> bool req_int_win = !irqchip_in_kernel(vcpu->kvm) &&
> vcpu->run->request_interrupt_window;
> bool req_immediate_exit = 0;
> + ulong reqs;
>
> if (unlikely(req_int_win))
> kvm_make_request(KVM_REQ_EVENT, vcpu);
>
> if (vcpu->requests) {
> - if (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu))
> + reqs = xchg(&vcpu->requests, 0UL);
> +
> + if (test_bit(KVM_REQ_MMU_RELOAD, &reqs))
> kvm_mmu_unload(vcpu);
> - if (kvm_check_request(KVM_REQ_MIGRATE_TIMER, vcpu))
> + if (test_bit(KVM_REQ_MIGRATE_TIMER, &reqs))
> __kvm_migrate_timers(vcpu);
> - if (kvm_check_request(KVM_REQ_CLOCK_UPDATE, vcpu)) {
> + if (test_bit(KVM_REQ_CLOCK_UPDATE, &reqs)) {
> r = kvm_guest_time_update(vcpu);
> if (unlikely(r))
> goto out;
> }
Bailing out loses requests in "reqs".
Caching the requests makes the following type of sequence behave strangely:

    req = xchg(&vcpu->requests);
    if request is set
        request handler
            ...
            set REQ_EVENT
            ...

    prepare for guest entry
    vcpu->requests set
    bail
The code is more straightforward as it is.
* Re: [PATCH v2 2/3] KVM: Optimize vcpu->requests slow path slightly
2012-05-30 20:03 ` Marcelo Tosatti
@ 2012-05-31 9:27 ` Avi Kivity
2012-06-02 0:23 ` Marcelo Tosatti
0 siblings, 1 reply; 7+ messages in thread
From: Avi Kivity @ 2012-05-31 9:27 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: kvm
On 05/30/2012 11:03 PM, Marcelo Tosatti wrote:
> On Sun, May 20, 2012 at 04:49:27PM +0300, Avi Kivity wrote:
>> Instead of using an atomic operation per active request, use just one
>> to get all requests at once, then check them with local ops. This
>> probably isn't any faster, since simultaneous requests are rare, but
>> it does reduce code size.
>>
>> Signed-off-by: Avi Kivity <avi@redhat.com>
>> ---
>> arch/x86/kvm/x86.c | 33 ++++++++++++++++++---------------
>> 1 file changed, 18 insertions(+), 15 deletions(-)
>>
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index 953e692..c0209eb 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -5232,55 +5232,58 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>> bool req_int_win = !irqchip_in_kernel(vcpu->kvm) &&
>> vcpu->run->request_interrupt_window;
>> bool req_immediate_exit = 0;
>> + ulong reqs;
>>
>> if (unlikely(req_int_win))
>> kvm_make_request(KVM_REQ_EVENT, vcpu);
>>
>> if (vcpu->requests) {
>> - if (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu))
>> + reqs = xchg(&vcpu->requests, 0UL);
>> +
>> + if (test_bit(KVM_REQ_MMU_RELOAD, &reqs))
>> kvm_mmu_unload(vcpu);
>> - if (kvm_check_request(KVM_REQ_MIGRATE_TIMER, vcpu))
>> + if (test_bit(KVM_REQ_MIGRATE_TIMER, &reqs))
>> __kvm_migrate_timers(vcpu);
>> - if (kvm_check_request(KVM_REQ_CLOCK_UPDATE, vcpu)) {
>> + if (test_bit(KVM_REQ_CLOCK_UPDATE, &reqs)) {
>> r = kvm_guest_time_update(vcpu);
>> if (unlikely(r))
>> goto out;
>> }
>
> Bailing out loses requests in "reqs".
Whoops, good catch.
>
> Caching the requests makes the following type of sequence behave strangely
>
> req = xchg(&vcpu->requests);
> if request is set
> request handler
> ...
> set REQ_EVENT
> ...
>
> prepare for guest entry
> vcpu->requests set
> bail
I don't really mind that. But I do want to reduce the overhead of a
request; they're not that rare in normal workloads.
How about
for_each_set_bit(bit, &vcpu->requests, BITS_PER_LONG) {
        clear_bit(bit, &vcpu->requests);
        r = request_handlers[bit](vcpu);
        if (r)
                goto out;
}
? That makes for O(1) handling since usually we only have one request
set (KVM_REQ_EVENT). We'll make that the last one to avoid the scenario
above.
--
error compiling committee.c: too many arguments to function
* Re: [PATCH v2 2/3] KVM: Optimize vcpu->requests slow path slightly
2012-05-31 9:27 ` Avi Kivity
@ 2012-06-02 0:23 ` Marcelo Tosatti
0 siblings, 0 replies; 7+ messages in thread
From: Marcelo Tosatti @ 2012-06-02 0:23 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm
On Thu, May 31, 2012 at 12:27:09PM +0300, Avi Kivity wrote:
> >> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> >> index 953e692..c0209eb 100644
> >> --- a/arch/x86/kvm/x86.c
> >> +++ b/arch/x86/kvm/x86.c
> >> @@ -5232,55 +5232,58 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
> >> bool req_int_win = !irqchip_in_kernel(vcpu->kvm) &&
> >> vcpu->run->request_interrupt_window;
> >> bool req_immediate_exit = 0;
> >> + ulong reqs;
> >>
> >> if (unlikely(req_int_win))
> >> kvm_make_request(KVM_REQ_EVENT, vcpu);
> >>
> >> if (vcpu->requests) {
> >> - if (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu))
> >> + reqs = xchg(&vcpu->requests, 0UL);
> >> +
> >> + if (test_bit(KVM_REQ_MMU_RELOAD, &reqs))
> >> kvm_mmu_unload(vcpu);
> >> - if (kvm_check_request(KVM_REQ_MIGRATE_TIMER, vcpu))
> >> + if (test_bit(KVM_REQ_MIGRATE_TIMER, &reqs))
> >> __kvm_migrate_timers(vcpu);
> >> - if (kvm_check_request(KVM_REQ_CLOCK_UPDATE, vcpu)) {
> >> + if (test_bit(KVM_REQ_CLOCK_UPDATE, &reqs)) {
> >> r = kvm_guest_time_update(vcpu);
> >> if (unlikely(r))
> >> goto out;
> >> }
> >
> > Bailing out loses requests in "reqs".
>
>
> Whoops, good catch.
>
> >
> > Caching the requests makes the following type of sequence behave strangely
> >
> > req = xchg(&vcpu->requests);
> > if request is set
> > request handler
> > ...
> > set REQ_EVENT
> > ...
> >
> > prepare for guest entry
> > vcpu->requests set
> > bail
>
>
> I don't really mind that. But I do want to reduce the overhead of a
> request, they're not that rare in normal workloads.
>
> How about
>
>
> for_each_set_bit(bit, &vcpu->requests, BITS_PER_LONG) {
>         clear_bit(bit, &vcpu->requests);
>         r = request_handlers[bit](vcpu);
>         if (r)
>                 goto out;
> }
>
> ? That makes for O(1) handling since usually we only have one request
> set (KVM_REQ_EVENT). We'll make that the last one to avoid the scenario
> above.
That is better.