* [PATCH] kvm/x86: skip async_pf when in guest mode
@ 2016-11-24 16:30 Roman Kagan
2016-11-24 20:49 ` Radim Krčmář
0 siblings, 1 reply; 6+ messages in thread
From: Roman Kagan @ 2016-11-24 16:30 UTC (permalink / raw)
To: Radim Krčmář, Paolo Bonzini, kvm; +Cc: Denis Lunev, Roman Kagan
Async pagefault machinery assumes communication with L1 guests only: all
the state -- MSRs, apf area addresses, etc, -- are for L1. However, it
currently doesn't check if the vCPU is running L1 or L2, and may inject
async_pf events into L2, which knows nothing about the tokens they carry.
To reproduce the problem, use a host with swap enabled, run a VM on it,
run a nested VM on top, and set RSS limit for L1 on the host via
/sys/fs/cgroup/memory/machine.slice/machine-*.scope/memory.limit_in_bytes
to swap it out (you may need to tighten and release it once or twice, or
create some memory load inside L1). Very quickly L2 guest starts
receiving pagefaults with bogus %cr2 (apf tokens from the host
actually), and L1 guest starts accumulating tasks stuck in D state in
kvm_async_pf_task_wait.
To avoid that, only do async_pf stuff when executing L1 guest.
Note: this patch only fixes x86; other async_pf-capable arches may also
need something similar.
Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
---
arch/x86/kvm/mmu.c | 2 +-
arch/x86/kvm/x86.c | 3 ++-
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d9c7e98..cdafc61 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3510,7 +3510,7 @@ static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
if (!async)
return false; /* *pfn has correct page already */
- if (!prefault && can_do_async_pf(vcpu)) {
+ if (!prefault && !is_guest_mode(vcpu) && can_do_async_pf(vcpu)) {
trace_kvm_try_async_get_page(gva, gfn);
if (kvm_find_async_pf_gfn(vcpu, gfn)) {
trace_kvm_async_pf_doublefault(gva, gfn);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 04c5d96..bf11fe4 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6864,7 +6864,8 @@ static int vcpu_run(struct kvm_vcpu *vcpu)
break;
}
- kvm_check_async_pf_completion(vcpu);
+ if (!is_guest_mode(vcpu))
+ kvm_check_async_pf_completion(vcpu);
if (signal_pending(current)) {
r = -EINTR;
--
2.9.3
^ permalink raw reply related [flat|nested] 6+ messages in thread
* Re: [PATCH] kvm/x86: skip async_pf when in guest mode
2016-11-24 16:30 [PATCH] kvm/x86: skip async_pf when in guest mode Roman Kagan
@ 2016-11-24 20:49 ` Radim Krčmář
2016-11-25 7:15 ` Roman Kagan
0 siblings, 1 reply; 6+ messages in thread
From: Radim Krčmář @ 2016-11-24 20:49 UTC (permalink / raw)
To: Roman Kagan; +Cc: Paolo Bonzini, kvm, Denis Lunev
2016-11-24 19:30+0300, Roman Kagan:
> Async pagefault machinery assumes communication with L1 guests only: all
> the state -- MSRs, apf area addresses, etc, -- are for L1. However, it
> currently doesn't check if the vCPU is running L1 or L2, and may inject
> async_pf events into L2, which knows nothing about the tokens they carry.
>
> To reproduce the problem, use a host with swap enabled, run a VM on it,
> run a nested VM on top, and set RSS limit for L1 on the host via
> /sys/fs/cgroup/memory/machine.slice/machine-*.scope/memory.limit_in_bytes
> to swap it out (you may need to tighten and release it once or twice, or
> create some memory load inside L1). Very quickly L2 guest starts
> receiving pagefaults with bogus %cr2 (apf tokens from the host
> actually), and L1 guest starts accumulating tasks stuck in D state in
> kvm_async_pf_task_wait.
>
> To avoid that, only do async_pf stuff when executing L1 guest.
>
> Note: this patch only fixes x86; other async_pf-capable arches may also
> need something similar.
>
> Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
> ---
Applied to kvm/queue, thanks.
The VM task in L1 could be scheduled out instead of hogging the VCPU for
a long time, so L1 might want to handle async_pf, especially if L1 set
KVM_ASYNC_PF_SEND_ALWAYS. Another case happens if L1 scheduled out a
high-priority task on async_pf and executed the low-priority VM task in
spare time, expecting another #PF when the page is ready, which might be
long before the next nested VM exit.
Have you considered doing a nested VM exit and delivering the async_pf
to L1 immediately?
* Re: [PATCH] kvm/x86: skip async_pf when in guest mode
2016-11-24 20:49 ` Radim Krčmář
@ 2016-11-25 7:15 ` Roman Kagan
2016-11-25 8:42 ` Roman Kagan
0 siblings, 1 reply; 6+ messages in thread
From: Roman Kagan @ 2016-11-25 7:15 UTC (permalink / raw)
To: Radim Krčmář; +Cc: Paolo Bonzini, kvm, Denis Lunev
On Thu, Nov 24, 2016 at 09:49:59PM +0100, Radim Krčmář wrote:
> 2016-11-24 19:30+0300, Roman Kagan:
> > Async pagefault machinery assumes communication with L1 guests only: all
> > the state -- MSRs, apf area addresses, etc, -- are for L1. However, it
> > currently doesn't check if the vCPU is running L1 or L2, and may inject
> > async_pf events into L2, which knows nothing about the tokens they carry.
> >
> > To reproduce the problem, use a host with swap enabled, run a VM on it,
> > run a nested VM on top, and set RSS limit for L1 on the host via
> > /sys/fs/cgroup/memory/machine.slice/machine-*.scope/memory.limit_in_bytes
> > to swap it out (you may need to tighten and release it once or twice, or
> > create some memory load inside L1). Very quickly L2 guest starts
> > receiving pagefaults with bogus %cr2 (apf tokens from the host
> > actually), and L1 guest starts accumulating tasks stuck in D state in
> > kvm_async_pf_task_wait.
> >
> > To avoid that, only do async_pf stuff when executing L1 guest.
> >
> > Note: this patch only fixes x86; other async_pf-capable arches may also
> > need something similar.
> >
> > Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
> > ---
>
> Applied to kvm/queue, thanks.
>
> The VM task in L1 could be scheduled out instead of hogging the VCPU for
> a long time, so L1 might want to handle async_pf, especially if L1 set
> KVM_ASYNC_PF_SEND_ALWAYS. Another case happens if L1 scheduled out a
> high-priority task on async_pf and executed the low-priority VM task in
> spare time, expecting another #PF when the page is ready, which might be
> long before the next nested VM exit.
>
> Have you considered doing a nested VM exit and delivering the async_pf
> to L1 immediately?
I haven't, but it seems to make sense indeed for "page ready" async_pfs.
I'll have a look into it.
Thanks,
Roman.
* Re: [PATCH] kvm/x86: skip async_pf when in guest mode
2016-11-25 7:15 ` Roman Kagan
@ 2016-11-25 8:42 ` Roman Kagan
2016-11-25 8:51 ` Paolo Bonzini
0 siblings, 1 reply; 6+ messages in thread
From: Roman Kagan @ 2016-11-25 8:42 UTC (permalink / raw)
To: Radim Krčmář; +Cc: Paolo Bonzini, kvm, Denis Lunev
On Fri, Nov 25, 2016 at 10:15:21AM +0300, Roman Kagan wrote:
> On Thu, Nov 24, 2016 at 09:49:59PM +0100, Radim Krčmář wrote:
> > 2016-11-24 19:30+0300, Roman Kagan:
> > > Async pagefault machinery assumes communication with L1 guests only: all
> > > the state -- MSRs, apf area addresses, etc, -- are for L1. However, it
> > > currently doesn't check if the vCPU is running L1 or L2, and may inject
> > > async_pf events into L2, which knows nothing about the tokens they carry.
> > >
> > > To reproduce the problem, use a host with swap enabled, run a VM on it,
> > > run a nested VM on top, and set RSS limit for L1 on the host via
> > > /sys/fs/cgroup/memory/machine.slice/machine-*.scope/memory.limit_in_bytes
> > > to swap it out (you may need to tighten and release it once or twice, or
> > > create some memory load inside L1). Very quickly L2 guest starts
> > > receiving pagefaults with bogus %cr2 (apf tokens from the host
> > > actually), and L1 guest starts accumulating tasks stuck in D state in
> > > kvm_async_pf_task_wait.
> > >
> > > To avoid that, only do async_pf stuff when executing L1 guest.
> > >
> > > Note: this patch only fixes x86; other async_pf-capable arches may also
> > > need something similar.
> > >
> > > Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
> > > ---
> >
> > Applied to kvm/queue, thanks.
> >
> > The VM task in L1 could be scheduled out instead of hogging the VCPU for
> > a long time, so L1 might want to handle async_pf, especially if L1 set
> > KVM_ASYNC_PF_SEND_ALWAYS. Another case happens if L1 scheduled out a
> > high-priority task on async_pf and executed the low-priority VM task in
> > spare time, expecting another #PF when the page is ready, which might be
> > long before the next nested VM exit.
> >
> > Have you considered doing a nested VM exit and delivering the async_pf
> > to L1 immediately?
>
> I haven't, but it seems to make sense indeed for "page ready" async_pfs.
>
> I'll have a look into it.
What's the correct way to kick L2 to L1 from the host? I failed to find
one from a brief skimming through the code. We need a sensible exit
reason delivered to L1 (probably "external interrupt" will do) but I
don't see a method to do so without actually injecting an interrupt into
L1 which is not unlikely to confuse it. Any suggestion?
Thanks,
Roman.
* Re: [PATCH] kvm/x86: skip async_pf when in guest mode
2016-11-25 8:42 ` Roman Kagan
@ 2016-11-25 8:51 ` Paolo Bonzini
2016-11-25 11:17 ` Roman Kagan
0 siblings, 1 reply; 6+ messages in thread
From: Paolo Bonzini @ 2016-11-25 8:51 UTC (permalink / raw)
To: Roman Kagan; +Cc: Radim Krčmář, kvm, Denis Lunev
> What's the correct way to kick L2 to L1 from the host? I failed to find
> one from a brief skimming through the code. We need a sensible exit
> reason delivered to L1 (probably "external interrupt" will do) but I
> don't see a method to do so without actually injecting an interrupt into
> L1 which is not unlikely to confuse it. Any suggestion?
Perhaps check for async page faults in nested_vmx_check_exception and
nested_svm_intercept, before testing the exception bitmap?
Paolo
* Re: [PATCH] kvm/x86: skip async_pf when in guest mode
2016-11-25 8:51 ` Paolo Bonzini
@ 2016-11-25 11:17 ` Roman Kagan
0 siblings, 0 replies; 6+ messages in thread
From: Roman Kagan @ 2016-11-25 11:17 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: Radim Krčmář, kvm, Denis Lunev
On Fri, Nov 25, 2016 at 03:51:24AM -0500, Paolo Bonzini wrote:
>
> > What's the correct way to kick L2 to L1 from the host? I failed to find
> > one from a brief skimming through the code. We need a sensible exit
> > reason delivered to L1 (probably "external interrupt" will do) but I
> > don't see a method to do so without actually injecting an interrupt into
> > L1 which is not unlikely to confuse it. Any suggestion?
>
> Perhaps check for async page faults in nested_vmx_check_exception and
> nested_svm_intercept, before testing the exception bitmap?
Yeah I also thought about this, but hoped there's an arch-agnostic
way... Will try this approach, then.
Thanks,
Roman.
end of thread (newest message: ~2016-11-25 11:48 UTC)
Thread overview: 6+ messages
-- links below jump to the message on this page --
2016-11-24 16:30 [PATCH] kvm/x86: skip async_pf when in guest mode Roman Kagan
2016-11-24 20:49 ` Radim Krčmář
2016-11-25 7:15 ` Roman Kagan
2016-11-25 8:42 ` Roman Kagan
2016-11-25 8:51 ` Paolo Bonzini
2016-11-25 11:17 ` Roman Kagan