From: cdall@linaro.org (Christoffer Dall)
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH 3/3] kvm: arm/arm64: Fix locking for kvm_free_stage2_pgd
Date: Wed, 15 Mar 2017 11:56:39 +0100 [thread overview]
Message-ID: <20170315105639.GA31974@cbox> (raw)
In-Reply-To: <314fbde3-17e6-414b-85e6-326de22bdc1c@arm.com>
On Wed, Mar 15, 2017 at 09:39:26AM +0000, Marc Zyngier wrote:
> On 15/03/17 09:21, Christoffer Dall wrote:
> > On Tue, Mar 14, 2017 at 02:52:34PM +0000, Suzuki K Poulose wrote:
> >> In kvm_free_stage2_pgd() we don't hold the kvm->mmu_lock while calling
> >> unmap_stage2_range() on the entire memory range for the guest. This could
> >> cause problems with other callers (e.g, munmap on a memslot) trying to
> >> unmap a range.
> >>
> >> Fixes: commit d5d8184d35c9 ("KVM: ARM: Memory virtualization setup")
> >> Cc: stable@vger.kernel.org # v3.10+
> >> Cc: Marc Zyngier <marc.zyngier@arm.com>
> >> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> >> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> >> ---
> >> arch/arm/kvm/mmu.c | 3 +++
> >> 1 file changed, 3 insertions(+)
> >>
> >> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> >> index 13b9c1f..b361f71 100644
> >> --- a/arch/arm/kvm/mmu.c
> >> +++ b/arch/arm/kvm/mmu.c
> >> @@ -831,7 +831,10 @@ void kvm_free_stage2_pgd(struct kvm *kvm)
> >>  	if (kvm->arch.pgd == NULL)
> >>  		return;
> >>  
> >> +	spin_lock(&kvm->mmu_lock);
> >>  	unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE);
> >> +	spin_unlock(&kvm->mmu_lock);
> >> +
> >
> > This ends up holding the spin lock for potentially quite a while, where
> > we can do things like __flush_dcache_area(), which I think can fault.
>
> I believe we're always using the linear mapping (or kmap on 32bit) in
> order not to fault.
>
OK, then there's just the concern that we may be holding the spinlock for
a very long time. I seem to recall Mario once adding something that
dropped the lock at each PUD boundary to give something else a chance to
schedule, or something like that, because he ran into this issue during
migration. Am I confusing this with something else?
Thanks,
-Christoffer
Thread overview: 19+ messages
2017-03-14 14:52 [PATCH 0/3] kvm: arm/arm64: Fixes for use after free problems Suzuki K Poulose
2017-03-14 14:52 ` [PATCH 1/3] kvm: arm/arm64: Take mmap_sem in stage2_unmap_vm Suzuki K Poulose
2017-03-15 9:17 ` Christoffer Dall
2017-03-15 9:34 ` Marc Zyngier
2017-03-15 11:05 ` Christoffer Dall
2017-03-15 13:29 ` Paolo Bonzini
2017-03-14 14:52 ` [PATCH 2/3] kvm: arm/arm64: Take mmap_sem in kvm_arch_prepare_memory_region Suzuki K Poulose
2017-03-15 11:05 ` Christoffer Dall
2017-03-14 14:52 ` [PATCH 3/3] kvm: arm/arm64: Fix locking for kvm_free_stage2_pgd Suzuki K Poulose
2017-03-15 9:21 ` Christoffer Dall
2017-03-15 9:39 ` Marc Zyngier
2017-03-15 10:56 ` Christoffer Dall [this message]
2017-03-15 13:28 ` Marc Zyngier
2017-03-15 13:35 ` Christoffer Dall
2017-03-15 13:43 ` Marc Zyngier
2017-03-15 13:50 ` Robin Murphy
2017-03-15 13:55 ` Marc Zyngier
2017-03-15 14:33 ` Suzuki K Poulose
2017-03-15 15:07 ` Marc Zyngier