* [PATCH] kvm: arm/arm64: Simplify lock relaxation in stage2_wp_range
From: Suzuki K Poulose @ 2017-03-16 18:24 UTC
To: linux-arm-kernel
From: Marc Zyngier <marc.zyngier@arm.com>
Add an assertion to ensure that kvm->mmu_lock is held whenever
stage2_wp_range() is called. Also drop the explicit need_resched() and
spin_needbreak() tests before cond_resched_lock(), since
cond_resched_lock() already performs both checks internally.
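
For reference, a minimal sketch of why the open-coded test is pure
duplication. This is a simplified model of __cond_resched_lock() in
kernel/sched/core.c, not the exact implementation (details vary across
kernel versions), and cond_resched_lock_sketch() is a made-up name used
only for illustration:

	static int cond_resched_lock_sketch(spinlock_t *lock)
	{
		/* the real helper tests both conditions on its own */
		int resched = need_resched();

		if (spin_needbreak(lock) || resched) {
			spin_unlock(lock);	/* let contending CPUs take the lock */
			if (resched)
				cond_resched();	/* reschedule if we were asked to */
			else
				cpu_relax();
			spin_lock(lock);
			return 1;		/* we dropped and re-took the lock */
		}
		return 0;
	}

Likewise, on SMP builds assert_spin_locked() amounts to
BUG_ON(!spin_is_locked(lock)), so a caller that forgets to take
kvm->mmu_lock now fails loudly instead of racing silently.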
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[ Added assert_spin_locked check ]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
arch/arm/kvm/mmu.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 7628ef1..37e67f5 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -1162,6 +1162,8 @@ static void stage2_wp_range(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
 	pgd_t *pgd;
 	phys_addr_t next;
 
+	assert_spin_locked(&kvm->mmu_lock);
+
 	pgd = kvm->arch.pgd + stage2_pgd_index(addr);
 	do {
 		/*
@@ -1171,8 +1173,7 @@ static void stage2_wp_range(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
 		 * CONFIG_LOCKDEP. Additionally, holding the lock too long
 		 * will also starve other vCPUs.
 		 */
-		if (need_resched() || spin_needbreak(&kvm->mmu_lock))
-			cond_resched_lock(&kvm->mmu_lock);
+		cond_resched_lock(&kvm->mmu_lock);
 
 		next = stage2_pgd_addr_end(addr, end);
 		if (stage2_pgd_present(*pgd))
--
2.7.4
* [PATCH] kvm: arm/arm64: Simplify lock relaxation in stage2_wp_range
From: Christoffer Dall @ 2017-03-17  8:48 UTC
To: linux-arm-kernel
On Thu, Mar 16, 2017 at 06:24:34PM +0000, Suzuki K Poulose wrote:
> From: Marc Zyngier <marc.zyngier@arm.com>
>
> Add an assertion to ensure that kvm->mmu_lock is held whenever
> stage2_wp_range() is called. Also drop the explicit need_resched() and
> spin_needbreak() tests before cond_resched_lock(), since
> cond_resched_lock() already performs both checks internally.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> [ Added assert_spin_locked check ]
> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>