* [PATCH 1/2] KVM: Pass MMU notifier range flags to kvm_unmap_hva_range()
[not found] <20200811102725.7121-1-will@kernel.org>
@ 2020-08-11 10:27 ` Will Deacon
2020-08-19 23:57 ` Sasha Levin
2020-08-26 13:54 ` Sasha Levin
2020-08-11 10:27 ` [PATCH 2/2] KVM: arm64: Only reschedule if MMU_NOTIFIER_RANGE_BLOCKABLE is not set Will Deacon
1 sibling, 2 replies; 5+ messages in thread
From: Will Deacon @ 2020-08-11 10:27 UTC
To: kvmarm, linux-kernel, kvm
Cc: Will Deacon, Marc Zyngier, Suzuki K Poulose, James Morse,
Thomas Bogendoerfer, Paul Mackerras, Paolo Bonzini,
Sean Christopherson, stable
The 'flags' field of 'struct mmu_notifier_range' is used to indicate
whether invalidate_range_{start,end}() are permitted to block. In the
case of kvm_mmu_notifier_invalidate_range_start(), this field is not
forwarded on to the architecture-specific implementation of
kvm_unmap_hva_range() and therefore the backend cannot sensibly decide
whether or not to block.
Add an extra 'flags' parameter to kvm_unmap_hva_range() so that
architectures are aware of whether or not they are permitted to block.
Cc: <stable@vger.kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 2 +-
arch/arm64/kvm/mmu.c | 2 +-
arch/mips/include/asm/kvm_host.h | 2 +-
arch/mips/kvm/mmu.c | 3 ++-
arch/powerpc/include/asm/kvm_host.h | 3 ++-
arch/powerpc/kvm/book3s.c | 3 ++-
arch/powerpc/kvm/e500_mmu_host.c | 3 ++-
arch/x86/include/asm/kvm_host.h | 3 ++-
arch/x86/kvm/mmu/mmu.c | 3 ++-
virt/kvm/kvm_main.c | 3 ++-
10 files changed, 17 insertions(+), 10 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e21d4a01372f..759d62343e1d 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -443,7 +443,7 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
#define KVM_ARCH_WANT_MMU_NOTIFIER
int kvm_unmap_hva_range(struct kvm *kvm,
- unsigned long start, unsigned long end);
+ unsigned long start, unsigned long end, unsigned flags);
int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 31058e6e7c2a..5f6b35c33618 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -2203,7 +2203,7 @@ static int kvm_unmap_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *dat
}
int kvm_unmap_hva_range(struct kvm *kvm,
- unsigned long start, unsigned long end)
+ unsigned long start, unsigned long end, unsigned flags)
{
if (!kvm->arch.pgd)
return 0;
diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 363e7a89d173..ef1d25d49ec8 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -981,7 +981,7 @@ enum kvm_mips_fault_result kvm_trap_emul_gva_fault(struct kvm_vcpu *vcpu,
#define KVM_ARCH_WANT_MMU_NOTIFIER
int kvm_unmap_hva_range(struct kvm *kvm,
- unsigned long start, unsigned long end);
+ unsigned long start, unsigned long end, unsigned flags);
int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index 49bd160f4d85..0783ac9b3240 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -518,7 +518,8 @@ static int kvm_unmap_hva_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end,
return 1;
}
-int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end,
+ unsigned flags)
{
handle_hva_to_gpa(kvm, start, end, &kvm_unmap_hva_handler, NULL);
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 7e2d061d0445..bccf0ba2da2e 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -58,7 +58,8 @@
#define KVM_ARCH_WANT_MMU_NOTIFIER
extern int kvm_unmap_hva_range(struct kvm *kvm,
- unsigned long start, unsigned long end);
+ unsigned long start, unsigned long end,
+ unsigned flags);
extern int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
extern int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
extern int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 41fedec69ac3..49db50d1db04 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -834,7 +834,8 @@ void kvmppc_core_commit_memory_region(struct kvm *kvm,
kvm->arch.kvm_ops->commit_memory_region(kvm, mem, old, new, change);
}
-int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end,
+ unsigned flags)
{
return kvm->arch.kvm_ops->unmap_hva_range(kvm, start, end);
}
diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index d6c1069e9954..ed0c9c43d0cf 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -734,7 +734,8 @@ static int kvm_unmap_hva(struct kvm *kvm, unsigned long hva)
return 0;
}
-int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end,
+ unsigned flags)
{
/* kvm_unmap_hva flushes everything anyways */
kvm_unmap_hva(kvm, start);
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index be5363b21540..c6908a3d551e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1641,7 +1641,8 @@ asmlinkage void kvm_spurious_fault(void);
_ASM_EXTABLE(666b, 667b)
#define KVM_ARCH_WANT_MMU_NOTIFIER
-int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end);
+int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end,
+ unsigned flags);
int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6d6a0ae7800c..9516a958e780 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1971,7 +1971,8 @@ static int kvm_handle_hva(struct kvm *kvm, unsigned long hva,
return kvm_handle_hva_range(kvm, hva, hva + 1, data, handler);
}
-int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end,
+ unsigned flags)
{
return kvm_handle_hva_range(kvm, start, end, 0, kvm_unmap_rmapp);
}
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 0a68c9d3d3ab..9e925675a886 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -427,7 +427,8 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
* count is also read inside the mmu_lock critical section.
*/
kvm->mmu_notifier_count++;
- need_tlb_flush = kvm_unmap_hva_range(kvm, range->start, range->end);
+ need_tlb_flush = kvm_unmap_hva_range(kvm, range->start, range->end,
+ range->flags);
need_tlb_flush |= kvm->tlbs_dirty;
/* we've to flush the tlb before the pages can be freed */
if (need_tlb_flush)
--
2.28.0.236.gb10cc79966-goog
^ permalink raw reply related [flat|nested] 5+ messages in thread
* [PATCH 2/2] KVM: arm64: Only reschedule if MMU_NOTIFIER_RANGE_BLOCKABLE is not set
[not found] <20200811102725.7121-1-will@kernel.org>
2020-08-11 10:27 ` [PATCH 1/2] KVM: Pass MMU notifier range flags to kvm_unmap_hva_range() Will Deacon
@ 2020-08-11 10:27 ` Will Deacon
2020-08-11 10:52 ` Suzuki K Poulose
1 sibling, 1 reply; 5+ messages in thread
From: Will Deacon @ 2020-08-11 10:27 UTC
To: kvmarm, linux-kernel, kvm
Cc: Will Deacon, Marc Zyngier, Suzuki K Poulose, James Morse,
Thomas Bogendoerfer, Paul Mackerras, Paolo Bonzini,
Sean Christopherson, stable
When an MMU notifier call results in unmapping a range that spans multiple
PGDs, we end up calling into cond_resched_lock() when crossing a PGD boundary,
since this avoids running into RCU stalls during VM teardown. Unfortunately,
if the VM is destroyed as a result of OOM, then blocking is not permitted
and the call to the scheduler triggers the following BUG():
| BUG: sleeping function called from invalid context at arch/arm64/kvm/mmu.c:394
| in_atomic(): 1, irqs_disabled(): 0, non_block: 1, pid: 36, name: oom_reaper
| INFO: lockdep is turned off.
| CPU: 3 PID: 36 Comm: oom_reaper Not tainted 5.8.0 #1
| Hardware name: QEMU QEMU Virtual Machine, BIOS 0.0.0 02/06/2015
| Call trace:
| dump_backtrace+0x0/0x284
| show_stack+0x1c/0x28
| dump_stack+0xf0/0x1a4
| ___might_sleep+0x2bc/0x2cc
| unmap_stage2_range+0x160/0x1ac
| kvm_unmap_hva_range+0x1a0/0x1c8
| kvm_mmu_notifier_invalidate_range_start+0x8c/0xf8
| __mmu_notifier_invalidate_range_start+0x218/0x31c
| mmu_notifier_invalidate_range_start_nonblock+0x78/0xb0
| __oom_reap_task_mm+0x128/0x268
| oom_reap_task+0xac/0x298
| oom_reaper+0x178/0x17c
| kthread+0x1e4/0x1fc
| ret_from_fork+0x10/0x30
Use the new 'flags' argument to kvm_unmap_hva_range() to ensure that we
only reschedule if MMU_NOTIFIER_RANGE_BLOCKABLE is set in the notifier
flags.
Cc: <stable@vger.kernel.org>
Fixes: 8b3405e345b5 ("kvm: arm/arm64: Fix locking for kvm_free_stage2_pgd")
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
arch/arm64/kvm/mmu.c | 17 +++++++++++++----
1 file changed, 13 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 5f6b35c33618..bd47f06739d6 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -365,7 +365,8 @@ static void unmap_stage2_p4ds(struct kvm *kvm, pgd_t *pgd,
* destroying the VM), otherwise another faulting VCPU may come in and mess
* with things behind our backs.
*/
-static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
+static void __unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size,
+ bool may_block)
{
pgd_t *pgd;
phys_addr_t addr = start, end = start + size;
@@ -390,11 +391,16 @@ static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
* If the range is too large, release the kvm->mmu_lock
* to prevent starvation and lockup detector warnings.
*/
- if (next != end)
+ if (may_block && next != end)
cond_resched_lock(&kvm->mmu_lock);
} while (pgd++, addr = next, addr != end);
}
+static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
+{
+ __unmap_stage2_range(kvm, start, size, true);
+}
+
static void stage2_flush_ptes(struct kvm *kvm, pmd_t *pmd,
phys_addr_t addr, phys_addr_t end)
{
@@ -2198,7 +2204,10 @@ static int handle_hva_to_gpa(struct kvm *kvm,
static int kvm_unmap_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data)
{
- unmap_stage2_range(kvm, gpa, size);
+ unsigned flags = *(unsigned *)data;
+ bool may_block = flags & MMU_NOTIFIER_RANGE_BLOCKABLE;
+
+ __unmap_stage2_range(kvm, gpa, size, may_block);
return 0;
}
@@ -2209,7 +2218,7 @@ int kvm_unmap_hva_range(struct kvm *kvm,
return 0;
trace_kvm_unmap_hva_range(start, end);
- handle_hva_to_gpa(kvm, start, end, &kvm_unmap_hva_handler, NULL);
+ handle_hva_to_gpa(kvm, start, end, &kvm_unmap_hva_handler, &flags);
return 0;
}
--
2.28.0.236.gb10cc79966-goog
* Re: [PATCH 2/2] KVM: arm64: Only reschedule if MMU_NOTIFIER_RANGE_BLOCKABLE is not set
2020-08-11 10:27 ` [PATCH 2/2] KVM: arm64: Only reschedule if MMU_NOTIFIER_RANGE_BLOCKABLE is not set Will Deacon
@ 2020-08-11 10:52 ` Suzuki K Poulose
0 siblings, 0 replies; 5+ messages in thread
From: Suzuki K Poulose @ 2020-08-11 10:52 UTC
To: will, kvmarm, linux-kernel, kvm
Cc: maz, james.morse, tsbogend, paulus, pbonzini,
sean.j.christopherson, stable
On 08/11/2020 11:27 AM, Will Deacon wrote:
> When an MMU notifier call results in unmapping a range that spans multiple
> PGDs, we end up calling into cond_resched_lock() when crossing a PGD boundary,
> since this avoids running into RCU stalls during VM teardown. Unfortunately,
> if the VM is destroyed as a result of OOM, then blocking is not permitted
> and the call to the scheduler triggers the following BUG():
>
> | BUG: sleeping function called from invalid context at arch/arm64/kvm/mmu.c:394
> | in_atomic(): 1, irqs_disabled(): 0, non_block: 1, pid: 36, name: oom_reaper
> | INFO: lockdep is turned off.
> | CPU: 3 PID: 36 Comm: oom_reaper Not tainted 5.8.0 #1
> | Hardware name: QEMU QEMU Virtual Machine, BIOS 0.0.0 02/06/2015
> | Call trace:
> | dump_backtrace+0x0/0x284
> | show_stack+0x1c/0x28
> | dump_stack+0xf0/0x1a4
> | ___might_sleep+0x2bc/0x2cc
> | unmap_stage2_range+0x160/0x1ac
> | kvm_unmap_hva_range+0x1a0/0x1c8
> | kvm_mmu_notifier_invalidate_range_start+0x8c/0xf8
> | __mmu_notifier_invalidate_range_start+0x218/0x31c
> | mmu_notifier_invalidate_range_start_nonblock+0x78/0xb0
> | __oom_reap_task_mm+0x128/0x268
> | oom_reap_task+0xac/0x298
> | oom_reaper+0x178/0x17c
> | kthread+0x1e4/0x1fc
> | ret_from_fork+0x10/0x30
>
> Use the new 'flags' argument to kvm_unmap_hva_range() to ensure that we
> only reschedule if MMU_NOTIFIER_RANGE_BLOCKABLE is set in the notifier
> flags.
>
> Cc: <stable@vger.kernel.org>
> Fixes: 8b3405e345b5 ("kvm: arm/arm64: Fix locking for kvm_free_stage2_pgd")
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
> Cc: James Morse <james.morse@arm.com>
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
* Re: [PATCH 1/2] KVM: Pass MMU notifier range flags to kvm_unmap_hva_range()
2020-08-11 10:27 ` [PATCH 1/2] KVM: Pass MMU notifier range flags to kvm_unmap_hva_range() Will Deacon
@ 2020-08-19 23:57 ` Sasha Levin
2020-08-26 13:54 ` Sasha Levin
1 sibling, 0 replies; 5+ messages in thread
From: Sasha Levin @ 2020-08-19 23:57 UTC
To: Sasha Levin, Will Deacon, kvmarm, linux-kernel
Cc: Will Deacon, Marc Zyngier, stable, Marc Zyngier, Suzuki K Poulose,
James Morse, stable
Hi
[This is an automated email]
This commit has been processed because it contains a -stable tag.
The stable tag indicates that it's relevant for the following trees: all
The bot has tested the following trees: v5.8.1, v5.7.15, v5.4.58, v4.19.139, v4.14.193, v4.9.232, v4.4.232.
v5.8.1: Build OK!
v5.7.15: Build OK!
v5.4.58: Build OK!
v4.19.139: Failed to apply! Possible dependencies:
18fc7bf8e041 ("arm64: KVM: Allow for direct call of HYP functions when using VHE")
208243c752a7 ("KVM: arm64: Move hyp-init.S to nVHE")
25357de01b95 ("KVM: arm64: Clean up kvm makefiles")
33e45234987e ("arm64: initialize and switch ptrauth kernel keys")
396244692232 ("arm64: preempt: Provide our own implementation of asm/preempt.h")
3f58bf634555 ("KVM: arm/arm64: Share common code in user_mem_abort()")
6396b852e46e ("KVM: arm/arm64: Re-factor setting the Stage 2 entry to exec on fault")
748c0e312fce ("KVM: Make kvm_set_spte_hva() return int")
750319756256 ("arm64: add basic pointer authentication support")
7621712918ad ("KVM: arm64: Add build rules for separate VHE/nVHE object files")
7aa8d1464165 ("arm/arm64: KVM: Introduce kvm_call_hyp_ret()")
86d0dd34eaff ("arm64: cpufeature: add feature for CRC32 instructions")
90776dd1c427 ("arm64/efi: Move variable assignments after SECTIONS")
95b861a4a6d9 ("arm64: arch_timer: Add workaround for ARM erratum 1188873")
a0e50aa3f4a8 ("KVM: arm64: Factor out stage 2 page table data from struct kvm")
b877e9849d41 ("KVM: arm64: Build hyp-entry.S separately for VHE/nVHE")
bd4fb6d270bc ("arm64: Add support for SB barrier and patch in over DSB; ISB sequences")
be1298425665 ("arm64: install user ptrauth keys at kernel exit time")
d82755b2e781 ("KVM: arm64: Kill off CONFIG_KVM_ARM_HOST")
f50b6f6ae131 ("KVM: arm64: Handle calls to prefixed hyp functions")
f56063c51f9f ("arm64: add image head flag definitions")
f8df73388ee2 ("KVM: arm/arm64: Introduce helpers to manipulate page table entries")
v4.14.193: Failed to apply! Possible dependencies:
0db9dd8a0fbd ("KVM: arm/arm64: Stop using the kernel's {pmd,pud,pgd}_populate helpers")
17ab9d57deba ("KVM: arm/arm64: Drop vcpu parameter from guest cache maintenance operartions")
3f58bf634555 ("KVM: arm/arm64: Share common code in user_mem_abort()")
6396b852e46e ("KVM: arm/arm64: Re-factor setting the Stage 2 entry to exec on fault")
694556d54f35 ("KVM: arm/arm64: Clean dcache to PoC when changing PTE due to CoW")
748c0e312fce ("KVM: Make kvm_set_spte_hva() return int")
88dc25e8ea7c ("KVM: arm/arm64: Consolidate page-table accessors")
91c703e0382a ("arm: KVM: Add optimized PIPT icache flushing")
a15f693935a9 ("KVM: arm/arm64: Split dcache/icache flushing")
a9c0e12ebee5 ("KVM: arm/arm64: Only clean the dcache on translation fault")
d0e22b4ac3ba ("KVM: arm/arm64: Limit icache invalidation to prefetch aborts")
f8df73388ee2 ("KVM: arm/arm64: Introduce helpers to manipulate page table entries")
v4.9.232: Failed to apply! Possible dependencies:
1534b3964901 ("KVM: MIPS/MMU: Simplify ASID restoration")
1581ff3dbf69 ("KVM: MIPS/MMU: Move preempt/ASID handling to implementation")
1880afd6057f ("KVM: MIPS/T&E: Add lockless GVA access helpers")
411740f5422a ("KVM: MIPS/MMU: Implement KVM_CAP_SYNC_MMU")
748c0e312fce ("KVM: Make kvm_set_spte_hva() return int")
91cdee5710d5 ("KVM: MIPS/T&E: Restore host asid on return to host")
a2c046e40ff1 ("KVM: MIPS: Add vcpu_run() & vcpu_reenter() callbacks")
a31b50d741bd ("KVM: MIPS/MMU: Invalidate GVA PTs on ASID changes")
a60b8438bdba ("KVM: MIPS: Convert get/set_regs -> vcpu_load/put")
a7ebb2e410f8 ("KVM: MIPS/T&E: active_mm = init_mm in guest context")
aba8592950f1 ("KVM: MIPS/MMU: Invalidate stale GVA PTEs on TLBW")
c550d53934d8 ("KVM: MIPS: Remove duplicated ASIDs from vcpu")
v4.4.232: Failed to apply! Possible dependencies:
16d100db245a ("MIPS: Move Cause.ExcCode trap codes to mipsregs.h")
1880afd6057f ("KVM: MIPS/T&E: Add lockless GVA access helpers")
19d194c62b25 ("MIPS: KVM: Simplify TLB_* macros")
411740f5422a ("KVM: MIPS/MMU: Implement KVM_CAP_SYNC_MMU")
748c0e312fce ("KVM: Make kvm_set_spte_hva() return int")
8cffd1974851 ("MIPS: KVM: Convert code to kernel sized types")
9fbfb06a4065 ("MIPS: KVM: Arrayify struct kvm_mips_tlb::tlb_lo*")
ba049e93aef7 ("kvm: rename pfn_t to kvm_pfn_t")
bdb7ed8608f8 ("MIPS: KVM: Convert headers to kernel sized types")
ca64c2beecd4 ("MIPS: KVM: Abstract guest ASID mask")
caa1faa7aba6 ("MIPS: KVM: Trivial whitespace and style fixes")
NOTE: The patch will not be queued to stable trees until it is upstream.
How should we proceed with this patch?
--
Thanks
Sasha
* Re: [PATCH 1/2] KVM: Pass MMU notifier range flags to kvm_unmap_hva_range()
2020-08-11 10:27 ` [PATCH 1/2] KVM: Pass MMU notifier range flags to kvm_unmap_hva_range() Will Deacon
2020-08-19 23:57 ` Sasha Levin
@ 2020-08-26 13:54 ` Sasha Levin
1 sibling, 0 replies; 5+ messages in thread
From: Sasha Levin @ 2020-08-26 13:54 UTC
To: Sasha Levin, Will Deacon, kvmarm, linux-kernel
Cc: Will Deacon, Marc Zyngier, stable, Marc Zyngier, Suzuki K Poulose,
James Morse, stable
Hi
[This is an automated email]
This commit has been processed because it contains a -stable tag.
The stable tag indicates that it's relevant for the following trees: all
The bot has tested the following trees: v5.8.2, v5.7.16, v5.4.59, v4.19.140, v4.14.193, v4.9.232, v4.4.232.
v5.8.2: Build OK!
v5.7.16: Build OK!
v5.4.59: Build OK!
v4.19.140: Failed to apply! Possible dependencies:
18fc7bf8e041 ("arm64: KVM: Allow for direct call of HYP functions when using VHE")
208243c752a7 ("KVM: arm64: Move hyp-init.S to nVHE")
25357de01b95 ("KVM: arm64: Clean up kvm makefiles")
33e45234987e ("arm64: initialize and switch ptrauth kernel keys")
396244692232 ("arm64: preempt: Provide our own implementation of asm/preempt.h")
3f58bf634555 ("KVM: arm/arm64: Share common code in user_mem_abort()")
6396b852e46e ("KVM: arm/arm64: Re-factor setting the Stage 2 entry to exec on fault")
748c0e312fce ("KVM: Make kvm_set_spte_hva() return int")
750319756256 ("arm64: add basic pointer authentication support")
7621712918ad ("KVM: arm64: Add build rules for separate VHE/nVHE object files")
7aa8d1464165 ("arm/arm64: KVM: Introduce kvm_call_hyp_ret()")
86d0dd34eaff ("arm64: cpufeature: add feature for CRC32 instructions")
90776dd1c427 ("arm64/efi: Move variable assignments after SECTIONS")
95b861a4a6d9 ("arm64: arch_timer: Add workaround for ARM erratum 1188873")
a0e50aa3f4a8 ("KVM: arm64: Factor out stage 2 page table data from struct kvm")
b877e9849d41 ("KVM: arm64: Build hyp-entry.S separately for VHE/nVHE")
bd4fb6d270bc ("arm64: Add support for SB barrier and patch in over DSB; ISB sequences")
be1298425665 ("arm64: install user ptrauth keys at kernel exit time")
d82755b2e781 ("KVM: arm64: Kill off CONFIG_KVM_ARM_HOST")
f50b6f6ae131 ("KVM: arm64: Handle calls to prefixed hyp functions")
f56063c51f9f ("arm64: add image head flag definitions")
f8df73388ee2 ("KVM: arm/arm64: Introduce helpers to manipulate page table entries")
v4.14.193: Failed to apply! Possible dependencies:
0db9dd8a0fbd ("KVM: arm/arm64: Stop using the kernel's {pmd,pud,pgd}_populate helpers")
17ab9d57deba ("KVM: arm/arm64: Drop vcpu parameter from guest cache maintenance operartions")
3f58bf634555 ("KVM: arm/arm64: Share common code in user_mem_abort()")
6396b852e46e ("KVM: arm/arm64: Re-factor setting the Stage 2 entry to exec on fault")
694556d54f35 ("KVM: arm/arm64: Clean dcache to PoC when changing PTE due to CoW")
748c0e312fce ("KVM: Make kvm_set_spte_hva() return int")
88dc25e8ea7c ("KVM: arm/arm64: Consolidate page-table accessors")
91c703e0382a ("arm: KVM: Add optimized PIPT icache flushing")
a15f693935a9 ("KVM: arm/arm64: Split dcache/icache flushing")
a9c0e12ebee5 ("KVM: arm/arm64: Only clean the dcache on translation fault")
d0e22b4ac3ba ("KVM: arm/arm64: Limit icache invalidation to prefetch aborts")
f8df73388ee2 ("KVM: arm/arm64: Introduce helpers to manipulate page table entries")
v4.9.232: Failed to apply! Possible dependencies:
1534b3964901 ("KVM: MIPS/MMU: Simplify ASID restoration")
1581ff3dbf69 ("KVM: MIPS/MMU: Move preempt/ASID handling to implementation")
1880afd6057f ("KVM: MIPS/T&E: Add lockless GVA access helpers")
411740f5422a ("KVM: MIPS/MMU: Implement KVM_CAP_SYNC_MMU")
748c0e312fce ("KVM: Make kvm_set_spte_hva() return int")
91cdee5710d5 ("KVM: MIPS/T&E: Restore host asid on return to host")
a2c046e40ff1 ("KVM: MIPS: Add vcpu_run() & vcpu_reenter() callbacks")
a31b50d741bd ("KVM: MIPS/MMU: Invalidate GVA PTs on ASID changes")
a60b8438bdba ("KVM: MIPS: Convert get/set_regs -> vcpu_load/put")
a7ebb2e410f8 ("KVM: MIPS/T&E: active_mm = init_mm in guest context")
aba8592950f1 ("KVM: MIPS/MMU: Invalidate stale GVA PTEs on TLBW")
c550d53934d8 ("KVM: MIPS: Remove duplicated ASIDs from vcpu")
v4.4.232: Failed to apply! Possible dependencies:
16d100db245a ("MIPS: Move Cause.ExcCode trap codes to mipsregs.h")
1880afd6057f ("KVM: MIPS/T&E: Add lockless GVA access helpers")
19d194c62b25 ("MIPS: KVM: Simplify TLB_* macros")
411740f5422a ("KVM: MIPS/MMU: Implement KVM_CAP_SYNC_MMU")
748c0e312fce ("KVM: Make kvm_set_spte_hva() return int")
8cffd1974851 ("MIPS: KVM: Convert code to kernel sized types")
9fbfb06a4065 ("MIPS: KVM: Arrayify struct kvm_mips_tlb::tlb_lo*")
ba049e93aef7 ("kvm: rename pfn_t to kvm_pfn_t")
bdb7ed8608f8 ("MIPS: KVM: Convert headers to kernel sized types")
ca64c2beecd4 ("MIPS: KVM: Abstract guest ASID mask")
caa1faa7aba6 ("MIPS: KVM: Trivial whitespace and style fixes")
NOTE: The patch will not be queued to stable trees until it is upstream.
How should we proceed with this patch?
--
Thanks
Sasha