linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH 0/3] Fix cache flushing when mapping at stage-2
@ 2017-01-25 15:36 Marc Zyngier
  2017-01-25 15:36 ` [PATCH 1/3] arm/arm64: KVM: Enforce unconditional flush to PoC when mapping to stage-2 Marc Zyngier
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Marc Zyngier @ 2017-01-25 15:36 UTC (permalink / raw)
  To: linux-arm-kernel

When we fault in a page, we flush it to the PoC (Point of Coherency)
if the faulting vcpu has its own caches off, so that it can observe
the page we just brought in.

But if the vcpu has its caches on, we skip that step. Bad things
happen when *another* vcpu tries to access that page with its own
caches disabled. At that point, there is no guarantee that the data has
made it to the PoC, and we access stale data.
    
The obvious fix is to always flush to PoC when a page is faulted in,
no matter what the state of the vcpu is.

This leads to additional cleanups, removing the code that was forcing
read-only memslots to be flushed to PoC as well (since the flushing is
now unconditional). Only the first patch is critical, and deserves a
Cc to stable.

Marc Zyngier (3):
  arm/arm64: KVM: Enforce unconditional flush to PoC when mapping to
    stage-2
  arm/arm64: KVM: Stop propagating cacheability status of a faulted page
  arm/arm64: KVM: Get rid of KVM_MEMSLOT_INCOHERENT

 arch/arm/include/asm/kvm_mmu.h   | 12 ++----------
 arch/arm/kvm/mmu.c               | 20 ++++----------------
 arch/arm64/include/asm/kvm_mmu.h |  6 ++----
 include/linux/kvm_host.h         |  1 -
 4 files changed, 8 insertions(+), 31 deletions(-)

-- 
2.1.4


* [PATCH 1/3] arm/arm64: KVM: Enforce unconditional flush to PoC when mapping to stage-2
  2017-01-25 15:36 [PATCH 0/3] Fix cache flushing when mapping at stage-2 Marc Zyngier
@ 2017-01-25 15:36 ` Marc Zyngier
  2017-01-26 13:19   ` Christoffer Dall
  2017-01-25 15:36 ` [PATCH 2/3] arm/arm64: KVM: Stop propagating cacheability status of a faulted page Marc Zyngier
  2017-01-25 15:36 ` [PATCH 3/3] arm/arm64: KVM: Get rid of KVM_MEMSLOT_INCOHERENT Marc Zyngier
  2 siblings, 1 reply; 7+ messages in thread
From: Marc Zyngier @ 2017-01-25 15:36 UTC (permalink / raw)
  To: linux-arm-kernel

When we fault in a page, we flush it to the PoC (Point of Coherency)
if the faulting vcpu has its own caches off, so that it can observe
the page we just brought in.

But if the vcpu has its caches on, we skip that step. Bad things
happen when *another* vcpu tries to access that page with its own
caches disabled. At that point, there is no guarantee that the
data has made it to the PoC, and we access stale data.

The obvious fix is to always flush to PoC when a page is faulted
in, no matter what the state of the vcpu is.

Cc: stable@vger.kernel.org
Fixes: 2d58b733c876 ("arm64: KVM: force cache clean on page fault when caches are off")
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/include/asm/kvm_mmu.h   | 9 +--------
 arch/arm64/include/asm/kvm_mmu.h | 3 +--
 2 files changed, 2 insertions(+), 10 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 74a44727..a58bbaa 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -150,18 +150,12 @@ static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu,
 	 * and iterate over the range.
 	 */
 
-	bool need_flush = !vcpu_has_cache_enabled(vcpu) || ipa_uncached;
-
 	VM_BUG_ON(size & ~PAGE_MASK);
 
-	if (!need_flush && !icache_is_pipt())
-		goto vipt_cache;
-
 	while (size) {
 		void *va = kmap_atomic_pfn(pfn);
 
-		if (need_flush)
-			kvm_flush_dcache_to_poc(va, PAGE_SIZE);
+		kvm_flush_dcache_to_poc(va, PAGE_SIZE);
 
 		if (icache_is_pipt())
 			__cpuc_coherent_user_range((unsigned long)va,
@@ -173,7 +167,6 @@ static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu,
 		kunmap_atomic(va);
 	}
 
-vipt_cache:
 	if (!icache_is_pipt() && !icache_is_vivt_asid_tagged()) {
 		/* any kind of VIPT cache */
 		__flush_icache_all();
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 6f72fe8..6d22017 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -241,8 +241,7 @@ static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu,
 {
 	void *va = page_address(pfn_to_page(pfn));
 
-	if (!vcpu_has_cache_enabled(vcpu) || ipa_uncached)
-		kvm_flush_dcache_to_poc(va, size);
+	kvm_flush_dcache_to_poc(va, size);
 
 	if (!icache_is_aliasing()) {		/* PIPT */
 		flush_icache_range((unsigned long)va,
-- 
2.1.4


* [PATCH 2/3] arm/arm64: KVM: Stop propagating cacheability status of a faulted page
  2017-01-25 15:36 [PATCH 0/3] Fix cache flushing when mapping at stage-2 Marc Zyngier
  2017-01-25 15:36 ` [PATCH 1/3] arm/arm64: KVM: Enforce unconditional flush to PoC when mapping to stage-2 Marc Zyngier
@ 2017-01-25 15:36 ` Marc Zyngier
  2017-01-26 13:22   ` Christoffer Dall
  2017-01-25 15:36 ` [PATCH 3/3] arm/arm64: KVM: Get rid of KVM_MEMSLOT_INCOHERENT Marc Zyngier
  2 siblings, 1 reply; 7+ messages in thread
From: Marc Zyngier @ 2017-01-25 15:36 UTC (permalink / raw)
  To: linux-arm-kernel

Now that we unconditionally flush newly mapped pages to the PoC,
there is no need to care about the "uncached" status of individual
pages - they must all be visible all the way down.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/include/asm/kvm_mmu.h   |  3 +--
 arch/arm/kvm/mmu.c               | 11 ++++-------
 arch/arm64/include/asm/kvm_mmu.h |  3 +--
 3 files changed, 6 insertions(+), 11 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index a58bbaa..95f38dc 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -129,8 +129,7 @@ static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
 
 static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu,
 					       kvm_pfn_t pfn,
-					       unsigned long size,
-					       bool ipa_uncached)
+					       unsigned long size)
 {
 	/*
 	 * If we are going to insert an instruction page and the icache is
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index a5265ed..5cc35080 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -1232,9 +1232,9 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 }
 
 static void coherent_cache_guest_page(struct kvm_vcpu *vcpu, kvm_pfn_t pfn,
-				      unsigned long size, bool uncached)
+				      unsigned long size)
 {
-	__coherent_cache_guest_page(vcpu, pfn, size, uncached);
+	__coherent_cache_guest_page(vcpu, pfn, size);
 }
 
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
@@ -1250,7 +1250,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	struct vm_area_struct *vma;
 	kvm_pfn_t pfn;
 	pgprot_t mem_type = PAGE_S2;
-	bool fault_ipa_uncached;
 	bool logging_active = memslot_is_logging(memslot);
 	unsigned long flags = 0;
 
@@ -1337,8 +1336,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (!hugetlb && !force_pte)
 		hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa);
 
-	fault_ipa_uncached = memslot->flags & KVM_MEMSLOT_INCOHERENT;
-
 	if (hugetlb) {
 		pmd_t new_pmd = pfn_pmd(pfn, mem_type);
 		new_pmd = pmd_mkhuge(new_pmd);
@@ -1346,7 +1343,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			new_pmd = kvm_s2pmd_mkwrite(new_pmd);
 			kvm_set_pfn_dirty(pfn);
 		}
-		coherent_cache_guest_page(vcpu, pfn, PMD_SIZE, fault_ipa_uncached);
+		coherent_cache_guest_page(vcpu, pfn, PMD_SIZE);
 		ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
 	} else {
 		pte_t new_pte = pfn_pte(pfn, mem_type);
@@ -1356,7 +1353,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			kvm_set_pfn_dirty(pfn);
 			mark_page_dirty(kvm, gfn);
 		}
-		coherent_cache_guest_page(vcpu, pfn, PAGE_SIZE, fault_ipa_uncached);
+		coherent_cache_guest_page(vcpu, pfn, PAGE_SIZE);
 		ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte, flags);
 	}
 
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 6d22017..aa1e6db 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -236,8 +236,7 @@ static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
 
 static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu,
 					       kvm_pfn_t pfn,
-					       unsigned long size,
-					       bool ipa_uncached)
+					       unsigned long size)
 {
 	void *va = page_address(pfn_to_page(pfn));
 
-- 
2.1.4


* [PATCH 3/3] arm/arm64: KVM: Get rid of KVM_MEMSLOT_INCOHERENT
  2017-01-25 15:36 [PATCH 0/3] Fix cache flushing when mapping at stage-2 Marc Zyngier
  2017-01-25 15:36 ` [PATCH 1/3] arm/arm64: KVM: Enforce unconditional flush to PoC when mapping to stage-2 Marc Zyngier
  2017-01-25 15:36 ` [PATCH 2/3] arm/arm64: KVM: Stop propagating cacheability status of a faulted page Marc Zyngier
@ 2017-01-25 15:36 ` Marc Zyngier
  2017-01-26 13:22   ` Christoffer Dall
  2 siblings, 1 reply; 7+ messages in thread
From: Marc Zyngier @ 2017-01-25 15:36 UTC (permalink / raw)
  To: linux-arm-kernel

KVM_MEMSLOT_INCOHERENT is not used anymore, as we've killed its
only use in the arm/arm64 MMU code. Let's remove the last artifacts.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/kvm/mmu.c       | 9 ---------
 include/linux/kvm_host.h | 1 -
 2 files changed, 10 deletions(-)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 5cc35080..962616f 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -1876,15 +1876,6 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
 int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
 			    unsigned long npages)
 {
-	/*
-	 * Readonly memslots are not incoherent with the caches by definition,
-	 * but in practice, they are used mostly to emulate ROMs or NOR flashes
-	 * that the guest may consider devices and hence map as uncached.
-	 * To prevent incoherency issues in these cases, tag all readonly
-	 * regions as incoherent.
-	 */
-	if (slot->flags & KVM_MEM_READONLY)
-		slot->flags |= KVM_MEMSLOT_INCOHERENT;
 	return 0;
 }
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 1c5190d..cda457b 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -45,7 +45,6 @@
  * include/linux/kvm_h.
  */
 #define KVM_MEMSLOT_INVALID	(1UL << 16)
-#define KVM_MEMSLOT_INCOHERENT	(1UL << 17)
 
 /* Two fragments for cross MMIO pages. */
 #define KVM_MAX_MMIO_FRAGMENTS	2
-- 
2.1.4


* [PATCH 1/3] arm/arm64: KVM: Enforce unconditional flush to PoC when mapping to stage-2
  2017-01-25 15:36 ` [PATCH 1/3] arm/arm64: KVM: Enforce unconditional flush to PoC when mapping to stage-2 Marc Zyngier
@ 2017-01-26 13:19   ` Christoffer Dall
  0 siblings, 0 replies; 7+ messages in thread
From: Christoffer Dall @ 2017-01-26 13:19 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jan 25, 2017 at 03:36:29PM +0000, Marc Zyngier wrote:
> When we fault in a page, we flush it to the PoC (Point of Coherency)
> if the faulting vcpu has its own caches off, so that it can observe
> the page we just brought in.
> 
> But if the vcpu has its caches on, we skip that step. Bad things
> happen when *another* vcpu tries to access that page with its own
> caches disabled. At that point, there is no guarantee that the
> data has made it to the PoC, and we access stale data.
> 
> The obvious fix is to always flush to PoC when a page is faulted
> in, no matter what the state of the vcpu is.
> 
> Cc: stable@vger.kernel.org
> Fixes: 2d58b733c876 ("arm64: KVM: force cache clean on page fault when caches are off")
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>

> ---
>  arch/arm/include/asm/kvm_mmu.h   | 9 +--------
>  arch/arm64/include/asm/kvm_mmu.h | 3 +--
>  2 files changed, 2 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
> index 74a44727..a58bbaa 100644
> --- a/arch/arm/include/asm/kvm_mmu.h
> +++ b/arch/arm/include/asm/kvm_mmu.h
> @@ -150,18 +150,12 @@ static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu,
>  	 * and iterate over the range.
>  	 */
>  
> -	bool need_flush = !vcpu_has_cache_enabled(vcpu) || ipa_uncached;
> -
>  	VM_BUG_ON(size & ~PAGE_MASK);
>  
> -	if (!need_flush && !icache_is_pipt())
> -		goto vipt_cache;
> -
>  	while (size) {
>  		void *va = kmap_atomic_pfn(pfn);
>  
> -		if (need_flush)
> -			kvm_flush_dcache_to_poc(va, PAGE_SIZE);
> +		kvm_flush_dcache_to_poc(va, PAGE_SIZE);
>  
>  		if (icache_is_pipt())
>  			__cpuc_coherent_user_range((unsigned long)va,
> @@ -173,7 +167,6 @@ static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu,
>  		kunmap_atomic(va);
>  	}
>  
> -vipt_cache:
>  	if (!icache_is_pipt() && !icache_is_vivt_asid_tagged()) {
>  		/* any kind of VIPT cache */
>  		__flush_icache_all();
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 6f72fe8..6d22017 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -241,8 +241,7 @@ static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu,
>  {
>  	void *va = page_address(pfn_to_page(pfn));
>  
> -	if (!vcpu_has_cache_enabled(vcpu) || ipa_uncached)
> -		kvm_flush_dcache_to_poc(va, size);
> +	kvm_flush_dcache_to_poc(va, size);
>  
>  	if (!icache_is_aliasing()) {		/* PIPT */
>  		flush_icache_range((unsigned long)va,
> -- 
> 2.1.4
> 


* [PATCH 2/3] arm/arm64: KVM: Stop propagating cacheability status of a faulted page
  2017-01-25 15:36 ` [PATCH 2/3] arm/arm64: KVM: Stop propagating cacheability status of a faulted page Marc Zyngier
@ 2017-01-26 13:22   ` Christoffer Dall
  0 siblings, 0 replies; 7+ messages in thread
From: Christoffer Dall @ 2017-01-26 13:22 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jan 25, 2017 at 03:36:30PM +0000, Marc Zyngier wrote:
> Now that we unconditionally flush newly mapped pages to the PoC,
> there is no need to care about the "uncached" status of individual
> pages - they must all be visible all the way down.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>


* [PATCH 3/3] arm/arm64: KVM: Get rid of KVM_MEMSLOT_INCOHERENT
  2017-01-25 15:36 ` [PATCH 3/3] arm/arm64: KVM: Get rid of KVM_MEMSLOT_INCOHERENT Marc Zyngier
@ 2017-01-26 13:22   ` Christoffer Dall
  0 siblings, 0 replies; 7+ messages in thread
From: Christoffer Dall @ 2017-01-26 13:22 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jan 25, 2017 at 03:36:31PM +0000, Marc Zyngier wrote:
> KVM_MEMSLOT_INCOHERENT is not used anymore, as we've killed its
> only use in the arm/arm64 MMU code. Let's remove the last artifacts.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>

