* [PATCH 0/6] ARM/ARM64: Few patches on kvm and mm code
@ 2013-11-14 19:37 Santosh Shilimkar
2013-11-14 19:37 ` [PATCH 1/6] arm64: mm: Add __virt_to_idmap() to keep kvm build happy Santosh Shilimkar
` (5 more replies)
0 siblings, 6 replies; 41+ messages in thread
From: Santosh Shilimkar @ 2013-11-14 19:37 UTC (permalink / raw)
To: linux-arm-kernel
While playing with KVM on Keystone, I came across a few issues which need
to be addressed. I have discussed most of these with Marc and Christoffer
off-list, but we all thought it best to get this patch set on the list for
wider review.
I am not sure about the virt_addr_valid() update in particular and would
like to get a discussion going on how to proceed with it. The last patch
in the series is a hack and is included only to gather some tips.
Special thanks to Christoffer and Marc for their support in figuring
out these issues.
Generated against the tip of Linus's tree (4fbf888), which includes
__virt_to_idmap() and the related patch set.
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Santosh Shilimkar (6):
arm64: mm: Add __virt_to_idmap() to keep kvm build happy
ARM: kvm: Use virt_to_idmap instead of virt_to_phys for idmap
mappings
ARM: mm: Drop the lowmem watermark check from virt_addr_valid()
arm64: mm: Drop the lowmem watermark check from virt_addr_valid()
ARM: kvm: Use phys_addr_t instead of unsigned long in mm code
ARM: kvm: TMP: Commit the hyp page tables to main memory
arch/arm/include/asm/memory.h | 3 +--
arch/arm/kvm/mmu.c | 12 ++++++------
arch/arm64/include/asm/memory.h | 11 +++++++++--
3 files changed, 16 insertions(+), 10 deletions(-)
--
1.7.9.5
* [PATCH 1/6] arm64: mm: Add __virt_to_idmap() to keep kvm build happy
2013-11-14 19:37 [PATCH 0/6] ARM/ARM64: Few patches on kvm and mm code Santosh Shilimkar
@ 2013-11-14 19:37 ` Santosh Shilimkar
2013-11-15 13:34 ` Catalin Marinas
2013-11-15 15:05 ` Marc Zyngier
2013-11-14 19:37 ` [PATCH 2/6] ARM: kvm: Use virt_to_idmap instead of virt_to_phys for idmap mappings Santosh Shilimkar
` (4 subsequent siblings)
5 siblings, 2 replies; 41+ messages in thread
From: Santosh Shilimkar @ 2013-11-14 19:37 UTC (permalink / raw)
To: linux-arm-kernel
ARM KVM code will make use of __virt_to_idmap() on arm32 machines where
the hardware interconnect provides an alias of physical memory for idmap
purposes. The same code is shared with arm64 and would otherwise break
the build, so add __virt_to_idmap(), which is just __virt_to_phys() on
arm64, to keep the build happy.
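For reference, the arm32 definition this mirrors looks roughly like the
following (a sketch of the existing 32-bit code, not part of this patch):

extern unsigned long (*arch_virt_to_idmap)(unsigned long x);

/*
 * Platforms whose interconnect provides an alias of physical memory
 * override arch_virt_to_idmap; everyone else just gets the plain
 * physical address.
 */
static inline unsigned long __virt_to_idmap(unsigned long x)
{
	if (arch_virt_to_idmap)
		return arch_virt_to_idmap(x);
	else
		return __virt_to_phys(x);
}

#define virt_to_idmap(x)	__virt_to_idmap((unsigned long)(x))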
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
---
arch/arm64/include/asm/memory.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 3776217..d9341ee 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -75,6 +75,14 @@
#define __phys_to_virt(x) ((unsigned long)((x) - PHYS_OFFSET + PAGE_OFFSET))
/*
+ * Added to keep arm64 kvm build working which shares code with
+ * 32bit port. ARM kvm code makes use of __virt_to_idmap() on arm32
+ * machines as hardware interconnect supported alias of physical
+ * memory for idmap purposes.
+ */
+#define virt_to_idmap(x) __virt_to_phys(x)
+
+/*
* Convert a physical address to a Page Frame Number and back
*/
#define __phys_to_pfn(paddr) ((unsigned long)((paddr) >> PAGE_SHIFT))
--
1.7.9.5
* [PATCH 2/6] ARM: kvm: Use virt_to_idmap instead of virt_to_phys for idmap mappings
2013-11-14 19:37 [PATCH 0/6] ARM/ARM64: Few patches on kvm and mm code Santosh Shilimkar
2013-11-14 19:37 ` [PATCH 1/6] arm64: mm: Add __virt_to_idmap() to keep kvm build happy Santosh Shilimkar
@ 2013-11-14 19:37 ` Santosh Shilimkar
2013-11-15 0:12 ` Christoffer Dall
2013-11-14 19:37 ` [PATCH 3/6] ARM: mm: Drop the lowmem watermark check from virt_addr_valid() Santosh Shilimkar
` (3 subsequent siblings)
5 siblings, 1 reply; 41+ messages in thread
From: Santosh Shilimkar @ 2013-11-14 19:37 UTC (permalink / raw)
To: linux-arm-kernel
KVM initialisation fails on architectures implementing virt_to_idmap()
because virt_to_phys() on such architectures won't give you the correct
idmap address. So update the KVM ARM code to use virt_to_idmap() to fix
the issue.
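For background, a platform that needs this overrides the arm32
arch_virt_to_idmap hook. A Keystone-style override would look roughly
like the sketch below (illustrative only; the symbol names and the
offset macro are examples, not part of this series):

/*
 * Lowmem lives above 4GB, but the interconnect also aliases it below
 * 4GB for use with the MMU off, which is what the idmap needs.
 */
static unsigned long keystone_virt_to_idmap(unsigned long x)
{
	return (phys_addr_t)(x) - PAGE_OFFSET + KEYSTONE_LOW_PHYS_START;
}

/* in the machine init code */
arch_virt_to_idmap = keystone_virt_to_idmap;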
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
---
arch/arm/kvm/mmu.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index b0de86b..3bd652f 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -747,9 +747,9 @@ int kvm_mmu_init(void)
{
int err;
- hyp_idmap_start = virt_to_phys(__hyp_idmap_text_start);
- hyp_idmap_end = virt_to_phys(__hyp_idmap_text_end);
- hyp_idmap_vector = virt_to_phys(__kvm_hyp_init);
+ hyp_idmap_start = virt_to_idmap(__hyp_idmap_text_start);
+ hyp_idmap_end = virt_to_idmap(__hyp_idmap_text_end);
+ hyp_idmap_vector = virt_to_idmap(__kvm_hyp_init);
if ((hyp_idmap_start ^ hyp_idmap_end) & PAGE_MASK) {
/*
@@ -775,8 +775,7 @@ int kvm_mmu_init(void)
* caches off at that point.
*/
kvm_flush_dcache_to_poc(init_bounce_page, len);
-
- phys_base = virt_to_phys(init_bounce_page);
+ phys_base = virt_to_idmap(init_bounce_page);
hyp_idmap_vector += phys_base - hyp_idmap_start;
hyp_idmap_start = phys_base;
hyp_idmap_end = phys_base + len;
--
1.7.9.5
* [PATCH 3/6] ARM: mm: Drop the lowmem watermark check from virt_addr_valid()
2013-11-14 19:37 [PATCH 0/6] ARM/ARM64: Few patches on kvm and mm code Santosh Shilimkar
2013-11-14 19:37 ` [PATCH 1/6] arm64: mm: Add __virt_to_idmap() to keep kvm build happy Santosh Shilimkar
2013-11-14 19:37 ` [PATCH 2/6] ARM: kvm: Use virt_to_idmap instead of virt_to_phys for idmap mappings Santosh Shilimkar
@ 2013-11-14 19:37 ` Santosh Shilimkar
2013-11-15 0:22 ` Christoffer Dall
2013-11-15 14:20 ` Russell King - ARM Linux
2013-11-14 19:37 ` [PATCH 4/6] arm64: " Santosh Shilimkar
` (2 subsequent siblings)
5 siblings, 2 replies; 41+ messages in thread
From: Santosh Shilimkar @ 2013-11-14 19:37 UTC (permalink / raw)
To: linux-arm-kernel
The slab allocator can allocate memory beyond the lowmem watermark,
which can lead to false failures of virt_addr_valid().
So drop the check. The issue was seen with percpu_alloc()
in KVM code, which was allocating memory beyond the lowmem watermark.
I am not completely sure whether this is the right fix and whether it
could impact other users of virt_addr_valid(). Without this fix, as
pointed out, KVM init was failing in my testing.
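For context, the check that trips is in create_hyp_mappings(), which
looks roughly like this (a sketch of the current code; percpu_alloc()
hands back addresses above high_memory, so the second half of
virt_addr_valid() fails):

int create_hyp_mappings(void *from, void *to)
{
	unsigned long phys_addr = virt_to_phys(from);
	unsigned long start = KERN_TO_HYP((unsigned long)from);
	unsigned long end = KERN_TO_HYP((unsigned long)to);

	/* Fails for dynamic percpu addresses: they sit above high_memory */
	if (!virt_addr_valid(from) || !virt_addr_valid(to - 1))
		return -EINVAL;

	return __create_hyp_mappings(hyp_pgd, start, end,
				     __phys_to_pfn(phys_addr), PAGE_HYP);
}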
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
---
arch/arm/include/asm/memory.h | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index 4dd2145..412da47 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -343,8 +343,7 @@ static inline __deprecated void *bus_to_virt(unsigned long x)
#define ARCH_PFN_OFFSET PHYS_PFN_OFFSET
#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
-#define virt_addr_valid(kaddr) ((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory)
-
+#define virt_addr_valid(kaddr) ((unsigned long)(kaddr) >= PAGE_OFFSET)
#endif
#include <asm-generic/memory_model.h>
--
1.7.9.5
* [PATCH 4/6] arm64: mm: Drop the lowmem watermark check from virt_addr_valid()
2013-11-14 19:37 [PATCH 0/6] ARM/ARM64: Few patches on kvm and mm code Santosh Shilimkar
` (2 preceding siblings ...)
2013-11-14 19:37 ` [PATCH 3/6] ARM: mm: Drop the lowmem watermark check from virt_addr_valid() Santosh Shilimkar
@ 2013-11-14 19:37 ` Santosh Shilimkar
2013-11-15 13:39 ` Catalin Marinas
2013-11-14 19:37 ` [PATCH 5/6] ARM: kvm: Use phys_addr_t instead of unsigned long in mm code Santosh Shilimkar
2013-11-14 19:37 ` [PATCH 6/6] ARM: kvm: TMP: Commit the hyp page tables to main memory Santosh Shilimkar
5 siblings, 1 reply; 41+ messages in thread
From: Santosh Shilimkar @ 2013-11-14 19:37 UTC (permalink / raw)
To: linux-arm-kernel
The slab allocator can allocate memory beyond the lowmem watermark,
which can lead to false failures of virt_addr_valid().
So drop the check. The issue was seen with percpu_alloc()
in KVM code, which was allocating memory beyond the lowmem watermark.
I am not completely sure whether this is the right fix and whether it
could impact other users of virt_addr_valid(). Without this fix, as
pointed out, KVM init was failing in my testing.
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
---
arch/arm64/include/asm/memory.h | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index d9341ee..d04d047 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -154,8 +154,7 @@ static inline void *phys_to_virt(phys_addr_t x)
#define ARCH_PFN_OFFSET PHYS_PFN_OFFSET
#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
-#define virt_addr_valid(kaddr) (((void *)(kaddr) >= (void *)PAGE_OFFSET) && \
- ((void *)(kaddr) < (void *)high_memory))
+#define virt_addr_valid(kaddr) ((void *)(kaddr) >= (void *)PAGE_OFFSET)
#endif
--
1.7.9.5
* [PATCH 5/6] ARM: kvm: Use phys_addr_t instead of unsigned long in mm code
2013-11-14 19:37 [PATCH 0/6] ARM/ARM64: Few patches on kvm and mm code Santosh Shilimkar
` (3 preceding siblings ...)
2013-11-14 19:37 ` [PATCH 4/6] arm64: " Santosh Shilimkar
@ 2013-11-14 19:37 ` Santosh Shilimkar
2013-11-15 0:09 ` Christoffer Dall
2013-11-15 11:58 ` Marc Zyngier
2013-11-14 19:37 ` [PATCH 6/6] ARM: kvm: TMP: Commit the hyp page tables to main memory Santosh Shilimkar
5 siblings, 2 replies; 41+ messages in thread
From: Santosh Shilimkar @ 2013-11-14 19:37 UTC (permalink / raw)
To: linux-arm-kernel
The unsigned long datatype is not sufficient for mapping physical
addresses greater than 4GB.
So fix the KVM mm code accordingly. Special thanks to Christoffer for
debugging help to figure out the bug.
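To illustrate the failure mode (a minimal sketch, assuming
CONFIG_ARM_LPAE so that phys_addr_t is 64-bit while unsigned long stays
32-bit; the address is only an example):

	phys_addr_t pa = 0x800001000ULL;	/* e.g. lowmem on Keystone 2 */
	unsigned long truncated = pa;		/* silently becomes 0x00001000 */

	/* __phys_to_pfn(truncated) now points at the wrong page entirely */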
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
---
arch/arm/kvm/mmu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 3bd652f..657f15e 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -318,7 +318,7 @@ out:
*/
int create_hyp_mappings(void *from, void *to)
{
- unsigned long phys_addr = virt_to_phys(from);
+ phys_addr_t phys_addr = virt_to_phys(from);
unsigned long start = KERN_TO_HYP((unsigned long)from);
unsigned long end = KERN_TO_HYP((unsigned long)to);
--
1.7.9.5
* [PATCH 6/6] ARM: kvm: TMP: Commit the hyp page tables to main memory
2013-11-14 19:37 [PATCH 0/6] ARM/ARM64: Few patches on kvm and mm code Santosh Shilimkar
` (4 preceding siblings ...)
2013-11-14 19:37 ` [PATCH 5/6] ARM: kvm: Use phys_addr_t instead of unsigned long in mm code Santosh Shilimkar
@ 2013-11-14 19:37 ` Santosh Shilimkar
2013-11-14 22:36 ` Santosh Shilimkar
2013-11-15 0:11 ` Christoffer Dall
5 siblings, 2 replies; 41+ messages in thread
From: Santosh Shilimkar @ 2013-11-14 19:37 UTC (permalink / raw)
To: linux-arm-kernel
This is a temporary hack which I have to use to avoid a weird crash
while starting the guest OS on Keystone. There are random crashes while
the guest OS userspace starts. An additional data point is that it is
seen only with the first guest OS launch; subsequent guest OS starts are
normal.
I still don't know why this is needed, but it helps to get around the
issue, hence the patch is included in the series for discussion.
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
---
arch/arm/kvm/mmu.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 657f15e..5f6f460 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -826,6 +826,7 @@ int kvm_mmu_init(void)
goto out;
}
+ flush_cache_all();
return 0;
out:
free_hyp_pgds();
--
1.7.9.5
* [PATCH 6/6] ARM: kvm: TMP: Commit the hyp page tables to main memory
2013-11-14 19:37 ` [PATCH 6/6] ARM: kvm: TMP: Commit the hyp page tables to main memory Santosh Shilimkar
@ 2013-11-14 22:36 ` Santosh Shilimkar
2013-11-15 0:11 ` Christoffer Dall
1 sibling, 0 replies; 41+ messages in thread
From: Santosh Shilimkar @ 2013-11-14 22:36 UTC (permalink / raw)
To: linux-arm-kernel
On Thursday 14 November 2013 02:37 PM, Santosh Shilimkar wrote:
> This is a temporary hack which I have to use to avoid a weired crash while
> starting the guest OS on Keystsone. They are random crashesh while the
> guest os userspace starts. Additional data point is, it seen only with first
> guest OS lanch. Subsequest guest OS starts normal.
>
> I still don't know why this is needed but it helps to get around the issue
> and hence including the patch in the series for the discussion
>
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Russell King <linux@arm.linux.org.uk>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
>
> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
> ---
Ignore this patch. I thought it helped, but with more testing I found
that the issue still comes up even with the below change.
> arch/arm/kvm/mmu.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> index 657f15e..5f6f460 100644
> --- a/arch/arm/kvm/mmu.c
> +++ b/arch/arm/kvm/mmu.c
> @@ -826,6 +826,7 @@ int kvm_mmu_init(void)
> goto out;
> }
>
> + flush_cache_all();
> return 0;
> out:
> free_hyp_pgds();
>
* [PATCH 5/6] ARM: kvm: Use phys_addr_t instead of unsigned long in mm code
2013-11-14 19:37 ` [PATCH 5/6] ARM: kvm: Use phys_addr_t instead of unsigned long in mm code Santosh Shilimkar
@ 2013-11-15 0:09 ` Christoffer Dall
2013-11-15 0:10 ` Santosh Shilimkar
2013-11-15 11:58 ` Marc Zyngier
1 sibling, 1 reply; 41+ messages in thread
From: Christoffer Dall @ 2013-11-15 0:09 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Nov 14, 2013 at 02:37:45PM -0500, Santosh Shilimkar wrote:
> The unsigned long datatype is not sufficient for mapping physical addresses
> greater than 4GB.
>
> So fix the KVM mm code accordingly. Special thanks to Christopher for debug
> help to figure out the bug.
>
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
>
> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
> ---
> arch/arm/kvm/mmu.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> index 3bd652f..657f15e 100644
> --- a/arch/arm/kvm/mmu.c
> +++ b/arch/arm/kvm/mmu.c
> @@ -318,7 +318,7 @@ out:
> */
> int create_hyp_mappings(void *from, void *to)
> {
> - unsigned long phys_addr = virt_to_phys(from);
> + phys_addr_t phys_addr = virt_to_phys(from);
> unsigned long start = KERN_TO_HYP((unsigned long)from);
> unsigned long end = KERN_TO_HYP((unsigned long)to);
>
> --
> 1.7.9.5
>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
I can just apply this to kvm-arm-next immediately, or we can wait and
pull it in with your series if you prefer. Let me know.
-Christoffer
* [PATCH 5/6] ARM: kvm: Use phys_addr_t instead of unsigned long in mm code
2013-11-15 0:09 ` Christoffer Dall
@ 2013-11-15 0:10 ` Santosh Shilimkar
0 siblings, 0 replies; 41+ messages in thread
From: Santosh Shilimkar @ 2013-11-15 0:10 UTC (permalink / raw)
To: linux-arm-kernel
On Thursday 14 November 2013 07:09 PM, Christoffer Dall wrote:
> On Thu, Nov 14, 2013 at 02:37:45PM -0500, Santosh Shilimkar wrote:
>> The unsigned long datatype is not sufficient for mapping physical addresses
>> greater than 4GB.
>>
>> So fix the KVM mm code accordingly. Special thanks to Christopher for debug
>> help to figure out the bug.
>>
>> Cc: Christoffer Dall <christoffer.dall@linaro.org>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>>
>> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
>> ---
>> arch/arm/kvm/mmu.c | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
>> index 3bd652f..657f15e 100644
>> --- a/arch/arm/kvm/mmu.c
>> +++ b/arch/arm/kvm/mmu.c
>> @@ -318,7 +318,7 @@ out:
>> */
>> int create_hyp_mappings(void *from, void *to)
>> {
>> - unsigned long phys_addr = virt_to_phys(from);
>> + phys_addr_t phys_addr = virt_to_phys(from);
>> unsigned long start = KERN_TO_HYP((unsigned long)from);
>> unsigned long end = KERN_TO_HYP((unsigned long)to);
>>
>> --
>> 1.7.9.5
>>
>
> Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
>
Thanks
> I can just apply this to kvm-arm-next immediately, or we can wait and
> pull it in with your series if you prefer. Let me know.
>
Feel free to pick this up, since this one can go in as a bug
fix as well.
Regards,
Santosh
* [PATCH 6/6] ARM: kvm: TMP: Commit the hyp page tables to main memory
2013-11-14 19:37 ` [PATCH 6/6] ARM: kvm: TMP: Commit the hyp page tables to main memory Santosh Shilimkar
2013-11-14 22:36 ` Santosh Shilimkar
@ 2013-11-15 0:11 ` Christoffer Dall
2013-11-15 0:15 ` Santosh Shilimkar
1 sibling, 1 reply; 41+ messages in thread
From: Christoffer Dall @ 2013-11-15 0:11 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Nov 14, 2013 at 02:37:46PM -0500, Santosh Shilimkar wrote:
> This is a temporary hack which I have to use to avoid a weired crash while
> starting the guest OS on Keystsone. They are random crashesh while the
> guest os userspace starts. Additional data point is, it seen only with first
> guest OS lanch. Subsequest guest OS starts normal.
>
what crashes? The guest? Where, how?
> I still don't know why this is needed but it helps to get around the issue
> and hence including the patch in the series for the discussion
It may not be needed but is just hiding the issue. I'm afraid you're
going to have to dig a little more into this.
-Christoffer
>
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Russell King <linux@arm.linux.org.uk>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
>
> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
> ---
> arch/arm/kvm/mmu.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> index 657f15e..5f6f460 100644
> --- a/arch/arm/kvm/mmu.c
> +++ b/arch/arm/kvm/mmu.c
> @@ -826,6 +826,7 @@ int kvm_mmu_init(void)
> goto out;
> }
>
> + flush_cache_all();
> return 0;
> out:
> free_hyp_pgds();
> --
> 1.7.9.5
>
* [PATCH 2/6] ARM: kvm: Use virt_to_idmap instead of virt_to_phys for idmap mappings
2013-11-14 19:37 ` [PATCH 2/6] ARM: kvm: Use virt_to_idmap instead of virt_to_phys for idmap mappings Santosh Shilimkar
@ 2013-11-15 0:12 ` Christoffer Dall
0 siblings, 0 replies; 41+ messages in thread
From: Christoffer Dall @ 2013-11-15 0:12 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Nov 14, 2013 at 02:37:42PM -0500, Santosh Shilimkar wrote:
> KVM initialisation fails on architectures implementing virt_to_idmap() because
> virt_to_phys() on such architectures won't fetch you the correct idmap page.
>
> So update the KVM ARM code to use the virt_to_idmap() to fix the issue.
>
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
>
> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
> ---
> arch/arm/kvm/mmu.c | 9 ++++-----
> 1 file changed, 4 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> index b0de86b..3bd652f 100644
> --- a/arch/arm/kvm/mmu.c
> +++ b/arch/arm/kvm/mmu.c
> @@ -747,9 +747,9 @@ int kvm_mmu_init(void)
> {
> int err;
>
> - hyp_idmap_start = virt_to_phys(__hyp_idmap_text_start);
> - hyp_idmap_end = virt_to_phys(__hyp_idmap_text_end);
> - hyp_idmap_vector = virt_to_phys(__kvm_hyp_init);
> + hyp_idmap_start = virt_to_idmap(__hyp_idmap_text_start);
> + hyp_idmap_end = virt_to_idmap(__hyp_idmap_text_end);
> + hyp_idmap_vector = virt_to_idmap(__kvm_hyp_init);
>
> if ((hyp_idmap_start ^ hyp_idmap_end) & PAGE_MASK) {
> /*
> @@ -775,8 +775,7 @@ int kvm_mmu_init(void)
> * caches off at that point.
> */
> kvm_flush_dcache_to_poc(init_bounce_page, len);
> -
> - phys_base = virt_to_phys(init_bounce_page);
> + phys_base = virt_to_idmap(init_bounce_page);
> hyp_idmap_vector += phys_base - hyp_idmap_start;
> hyp_idmap_start = phys_base;
> hyp_idmap_end = phys_base + len;
> --
> 1.7.9.5
>
Looks good to me.
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
* [PATCH 6/6] ARM: kvm: TMP: Commit the hyp page tables to main memory
2013-11-15 0:11 ` Christoffer Dall
@ 2013-11-15 0:15 ` Santosh Shilimkar
2013-11-15 0:27 ` Christoffer Dall
0 siblings, 1 reply; 41+ messages in thread
From: Santosh Shilimkar @ 2013-11-15 0:15 UTC (permalink / raw)
To: linux-arm-kernel
On Thursday 14 November 2013 07:11 PM, Christoffer Dall wrote:
> On Thu, Nov 14, 2013 at 02:37:46PM -0500, Santosh Shilimkar wrote:
>> This is a temporary hack which I have to use to avoid a weired crash while
>> starting the guest OS on Keystsone. They are random crashesh while the
>> guest os userspace starts. Additional data point is, it seen only with first
>> guest OS lanch. Subsequest guest OS starts normal.
>>
>
> what crashes? The guest? Where, how?
>
When guest userspace starts. The crashes are random but always after
the guest init process has started.
>
>> I still don't know why this is needed but it helps to get around the issue
>> and hence including the patch in the series for the discussion
>
> It may not be needed but is just hiding the issue. I'm afraid you're
> going to have to dig a little more into this.
>
I already replied on this. With further testing, I found the issue
even with this patch applied. I need to dig a bit more, as you said.
Regards,
Santosh
* [PATCH 3/6] ARM: mm: Drop the lowmem watermark check from virt_addr_valid()
2013-11-14 19:37 ` [PATCH 3/6] ARM: mm: Drop the lowmem watermark check from virt_addr_valid() Santosh Shilimkar
@ 2013-11-15 0:22 ` Christoffer Dall
2013-11-15 0:31 ` Santosh Shilimkar
2013-11-15 11:43 ` Marc Zyngier
2013-11-15 14:20 ` Russell King - ARM Linux
1 sibling, 2 replies; 41+ messages in thread
From: Christoffer Dall @ 2013-11-15 0:22 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Nov 14, 2013 at 02:37:43PM -0500, Santosh Shilimkar wrote:
> Slab allocator can allocate memory beyond the lowmem watermark
> which can lead to false failure of virt_addr_valid().
>
> So drop the check. The issue was seen with percpu_alloc()
> in KVM code which was allocating memory beyond lowmem watermark.
>
> Am not completly sure whether this is the right fix and if it could
> impact any other user of virt_addr_valid(). Without this fix as
> pointed out the KVM init was failing in my testing.
>
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Russell King <linux@arm.linux.org.uk>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
>
> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
> ---
> arch/arm/include/asm/memory.h | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
> index 4dd2145..412da47 100644
> --- a/arch/arm/include/asm/memory.h
> +++ b/arch/arm/include/asm/memory.h
> @@ -343,8 +343,7 @@ static inline __deprecated void *bus_to_virt(unsigned long x)
> #define ARCH_PFN_OFFSET PHYS_PFN_OFFSET
>
> #define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
> -#define virt_addr_valid(kaddr) ((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory)
> -
> +#define virt_addr_valid(kaddr) ((unsigned long)(kaddr) >= PAGE_OFFSET)
> #endif
>
> #include <asm-generic/memory_model.h>
> --
> 1.7.9.5
>
This looks wrong to me. Check Documentation/arm/memory.txt, this would
return true for the VMALLOC region, which would cause virt_to_phys to
give you something invalid, which would be bad.
We use the check in create_hyp_mappings to be sure that the physical
address returned by virt_to_phys is valid and that, if we're mapping
more than one page, those pages are physically contiguous.
So if you want to get rid of this check, you need to change the mapping
functionality to obtain the physical address by walking the page table
mappings for each page that you are mapping instead. Or limit each call
to a single page in size and take the physical address as input and use
per_cpu_ptr_to_phys at the caller side instead.
Alternatively, we need to get rid of alloc_percpu and use regular
kmallocs instead, unless anyone else knows of an even better way.
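A minimal sketch of the page-table-walk option (a hypothetical helper,
not existing code; it assumes vmalloc_to_page() covers the dynamically
allocated percpu chunks, and the mapping loop would then resolve each
page individually instead of doing a single virt_to_phys()):

static phys_addr_t kvm_kaddr_to_phys(void *kaddr)
{
	if (!is_vmalloc_addr(kaddr)) {
		BUG_ON(!virt_addr_valid(kaddr));
		return __pa(kaddr);
	}
	/* vmalloc/percpu addresses: look up the backing page */
	return page_to_phys(vmalloc_to_page(kaddr)) +
	       offset_in_page(kaddr);
}

For the single-page variant, per_cpu_ptr_to_phys() already returns the
physical address of a dynamic percpu pointer, so the caller could pass
that in directly.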
-Christoffer
* [PATCH 6/6] ARM: kvm: TMP: Commit the hyp page tables to main memory
2013-11-15 0:15 ` Santosh Shilimkar
@ 2013-11-15 0:27 ` Christoffer Dall
2013-11-15 0:36 ` Santosh Shilimkar
0 siblings, 1 reply; 41+ messages in thread
From: Christoffer Dall @ 2013-11-15 0:27 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Nov 14, 2013 at 07:15:44PM -0500, Santosh Shilimkar wrote:
> On Thursday 14 November 2013 07:11 PM, Christoffer Dall wrote:
> > On Thu, Nov 14, 2013 at 02:37:46PM -0500, Santosh Shilimkar wrote:
> >> This is a temporary hack which I have to use to avoid a weired crash while
> >> starting the guest OS on Keystsone. They are random crashesh while the
> >> guest os userspace starts. Additional data point is, it seen only with first
> >> guest OS lanch. Subsequest guest OS starts normal.
> >>
> >
> > what crashes? The guest? Where, how?
> >
> When guest userspace starts. The crashes are random but always after the
> guest init process have started.
>
So you get a guest kernel crash when guest userspace starts?
Are the crashes completely random or is it always some pointer
dereference that goes wrong, is it init crashing and causing the kernel
to crash (from killed init), or is it always the same kernel thread, or
anything coherent at all?
It could be anything, really. You could try a really brute force
debugging option of adding a complete cache flush at the end of
user_mem_abort in arch/arm/kvm/mmu.c to see if this is cache related at
all...
Are you running with huge pages enabled?
> >
> >> I still don't know why this is needed but it helps to get around the issue
> >> and hence including the patch in the series for the discussion
> >
> > It may not be needed but is just hiding the issue. I'm afraid you're
> > going to have to dig a little more into this.
> >
> I replied already on this. Further testing, I found the issue
> even with this patch applied. I need to dig bit more as you said.
Yeah, sorry I missed your reply before replying to the patch.
-Christoffer
* [PATCH 3/6] ARM: mm: Drop the lowmem watermark check from virt_addr_valid()
2013-11-15 0:22 ` Christoffer Dall
@ 2013-11-15 0:31 ` Santosh Shilimkar
2013-11-15 0:40 ` Christoffer Dall
2013-11-15 11:43 ` Marc Zyngier
1 sibling, 1 reply; 41+ messages in thread
From: Santosh Shilimkar @ 2013-11-15 0:31 UTC (permalink / raw)
To: linux-arm-kernel
On Thursday 14 November 2013 07:22 PM, Christoffer Dall wrote:
> On Thu, Nov 14, 2013 at 02:37:43PM -0500, Santosh Shilimkar wrote:
>> Slab allocator can allocate memory beyond the lowmem watermark
>> which can lead to false failure of virt_addr_valid().
>>
>> So drop the check. The issue was seen with percpu_alloc()
>> in KVM code which was allocating memory beyond lowmem watermark.
>>
>> Am not completly sure whether this is the right fix and if it could
>> impact any other user of virt_addr_valid(). Without this fix as
>> pointed out the KVM init was failing in my testing.
>>
>> Cc: Christoffer Dall <christoffer.dall@linaro.org>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Russell King <linux@arm.linux.org.uk>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>>
>> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
>> ---
>> arch/arm/include/asm/memory.h | 3 +--
>> 1 file changed, 1 insertion(+), 2 deletions(-)
>>
>> diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
>> index 4dd2145..412da47 100644
>> --- a/arch/arm/include/asm/memory.h
>> +++ b/arch/arm/include/asm/memory.h
>> @@ -343,8 +343,7 @@ static inline __deprecated void *bus_to_virt(unsigned long x)
>> #define ARCH_PFN_OFFSET PHYS_PFN_OFFSET
>>
>> #define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
>> -#define virt_addr_valid(kaddr) ((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory)
>> -
>> +#define virt_addr_valid(kaddr) ((unsigned long)(kaddr) >= PAGE_OFFSET)
>> #endif
>>
>> #include <asm-generic/memory_model.h>
>> --
>> 1.7.9.5
>>
>
> This looks wrong to me. Check Documentation/arm/memory.txt, this would
> return true for the VMALLOC region, which would cause virt_to_phys to
> give you something invalid, which would be bad.
>
I also thought it might not be right fix and hence added a disclaimer
in the commit message ;-)
Regards,
Santosh
* [PATCH 6/6] ARM: kvm: TMP: Commit the hyp page tables to main memory
2013-11-15 0:27 ` Christoffer Dall
@ 2013-11-15 0:36 ` Santosh Shilimkar
2013-11-15 0:42 ` Christoffer Dall
0 siblings, 1 reply; 41+ messages in thread
From: Santosh Shilimkar @ 2013-11-15 0:36 UTC (permalink / raw)
To: linux-arm-kernel
On Thursday 14 November 2013 07:27 PM, Christoffer Dall wrote:
> On Thu, Nov 14, 2013 at 07:15:44PM -0500, Santosh Shilimkar wrote:
>> On Thursday 14 November 2013 07:11 PM, Christoffer Dall wrote:
>>> On Thu, Nov 14, 2013 at 02:37:46PM -0500, Santosh Shilimkar wrote:
>>>> This is a temporary hack which I have to use to avoid a weired crash while
>>>> starting the guest OS on Keystsone. They are random crashesh while the
>>>> guest os userspace starts. Additional data point is, it seen only with first
>>>> guest OS lanch. Subsequest guest OS starts normal.
>>>>
>>>
>>> what crashes? The guest? Where, how?
>>>
>> When guest userspace starts. The crashes are random but always after the
>> guest init process have started.
>>
>
> So you get a guest kernel crash when guest userspace starts?
>
> Are the crashes completely random or is it always some pointer
> dereference that goes wrong, is it init crashing and causing the kernel
> to crash (from killed init), or is it always the same kernel thread, or
> anything coherent at all?
>
Completely random. I have seen almost all of the above possible crashes:
pointer dereferences, the init process skipping some steps, the console
going for a toss, the login prompt just not letting me log in, etc.
> It could be anything, really. You could try a really brute force
> debugging option of adding a complete cache flush at the end of
> user_mem_abort in arch/arm/kvm/mmu.c to see if this is cache related at
> all...
>
I will try that. I strongly suspect this has to do with bad page tables.
Remember, I see this issue when using memory which starts beyond 4GB.
> Are you running with huge pages enabled?
>
Nope.
>>>
>>>> I still don't know why this is needed but it helps to get around the issue
>>>> and hence including the patch in the series for the discussion
>>>
>>> It may not be needed but is just hiding the issue. I'm afraid you're
>>> going to have to dig a little more into this.
>>>
>> I replied already on this. Further testing, I found the issue
>> even with this patch applied. I need to dig bit more as you said.
>
> Yeah, sorry I missed your reply before replying to the patch.
>
np. Thanks
regards,
Santosh
* [PATCH 3/6] ARM: mm: Drop the lowmem watermark check from virt_addr_valid()
2013-11-15 0:31 ` Santosh Shilimkar
@ 2013-11-15 0:40 ` Christoffer Dall
2013-11-15 11:48 ` Marc Zyngier
0 siblings, 1 reply; 41+ messages in thread
From: Christoffer Dall @ 2013-11-15 0:40 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Nov 14, 2013 at 07:31:36PM -0500, Santosh Shilimkar wrote:
> On Thursday 14 November 2013 07:22 PM, Christoffer Dall wrote:
> > On Thu, Nov 14, 2013 at 02:37:43PM -0500, Santosh Shilimkar wrote:
> >> Slab allocator can allocate memory beyond the lowmem watermark
> >> which can lead to false failure of virt_addr_valid().
> >>
> >> So drop the check. The issue was seen with percpu_alloc()
> >> in KVM code which was allocating memory beyond lowmem watermark.
> >>
> >> Am not completly sure whether this is the right fix and if it could
> >> impact any other user of virt_addr_valid(). Without this fix as
> >> pointed out the KVM init was failing in my testing.
> >>
> >> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> >> Cc: Marc Zyngier <marc.zyngier@arm.com>
> >> Cc: Russell King <linux@arm.linux.org.uk>
> >> Cc: Catalin Marinas <catalin.marinas@arm.com>
> >> Cc: Will Deacon <will.deacon@arm.com>
> >>
> >> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
> >> ---
> >> arch/arm/include/asm/memory.h | 3 +--
> >> 1 file changed, 1 insertion(+), 2 deletions(-)
> >>
> >> diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
> >> index 4dd2145..412da47 100644
> >> --- a/arch/arm/include/asm/memory.h
> >> +++ b/arch/arm/include/asm/memory.h
> >> @@ -343,8 +343,7 @@ static inline __deprecated void *bus_to_virt(unsigned long x)
> >> #define ARCH_PFN_OFFSET PHYS_PFN_OFFSET
> >>
> >> #define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
> >> -#define virt_addr_valid(kaddr) ((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory)
> >> -
> >> +#define virt_addr_valid(kaddr) ((unsigned long)(kaddr) >= PAGE_OFFSET)
> >> #endif
> >>
> >> #include <asm-generic/memory_model.h>
> >> --
> >> 1.7.9.5
> >>
> >
> > This looks wrong to me. Check Documentation/arm/memory.txt, this would
> > return true for the VMALLOC region, which would cause virt_to_phys to
> > give you something invalid, which would be bad.
> >
> I also thought it might not be right fix and hence added a disclaimer
> in the commit message ;-)
>
Yes I know, I'm not holding it against you ;)
Not sure what the nicest fix is though. I think we can get away with
the single page restriction per call for Hyp mappings for now. IIRC the
Hyp code is still limited to be within a single page.
Let's see what Marc has to say.
-Christoffer
* [PATCH 6/6] ARM: kvm: TMP: Commit the hyp page tables to main memory
2013-11-15 0:36 ` Santosh Shilimkar
@ 2013-11-15 0:42 ` Christoffer Dall
2013-11-15 1:19 ` Santosh Shilimkar
0 siblings, 1 reply; 41+ messages in thread
From: Christoffer Dall @ 2013-11-15 0:42 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Nov 14, 2013 at 07:36:37PM -0500, Santosh Shilimkar wrote:
> On Thursday 14 November 2013 07:27 PM, Christoffer Dall wrote:
> > On Thu, Nov 14, 2013 at 07:15:44PM -0500, Santosh Shilimkar wrote:
> >> On Thursday 14 November 2013 07:11 PM, Christoffer Dall wrote:
> >>> On Thu, Nov 14, 2013 at 02:37:46PM -0500, Santosh Shilimkar wrote:
> >>>> This is a temporary hack which I have to use to avoid a weired crash while
> >>>> starting the guest OS on Keystsone. They are random crashesh while the
> >>>> guest os userspace starts. Additional data point is, it seen only with first
> >>>> guest OS lanch. Subsequest guest OS starts normal.
> >>>>
> >>>
> >>> what crashes? The guest? Where, how?
> >>>
> >> When guest userspace starts. The crashes are random but always after the
> >> guest init process have started.
> >>
> >
> > So you get a guest kernel crash when guest userspace starts?
> >
> > Are the crashes completely random or is it always some pointer
> > dereference that goes wrong, is it init crashing and causing the kernel
> > to crash (from killed init), or is it always the same kernel thread, or
> > anything coherent at all?
> >
> Completely random. I have seen almost all of the above possible crashes
> like pointer derefence, init process skipping some steps, console going
> for toss, the log in prompt just won't let me log in etc
>
>
> > It could be anything, really. You could try a really brute force
> > debugging option of adding a complete cache flush at the end of
> > user_mem_abort in arch/arm/kvm/mmu.c to see if this is cache related at
> > all...
> >
> I will try that. I strongly suspect this has to do with bad page tables.
> remember I see this issue with when using memory which starts beyond 4GB.
>
>
But once it crashes, if you kill the VM process and start a new one,
then the new one runs flawlessly? Did you stress test the second VM
(hackbench or something) so we're sure the second one is indeed stable?
What happens if you start a guest, kill it immediately, and then start
another guest?
-Christoffer
* [PATCH 6/6] ARM: kvm: TMP: Commit the hyp page tables to main memory
2013-11-15 0:42 ` Christoffer Dall
@ 2013-11-15 1:19 ` Santosh Shilimkar
2013-11-15 1:35 ` Christoffer Dall
0 siblings, 1 reply; 41+ messages in thread
From: Santosh Shilimkar @ 2013-11-15 1:19 UTC (permalink / raw)
To: linux-arm-kernel
On Thursday 14 November 2013 07:42 PM, Christoffer Dall wrote:
> On Thu, Nov 14, 2013 at 07:36:37PM -0500, Santosh Shilimkar wrote:
>> On Thursday 14 November 2013 07:27 PM, Christoffer Dall wrote:
>>> On Thu, Nov 14, 2013 at 07:15:44PM -0500, Santosh Shilimkar wrote:
>>>> On Thursday 14 November 2013 07:11 PM, Christoffer Dall wrote:
>>>>> On Thu, Nov 14, 2013 at 02:37:46PM -0500, Santosh Shilimkar wrote:
>>>>>> This is a temporary hack which I have to use to avoid a weired crash while
>>>>>> starting the guest OS on Keystsone. They are random crashesh while the
>>>>>> guest os userspace starts. Additional data point is, it seen only with first
>>>>>> guest OS lanch. Subsequest guest OS starts normal.
>>>>>>
>>>>>
>>>>> what crashes? The guest? Where, how?
>>>>>
>>>> When guest userspace starts. The crashes are random but always after the
>>>> guest init process have started.
>>>>
>>>
>>> So you get a guest kernel crash when guest userspace starts?
>>>
>>> Are the crashes completely random or is it always some pointer
>>> dereference that goes wrong, is it init crashing and causing the kernel
>>> to crash (from killed init), or is it always the same kernel thread, or
>>> anything coherent at all?
>>>
>> Completely random. I have seen almost all of the above possible crashes
>> like pointer derefence, init process skipping some steps, console going
>> for toss, the log in prompt just won't let me log in etc
>>
>>
>>> It could be anything, really. You could try a really brute force
>>> debugging option of adding a complete cache flush at the end of
>>> user_mem_abort in arch/arm/kvm/mmu.c to see if this is cache related at
>>> all...
>>>
>> I will try that. I strongly suspect this has to do with bad page tables.
>> remember I see this issue with when using memory which starts beyond 4GB.
>>
A full cache flush at the end of user_mem_abort() doesn't help, so it
might not be cache related then.
>
> But once it crashes, if you kill the VM process and start a new one,
> then the new one runs flawlessly? Did you stress test the second VM
> (hackbench or something) so we're sure the second one is indeed stable?
>
> What happens if you start a guest, kill it immediately, and then start
> another guest?
>
And the observation about subsequent VMs being stable also doesn't hold
true. Additional symptoms I saw were segmentation faults as well as
hitting the KVM load/store trap. This also possibly indicates
instruction corruption.
Regards,
Santosh
* [PATCH 6/6] ARM: kvm: TMP: Commit the hyp page tables to main memory
2013-11-15 1:19 ` Santosh Shilimkar
@ 2013-11-15 1:35 ` Christoffer Dall
0 siblings, 0 replies; 41+ messages in thread
From: Christoffer Dall @ 2013-11-15 1:35 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Nov 14, 2013 at 08:19:13PM -0500, Santosh Shilimkar wrote:
> On Thursday 14 November 2013 07:42 PM, Christoffer Dall wrote:
> > On Thu, Nov 14, 2013 at 07:36:37PM -0500, Santosh Shilimkar wrote:
> >> On Thursday 14 November 2013 07:27 PM, Christoffer Dall wrote:
> >>> On Thu, Nov 14, 2013 at 07:15:44PM -0500, Santosh Shilimkar wrote:
> >>>> On Thursday 14 November 2013 07:11 PM, Christoffer Dall wrote:
> >>>>> On Thu, Nov 14, 2013 at 02:37:46PM -0500, Santosh Shilimkar wrote:
> >>>>>> This is a temporary hack which I have to use to avoid a weired crash while
> >>>>>> starting the guest OS on Keystsone. They are random crashesh while the
> >>>>>> guest os userspace starts. Additional data point is, it seen only with first
> >>>>>> guest OS lanch. Subsequest guest OS starts normal.
> >>>>>>
> >>>>>
> >>>>> what crashes? The guest? Where, how?
> >>>>>
> >>>> When guest userspace starts. The crashes are random but always after the
> >>>> guest init process have started.
> >>>>
> >>>
> >>> So you get a guest kernel crash when guest userspace starts?
> >>>
> >>> Are the crashes completely random or is it always some pointer
> >>> dereference that goes wrong, is it init crashing and causing the kernel
> >>> to crash (from killed init), or is it always the same kernel thread, or
> >>> anything coherent at all?
> >>>
> >> Completely random. I have seen almost all of the above possible crashes
> >> like pointer derefence, init process skipping some steps, console going
> >> for toss, the log in prompt just won't let me log in etc
> >>
> >>
> >>> It could be anything, really. You could try a really brute force
> >>> debugging option of adding a complete cache flush at the end of
> >>> user_mem_abort in arch/arm/kvm/mmu.c to see if this is cache related at
> >>> all...
> >>>
> >> I will try that. I strongly suspect this has to do with bad page tables.
> >> remember I see this issue with when using memory which starts beyond 4GB.
> >>
> full cache full at end of user_mem_abort() doesn't help. So it might not be
> cache related then.
>
> >
> > But once it crashes, if you kill the VM process and start a new one,
> > then the new one runs flawlessly? Did you stress test the second VM
> > (hackbench or something) so we're sure the second one is indeed stable?
> >
> > What happens if you start a guest, kill it immediately, and then start
> > another guest?
> >
> And the observation about subsequent VM's being stable also doesn't hold
> true. Additional symptom what I saw was segmentation fault as well as
> hitting kvm load/store trap. This also possibly indicates instructions
> corruption.
>
Cool, so we only know it breaks when the physical address is >4GB.
Awesome.
It may be helpful to cherry-pick this commit:
https://git.linaro.org/gitweb?p=people/cdall/linux-kvm-arm.git;a=commitdiff;h=df6dc9f43f2a37547d4ce034706ef0cfc4235129
Then capture a full trace of the VM when executing until the guest
crashes and look at the trace to see if we're mapping and faulting on
the pages we think we are or if it looks like something is being
truncated.
Feel free to send me one of those logs and I'll be happy to take a look.
-Christoffer
* [PATCH 3/6] ARM: mm: Drop the lowmem watermark check from virt_addr_valid()
2013-11-15 0:22 ` Christoffer Dall
2013-11-15 0:31 ` Santosh Shilimkar
@ 2013-11-15 11:43 ` Marc Zyngier
2013-11-15 14:55 ` Santosh Shilimkar
1 sibling, 1 reply; 41+ messages in thread
From: Marc Zyngier @ 2013-11-15 11:43 UTC (permalink / raw)
To: linux-arm-kernel
On 15/11/13 00:22, Christoffer Dall wrote:
> On Thu, Nov 14, 2013 at 02:37:43PM -0500, Santosh Shilimkar wrote:
>> Slab allocator can allocate memory beyond the lowmem watermark
>> which can lead to false failure of virt_addr_valid().
>>
>> So drop the check. The issue was seen with percpu_alloc()
>> in KVM code which was allocating memory beyond lowmem watermark.
>>
>> Am not completly sure whether this is the right fix and if it could
>> impact any other user of virt_addr_valid(). Without this fix as
>> pointed out the KVM init was failing in my testing.
>>
>> Cc: Christoffer Dall <christoffer.dall@linaro.org>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Russell King <linux@arm.linux.org.uk>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>>
>> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
>> ---
>> arch/arm/include/asm/memory.h | 3 +--
>> 1 file changed, 1 insertion(+), 2 deletions(-)
>>
>> diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
>> index 4dd2145..412da47 100644
>> --- a/arch/arm/include/asm/memory.h
>> +++ b/arch/arm/include/asm/memory.h
>> @@ -343,8 +343,7 @@ static inline __deprecated void *bus_to_virt(unsigned long x)
>> #define ARCH_PFN_OFFSET PHYS_PFN_OFFSET
>>
>> #define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
>> -#define virt_addr_valid(kaddr) ((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory)
>> -
>> +#define virt_addr_valid(kaddr) ((unsigned long)(kaddr) >= PAGE_OFFSET)
>> #endif
>>
>> #include <asm-generic/memory_model.h>
>> --
>> 1.7.9.5
>>
>
> This looks wrong to me. Check Documentation/arm/memory.txt, this would
> return true for the VMALLOC region, which would cause virt_to_phys to
> give you something invalid, which would be bad.
>
> We use the check in create_hyp_mappings to be sure that the physical
> address returned by virt_to_phys is valid and that if we're mapping more
> than one page that those pages are physically contiguous.
>
> So if you want to get rid of this check, you need to change the mapping
> functionality to obtain the physical address by walking the page table
> mappings for each page that you are mapping instead. Or limit each call
> to a single page in size and take the physical address as input and use
> per_cpu_ptr_to_phys at the caller side instead.
>
> Alternatively, we need to get rid of alloc_percpu and use regular
> kmallocs instead, unless anyone else knows of an even better way.
alloc_percpu has nice properties (cache locality, mostly).
One way out of it would be to give the percpu stuff special treatment.
Can you try the attached patch as a first approximation? It needs more
refinement (allocations straddling two pages?), but I think that's the
right sort of thing.
Let me know how it works for you.
M.
--
Jazz is not dead. It just smells funny...
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0001-arm-arm64-KVM-introduce-new-mapping-API-for-percpu-m.patch
Type: text/x-diff
Size: 3340 bytes
Desc: not available
URL: <http://lists.infradead.org/pipermail/linux-arm-kernel/attachments/20131115/88381749/attachment.bin>
* [PATCH 3/6] ARM: mm: Drop the lowmem watermark check from virt_addr_valid()
2013-11-15 0:40 ` Christoffer Dall
@ 2013-11-15 11:48 ` Marc Zyngier
0 siblings, 0 replies; 41+ messages in thread
From: Marc Zyngier @ 2013-11-15 11:48 UTC (permalink / raw)
To: linux-arm-kernel
On 15/11/13 00:40, Christoffer Dall wrote:
> On Thu, Nov 14, 2013 at 07:31:36PM -0500, Santosh Shilimkar wrote:
>> On Thursday 14 November 2013 07:22 PM, Christoffer Dall wrote:
>>> On Thu, Nov 14, 2013 at 02:37:43PM -0500, Santosh Shilimkar wrote:
>>>> Slab allocator can allocate memory beyond the lowmem watermark
>>>> which can lead to false failure of virt_addr_valid().
>>>>
>>>> So drop the check. The issue was seen with percpu_alloc()
>>>> in KVM code which was allocating memory beyond lowmem watermark.
>>>>
>>>> Am not completly sure whether this is the right fix and if it could
>>>> impact any other user of virt_addr_valid(). Without this fix as
>>>> pointed out the KVM init was failing in my testing.
>>>>
>>>> Cc: Christoffer Dall <christoffer.dall@linaro.org>
>>>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>>>> Cc: Russell King <linux@arm.linux.org.uk>
>>>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>>>> Cc: Will Deacon <will.deacon@arm.com>
>>>>
>>>> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
>>>> ---
>>>> arch/arm/include/asm/memory.h | 3 +--
>>>> 1 file changed, 1 insertion(+), 2 deletions(-)
>>>>
>>>> diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
>>>> index 4dd2145..412da47 100644
>>>> --- a/arch/arm/include/asm/memory.h
>>>> +++ b/arch/arm/include/asm/memory.h
>>>> @@ -343,8 +343,7 @@ static inline __deprecated void *bus_to_virt(unsigned long x)
>>>> #define ARCH_PFN_OFFSET PHYS_PFN_OFFSET
>>>>
>>>> #define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
>>>> -#define virt_addr_valid(kaddr) ((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory)
>>>> -
>>>> +#define virt_addr_valid(kaddr) ((unsigned long)(kaddr) >= PAGE_OFFSET)
>>>> #endif
>>>>
>>>> #include <asm-generic/memory_model.h>
>>>> --
>>>> 1.7.9.5
>>>>
>>>
>>> This looks wrong to me. Check Documentation/arm/memory.txt, this would
>>> return true for the VMALLOC region, which would cause virt_to_phys to
>>> give you something invalid, which would be bad.
>>>
>> I also thought it might not be right fix and hence added a disclaimer
>> in the commit message ;-)
>>
> Yes I know, I'm not holding it against you ;)
>
> Not sure what the nicest fix is though. I think we can get away with
> the single page restriction per call for Hyp mappings for now. IIRC the
> Hyp code is still limited to be within a single page.
>
> Let's see what Marc has to say.
So that may change quite quickly. I already have additional code for
GICv3 in HYP, and I'd like to avoid putting restrictions that we'd have
a hard time removing later.
I'd like to try the approach I just sent out in a separate email first,
and if it can't be made to work, we can see how to either enforce a
single-page restriction or move the percpu stuff to kmalloc instead.
M.
--
Jazz is not dead. It just smells funny...
* [PATCH 5/6] ARM: kvm: Use phys_addr_t instead of unsigned long in mm code
2013-11-14 19:37 ` [PATCH 5/6] ARM: kvm: Use phys_addr_t instead of unsigned long in mm code Santosh Shilimkar
2013-11-15 0:09 ` Christoffer Dall
@ 2013-11-15 11:58 ` Marc Zyngier
1 sibling, 0 replies; 41+ messages in thread
From: Marc Zyngier @ 2013-11-15 11:58 UTC (permalink / raw)
To: linux-arm-kernel
On 14/11/13 19:37, Santosh Shilimkar wrote:
> The unsigned long datatype is not sufficient for mapping physical addresses
> greater than 4GB.
>
> So fix the KVM mm code accordingly. Special thanks to Christopher for debug
> help to figure out the bug.
>
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
>
> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Nice catch.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
M.
> ---
> arch/arm/kvm/mmu.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> index 3bd652f..657f15e 100644
> --- a/arch/arm/kvm/mmu.c
> +++ b/arch/arm/kvm/mmu.c
> @@ -318,7 +318,7 @@ out:
> */
> int create_hyp_mappings(void *from, void *to)
> {
> - unsigned long phys_addr = virt_to_phys(from);
> + phys_addr_t phys_addr = virt_to_phys(from);
> unsigned long start = KERN_TO_HYP((unsigned long)from);
> unsigned long end = KERN_TO_HYP((unsigned long)to);
>
>
--
Jazz is not dead. It just smells funny...
* [PATCH 1/6] arm64: mm: Add __virt_to_idmap() to keep kvm build happy
2013-11-14 19:37 ` [PATCH 1/6] arm64: mm: Add __virt_to_idmap() to keep kvm build happy Santosh Shilimkar
@ 2013-11-15 13:34 ` Catalin Marinas
2013-11-15 14:55 ` Santosh Shilimkar
2013-11-15 15:05 ` Marc Zyngier
1 sibling, 1 reply; 41+ messages in thread
From: Catalin Marinas @ 2013-11-15 13:34 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Nov 14, 2013 at 07:37:41PM +0000, Santosh Shilimkar wrote:
> ARM kvm code will make use of __virt_to_idmap() on arm32
> machines as hardware interconnect supported alias of physical
> memory for idmap purposes. The same code is shared with arm64
> bit and hence will break the builds. So we add __virt_to_idmap()
> which is just __virt_to_phys() on arm64 bit to keep build happy.
>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
>
> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
BTW, there is normally no space between the Cc and Signed-off-by lines.
> ---
> arch/arm64/include/asm/memory.h | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 3776217..d9341ee 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -75,6 +75,14 @@
> #define __phys_to_virt(x) ((unsigned long)((x) - PHYS_OFFSET + PAGE_OFFSET))
>
> /*
> + * Added to keep arm64 kvm build working which shares code with
> + * 32bit port. ARM kvm code makes use of __virt_to_idmap() on arm32
> + * machines as hardware interconnect supported alias of physical
> + * memory for idmap purposes.
> + */
> +#define virt_to_idmap(x) __virt_to_phys(x)
There are consistency issues with the use of underscores before
__virt_to_idmap. And I think you can simply say something like "Required
by ARM KVM code". The explanation of why really belongs in the 32-bit
arm code.
--
Catalin
* [PATCH 4/6] arm64: mm: Drop the lowmem watermark check from virt_addr_valid()
2013-11-14 19:37 ` [PATCH 4/6] arm64: " Santosh Shilimkar
@ 2013-11-15 13:39 ` Catalin Marinas
2013-11-15 14:25 ` Marc Zyngier
0 siblings, 1 reply; 41+ messages in thread
From: Catalin Marinas @ 2013-11-15 13:39 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Nov 14, 2013 at 07:37:44PM +0000, Santosh Shilimkar wrote:
> Slab allocator can allocate memory beyond the lowmem watermark
> which can lead to false failure of virt_addr_valid().
>
> So drop the check. The issue was seen with percpu_alloc()
> in KVM code which was allocating memory beyond lowmem watermark.
>
> Am not completly sure whether this is the right fix and if it could
> impact any other user of virt_addr_valid(). Without this fix as
> pointed out the KVM init was failing in my testing.
Do you have a problem on arm64? There is no lowmem watermark here.
--
Catalin
* [PATCH 3/6] ARM: mm: Drop the lowmem watermark check from virt_addr_valid()
2013-11-14 19:37 ` [PATCH 3/6] ARM: mm: Drop the lowmem watermark check from virt_addr_valid() Santosh Shilimkar
2013-11-15 0:22 ` Christoffer Dall
@ 2013-11-15 14:20 ` Russell King - ARM Linux
2013-11-15 15:40 ` Santosh Shilimkar
1 sibling, 1 reply; 41+ messages in thread
From: Russell King - ARM Linux @ 2013-11-15 14:20 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Nov 14, 2013 at 02:37:43PM -0500, Santosh Shilimkar wrote:
> Slab allocator can allocate memory beyond the lowmem watermark
> which can lead to false failure of virt_addr_valid().
This is definitely going to cause problems.
> So drop the check. The issue was seen with percpu_alloc()
> in KVM code which was allocating memory beyond lowmem watermark.
>
> Am not completly sure whether this is the right fix and if it could
> impact any other user of virt_addr_valid(). Without this fix as
> pointed out the KVM init was failing in my testing.
virt_addr_valid() gets used in some places to check whether the virtual
address is part of the lowmem mapping and explicitly not part of vmalloc()
or DMA coherent space:
drivers/mtd/nand/gpmi-nand/gpmi-nand.c
drivers/net/wireless/ath/ath6kl/sdio.c
It opens up the checks in include/linux/scatterlist.h to start accepting
non-streaming DMA capable buffers as well.
It also bypasses a check in the slab code to ensure that it's a potentially
valid pointer that was handed out by slab.
.. etc ..
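For context, the scatterlist check Russell refers to is roughly the
following (paraphrased from include/linux/scatterlist.h of that era;
treat the exact form as approximate):

static inline void sg_set_buf(struct scatterlist *sg, const void *buf,
			      unsigned int buflen)
{
#ifdef CONFIG_DEBUG_SG
	BUG_ON(!virt_addr_valid(buf));
#endif
	sg_set_page(sg, virt_to_page(buf), buflen, offset_in_page(buf));
}

Without the high_memory bound, a vmalloc or percpu pointer would pass the
BUG_ON and then be fed to virt_to_page(), whose linear-map arithmetic is
only valid for lowmem addresses.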
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH 4/6] arm64: mm: Drop the lowmem watermark check from virt_addr_valid()
2013-11-15 13:39 ` Catalin Marinas
@ 2013-11-15 14:25 ` Marc Zyngier
2013-11-15 14:57 ` Santosh Shilimkar
0 siblings, 1 reply; 41+ messages in thread
From: Marc Zyngier @ 2013-11-15 14:25 UTC (permalink / raw)
To: linux-arm-kernel
On 15/11/13 13:39, Catalin Marinas wrote:
> On Thu, Nov 14, 2013 at 07:37:44PM +0000, Santosh Shilimkar wrote:
>> Slab allocator can allocate memory beyond the lowmem watermark
>> which can lead to false failure of virt_addr_valid().
>>
>> So drop the check. The issue was seen with percpu_alloc()
>> in KVM code which was allocating memory beyond lowmem watermark.
>>
>> Am not completly sure whether this is the right fix and if it could
>> impact any other user of virt_addr_valid(). Without this fix as
>> pointed out the KVM init was failing in my testing.
>
> Do you have a problem on arm64? There is no lowmem watermark here.
I don't think that change is relevant for arm64. Actually, the more I
look at it, I don't think virt_addr_valid() should be touched at all.
The real issue is the use of virt_to_phys()/virt_addr_valid() on a
percpu pointer, which is wrong and only worked by chance so far.
I'm working on a patch to solve this.
M.
--
Jazz is not dead. It just smells funny...
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH 3/6] ARM: mm: Drop the lowmem watermark check from virt_addr_valid()
2013-11-15 11:43 ` Marc Zyngier
@ 2013-11-15 14:55 ` Santosh Shilimkar
2013-11-15 15:08 ` Marc Zyngier
0 siblings, 1 reply; 41+ messages in thread
From: Santosh Shilimkar @ 2013-11-15 14:55 UTC (permalink / raw)
To: linux-arm-kernel
On Friday 15 November 2013 06:43 AM, Marc Zyngier wrote:
> On 15/11/13 00:22, Christoffer Dall wrote:
>> > On Thu, Nov 14, 2013 at 02:37:43PM -0500, Santosh Shilimkar wrote:
>>> >> Slab allocator can allocate memory beyond the lowmem watermark
>>> >> which can lead to false failure of virt_addr_valid().
>>> >>
>>> >> So drop the check. The issue was seen with percpu_alloc()
>>> >> in KVM code which was allocating memory beyond lowmem watermark.
>>> >>
>>> >> Am not completly sure whether this is the right fix and if it could
>>> >> impact any other user of virt_addr_valid(). Without this fix as
>>> >> pointed out the KVM init was failing in my testing.
>>> >>
>>> >> Cc: Christoffer Dall <christoffer.dall@linaro.org>
>>> >> Cc: Marc Zyngier <marc.zyngier@arm.com>
>>> >> Cc: Russell King <linux@arm.linux.org.uk>
>>> >> Cc: Catalin Marinas <catalin.marinas@arm.com>
>>> >> Cc: Will Deacon <will.deacon@arm.com>
>>> >>
>>> >> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
>>> >> ---
>>> >> arch/arm/include/asm/memory.h | 3 +--
>>> >> 1 file changed, 1 insertion(+), 2 deletions(-)
>>> >>
>>> >> diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
>>> >> index 4dd2145..412da47 100644
>>> >> --- a/arch/arm/include/asm/memory.h
>>> >> +++ b/arch/arm/include/asm/memory.h
>>> >> @@ -343,8 +343,7 @@ static inline __deprecated void *bus_to_virt(unsigned long x)
>>> >> #define ARCH_PFN_OFFSET PHYS_PFN_OFFSET
>>> >>
>>> >> #define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
>>> >> -#define virt_addr_valid(kaddr) ((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory)
>>> >> -
>>> >> +#define virt_addr_valid(kaddr) ((unsigned long)(kaddr) >= PAGE_OFFSET)
>>> >> #endif
>>> >>
>>> >> #include <asm-generic/memory_model.h>
>>> >> --
>>> >> 1.7.9.5
>>> >>
>> >
>> > This looks wrong to me. Check Documentation/arm/memory.txt, this would
>> > return true for the VMALLOC region, which would cause virt_to_phys to
>> > give you something invalid, which would be bad.
>> >
>> > We use the check in create_hyp_mappings to be sure that the physical
>> > address returned by virt_to_phys is valid and that if we're mapping more
>> > than one page that those pages are physically contiguous.
>> >
>> > So if you want to get rid of this check, you need to change the mapping
>> > functionality to obtain the physical address by walking the page table
>> > mappings for each page that you are mapping instead. Or limit each call
>> > to a single page in size and take the physical address as input and use
>> > per_cpu_ptr_to_phys at the caller side instead.
>> >
>> > Alternatively, we need to get rid of alloc_percpu and use regular
>> > kmallocs instead, unless anyone else knows of an even better way.
> alloc_percpu has nice properties (cache locality, mostly).
>
> One way out of it would be to give percpu stuff a special treatment. Can
> you try the attach patch as a first approximation? It needs more
> refinements (allocations straddling two pages?), but I think that's the
> right sort of things.
>
> Let me know how it works for you.
>
Host boots but guest fails. The patch needs a small update, as noted
inline below.
> From 01bc1c8eaebdd70b1ea044050144b9bfb3375f82 Mon Sep 17 00:00:00 2001
> From: Marc Zyngier <marc.zyngier@arm.com>
> Date: Fri, 15 Nov 2013 11:36:36 +0000
> Subject: [PATCH] arm/arm64: KVM: introduce new mapping API for percpu mappings
>
> Using virt_to_phys on percpu mappings is horribly wrong (my own bad).
> Thankfully, the kernel offers a way to obtain the phisical address
> of such a mapping.
>
> Add a new "create_hyp_percpu_mappings" to deal with those.
>
> *Fully untested, don't merge*
>
> Not-Even-Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm/include/asm/kvm_mmu.h | 1 +
> arch/arm/kvm/arm.c | 2 +-
> arch/arm/kvm/mmu.c | 20 ++++++++++++++++++++
> arch/arm64/include/asm/kvm_mmu.h | 1 +
> 4 files changed, 23 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
> index 9b28c41..6dcb9ff 100644
> --- a/arch/arm/include/asm/kvm_mmu.h
> +++ b/arch/arm/include/asm/kvm_mmu.h
> @@ -43,6 +43,7 @@
> #include <asm/pgalloc.h>
>
> int create_hyp_mappings(void *from, void *to);
> +int create_hyp_percpu_mappings(void *from, void *to);
> int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
> void free_boot_hyp_pgd(void);
> void free_hyp_pgds(void);
> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> index 9c697db..6191960 100644
> --- a/arch/arm/kvm/arm.c
> +++ b/arch/arm/kvm/arm.c
> @@ -911,7 +911,7 @@ static int init_hyp_mode(void)
> kvm_cpu_context_t *cpu_ctxt;
>
> cpu_ctxt = per_cpu_ptr(kvm_host_cpu_state, cpu);
> - err = create_hyp_mappings(cpu_ctxt, cpu_ctxt + 1);
> + err = create_hyp_percpu_mappings(cpu_ctxt, cpu_ctxt + 1);
>
> if (err) {
> kvm_err("Cannot map host CPU state: %d\n", err);
> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> index b0de86b..e509718 100644
> --- a/arch/arm/kvm/mmu.c
> +++ b/arch/arm/kvm/mmu.c
> @@ -331,6 +331,26 @@ int create_hyp_mappings(void *from, void *to)
> }
>
> /**
> + * create_hyp_percpu_mappings - duplicate a percpu kernel virtual address
> + * range in Hyp mode
> + * @from: The virtual kernel start address of the range
> + * @to: The virtual kernel end address of the range (exclusive)
> + *
> + * The same virtual address as the kernel virtual address is also used
> + * in Hyp-mode mapping (modulo HYP_PAGE_OFFSET) to the same underlying
> + * physical pages. It *has* to be a percpu pointer.
> + */
> +int create_hyp_percpu_mappings(void *from, void *to)
> +{
> + unsigned long phys_addr = per_cpu_ptr_to_phys(from);
phys_addr_t phys_addr = per_cpu_ptr_to_phys(from);
With this change things work as expected without the $subject patch.
Thanks for the patch.
Regards,
Santosh
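A minimal illustration of why the 64-bit type matters here (a userspace-style
sketch, not kernel code; the address below is made up, though Keystone-class
LPAE systems really do keep their RAM above 4GB):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* stand-in for phys_addr_t, which is 64-bit with LPAE */
	uint64_t phys = 0x800000000ULL + 0x1000;
	/* stand-in for unsigned long, which stays 32-bit on arm32 */
	uint32_t truncated = (uint32_t)phys;

	/* prints "full: 0x800001000, truncated: 0x1000" */
	printf("full: %#llx, truncated: %#x\n",
	       (unsigned long long)phys, truncated);
	return 0;
}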
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH 1/6] arm64: mm: Add __virt_to_idmap() to keep kvm build happy
2013-11-15 13:34 ` Catalin Marinas
@ 2013-11-15 14:55 ` Santosh Shilimkar
2013-11-15 14:58 ` Catalin Marinas
0 siblings, 1 reply; 41+ messages in thread
From: Santosh Shilimkar @ 2013-11-15 14:55 UTC (permalink / raw)
To: linux-arm-kernel
On Friday 15 November 2013 08:34 AM, Catalin Marinas wrote:
> On Thu, Nov 14, 2013 at 07:37:41PM +0000, Santosh Shilimkar wrote:
>> ARM kvm code will make use of __virt_to_idmap() on arm32
>> machines as hardware interconnect supported alias of physical
>> memory for idmap purposes. The same code is shared with arm64
>> bit and hence will break the builds. So we add __virt_to_idmap()
>> which is just __virt_to_phys() on arm64 bit to keep build happy.
>>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@linaro.org>
>>
>> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
>
> BTW, there is normally no space between the Cc and Signed-off-by lines.
>
OK.
>> ---
>> arch/arm64/include/asm/memory.h | 8 ++++++++
>> 1 file changed, 8 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
>> index 3776217..d9341ee 100644
>> --- a/arch/arm64/include/asm/memory.h
>> +++ b/arch/arm64/include/asm/memory.h
>> @@ -75,6 +75,14 @@
>> #define __phys_to_virt(x) ((unsigned long)((x) - PHYS_OFFSET + PAGE_OFFSET))
>>
>> /*
>> + * Added to keep arm64 kvm build working which shares code with
>> + * 32bit port. ARM kvm code makes use of __virt_to_idmap() on arm32
>> + * machines as hardware interconnect supported alias of physical
>> + * memory for idmap purposes.
>> + */
>> +#define virt_to_idmap(x) __virt_to_phys(x)
>
> There are consistency issues with the use of underscores before
> __virt_to_idmap. And I think you can simply say something like "Required
> by ARM KVM code". The explanation on why really belongs to the 32-bit
> arm code.
>
Will update the comment as you suggested. Do you want me to rename the
macros as well? I would need to update some more code for that.
Let me know.
Regards,
Santosh
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH 4/6] arm64: mm: Drop the lowmem watermark check from virt_addr_valid()
2013-11-15 14:25 ` Marc Zyngier
@ 2013-11-15 14:57 ` Santosh Shilimkar
0 siblings, 0 replies; 41+ messages in thread
From: Santosh Shilimkar @ 2013-11-15 14:57 UTC (permalink / raw)
To: linux-arm-kernel
On Friday 15 November 2013 09:25 AM, Marc Zyngier wrote:
> On 15/11/13 13:39, Catalin Marinas wrote:
>> On Thu, Nov 14, 2013 at 07:37:44PM +0000, Santosh Shilimkar wrote:
>>> Slab allocator can allocate memory beyond the lowmem watermark
>>> which can lead to false failure of virt_addr_valid().
>>>
>>> So drop the check. The issue was seen with percpu_alloc()
>>> in KVM code which was allocating memory beyond lowmem watermark.
>>>
>>> Am not completly sure whether this is the right fix and if it could
>>> impact any other user of virt_addr_valid(). Without this fix as
>>> pointed out the KVM init was failing in my testing.
>>
>> Do you have a problem on arm64? There is no lowmem watermark here.
>
> I don't think that change is relevant for arm64. Actually, the more I
> look at it, I don't think virt_addr_valid() should be touched at all.
>
> The real issue is the use of virt_to_phys()/virt_addr_valid() on a
> percpu pointer, which is wrong and only worked by chance so far.
>
> I'm working on a patch to solve this.
>
Yep. I tried Marc's other approach and that does solve the issue.
The $subject patch was anyway more about getting attention to figure
out the right solution ;-) which I guess we have now.
Regards,
Santosh
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH 1/6] arm64: mm: Add __virt_to_idmap() to keep kvm build happy
2013-11-15 14:55 ` Santosh Shilimkar
@ 2013-11-15 14:58 ` Catalin Marinas
2013-11-15 15:31 ` Santosh Shilimkar
0 siblings, 1 reply; 41+ messages in thread
From: Catalin Marinas @ 2013-11-15 14:58 UTC (permalink / raw)
To: linux-arm-kernel
On Fri, Nov 15, 2013 at 02:55:50PM +0000, Santosh Shilimkar wrote:
> On Friday 15 November 2013 08:34 AM, Catalin Marinas wrote:
> > On Thu, Nov 14, 2013 at 07:37:41PM +0000, Santosh Shilimkar wrote:
> >> /*
> >> + * Added to keep arm64 kvm build working which shares code with
> >> + * 32bit port. ARM kvm code makes use of __virt_to_idmap() on arm32
> >> + * machines as hardware interconnect supported alias of physical
> >> + * memory for idmap purposes.
> >> + */
> >> +#define virt_to_idmap(x) __virt_to_phys(x)
> >
> > There are consistency issues with the use of underscores before
> > __virt_to_idmap. And I think you can simply say something like "Required
> > by ARM KVM code". The explanation on why really belongs to the 32-bit
> > arm code.
> >
> Will update the comment as you suggested. Do you want me to rename the
> macros as well. I need to do update some more code for that.
The macro name should be whatever is in arch/arm/ but the comment and
the macro disagree here.
--
Catalin
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH 1/6] arm64: mm: Add __virt_to_idmap() to keep kvm build happy
2013-11-14 19:37 ` [PATCH 1/6] arm64: mm: Add __virt_to_idmap() to keep kvm build happy Santosh Shilimkar
2013-11-15 13:34 ` Catalin Marinas
@ 2013-11-15 15:05 ` Marc Zyngier
2013-11-15 15:25 ` Santosh Shilimkar
1 sibling, 1 reply; 41+ messages in thread
From: Marc Zyngier @ 2013-11-15 15:05 UTC (permalink / raw)
To: linux-arm-kernel
On 14/11/13 19:37, Santosh Shilimkar wrote:
> ARM kvm code will make use of __virt_to_idmap() on arm32
> machines as hardware interconnect supported alias of physical
> memory for idmap purposes. The same code is shared with arm64
> bit and hence will break the builds. So we add __virt_to_idmap()
> which is just __virt_to_phys() on arm64 bit to keep build happy.
>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
>
> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
> ---
> arch/arm64/include/asm/memory.h | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 3776217..d9341ee 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -75,6 +75,14 @@
> #define __phys_to_virt(x) ((unsigned long)((x) - PHYS_OFFSET + PAGE_OFFSET))
>
> /*
> + * Added to keep arm64 kvm build working which shares code with
> + * 32bit port. ARM kvm code makes use of __virt_to_idmap() on arm32
> + * machines as hardware interconnect supported alias of physical
> + * memory for idmap purposes.
> + */
> +#define virt_to_idmap(x) __virt_to_phys(x)
> +
> +/*
> * Convert a physical address to a Page Frame Number and back
> */
> #define __phys_to_pfn(paddr) ((unsigned long)((paddr) >> PAGE_SHIFT))
>
I'd rather have a kvm_virt_to_phys() in kvm_mmu.h. That's how we've
dealt with that kind of difference so far.
M.
--
Jazz is not dead. It just smells funny...
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH 3/6] ARM: mm: Drop the lowmem watermark check from virt_addr_valid()
2013-11-15 14:55 ` Santosh Shilimkar
@ 2013-11-15 15:08 ` Marc Zyngier
2013-11-15 15:46 ` Christoffer Dall
0 siblings, 1 reply; 41+ messages in thread
From: Marc Zyngier @ 2013-11-15 15:08 UTC (permalink / raw)
To: linux-arm-kernel
On 15/11/13 14:55, Santosh Shilimkar wrote:
> On Friday 15 November 2013 06:43 AM, Marc Zyngier wrote:
>> On 15/11/13 00:22, Christoffer Dall wrote:
>>>> On Thu, Nov 14, 2013 at 02:37:43PM -0500, Santosh Shilimkar wrote:
>>>>>> Slab allocator can allocate memory beyond the lowmem watermark
>>>>>> which can lead to false failure of virt_addr_valid().
>>>>>>
>>>>>> So drop the check. The issue was seen with percpu_alloc()
>>>>>> in KVM code which was allocating memory beyond lowmem watermark.
>>>>>>
>>>>>> Am not completly sure whether this is the right fix and if it could
>>>>>> impact any other user of virt_addr_valid(). Without this fix as
>>>>>> pointed out the KVM init was failing in my testing.
>>>>>>
>>>>>> Cc: Christoffer Dall <christoffer.dall@linaro.org>
>>>>>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>>>>>> Cc: Russell King <linux@arm.linux.org.uk>
>>>>>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>>>>>> Cc: Will Deacon <will.deacon@arm.com>
>>>>>>
>>>>>> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
>>>>>> ---
>>>>>> arch/arm/include/asm/memory.h | 3 +--
>>>>>> 1 file changed, 1 insertion(+), 2 deletions(-)
>>>>>>
>>>>>> diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
>>>>>> index 4dd2145..412da47 100644
>>>>>> --- a/arch/arm/include/asm/memory.h
>>>>>> +++ b/arch/arm/include/asm/memory.h
>>>>>> @@ -343,8 +343,7 @@ static inline __deprecated void *bus_to_virt(unsigned long x)
>>>>>> #define ARCH_PFN_OFFSET PHYS_PFN_OFFSET
>>>>>>
>>>>>> #define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
>>>>>> -#define virt_addr_valid(kaddr) ((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory)
>>>>>> -
>>>>>> +#define virt_addr_valid(kaddr) ((unsigned long)(kaddr) >= PAGE_OFFSET)
>>>>>> #endif
>>>>>>
>>>>>> #include <asm-generic/memory_model.h>
>>>>>> --
>>>>>> 1.7.9.5
>>>>>>
>>>>
>>>> This looks wrong to me. Check Documentation/arm/memory.txt, this would
>>>> return true for the VMALLOC region, which would cause virt_to_phys to
>>>> give you something invalid, which would be bad.
>>>>
>>>> We use the check in create_hyp_mappings to be sure that the physical
>>>> address returned by virt_to_phys is valid and that if we're mapping more
>>>> than one page that those pages are physically contiguous.
>>>>
>>>> So if you want to get rid of this check, you need to change the mapping
>>>> functionality to obtain the physical address by walking the page table
>>>> mappings for each page that you are mapping instead. Or limit each call
>>>> to a single page in size and take the physical address as input and use
>>>> per_cpu_ptr_to_phys at the caller side instead.
>>>>
>>>> Alternatively, we need to get rid of alloc_percpu and use regular
>>>> kmallocs instead, unless anyone else knows of an even better way.
>> alloc_percpu has nice properties (cache locality, mostly).
>>
>> One way out of it would be to give percpu stuff a special treatment. Can
>> you try the attach patch as a first approximation? It needs more
>> refinements (allocations straddling two pages?), but I think that's the
>> right sort of things.
>>
>> Let me know how it works for you.
>>
> Host boots bug guest fails. Patch needs small update as mentioned
> with inline patch.
>
>> From 01bc1c8eaebdd70b1ea044050144b9bfb3375f82 Mon Sep 17 00:00:00 2001
>> From: Marc Zyngier <marc.zyngier@arm.com>
>> Date: Fri, 15 Nov 2013 11:36:36 +0000
>> Subject: [PATCH] arm/arm64: KVM: introduce new mapping API for percpu mappings
>>
>> Using virt_to_phys on percpu mappings is horribly wrong (my own bad).
>> Thankfully, the kernel offers a way to obtain the phisical address
>> of such a mapping.
>>
>> Add a new "create_hyp_percpu_mappings" to deal with those.
>>
>> *Fully untested, don't merge*
>>
>> Not-Even-Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>> arch/arm/include/asm/kvm_mmu.h | 1 +
>> arch/arm/kvm/arm.c | 2 +-
>> arch/arm/kvm/mmu.c | 20 ++++++++++++++++++++
>> arch/arm64/include/asm/kvm_mmu.h | 1 +
>> 4 files changed, 23 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
>> index 9b28c41..6dcb9ff 100644
>> --- a/arch/arm/include/asm/kvm_mmu.h
>> +++ b/arch/arm/include/asm/kvm_mmu.h
>> @@ -43,6 +43,7 @@
>> #include <asm/pgalloc.h>
>>
>> int create_hyp_mappings(void *from, void *to);
>> +int create_hyp_percpu_mappings(void *from, void *to);
>> int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
>> void free_boot_hyp_pgd(void);
>> void free_hyp_pgds(void);
>> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
>> index 9c697db..6191960 100644
>> --- a/arch/arm/kvm/arm.c
>> +++ b/arch/arm/kvm/arm.c
>> @@ -911,7 +911,7 @@ static int init_hyp_mode(void)
>> kvm_cpu_context_t *cpu_ctxt;
>>
>> cpu_ctxt = per_cpu_ptr(kvm_host_cpu_state, cpu);
>> - err = create_hyp_mappings(cpu_ctxt, cpu_ctxt + 1);
>> + err = create_hyp_percpu_mappings(cpu_ctxt, cpu_ctxt + 1);
>>
>> if (err) {
>> kvm_err("Cannot map host CPU state: %d\n", err);
>> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
>> index b0de86b..e509718 100644
>> --- a/arch/arm/kvm/mmu.c
>> +++ b/arch/arm/kvm/mmu.c
>> @@ -331,6 +331,26 @@ int create_hyp_mappings(void *from, void *to)
>> }
>>
>> /**
>> + * create_hyp_percpu_mappings - duplicate a percpu kernel virtual address
>> + * range in Hyp mode
>> + * @from: The virtual kernel start address of the range
>> + * @to: The virtual kernel end address of the range (exclusive)
>> + *
>> + * The same virtual address as the kernel virtual address is also used
>> + * in Hyp-mode mapping (modulo HYP_PAGE_OFFSET) to the same underlying
>> + * physical pages. It *has* to be a percpu pointer.
>> + */
>> +int create_hyp_percpu_mappings(void *from, void *to)
>> +{
>> + unsigned long phys_addr = per_cpu_ptr_to_phys(from);
> phys_addr_t phys_addr = per_cpu_ptr_to_phys(from);
Yeah, of course... ;-)
> With this change things work as expected without $subject patch.
> Thanks for the patch.
Good. I'll respin another version with support for allocations
straddling multiple pages and post it ASAP.
Thanks for testing!
M.
--
Jazz is not dead. It just smells funny...
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH 1/6] arm64: mm: Add __virt_to_idmap() to keep kvm build happy
2013-11-15 15:05 ` Marc Zyngier
@ 2013-11-15 15:25 ` Santosh Shilimkar
2013-11-15 15:29 ` Marc Zyngier
0 siblings, 1 reply; 41+ messages in thread
From: Santosh Shilimkar @ 2013-11-15 15:25 UTC (permalink / raw)
To: linux-arm-kernel
On Friday 15 November 2013 10:05 AM, Marc Zyngier wrote:
> On 14/11/13 19:37, Santosh Shilimkar wrote:
>> ARM kvm code will make use of __virt_to_idmap() on arm32
>> machines as hardware interconnect supported alias of physical
>> memory for idmap purposes. The same code is shared with arm64
>> bit and hence will break the builds. So we add __virt_to_idmap()
>> which is just __virt_to_phys() on arm64 bit to keep build happy.
>>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@linaro.org>
>>
>> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
>> ---
>> arch/arm64/include/asm/memory.h | 8 ++++++++
>> 1 file changed, 8 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
>> index 3776217..d9341ee 100644
>> --- a/arch/arm64/include/asm/memory.h
>> +++ b/arch/arm64/include/asm/memory.h
>> @@ -75,6 +75,14 @@
>> #define __phys_to_virt(x) ((unsigned long)((x) - PHYS_OFFSET + PAGE_OFFSET))
>>
>> /*
>> + * Added to keep arm64 kvm build working which shares code with
>> + * 32bit port. ARM kvm code makes use of __virt_to_idmap() on arm32
>> + * machines as hardware interconnect supported alias of physical
>> + * memory for idmap purposes.
>> + */
>> +#define virt_to_idmap(x) __virt_to_phys(x)
>> +
>> +/*
>> * Convert a physical address to a Page Frame Number and back
>> */
>> #define __phys_to_pfn(paddr) ((unsigned long)((paddr) >> PAGE_SHIFT))
>>
>
> I'd rather have a kvm_virt_to_phys() in kvm_mmu.h. That's how we've
> dealt with that kind of difference so far.
>
Are you suggesting something like below ?
---
arch/arm/include/asm/kvm_mmu.h | 1 +
arch/arm/kvm/mmu.c | 8 ++++----
arch/arm64/include/asm/kvm_mmu.h | 1 +
3 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 9b28c41..fd90efa 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -129,6 +129,7 @@ static inline void coherent_icache_guest_page(struct kvm *kvm, gfn_t gfn)
}
#define kvm_flush_dcache_to_poc(a,l) __cpuc_flush_dcache_area((a), (l))
+#define kvm_virt_to_phys(x) virt_to_idmap((unsigned long)(x))
#endif /* !__ASSEMBLY__ */
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index b0de86b..071e535 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -747,9 +747,9 @@ int kvm_mmu_init(void)
{
int err;
- hyp_idmap_start = virt_to_phys(__hyp_idmap_text_start);
- hyp_idmap_end = virt_to_phys(__hyp_idmap_text_end);
- hyp_idmap_vector = virt_to_phys(__kvm_hyp_init);
+ hyp_idmap_start = kvm_virt_to_phys(__hyp_idmap_text_start);
+ hyp_idmap_end = kvm_virt_to_phys(__hyp_idmap_text_end);
+ hyp_idmap_vector = kvm_virt_to_phys(__kvm_hyp_init);
if ((hyp_idmap_start ^ hyp_idmap_end) & PAGE_MASK) {
/*
@@ -776,7 +776,7 @@ int kvm_mmu_init(void)
*/
kvm_flush_dcache_to_poc(init_bounce_page, len);
- phys_base = virt_to_phys(init_bounce_page);
+ phys_base = kvm_virt_to_phys(init_bounce_page);
hyp_idmap_vector += phys_base - hyp_idmap_start;
hyp_idmap_start = phys_base;
hyp_idmap_end = phys_base + len;
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index efe609c..9ce7c88 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -130,6 +130,7 @@ static inline void coherent_icache_guest_page(struct kvm *kvm, gfn_t gfn)
}
#define kvm_flush_dcache_to_poc(a,l) __flush_dcache_area((a), (l))
+#define kvm_virt_to_phys(x) __virt_to_phys((unsigned long)(x))
#endif /* __ASSEMBLY__ */
#endif /* __ARM64_KVM_MMU_H__ */
--
1.7.9.5
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH 1/6] arm64: mm: Add __virt_to_idmap() to keep kvm build happy
2013-11-15 15:25 ` Santosh Shilimkar
@ 2013-11-15 15:29 ` Marc Zyngier
2013-11-15 15:31 ` Catalin Marinas
0 siblings, 1 reply; 41+ messages in thread
From: Marc Zyngier @ 2013-11-15 15:29 UTC (permalink / raw)
To: linux-arm-kernel
On 15/11/13 15:25, Santosh Shilimkar wrote:
> On Friday 15 November 2013 10:05 AM, Marc Zyngier wrote:
>> On 14/11/13 19:37, Santosh Shilimkar wrote:
>>> ARM kvm code will make use of __virt_to_idmap() on arm32
>>> machines as hardware interconnect supported alias of physical
>>> memory for idmap purposes. The same code is shared with arm64
>>> bit and hence will break the builds. So we add __virt_to_idmap()
>>> which is just __virt_to_phys() on arm64 bit to keep build happy.
>>>
>>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>>> Cc: Will Deacon <will.deacon@arm.com>
>>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>>> Cc: Christoffer Dall <christoffer.dall@linaro.org>
>>>
>>> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
>>> ---
>>> arch/arm64/include/asm/memory.h | 8 ++++++++
>>> 1 file changed, 8 insertions(+)
>>>
>>> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
>>> index 3776217..d9341ee 100644
>>> --- a/arch/arm64/include/asm/memory.h
>>> +++ b/arch/arm64/include/asm/memory.h
>>> @@ -75,6 +75,14 @@
>>> #define __phys_to_virt(x) ((unsigned long)((x) - PHYS_OFFSET + PAGE_OFFSET))
>>>
>>> /*
>>> + * Added to keep arm64 kvm build working which shares code with
>>> + * 32bit port. ARM kvm code makes use of __virt_to_idmap() on arm32
>>> + * machines as hardware interconnect supported alias of physical
>>> + * memory for idmap purposes.
>>> + */
>>> +#define virt_to_idmap(x) __virt_to_phys(x)
>>> +
>>> +/*
>>> * Convert a physical address to a Page Frame Number and back
>>> */
>>> #define __phys_to_pfn(paddr) ((unsigned long)((paddr) >> PAGE_SHIFT))
>>>
>>
>> I'd rather have a kvm_virt_to_phys() in kvm_mmu.h. That's how we've
>> dealt with that kind of difference so far.
>>
> Are you suggesting something like below ?
Yes, I like it a lot more.
Catalin, what do you think?
M.
>
> ---
> arch/arm/include/asm/kvm_mmu.h | 1 +
> arch/arm/kvm/mmu.c | 8 ++++----
> arch/arm64/include/asm/kvm_mmu.h | 1 +
> 3 files changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
> index 9b28c41..fd90efa 100644
> --- a/arch/arm/include/asm/kvm_mmu.h
> +++ b/arch/arm/include/asm/kvm_mmu.h
> @@ -129,6 +129,7 @@ static inline void coherent_icache_guest_page(struct kvm *kvm, gfn_t gfn)
> }
>
> #define kvm_flush_dcache_to_poc(a,l) __cpuc_flush_dcache_area((a), (l))
> +#define kvm_virt_to_phys(x) virt_to_idmap((unsigned long)(x))
>
> #endif /* !__ASSEMBLY__ */
>
> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> index b0de86b..071e535 100644
> --- a/arch/arm/kvm/mmu.c
> +++ b/arch/arm/kvm/mmu.c
> @@ -747,9 +747,9 @@ int kvm_mmu_init(void)
> {
> int err;
>
> - hyp_idmap_start = virt_to_phys(__hyp_idmap_text_start);
> - hyp_idmap_end = virt_to_phys(__hyp_idmap_text_end);
> - hyp_idmap_vector = virt_to_phys(__kvm_hyp_init);
> + hyp_idmap_start = kvm_virt_to_phys(__hyp_idmap_text_start);
> + hyp_idmap_end = kvm_virt_to_phys(__hyp_idmap_text_end);
> + hyp_idmap_vector = kvm_virt_to_phys(__kvm_hyp_init);
>
> if ((hyp_idmap_start ^ hyp_idmap_end) & PAGE_MASK) {
> /*
> @@ -776,7 +776,7 @@ int kvm_mmu_init(void)
> */
> kvm_flush_dcache_to_poc(init_bounce_page, len);
>
> - phys_base = virt_to_phys(init_bounce_page);
> + phys_base = kvm_virt_to_phys(init_bounce_page);
> hyp_idmap_vector += phys_base - hyp_idmap_start;
> hyp_idmap_start = phys_base;
> hyp_idmap_end = phys_base + len;
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index efe609c..9ce7c88 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -130,6 +130,7 @@ static inline void coherent_icache_guest_page(struct kvm *kvm, gfn_t gfn)
> }
>
> #define kvm_flush_dcache_to_poc(a,l) __flush_dcache_area((a), (l))
> +#define kvm_virt_to_phys(x) __virt_to_phys((unsigned long)(x))
>
> #endif /* __ASSEMBLY__ */
> #endif /* __ARM64_KVM_MMU_H__ */
>
--
Jazz is not dead. It just smells funny...
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH 1/6] arm64: mm: Add __virt_to_idmap() to keep kvm build happy
2013-11-15 15:29 ` Marc Zyngier
@ 2013-11-15 15:31 ` Catalin Marinas
2013-11-15 15:33 ` Santosh Shilimkar
0 siblings, 1 reply; 41+ messages in thread
From: Catalin Marinas @ 2013-11-15 15:31 UTC (permalink / raw)
To: linux-arm-kernel
On Fri, Nov 15, 2013 at 03:29:52PM +0000, Marc Zyngier wrote:
> On 15/11/13 15:25, Santosh Shilimkar wrote:
> > On Friday 15 November 2013 10:05 AM, Marc Zyngier wrote:
> >> On 14/11/13 19:37, Santosh Shilimkar wrote:
> >>> ARM kvm code will make use of __virt_to_idmap() on arm32
> >>> machines as hardware interconnect supported alias of physical
> >>> memory for idmap purposes. The same code is shared with arm64
> >>> bit and hence will break the builds. So we add __virt_to_idmap()
> >>> which is just __virt_to_phys() on arm64 bit to keep build happy.
> >>>
> >>> Cc: Catalin Marinas <catalin.marinas@arm.com>
> >>> Cc: Will Deacon <will.deacon@arm.com>
> >>> Cc: Marc Zyngier <marc.zyngier@arm.com>
> >>> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> >>>
> >>> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
> >>> ---
> >>> arch/arm64/include/asm/memory.h | 8 ++++++++
> >>> 1 file changed, 8 insertions(+)
> >>>
> >>> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> >>> index 3776217..d9341ee 100644
> >>> --- a/arch/arm64/include/asm/memory.h
> >>> +++ b/arch/arm64/include/asm/memory.h
> >>> @@ -75,6 +75,14 @@
> >>> #define __phys_to_virt(x) ((unsigned long)((x) - PHYS_OFFSET + PAGE_OFFSET))
> >>>
> >>> /*
> >>> + * Added to keep arm64 kvm build working which shares code with
> >>> + * 32bit port. ARM kvm code makes use of __virt_to_idmap() on arm32
> >>> + * machines as hardware interconnect supported alias of physical
> >>> + * memory for idmap purposes.
> >>> + */
> >>> +#define virt_to_idmap(x) __virt_to_phys(x)
> >>> +
> >>> +/*
> >>> * Convert a physical address to a Page Frame Number and back
> >>> */
> >>> #define __phys_to_pfn(paddr) ((unsigned long)((paddr) >> PAGE_SHIFT))
> >>>
> >>
> >> I'd rather have a kvm_virt_to_phys() in kvm_mmu.h. That's how we've
> >> dealt with that kind of difference so far.
> >>
> > Are you suggesting something like below ?
>
> Yes, I like it a lot more.
>
> Catalin, what do you think?
It looks better ;)
--
Catalin
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH 1/6] arm64: mm: Add __virt_to_idmap() to keep kvm build happy
2013-11-15 14:58 ` Catalin Marinas
@ 2013-11-15 15:31 ` Santosh Shilimkar
0 siblings, 0 replies; 41+ messages in thread
From: Santosh Shilimkar @ 2013-11-15 15:31 UTC (permalink / raw)
To: linux-arm-kernel
On Friday 15 November 2013 09:58 AM, Catalin Marinas wrote:
> On Fri, Nov 15, 2013 at 02:55:50PM +0000, Santosh Shilimkar wrote:
>> On Friday 15 November 2013 08:34 AM, Catalin Marinas wrote:
>>> On Thu, Nov 14, 2013 at 07:37:41PM +0000, Santosh Shilimkar wrote:
>>>> /*
>>>> + * Added to keep arm64 kvm build working which shares code with
>>>> + * 32bit port. ARM kvm code makes use of __virt_to_idmap() on arm32
>>>> + * machines as hardware interconnect supported alias of physical
>>>> + * memory for idmap purposes.
>>>> + */
>>>> +#define virt_to_idmap(x) __virt_to_phys(x)
>>>
>>> There are consistency issues with the use of underscores before
>>> __virt_to_idmap. And I think you can simply say something like "Required
>>> by ARM KVM code". The explanation on why really belongs to the 32-bit
>>> arm code.
>>>
>> Will update the comment as you suggested. Do you want me to rename the
>> macros as well. I need to do update some more code for that.
>
> The macro name should be whatever is in arch/arm/ but the comment and
> the macro disagree here.
>
Damn ... I should have read it more carefully. The comment is using the
wrong name. Sorry.
regards,
Santosh
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH 1/6] arm64: mm: Add __virt_to_idmap() to keep kvm build happy
2013-11-15 15:31 ` Catalin Marinas
@ 2013-11-15 15:33 ` Santosh Shilimkar
0 siblings, 0 replies; 41+ messages in thread
From: Santosh Shilimkar @ 2013-11-15 15:33 UTC (permalink / raw)
To: linux-arm-kernel
On Friday 15 November 2013 10:31 AM, Catalin Marinas wrote:
> On Fri, Nov 15, 2013 at 03:29:52PM +0000, Marc Zyngier wrote:
>> On 15/11/13 15:25, Santosh Shilimkar wrote:
>>> On Friday 15 November 2013 10:05 AM, Marc Zyngier wrote:
>>>> On 14/11/13 19:37, Santosh Shilimkar wrote:
>>>>> ARM kvm code will make use of __virt_to_idmap() on arm32
>>>>> machines as hardware interconnect supported alias of physical
>>>>> memory for idmap purposes. The same code is shared with arm64
>>>>> bit and hence will break the builds. So we add __virt_to_idmap()
>>>>> which is just __virt_to_phys() on arm64 bit to keep build happy.
>>>>>
>>>>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>>>>> Cc: Will Deacon <will.deacon@arm.com>
>>>>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>>>>> Cc: Christoffer Dall <christoffer.dall@linaro.org>
>>>>>
>>>>> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
>>>>> ---
>>>>> arch/arm64/include/asm/memory.h | 8 ++++++++
>>>>> 1 file changed, 8 insertions(+)
>>>>>
>>>>> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
>>>>> index 3776217..d9341ee 100644
>>>>> --- a/arch/arm64/include/asm/memory.h
>>>>> +++ b/arch/arm64/include/asm/memory.h
>>>>> @@ -75,6 +75,14 @@
>>>>> #define __phys_to_virt(x) ((unsigned long)((x) - PHYS_OFFSET + PAGE_OFFSET))
>>>>>
>>>>> /*
>>>>> + * Added to keep arm64 kvm build working which shares code with
>>>>> + * 32bit port. ARM kvm code makes use of __virt_to_idmap() on arm32
>>>>> + * machines as hardware interconnect supported alias of physical
>>>>> + * memory for idmap purposes.
>>>>> + */
>>>>> +#define virt_to_idmap(x) __virt_to_phys(x)
>>>>> +
>>>>> +/*
>>>>> * Convert a physical address to a Page Frame Number and back
>>>>> */
>>>>> #define __phys_to_pfn(paddr) ((unsigned long)((paddr) >> PAGE_SHIFT))
>>>>>
>>>>
>>>> I'd rather have a kvm_virt_to_phys() in kvm_mmu.h. That's how we've
>>>> dealt with that kind of difference so far.
>>>>
>>> Are you suggesting something like below ?
>>
>> Yes, I like it a lot more.
>>
>> Catalin, what do you think?
>
> It looks better ;)
>
OK then. I will use this approach in my refreshed version. I still need
to get around the other lingering issue with guest userspace corruption.
Regards,
Santosh
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH 3/6] ARM: mm: Drop the lowmem watermark check from virt_addr_valid()
2013-11-15 14:20 ` Russell King - ARM Linux
@ 2013-11-15 15:40 ` Santosh Shilimkar
0 siblings, 0 replies; 41+ messages in thread
From: Santosh Shilimkar @ 2013-11-15 15:40 UTC (permalink / raw)
To: linux-arm-kernel
On Friday 15 November 2013 09:20 AM, Russell King - ARM Linux wrote:
> On Thu, Nov 14, 2013 at 02:37:43PM -0500, Santosh Shilimkar wrote:
>> Slab allocator can allocate memory beyond the lowmem watermark
>> which can lead to false failure of virt_addr_valid().
>
> This is definitely going to cause problems.
>
>> So drop the check. The issue was seen with percpu_alloc()
>> in KVM code which was allocating memory beyond lowmem watermark.
>>
>> Am not completly sure whether this is the right fix and if it could
>> impact any other user of virt_addr_valid(). Without this fix as
>> pointed out the KVM init was failing in my testing.
>
> virt_addr_valid() gets used in some places to check whether the virtual
> address is part of the lowmem mapping and explicitly not part of vmalloc()
> or DMA coherent space:
>
> drivers/mtd/nand/gpmi-nand/gpmi-nand.c
> drivers/net/wireless/ath/ath6kl/sdio.c
>
> It opens up the checks in include/linux/scatterlist.h to start accepting
> non-streaming DMA capable buffers as well.
>
> It also bypasses a check in the slab code to ensure that it's a potentially
> valid pointer that was handed out by slab.
>
Thanks for the additional information, Russell. We are going with an alternate
approach from Marc, so this patch will be dropped anyway.
Regards,
Santosh
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH 3/6] ARM: mm: Drop the lowmem watermark check from virt_addr_valid()
2013-11-15 15:08 ` Marc Zyngier
@ 2013-11-15 15:46 ` Christoffer Dall
0 siblings, 0 replies; 41+ messages in thread
From: Christoffer Dall @ 2013-11-15 15:46 UTC (permalink / raw)
To: linux-arm-kernel
On Fri, Nov 15, 2013 at 03:08:13PM +0000, Marc Zyngier wrote:
> On 15/11/13 14:55, Santosh Shilimkar wrote:
> > On Friday 15 November 2013 06:43 AM, Marc Zyngier wrote:
> >> On 15/11/13 00:22, Christoffer Dall wrote:
> >>>> On Thu, Nov 14, 2013 at 02:37:43PM -0500, Santosh Shilimkar wrote:
> >>>>>> Slab allocator can allocate memory beyond the lowmem watermark
> >>>>>> which can lead to false failure of virt_addr_valid().
> >>>>>>
> >>>>>> So drop the check. The issue was seen with percpu_alloc()
> >>>>>> in KVM code which was allocating memory beyond lowmem watermark.
> >>>>>>
> >>>>>> Am not completly sure whether this is the right fix and if it could
> >>>>>> impact any other user of virt_addr_valid(). Without this fix as
> >>>>>> pointed out the KVM init was failing in my testing.
> >>>>>>
> >>>>>> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> >>>>>> Cc: Marc Zyngier <marc.zyngier@arm.com>
> >>>>>> Cc: Russell King <linux@arm.linux.org.uk>
> >>>>>> Cc: Catalin Marinas <catalin.marinas@arm.com>
> >>>>>> Cc: Will Deacon <will.deacon@arm.com>
> >>>>>>
> >>>>>> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
> >>>>>> ---
> >>>>>> arch/arm/include/asm/memory.h | 3 +--
> >>>>>> 1 file changed, 1 insertion(+), 2 deletions(-)
> >>>>>>
> >>>>>> diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
> >>>>>> index 4dd2145..412da47 100644
> >>>>>> --- a/arch/arm/include/asm/memory.h
> >>>>>> +++ b/arch/arm/include/asm/memory.h
> >>>>>> @@ -343,8 +343,7 @@ static inline __deprecated void *bus_to_virt(unsigned long x)
> >>>>>> #define ARCH_PFN_OFFSET PHYS_PFN_OFFSET
> >>>>>>
> >>>>>> #define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
> >>>>>> -#define virt_addr_valid(kaddr) ((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory)
> >>>>>> -
> >>>>>> +#define virt_addr_valid(kaddr) ((unsigned long)(kaddr) >= PAGE_OFFSET)
> >>>>>> #endif
> >>>>>>
> >>>>>> #include <asm-generic/memory_model.h>
> >>>>>> --
> >>>>>> 1.7.9.5
> >>>>>>
> >>>>
> >>>> This looks wrong to me. Check Documentation/arm/memory.txt, this would
> >>>> return true for the VMALLOC region, which would cause virt_to_phys to
> >>>> give you something invalid, which would be bad.
> >>>>
> >>>> We use the check in create_hyp_mappings to be sure that the physical
> >>>> address returned by virt_to_phys is valid and that if we're mapping more
> >>>> than one page that those pages are physically contiguous.
> >>>>
> >>>> So if you want to get rid of this check, you need to change the mapping
> >>>> functionality to obtain the physical address by walking the page table
> >>>> mappings for each page that you are mapping instead. Or limit each call
> >>>> to a single page in size and take the physical address as input and use
> >>>> per_cpu_ptr_to_phys at the caller side instead.
> >>>>
> >>>> Alternatively, we need to get rid of alloc_percpu and use regular
> >>>> kmallocs instead, unless anyone else knows of an even better way.
> >> alloc_percpu has nice properties (cache locality, mostly).
> >>
> >> One way out of it would be to give percpu stuff a special treatment. Can
> >> you try the attach patch as a first approximation? It needs more
> >> refinements (allocations straddling two pages?), but I think that's the
> >> right sort of things.
> >>
> >> Let me know how it works for you.
> >>
> > Host boots bug guest fails. Patch needs small update as mentioned
> > with inline patch.
> >
> >> From 01bc1c8eaebdd70b1ea044050144b9bfb3375f82 Mon Sep 17 00:00:00 2001
> >> From: Marc Zyngier <marc.zyngier@arm.com>
> >> Date: Fri, 15 Nov 2013 11:36:36 +0000
> >> Subject: [PATCH] arm/arm64: KVM: introduce new mapping API for percpu mappings
> >>
> >> Using virt_to_phys on percpu mappings is horribly wrong (my own bad).
> >> Thankfully, the kernel offers a way to obtain the phisical address
> >> of such a mapping.
> >>
> >> Add a new "create_hyp_percpu_mappings" to deal with those.
> >>
> >> *Fully untested, don't merge*
> >>
> >> Not-Even-Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> >> ---
> >> arch/arm/include/asm/kvm_mmu.h | 1 +
> >> arch/arm/kvm/arm.c | 2 +-
> >> arch/arm/kvm/mmu.c | 20 ++++++++++++++++++++
> >> arch/arm64/include/asm/kvm_mmu.h | 1 +
> >> 4 files changed, 23 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
> >> index 9b28c41..6dcb9ff 100644
> >> --- a/arch/arm/include/asm/kvm_mmu.h
> >> +++ b/arch/arm/include/asm/kvm_mmu.h
> >> @@ -43,6 +43,7 @@
> >> #include <asm/pgalloc.h>
> >>
> >> int create_hyp_mappings(void *from, void *to);
> >> +int create_hyp_percpu_mappings(void *from, void *to);
> >> int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
> >> void free_boot_hyp_pgd(void);
> >> void free_hyp_pgds(void);
> >> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> >> index 9c697db..6191960 100644
> >> --- a/arch/arm/kvm/arm.c
> >> +++ b/arch/arm/kvm/arm.c
> >> @@ -911,7 +911,7 @@ static int init_hyp_mode(void)
> >> kvm_cpu_context_t *cpu_ctxt;
> >>
> >> cpu_ctxt = per_cpu_ptr(kvm_host_cpu_state, cpu);
> >> - err = create_hyp_mappings(cpu_ctxt, cpu_ctxt + 1);
> >> + err = create_hyp_percpu_mappings(cpu_ctxt, cpu_ctxt + 1);
> >>
> >> if (err) {
> >> kvm_err("Cannot map host CPU state: %d\n", err);
> >> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> >> index b0de86b..e509718 100644
> >> --- a/arch/arm/kvm/mmu.c
> >> +++ b/arch/arm/kvm/mmu.c
> >> @@ -331,6 +331,26 @@ int create_hyp_mappings(void *from, void *to)
> >> }
> >>
> >> /**
> >> + * create_hyp_percpu_mappings - duplicate a percpu kernel virtual address
> >> + * range in Hyp mode
> >> + * @from: The virtual kernel start address of the range
> >> + * @to: The virtual kernel end address of the range (exclusive)
> >> + *
> >> + * The same virtual address as the kernel virtual address is also used
> >> + * in Hyp-mode mapping (modulo HYP_PAGE_OFFSET) to the same underlying
> >> + * physical pages. It *has* to be a percpu pointer.
> >> + */
> >> +int create_hyp_percpu_mappings(void *from, void *to)
> >> +{
> >> + unsigned long phys_addr = per_cpu_ptr_to_phys(from);
> > phys_addr_t phys_addr = per_cpu_ptr_to_phys(from);
>
> Yeah, of course... ;-)
>
> > With this change things work as expected without $subject patch.
> > Thanks for the patch.
>
> Good. I'll respin another version with support for allocations
> straddling multiple pages and post it ASAP.
>
Marc, hold on, can't we just make create_hyp_mappings more generic?
I think it would be much cleaner to either:
1) use the existing function, but take a physical address and let the
caller figure that part out, and limit mappings to a single page, or
2) make create_hyp_mappings handle the full thing: check if the addr can
be translated with virt_to_phys and otherwise do
page_to_phys(vmalloc_to_page(addr)), and handle cross-page mappings
(see the sketch after this message).
Basically it's what the tail end of per_cpu_ptr_to_phys does, only more
generically for any allocation.
-Christoffer
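A minimal sketch of what option 2 could look like (illustrative only; the
helper name is made up here and this is not Christoffer's or Marc's actual
code):

static phys_addr_t hyp_kaddr_to_phys(void *kaddr)
{
	if (!is_vmalloc_addr(kaddr)) {
		/* lowmem pointer: plain linear-map arithmetic is fine */
		BUG_ON(!virt_addr_valid(kaddr));
		return __pa(kaddr);
	}

	/* vmalloc/percpu pointer: resolve via the backing page */
	return page_to_phys(vmalloc_to_page(kaddr)) + offset_in_page(kaddr);
}

create_hyp_mappings() would then call something like this once per page
while walking the range, which also takes care of allocations straddling
page boundaries.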
^ permalink raw reply [flat|nested] 41+ messages in thread
end of thread, other threads:[~2013-11-15 15:46 UTC | newest]
Thread overview: 41+ messages; links below jump to the message on this page:
2013-11-14 19:37 [PATCH 0/6] ARM/AMR64: Few patches on kvm and mm code Santosh Shilimkar
2013-11-14 19:37 ` [PATCH 1/6] arm64: mm: Add __virt_to_idmap() to keep kvm build happy Santosh Shilimkar
2013-11-15 13:34 ` Catalin Marinas
2013-11-15 14:55 ` Santosh Shilimkar
2013-11-15 14:58 ` Catalin Marinas
2013-11-15 15:31 ` Santosh Shilimkar
2013-11-15 15:05 ` Marc Zyngier
2013-11-15 15:25 ` Santosh Shilimkar
2013-11-15 15:29 ` Marc Zyngier
2013-11-15 15:31 ` Catalin Marinas
2013-11-15 15:33 ` Santosh Shilimkar
2013-11-14 19:37 ` [PATCH 2/6] ARM: kvm: Use virt_to_idmap instead of virt_to_phys for idmap mappings Santosh Shilimkar
2013-11-15 0:12 ` Christoffer Dall
2013-11-14 19:37 ` [PATCH 3/6] ARM: mm: Drop the lowmem watermark check from virt_addr_valid() Santosh Shilimkar
2013-11-15 0:22 ` Christoffer Dall
2013-11-15 0:31 ` Santosh Shilimkar
2013-11-15 0:40 ` Christoffer Dall
2013-11-15 11:48 ` Marc Zyngier
2013-11-15 11:43 ` Marc Zyngier
2013-11-15 14:55 ` Santosh Shilimkar
2013-11-15 15:08 ` Marc Zyngier
2013-11-15 15:46 ` Christoffer Dall
2013-11-15 14:20 ` Russell King - ARM Linux
2013-11-15 15:40 ` Santosh Shilimkar
2013-11-14 19:37 ` [PATCH 4/6] arm64: " Santosh Shilimkar
2013-11-15 13:39 ` Catalin Marinas
2013-11-15 14:25 ` Marc Zyngier
2013-11-15 14:57 ` Santosh Shilimkar
2013-11-14 19:37 ` [PATCH 5/6] ARM: kvm: Use phys_addr_t instead of unsigned long in mm code Santosh Shilimkar
2013-11-15 0:09 ` Christoffer Dall
2013-11-15 0:10 ` Santosh Shilimkar
2013-11-15 11:58 ` Marc Zyngier
2013-11-14 19:37 ` [PATCH 6/6] ARM: kvm: TMP: Commit the hyp page tables to main memory Santosh Shilimkar
2013-11-14 22:36 ` Santosh Shilimkar
2013-11-15 0:11 ` Christoffer Dall
2013-11-15 0:15 ` Santosh Shilimkar
2013-11-15 0:27 ` Christoffer Dall
2013-11-15 0:36 ` Santosh Shilimkar
2013-11-15 0:42 ` Christoffer Dall
2013-11-15 1:19 ` Santosh Shilimkar
2013-11-15 1:35 ` Christoffer Dall