* [PATCH 0/5] fix illegal use of __pa() in KVM code
@ 2013-01-21 17:52 Dave Hansen
2013-01-21 17:52 ` [PATCH 1/5] make DEBUG_VIRTUAL work earlier in boot Dave Hansen
` (4 more replies)
0 siblings, 5 replies; 14+ messages in thread
From: Dave Hansen @ 2013-01-21 17:52 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, Gleb Natapov, H. Peter Anvin, x86, Marcelo Tosatti,
Rik van Riel, Dave Hansen
This series fixes a hard-to-debug early boot hang on 32-bit
NUMA systems. It adds coverage to the debugging code,
adds some helpers, and eventually fixes the original bug I
was hitting.
[v2]
* Moved DEBUG_VIRTUAL patch earlier in the series (it has no
dependencies on anything else and stands on its own).
* Created page_level_*() helpers to replace a nasty switch()
at hpa's suggestion
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
* [PATCH 1/5] make DEBUG_VIRTUAL work earlier in boot
2013-01-21 17:52 [PATCH 0/5] fix illegal use of __pa() in KVM code Dave Hansen
@ 2013-01-21 17:52 ` Dave Hansen
2013-01-21 17:52 ` [PATCH 2/5] pagetable level size/shift/mask helpers Dave Hansen
` (3 subsequent siblings)
4 siblings, 0 replies; 14+ messages in thread
From: Dave Hansen @ 2013-01-21 17:52 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, Gleb Natapov, H. Peter Anvin, x86, Marcelo Tosatti,
Rik van Riel, Dave Hansen
The KVM code repeatedly misuses __pa() on per-cpu data.
Those data are not in an area where __pa() is valid.
However, the calls also happen early enough in boot that
__vmalloc_start_set is not yet set, and thus the
CONFIG_DEBUG_VIRTUAL debugging does not catch them.
This adds a check to also verify __pa() calls against max_low_pfn,
which we can use earlier in boot than is_vmalloc_addr(). However,
if we are super-early in boot, max_low_pfn=0 and this will trip
on every call, so also make sure that max_low_pfn is set before
we try to use it.
With this patch applied, CONFIG_DEBUG_VIRTUAL will actually
catch the bug I was chasing (and fixed later in this series).
I'd love to find a generic way so that any __pa() call on percpu
areas could do a BUG_ON(), but there don't appear to be any nice
and easy ways to check if an address is a percpu one. Anybody
have ideas on a way to do this?
Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
---
linux-2.6.git-dave/arch/x86/mm/numa.c | 2 +-
linux-2.6.git-dave/arch/x86/mm/pat.c | 4 ++--
linux-2.6.git-dave/arch/x86/mm/physaddr.c | 9 ++++++++-
3 files changed, 11 insertions(+), 4 deletions(-)
diff -puN arch/x86/mm/numa.c~make-DEBUG_VIRTUAL-work-earlier-in-boot arch/x86/mm/numa.c
--- linux-2.6.git/arch/x86/mm/numa.c~make-DEBUG_VIRTUAL-work-earlier-in-boot 2013-01-17 10:22:25.614425502 -0800
+++ linux-2.6.git-dave/arch/x86/mm/numa.c 2013-01-17 10:22:25.622425572 -0800
@@ -219,7 +219,7 @@ static void __init setup_node_data(int n
*/
nd = alloc_remap(nid, nd_size);
if (nd) {
- nd_pa = __pa(nd);
+ nd_pa = __phys_addr_nodebug(nd);
remapped = true;
} else {
nd_pa = memblock_alloc_nid(nd_size, SMP_CACHE_BYTES, nid);
diff -puN arch/x86/mm/pat.c~make-DEBUG_VIRTUAL-work-earlier-in-boot arch/x86/mm/pat.c
--- linux-2.6.git/arch/x86/mm/pat.c~make-DEBUG_VIRTUAL-work-earlier-in-boot 2013-01-17 10:22:25.614425502 -0800
+++ linux-2.6.git-dave/arch/x86/mm/pat.c 2013-01-17 10:22:25.622425572 -0800
@@ -560,10 +560,10 @@ int kernel_map_sync_memtype(u64 base, un
{
unsigned long id_sz;
- if (base >= __pa(high_memory))
+ if (base > __pa(high_memory-1))
return 0;
- id_sz = (__pa(high_memory) < base + size) ?
+ id_sz = (__pa(high_memory-1) <= base + size) ?
__pa(high_memory) - base :
size;
diff -puN arch/x86/mm/physaddr.c~make-DEBUG_VIRTUAL-work-earlier-in-boot arch/x86/mm/physaddr.c
--- linux-2.6.git/arch/x86/mm/physaddr.c~make-DEBUG_VIRTUAL-work-earlier-in-boot 2013-01-17 10:22:25.618425536 -0800
+++ linux-2.6.git-dave/arch/x86/mm/physaddr.c 2013-01-17 10:22:25.622425572 -0800
@@ -1,3 +1,4 @@
+#include <linux/bootmem.h>
#include <linux/mmdebug.h>
#include <linux/module.h>
#include <linux/mm.h>
@@ -47,10 +48,16 @@ EXPORT_SYMBOL(__virt_addr_valid);
#ifdef CONFIG_DEBUG_VIRTUAL
unsigned long __phys_addr(unsigned long x)
{
+ unsigned long phys_addr = x - PAGE_OFFSET;
/* VMALLOC_* aren't constants */
VIRTUAL_BUG_ON(x < PAGE_OFFSET);
VIRTUAL_BUG_ON(__vmalloc_start_set && is_vmalloc_addr((void *) x));
- return x - PAGE_OFFSET;
+ /* max_low_pfn is set early, but not _that_ early */
+ if (max_low_pfn) {
+ VIRTUAL_BUG_ON((phys_addr >> PAGE_SHIFT) > max_low_pfn);
+ BUG_ON(slow_virt_to_phys((void *)x) != phys_addr);
+ }
+ return phys_addr;
}
EXPORT_SYMBOL(__phys_addr);
#endif
_
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH 2/5] pagetable level size/shift/mask helpers
2013-01-21 17:52 [PATCH 0/5] fix illegal use of __pa() in KVM code Dave Hansen
2013-01-21 17:52 ` [PATCH 1/5] make DEBUG_VIRTUAL work earlier in boot Dave Hansen
@ 2013-01-21 17:52 ` Dave Hansen
2013-01-21 17:52 ` [PATCH 3/5] use new pagetable helpers in try_preserve_large_page() Dave Hansen
` (2 subsequent siblings)
4 siblings, 0 replies; 14+ messages in thread
From: Dave Hansen @ 2013-01-21 17:52 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, Gleb Natapov, H. Peter Anvin, x86, Marcelo Tosatti,
Rik van Riel, Dave Hansen
I plan to use lookup_address() to walk the kernel pagetables
in a later patch. It returns a "pte" and the level in the
pagetables where the "pte" was found. The level is just an
enum and needs to be converted to a useful value in order to
do address calculations with it. These helpers will be used
in at least two places.
This also gives the anonymous enum a real name so that no one
gets confused about what they should be passing to these
helpers.
"PTE_SHIFT" was chosen for naming consistency with the other
pagetable levels (PGD/PUD/PMD_SHIFT).
Cc: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
---
linux-2.6.git-dave/arch/x86/include/asm/pgtable.h | 14 ++++++++++++++
linux-2.6.git-dave/arch/x86/include/asm/pgtable_types.h | 2 +-
2 files changed, 15 insertions(+), 1 deletion(-)
diff -puN arch/x86/include/asm/pgtable.h~pagetable-level-size-helpers arch/x86/include/asm/pgtable.h
--- linux-2.6.git/arch/x86/include/asm/pgtable.h~pagetable-level-size-helpers 2013-01-17 10:22:25.958428542 -0800
+++ linux-2.6.git-dave/arch/x86/include/asm/pgtable.h 2013-01-17 10:22:25.962428578 -0800
@@ -390,6 +390,7 @@ pte_t *populate_extra_pte(unsigned long
#ifndef __ASSEMBLY__
#include <linux/mm_types.h>
+#include <linux/log2.h>
static inline int pte_none(pte_t pte)
{
@@ -781,6 +782,19 @@ static inline void clone_pgd_range(pgd_t
memcpy(dst, src, count * sizeof(pgd_t));
}
+#define PTE_SHIFT ilog2(PTRS_PER_PTE)
+static inline int page_level_shift(enum pg_level level)
+{
+ return (PAGE_SHIFT - PTE_SHIFT) + level * PTE_SHIFT;
+}
+static inline unsigned long page_level_size(enum pg_level level)
+{
+ return 1UL << page_level_shift(level);
+}
+static inline unsigned long page_level_mask(enum pg_level level)
+{
+ return ~(page_level_size(level) - 1);
+}
#include <asm-generic/pgtable.h>
#endif /* __ASSEMBLY__ */
diff -puN arch/x86/include/asm/pgtable_types.h~pagetable-level-size-helpers arch/x86/include/asm/pgtable_types.h
--- linux-2.6.git/arch/x86/include/asm/pgtable_types.h~pagetable-level-size-helpers 2013-01-17 10:22:25.958428542 -0800
+++ linux-2.6.git-dave/arch/x86/include/asm/pgtable_types.h 2013-01-17 10:22:25.966428612 -0800
@@ -331,7 +331,7 @@ extern void native_pagetable_init(void);
struct seq_file;
extern void arch_report_meminfo(struct seq_file *m);
-enum {
+enum pg_level {
PG_LEVEL_NONE,
PG_LEVEL_4K,
PG_LEVEL_2M,
_
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH 3/5] use new pagetable helpers in try_preserve_large_page()
2013-01-21 17:52 [PATCH 0/5] fix illegal use of __pa() in KVM code Dave Hansen
2013-01-21 17:52 ` [PATCH 1/5] make DEBUG_VIRTUAL work earlier in boot Dave Hansen
2013-01-21 17:52 ` [PATCH 2/5] pagetable level size/shift/mask helpers Dave Hansen
@ 2013-01-21 17:52 ` Dave Hansen
2013-01-21 17:52 ` [PATCH 4/5] create slow_virt_to_phys() Dave Hansen
2013-01-21 17:52 ` [PATCH 5/5] fix kvm's use of __pa() on percpu areas Dave Hansen
4 siblings, 0 replies; 14+ messages in thread
From: Dave Hansen @ 2013-01-21 17:52 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, Gleb Natapov, H. Peter Anvin, x86, Marcelo Tosatti,
Rik van Riel, Dave Hansen
try_preserve_large_page() can be slightly simplified by using
the new page_level_*() helpers.
Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
---
linux-2.6.git-dave/arch/x86/mm/pageattr.c | 9 +++------
1 file changed, 3 insertions(+), 6 deletions(-)
diff -puN arch/x86/mm/pageattr.c~use-new-pagetable-helpers arch/x86/mm/pageattr.c
--- linux-2.6.git/arch/x86/mm/pageattr.c~use-new-pagetable-helpers 2013-01-17 10:22:26.282431407 -0800
+++ linux-2.6.git-dave/arch/x86/mm/pageattr.c 2013-01-17 10:22:26.286431442 -0800
@@ -412,15 +412,12 @@ try_preserve_large_page(pte_t *kpte, uns
switch (level) {
case PG_LEVEL_2M:
- psize = PMD_PAGE_SIZE;
- pmask = PMD_PAGE_MASK;
- break;
#ifdef CONFIG_X86_64
case PG_LEVEL_1G:
- psize = PUD_PAGE_SIZE;
- pmask = PUD_PAGE_MASK;
- break;
#endif
+ psize = page_level_size(level);
+ pmask = page_level_mask(level);
+ break;
default:
do_split = -EINVAL;
goto out_unlock;
_
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH 4/5] create slow_virt_to_phys()
2013-01-21 17:52 [PATCH 0/5] fix illegal use of __pa() in KVM code Dave Hansen
` (2 preceding siblings ...)
2013-01-21 17:52 ` [PATCH 3/5] use new pagetable helpers in try_preserve_large_page() Dave Hansen
@ 2013-01-21 17:52 ` Dave Hansen
2013-01-21 18:08 ` H. Peter Anvin
2013-01-21 17:52 ` [PATCH 5/5] fix kvm's use of __pa() on percpu areas Dave Hansen
4 siblings, 1 reply; 14+ messages in thread
From: Dave Hansen @ 2013-01-21 17:52 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, Gleb Natapov, H. Peter Anvin, x86, Marcelo Tosatti,
Rik van Riel, Dave Hansen
This is necessary because __pa() does not work on some kinds of
memory, like vmalloc() or the alloc_remap() areas on 32-bit
NUMA systems. We have some functions to do conversions _like_
this in the vmalloc() code (like vmalloc_to_page()), but they
do not work on sizes other than 4k pages. We would potentially
need to be able to handle all the page sizes that we use for
the kernel linear mapping (4k, 2M, 1G).
In practice, on 32-bit NUMA systems, the percpu areas get stuck
in the alloc_remap() area. Any __pa() call on them will break
and basically return garbage.
This patch introduces a new function slow_virt_to_phys(), which
walks the kernel page tables on x86 and should do precisely
the same logical thing as __pa(), but actually work on a wider
range of memory. It should work on the normal linear mapping,
vmalloc(), kmap(), etc...
Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
Acked-by: Rik van Riel <riel@redhat.com>
---
linux-2.6.git-dave/arch/x86/include/asm/pgtable_types.h | 1
linux-2.6.git-dave/arch/x86/mm/pageattr.c | 31 ++++++++++++++++
2 files changed, 32 insertions(+)
diff -puN arch/x86/include/asm/pgtable_types.h~create-slow_virt_to_phys arch/x86/include/asm/pgtable_types.h
--- linux-2.6.git/arch/x86/include/asm/pgtable_types.h~create-slow_virt_to_phys 2013-01-17 10:22:26.590434129 -0800
+++ linux-2.6.git-dave/arch/x86/include/asm/pgtable_types.h 2013-01-17 10:22:26.598434199 -0800
@@ -352,6 +352,7 @@ static inline void update_page_count(int
* as a pte too.
*/
extern pte_t *lookup_address(unsigned long address, unsigned int *level);
+extern phys_addr_t slow_virt_to_phys(void *__address);
#endif /* !__ASSEMBLY__ */
diff -puN arch/x86/mm/pageattr.c~create-slow_virt_to_phys arch/x86/mm/pageattr.c
--- linux-2.6.git/arch/x86/mm/pageattr.c~create-slow_virt_to_phys 2013-01-17 10:22:26.594434163 -0800
+++ linux-2.6.git-dave/arch/x86/mm/pageattr.c 2013-01-17 10:22:26.598434199 -0800
@@ -364,6 +364,37 @@ pte_t *lookup_address(unsigned long addr
EXPORT_SYMBOL_GPL(lookup_address);
/*
+ * This is necessary because __pa() does not work on some
+ * kinds of memory, like vmalloc() or the alloc_remap()
+ * areas on 32-bit NUMA systems. The percpu areas can
+ * end up in this kind of memory, for instance.
+ *
+ * This could be optimized, but it is only intended to be
+ * used at initialization time, and keeping it
+ * unoptimized should increase the testing coverage for
+ * the more obscure platforms.
+ */
+phys_addr_t slow_virt_to_phys(void *__virt_addr)
+{
+ unsigned long virt_addr = (unsigned long)__virt_addr;
+ phys_addr_t phys_addr;
+ unsigned long offset;
+ unsigned int level = -1;
+ unsigned long psize = 0;
+ unsigned long pmask = 0;
+ pte_t *pte;
+
+ pte = lookup_address(virt_addr, &level);
+ BUG_ON(!pte);
+ psize = page_level_size(level);
+ pmask = page_level_mask(level);
+ offset = virt_addr & ~pmask;
+ phys_addr = pte_pfn(*pte) << PAGE_SHIFT;
+ return (phys_addr | offset);
+}
+EXPORT_SYMBOL_GPL(slow_virt_to_phys);
+
+/*
* Set the new pmd in all the pgds we know about:
*/
static void __set_pmd_pte(pte_t *kpte, unsigned long address, pte_t pte)
_
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH 5/5] fix kvm's use of __pa() on percpu areas
2013-01-21 17:52 [PATCH 0/5] fix illegal use of __pa() in KVM code Dave Hansen
` (3 preceding siblings ...)
2013-01-21 17:52 ` [PATCH 4/5] create slow_virt_to_phys() Dave Hansen
@ 2013-01-21 17:52 ` Dave Hansen
2013-01-21 18:38 ` H. Peter Anvin
4 siblings, 1 reply; 14+ messages in thread
From: Dave Hansen @ 2013-01-21 17:52 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, Gleb Natapov, H. Peter Anvin, x86, Marcelo Tosatti,
Rik van Riel, Dave Hansen
In short, it is illegal to call __pa() on an address holding
a percpu variable. The times when this actually matters are
pretty obscure (certain 32-bit NUMA systems), but it _does_
happen. It is important to keep KVM guests working on these
systems because the real hardware is getting harder and
harder to find.
This bug first manifested as a plain hang at boot
after this message:
CPU 0 irqstacks, hard=f3018000 soft=f301a000
or, sometimes, it would actually make it out to the console:
[ 0.000000] BUG: unable to handle kernel paging request at ffffffff
I eventually traced it down to the KVM async pagefault code.
This can be worked around by disabling that code either at
compile-time, or on the kernel command-line.
The kvm async pagefault code was injecting page faults into
the guest, which the guest misinterpreted because its
"reason" was not being properly sent from the host.
The guest passes the physical address of a per-cpu async page
fault structure via an MSR to the host. Since __pa() is
broken on percpu data, the physical address it sent was
basically bogus and the host went scribbling on random data.
The guest never saw the real reason for the page fault (it
was injected by the host), assumed that the kernel had taken
a _real_ page fault, and panic()'d. The behavior varied,
though, depending on what got corrupted by the bad write.
Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
Acked-by: Rik van Riel <riel@redhat.com>
---
linux-2.6.git-dave/arch/x86/kernel/kvm.c | 9 +++++----
linux-2.6.git-dave/arch/x86/kernel/kvmclock.c | 4 ++--
2 files changed, 7 insertions(+), 6 deletions(-)
diff -puN arch/x86/kernel/kvm.c~fix-kvm-__pa-use-on-percpu-areas arch/x86/kernel/kvm.c
--- linux-2.6.git/arch/x86/kernel/kvm.c~fix-kvm-__pa-use-on-percpu-areas 2013-01-17 10:22:26.914436992 -0800
+++ linux-2.6.git-dave/arch/x86/kernel/kvm.c 2013-01-17 10:22:26.922437062 -0800
@@ -289,9 +289,9 @@ static void kvm_register_steal_time(void
memset(st, 0, sizeof(*st));
- wrmsrl(MSR_KVM_STEAL_TIME, (__pa(st) | KVM_MSR_ENABLED));
+ wrmsrl(MSR_KVM_STEAL_TIME, (slow_virt_to_phys(st) | KVM_MSR_ENABLED));
printk(KERN_INFO "kvm-stealtime: cpu %d, msr %lx\n",
- cpu, __pa(st));
+ cpu, slow_virt_to_phys(st));
}
static DEFINE_PER_CPU(unsigned long, kvm_apic_eoi) = KVM_PV_EOI_DISABLED;
@@ -316,7 +316,7 @@ void __cpuinit kvm_guest_cpu_init(void)
return;
if (kvm_para_has_feature(KVM_FEATURE_ASYNC_PF) && kvmapf) {
- u64 pa = __pa(&__get_cpu_var(apf_reason));
+ u64 pa = slow_virt_to_phys(&__get_cpu_var(apf_reason));
#ifdef CONFIG_PREEMPT
pa |= KVM_ASYNC_PF_SEND_ALWAYS;
@@ -332,7 +332,8 @@ void __cpuinit kvm_guest_cpu_init(void)
/* Size alignment is implied but just to make it explicit. */
BUILD_BUG_ON(__alignof__(kvm_apic_eoi) < 4);
__get_cpu_var(kvm_apic_eoi) = 0;
- pa = __pa(&__get_cpu_var(kvm_apic_eoi)) | KVM_MSR_ENABLED;
+ pa = slow_virt_to_phys(&__get_cpu_var(kvm_apic_eoi))
+ | KVM_MSR_ENABLED;
wrmsrl(MSR_KVM_PV_EOI_EN, pa);
}
diff -puN arch/x86/kernel/kvmclock.c~fix-kvm-__pa-use-on-percpu-areas arch/x86/kernel/kvmclock.c
--- linux-2.6.git/arch/x86/kernel/kvmclock.c~fix-kvm-__pa-use-on-percpu-areas 2013-01-17 10:22:26.918437028 -0800
+++ linux-2.6.git-dave/arch/x86/kernel/kvmclock.c 2013-01-17 10:22:26.922437062 -0800
@@ -162,8 +162,8 @@ int kvm_register_clock(char *txt)
int low, high, ret;
struct pvclock_vcpu_time_info *src = &hv_clock[cpu].pvti;
- low = (int)__pa(src) | 1;
- high = ((u64)__pa(src) >> 32);
+ low = (int)slow_virt_to_phys(src) | 1;
+ high = ((u64)slow_virt_to_phys(src) >> 32);
ret = native_write_msr_safe(msr_kvm_system_time, low, high);
printk(KERN_INFO "kvm-clock: cpu %d, msr %x:%x, %s\n",
cpu, high, low, txt);
_
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 4/5] create slow_virt_to_phys()
2013-01-21 17:52 ` [PATCH 4/5] create slow_virt_to_phys() Dave Hansen
@ 2013-01-21 18:08 ` H. Peter Anvin
2013-01-21 18:18 ` Dave Hansen
0 siblings, 1 reply; 14+ messages in thread
From: H. Peter Anvin @ 2013-01-21 18:08 UTC (permalink / raw)
To: Dave Hansen, linux-kernel
Cc: linux-mm, Gleb Natapov, x86, Marcelo Tosatti, Rik van Riel
Why are you initializing psize/pmask?
--
Sent from my mobile phone. Please excuse brevity and lack of formatting.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 4/5] create slow_virt_to_phys()
2013-01-21 18:08 ` H. Peter Anvin
@ 2013-01-21 18:18 ` Dave Hansen
0 siblings, 0 replies; 14+ messages in thread
From: Dave Hansen @ 2013-01-21 18:18 UTC (permalink / raw)
To: H. Peter Anvin
Cc: linux-kernel, linux-mm, Gleb Natapov, x86, Marcelo Tosatti,
Rik van Riel
On 01/21/2013 10:08 AM, H. Peter Anvin wrote:
> Why are you initializing psize/pmask?
It's an artifact from the switch() that was there before. I'll clean it up.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 5/5] fix kvm's use of __pa() on percpu areas
2013-01-21 17:52 ` [PATCH 5/5] fix kvm's use of __pa() on percpu areas Dave Hansen
@ 2013-01-21 18:38 ` H. Peter Anvin
2013-01-21 18:59 ` Dave Hansen
2013-01-21 19:02 ` Gleb Natapov
0 siblings, 2 replies; 14+ messages in thread
From: H. Peter Anvin @ 2013-01-21 18:38 UTC (permalink / raw)
To: Dave Hansen, linux-kernel
Cc: linux-mm, Gleb Natapov, x86, Marcelo Tosatti, Rik van Riel
Final question: are any of these done in frequent paths? (I believe no, but...)
--
Sent from my mobile phone. Please excuse brevity and lack of formatting.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 5/5] fix kvm's use of __pa() on percpu areas
2013-01-21 18:38 ` H. Peter Anvin
@ 2013-01-21 18:59 ` Dave Hansen
2013-01-21 19:22 ` H. Peter Anvin
2013-01-21 19:02 ` Gleb Natapov
1 sibling, 1 reply; 14+ messages in thread
From: Dave Hansen @ 2013-01-21 18:59 UTC (permalink / raw)
To: H. Peter Anvin
Cc: linux-kernel, linux-mm, Gleb Natapov, x86, Marcelo Tosatti,
Rik van Riel
On 01/21/2013 10:38 AM, H. Peter Anvin wrote:
> Final question: are any of these done in frequent paths? (I believe no, but...)
Nope. All of the places that it gets used here are in
initialization-time paths. The two we have here are when kvm and the
host are setting up a new vcpu and when the kvmclock clocksource is
being registered. A CPU getting hotplugged is the only thing that might
even have these get called more than at boot.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 5/5] fix kvm's use of __pa() on percpu areas
2013-01-21 18:38 ` H. Peter Anvin
2013-01-21 18:59 ` Dave Hansen
@ 2013-01-21 19:02 ` Gleb Natapov
1 sibling, 0 replies; 14+ messages in thread
From: Gleb Natapov @ 2013-01-21 19:02 UTC (permalink / raw)
To: H. Peter Anvin
Cc: Dave Hansen, linux-kernel, linux-mm, x86, Marcelo Tosatti,
Rik van Riel
On Mon, Jan 21, 2013 at 12:38:06PM -0600, H. Peter Anvin wrote:
> Final question: are any of these done in frequent paths? (I believe no, but...)
>
No, only during guest boot.
> Dave Hansen <dave@linux.vnet.ibm.com> wrote:
>
> >
> >In short, it is illegal to call __pa() on an address holding
> >a percpu variable. The times when this actually matters are
> >pretty obscure (certain 32-bit NUMA systems), but it _does_
> >happen. It is important to keep KVM guests working on these
> >systems because the real hardware is getting harder and
> >harder to find.
> >
> >This bug manifested first by me seeing a plain hang at boot
> >after this message:
> >
> > CPU 0 irqstacks, hard=f3018000 soft=f301a000
> >
> >or, sometimes, it would actually make it out to the console:
> >
> >[ 0.000000] BUG: unable to handle kernel paging request at ffffffff
> >
> >I eventually traced it down to the KVM async pagefault code.
> >This can be worked around by disabling that code either at
> >compile-time, or on the kernel command-line.
> >
> >The kvm async pagefault code was injecting page faults in
> >to the guest which the guest misinterpreted because its
> >"reason" was not being properly sent from the host.
> >
> >The guest passes a physical address of an per-cpu async page
> >fault structure via an MSR to the host. Since __pa() is
> >broken on percpu data, the physical address it sent was
> >bascially bogus and the host went scribbling on random data.
> >The guest never saw the real reason for the page fault (it
> >was injected by the host), assumed that the kernel had taken
> >a _real_ page fault, and panic()'d. The behavior varied,
> >though, depending on what got corrupted by the bad write.
> >
> >Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
> >Acked-by: Rik van Riel <riel@redhat.com>
> >---
> >
> > linux-2.6.git-dave/arch/x86/kernel/kvm.c | 9 +++++----
> > linux-2.6.git-dave/arch/x86/kernel/kvmclock.c | 4 ++--
> > 2 files changed, 7 insertions(+), 6 deletions(-)
> >
> >diff -puN arch/x86/kernel/kvm.c~fix-kvm-__pa-use-on-percpu-areas
> >arch/x86/kernel/kvm.c
> >---
> >linux-2.6.git/arch/x86/kernel/kvm.c~fix-kvm-__pa-use-on-percpu-areas 2013-01-17
> >10:22:26.914436992 -0800
> >+++ linux-2.6.git-dave/arch/x86/kernel/kvm.c 2013-01-17
> >10:22:26.922437062 -0800
> >@@ -289,9 +289,9 @@ static void kvm_register_steal_time(void
> >
> > memset(st, 0, sizeof(*st));
> >
> >- wrmsrl(MSR_KVM_STEAL_TIME, (__pa(st) | KVM_MSR_ENABLED));
> >+ wrmsrl(MSR_KVM_STEAL_TIME, (slow_virt_to_phys(st) | KVM_MSR_ENABLED));
> > printk(KERN_INFO "kvm-stealtime: cpu %d, msr %lx\n",
> >- cpu, __pa(st));
> >+ cpu, slow_virt_to_phys(st));
> > }
> >
> >static DEFINE_PER_CPU(unsigned long, kvm_apic_eoi) = KVM_PV_EOI_DISABLED;
> >@@ -316,7 +316,7 @@ void __cpuinit kvm_guest_cpu_init(void)
> > return;
> >
> > if (kvm_para_has_feature(KVM_FEATURE_ASYNC_PF) && kvmapf) {
> >- u64 pa = __pa(&__get_cpu_var(apf_reason));
> >+ u64 pa = slow_virt_to_phys(&__get_cpu_var(apf_reason));
> >
> > #ifdef CONFIG_PREEMPT
> > pa |= KVM_ASYNC_PF_SEND_ALWAYS;
> >@@ -332,7 +332,8 @@ void __cpuinit kvm_guest_cpu_init(void)
> > /* Size alignment is implied but just to make it explicit. */
> > BUILD_BUG_ON(__alignof__(kvm_apic_eoi) < 4);
> > __get_cpu_var(kvm_apic_eoi) = 0;
> >- pa = __pa(&__get_cpu_var(kvm_apic_eoi)) | KVM_MSR_ENABLED;
> >+ pa = slow_virt_to_phys(&__get_cpu_var(kvm_apic_eoi))
> >+ | KVM_MSR_ENABLED;
> > wrmsrl(MSR_KVM_PV_EOI_EN, pa);
> > }
> >
> >diff -puN arch/x86/kernel/kvmclock.c~fix-kvm-__pa-use-on-percpu-areas arch/x86/kernel/kvmclock.c
> >--- linux-2.6.git/arch/x86/kernel/kvmclock.c~fix-kvm-__pa-use-on-percpu-areas 2013-01-17 10:22:26.918437028 -0800
> >+++ linux-2.6.git-dave/arch/x86/kernel/kvmclock.c 2013-01-17 10:22:26.922437062 -0800
> >@@ -162,8 +162,8 @@ int kvm_register_clock(char *txt)
> > int low, high, ret;
> > struct pvclock_vcpu_time_info *src = &hv_clock[cpu].pvti;
> >
> >- low = (int)__pa(src) | 1;
> >- high = ((u64)__pa(src) >> 32);
> >+ low = (int)slow_virt_to_phys(src) | 1;
> >+ high = ((u64)slow_virt_to_phys(src) >> 32);
> > ret = native_write_msr_safe(msr_kvm_system_time, low, high);
> > printk(KERN_INFO "kvm-clock: cpu %d, msr %x:%x, %s\n",
> > cpu, high, low, txt);
> >_
>
> --
> Sent from my mobile phone. Please excuse brevity and lack of formatting.
--
Gleb.
* Re: [PATCH 5/5] fix kvm's use of __pa() on percpu areas
2013-01-21 18:59 ` Dave Hansen
@ 2013-01-21 19:22 ` H. Peter Anvin
0 siblings, 0 replies; 14+ messages in thread
From: H. Peter Anvin @ 2013-01-21 19:22 UTC (permalink / raw)
To: Dave Hansen
Cc: linux-kernel, linux-mm, Gleb Natapov, x86, Marcelo Tosatti,
Rik van Riel
Cool, just checking.
Dave Hansen <dave@linux.vnet.ibm.com> wrote:
>On 01/21/2013 10:38 AM, H. Peter Anvin wrote:
>> Final question: are any of these done in frequent paths? (I believe
>no, but...)
>
>Nope. All of the places that it gets used here are in
>initialization-time paths. The two we have here are when kvm and the
>host are setting up a new vcpu and when the kvmclock clocksource is
>being registered. A CPU getting hotplugged is the only thing that
>might
>even have these get called more than at boot.
--
Sent from my mobile phone. Please excuse brevity and lack of formatting.
* [PATCH 5/5] fix kvm's use of __pa() on percpu areas
2013-01-22 21:24 [PATCH 0/5] [v3] fix illegal use of __pa() in KVM code Dave Hansen
@ 2013-01-22 21:24 ` Dave Hansen
2013-01-23 0:08 ` Marcelo Tosatti
0 siblings, 1 reply; 14+ messages in thread
From: Dave Hansen @ 2013-01-22 21:24 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, Gleb Natapov, H. Peter Anvin, x86, Marcelo Tosatti,
Rik van Riel, Dave Hansen
In short, it is illegal to call __pa() on an address holding
a percpu variable. This replaces those __pa() calls with
slow_virt_to_phys(). All of the cases in this patch are
in boot time (or CPU hotplug time at worst) code, so the
slow pagetable walking in slow_virt_to_phys() is not expected
to have a performance impact.
The times when this actually matters are pretty obscure
(certain 32-bit NUMA systems), but it _does_ happen. It is
important to keep KVM guests working on these systems because
the real hardware is getting harder and harder to find.
This bug first manifested as a plain hang at boot
after this message:
CPU 0 irqstacks, hard=f3018000 soft=f301a000
or, sometimes, it would actually make it out to the console:
[ 0.000000] BUG: unable to handle kernel paging request at ffffffff
I eventually traced it down to the KVM async pagefault code.
This can be worked around by disabling that code either at
compile-time, or on the kernel command-line.
The kvm async pagefault code was injecting page faults into
the guest, which the guest misinterpreted because its
"reason" was not being properly sent from the host.
The guest passes the physical address of a per-cpu async page
fault structure via an MSR to the host. Since __pa() is
broken on percpu data, the physical address it sent was
basically bogus and the host went scribbling on random data.
The guest never saw the real reason for the page fault (it
was injected by the host), assumed that the kernel had taken
a _real_ page fault, and panic()'d. The behavior varied,
though, depending on what got corrupted by the bad write.
Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
Acked-by: Rik van Riel <riel@redhat.com>
---
linux-2.6.git-dave/arch/x86/kernel/kvm.c | 9 +++++----
linux-2.6.git-dave/arch/x86/kernel/kvmclock.c | 4 ++--
2 files changed, 7 insertions(+), 6 deletions(-)
diff -puN arch/x86/kernel/kvm.c~fix-kvm-__pa-use-on-percpu-areas arch/x86/kernel/kvm.c
--- linux-2.6.git/arch/x86/kernel/kvm.c~fix-kvm-__pa-use-on-percpu-areas 2013-01-22 13:17:16.424317475 -0800
+++ linux-2.6.git-dave/arch/x86/kernel/kvm.c 2013-01-22 13:17:16.432317541 -0800
@@ -289,9 +289,9 @@ static void kvm_register_steal_time(void
memset(st, 0, sizeof(*st));
- wrmsrl(MSR_KVM_STEAL_TIME, (__pa(st) | KVM_MSR_ENABLED));
+ wrmsrl(MSR_KVM_STEAL_TIME, (slow_virt_to_phys(st) | KVM_MSR_ENABLED));
printk(KERN_INFO "kvm-stealtime: cpu %d, msr %lx\n",
- cpu, __pa(st));
+ cpu, slow_virt_to_phys(st));
}
static DEFINE_PER_CPU(unsigned long, kvm_apic_eoi) = KVM_PV_EOI_DISABLED;
@@ -316,7 +316,7 @@ void __cpuinit kvm_guest_cpu_init(void)
return;
if (kvm_para_has_feature(KVM_FEATURE_ASYNC_PF) && kvmapf) {
- u64 pa = __pa(&__get_cpu_var(apf_reason));
+ u64 pa = slow_virt_to_phys(&__get_cpu_var(apf_reason));
#ifdef CONFIG_PREEMPT
pa |= KVM_ASYNC_PF_SEND_ALWAYS;
@@ -332,7 +332,8 @@ void __cpuinit kvm_guest_cpu_init(void)
/* Size alignment is implied but just to make it explicit. */
BUILD_BUG_ON(__alignof__(kvm_apic_eoi) < 4);
__get_cpu_var(kvm_apic_eoi) = 0;
- pa = __pa(&__get_cpu_var(kvm_apic_eoi)) | KVM_MSR_ENABLED;
+ pa = slow_virt_to_phys(&__get_cpu_var(kvm_apic_eoi))
+ | KVM_MSR_ENABLED;
wrmsrl(MSR_KVM_PV_EOI_EN, pa);
}
diff -puN arch/x86/kernel/kvmclock.c~fix-kvm-__pa-use-on-percpu-areas arch/x86/kernel/kvmclock.c
--- linux-2.6.git/arch/x86/kernel/kvmclock.c~fix-kvm-__pa-use-on-percpu-areas 2013-01-22 13:17:16.428317508 -0800
+++ linux-2.6.git-dave/arch/x86/kernel/kvmclock.c 2013-01-22 13:17:16.432317541 -0800
@@ -162,8 +162,8 @@ int kvm_register_clock(char *txt)
int low, high, ret;
struct pvclock_vcpu_time_info *src = &hv_clock[cpu].pvti;
- low = (int)__pa(src) | 1;
- high = ((u64)__pa(src) >> 32);
+ low = (int)slow_virt_to_phys(src) | 1;
+ high = ((u64)slow_virt_to_phys(src) >> 32);
ret = native_write_msr_safe(msr_kvm_system_time, low, high);
printk(KERN_INFO "kvm-clock: cpu %d, msr %x:%x, %s\n",
cpu, high, low, txt);
_
* Re: [PATCH 5/5] fix kvm's use of __pa() on percpu areas
2013-01-22 21:24 ` [PATCH 5/5] fix kvm's use of __pa() on percpu areas Dave Hansen
@ 2013-01-23 0:08 ` Marcelo Tosatti
0 siblings, 0 replies; 14+ messages in thread
From: Marcelo Tosatti @ 2013-01-23 0:08 UTC (permalink / raw)
To: Dave Hansen
Cc: linux-kernel, linux-mm, Gleb Natapov, H. Peter Anvin, x86,
Rik van Riel
On Tue, Jan 22, 2013 at 01:24:35PM -0800, Dave Hansen wrote:
>
> In short, it is illegal to call __pa() on an address holding
> a percpu variable. This replaces those __pa() calls with
> slow_virt_to_phys(). All of the cases in this patch are
> in boot time (or CPU hotplug time at worst) code, so the
> slow pagetable walking in slow_virt_to_phys() is not expected
> to have a performance impact.
>
> The times when this actually matters are pretty obscure
> (certain 32-bit NUMA systems), but it _does_ happen. It is
> important to keep KVM guests working on these systems because
> the real hardware is getting harder and harder to find.
>
> This bug manifested first by me seeing a plain hang at boot
> after this message:
>
> CPU 0 irqstacks, hard=f3018000 soft=f301a000
>
> or, sometimes, it would actually make it out to the console:
>
> [ 0.000000] BUG: unable to handle kernel paging request at ffffffff
>
> I eventually traced it down to the KVM async pagefault code.
> This can be worked around by disabling that code either at
> compile-time, or on the kernel command-line.
>
> The kvm async pagefault code was injecting page faults into
> the guest, which the guest misinterpreted because its
> "reason" was not being properly sent from the host.
>
> The guest passes the physical address of a per-cpu async page
> fault structure via an MSR to the host. Since __pa() is
> broken on percpu data, the physical address it sent was
> basically bogus and the host went scribbling on random data.
> The guest never saw the real reason for the page fault (it
> was injected by the host), assumed that the kernel had taken
> a _real_ page fault, and panic()'d. The behavior varied,
> though, depending on what got corrupted by the bad write.
>
> Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
> Acked-by: Rik van Riel <riel@redhat.com>
> ---
>
> linux-2.6.git-dave/arch/x86/kernel/kvm.c | 9 +++++----
> linux-2.6.git-dave/arch/x86/kernel/kvmclock.c | 4 ++--
> 2 files changed, 7 insertions(+), 6 deletions(-)
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
end of thread, other threads:[~2013-01-23 0:21 UTC | newest]
Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-01-21 17:52 [PATCH 0/5] fix illegal use of __pa() in KVM code Dave Hansen
2013-01-21 17:52 ` [PATCH 1/5] make DEBUG_VIRTUAL work earlier in boot Dave Hansen
2013-01-21 17:52 ` [PATCH 2/5] pagetable level size/shift/mask helpers Dave Hansen
2013-01-21 17:52 ` [PATCH 3/5] use new pagetable helpers in try_preserve_large_page() Dave Hansen
2013-01-21 17:52 ` [PATCH 4/5] create slow_virt_to_phys() Dave Hansen
2013-01-21 18:08 ` H. Peter Anvin
2013-01-21 18:18 ` Dave Hansen
2013-01-21 17:52 ` [PATCH 5/5] fix kvm's use of __pa() on percpu areas Dave Hansen
2013-01-21 18:38 ` H. Peter Anvin
2013-01-21 18:59 ` Dave Hansen
2013-01-21 19:22 ` H. Peter Anvin
2013-01-21 19:02 ` Gleb Natapov
-- strict thread matches above, loose matches on Subject: below --
2013-01-22 21:24 [PATCH 0/5] [v3] fix illegal use of __pa() in KVM code Dave Hansen
2013-01-22 21:24 ` [PATCH 5/5] fix kvm's use of __pa() on percpu areas Dave Hansen
2013-01-23 0:08 ` Marcelo Tosatti