* [PATCH 0/2] x86: deal with remaining W+X pages on Xen
@ 2017-12-12 10:24 Jan Beulich
2017-12-12 10:31 ` [PATCH 1/2] x86: consider effective protection attributes in W+X check Jan Beulich
` (2 more replies)
0 siblings, 3 replies; 19+ messages in thread
From: Jan Beulich @ 2017-12-12 10:24 UTC (permalink / raw)
To: mingo, tglx, Boris Ostrovsky, Juergen Gross, hpa; +Cc: xen-devel, linux-kernel
The two patches here are entirely independent (i.e. they could be applied
in any order and/or go through separate trees), but for the warning to go
away both are necessary.
1: x86: consider effective protection attributes in W+X check
2: x86-64/Xen: eliminate W+X pages
Signed-off-by: Jan Beulich <jbeulich@suse.com>
^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH 1/2] x86: consider effective protection attributes in W+X check
  2017-12-12 10:24 [PATCH 0/2] x86: deal with remaining W+X pages on Xen Jan Beulich
@ 2017-12-12 10:31 ` Jan Beulich
  2017-12-12 10:36   ` Ingo Molnar
  2017-12-12 10:32 ` [PATCH 2/2] x86-64/Xen: eliminate W+X mappings Jan Beulich
  [not found] ` <5A2FBE0A0200007800196B4F@suse.com>
  2 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2017-12-12 10:31 UTC (permalink / raw)
  To: mingo, tglx, hpa
  Cc: xen-devel, Boris Ostrovsky, Juergen Gross, sds, linux-kernel

Using just the leaf page table entry flags would cause a false warning
in case _PAGE_RW is clear or _PAGE_NX is set in a higher level entry.
Hand through both the current entry's flags as well as the accumulated
effective value (the latter as pgprotval_t instead of pgprot_t, as it's
not an actual entry's value).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
 arch/x86/mm/dump_pagetables.c | 92 ++++++++++++++++++++++++++----------------
 1 file changed, 57 insertions(+), 35 deletions(-)

--- 4.15-rc3/arch/x86/mm/dump_pagetables.c
+++ 4.15-rc3-x86-dumppgt-effective-prot/arch/x86/mm/dump_pagetables.c
@@ -29,6 +29,7 @@
 struct pg_state {
 	int level;
 	pgprot_t current_prot;
+	pgprotval_t effective_prot;
 	unsigned long start_address;
 	unsigned long current_address;
 	const struct addr_marker *marker;
@@ -202,9 +203,9 @@ static unsigned long normalize_addr(unsi
  * print what we collected so far.
  */
 static void note_page(struct seq_file *m, struct pg_state *st,
-		      pgprot_t new_prot, int level)
+		      pgprot_t new_prot, pgprotval_t new_eff, int level)
 {
-	pgprotval_t prot, cur;
+	pgprotval_t prot, cur, eff;
 	static const char units[] = "BKMGTPE";
 
 	/*
@@ -214,23 +215,24 @@ static void note_page(struct seq_file *m
 	 */
 	prot = pgprot_val(new_prot);
 	cur = pgprot_val(st->current_prot);
+	eff = st->effective_prot;
 
 	if (!st->level) {
 		/* First entry */
 		st->current_prot = new_prot;
+		st->effective_prot = new_eff;
 		st->level = level;
 		st->marker = address_markers;
 		st->lines = 0;
 		pt_dump_seq_printf(m, st->to_dmesg, "---[ %s ]---\n",
 				   st->marker->name);
-	} else if (prot != cur || level != st->level ||
+	} else if (prot != cur || new_eff != eff || level != st->level ||
 		   st->current_address >= st->marker[1].start_address) {
 		const char *unit = units;
 		unsigned long delta;
 		int width = sizeof(unsigned long) * 2;
-		pgprotval_t pr = pgprot_val(st->current_prot);
 
-		if (st->check_wx && (pr & _PAGE_RW) && !(pr & _PAGE_NX)) {
+		if (st->check_wx && (eff & _PAGE_RW) && !(eff & _PAGE_NX)) {
 			WARN_ONCE(1,
 				  "x86/mm: Found insecure W+X mapping at address %p/%pS\n",
 				  (void *)st->start_address,
@@ -284,21 +286,30 @@ static void note_page(struct seq_file *m
 
 		st->start_address = st->current_address;
 		st->current_prot = new_prot;
+		st->effective_prot = new_eff;
 		st->level = level;
 	}
 }
 
-static void walk_pte_level(struct seq_file *m, struct pg_state *st, pmd_t addr, unsigned long P)
+static inline pgprotval_t effective_prot(pgprotval_t prot1, pgprotval_t prot2)
+{
+	return (prot1 & prot2 & (_PAGE_USER | _PAGE_RW)) |
+	       ((prot1 | prot2) & _PAGE_NX);
+}
+
+static void walk_pte_level(struct seq_file *m, struct pg_state *st, pmd_t addr,
+			   pgprotval_t eff_in, unsigned long P)
 {
 	int i;
 	pte_t *start;
-	pgprotval_t prot;
+	pgprotval_t prot, eff;
 
 	start = (pte_t *)pmd_page_vaddr(addr);
 	for (i = 0; i < PTRS_PER_PTE; i++) {
 		prot = pte_flags(*start);
+		eff = effective_prot(eff_in, prot);
 		st->current_address = normalize_addr(P + i * PTE_LEVEL_MULT);
-		note_page(m, st, __pgprot(prot), 5);
+		note_page(m, st, __pgprot(prot), eff, 5);
 		start++;
 	}
 }
@@ -335,42 +346,45 @@ static inline bool kasan_page_table(stru
 
 #if PTRS_PER_PMD > 1
 
-static void walk_pmd_level(struct seq_file *m, struct pg_state *st, pud_t addr, unsigned long P)
+static void walk_pmd_level(struct seq_file *m, struct pg_state *st, pud_t addr,
+			   pgprotval_t eff_in, unsigned long P)
 {
 	int i;
 	pmd_t *start, *pmd_start;
-	pgprotval_t prot;
+	pgprotval_t prot, eff;
 
 	pmd_start = start = (pmd_t *)pud_page_vaddr(addr);
 	for (i = 0; i < PTRS_PER_PMD; i++) {
 		st->current_address = normalize_addr(P + i * PMD_LEVEL_MULT);
 		if (!pmd_none(*start)) {
+			prot = pmd_flags(*start);
+			eff = effective_prot(eff_in, prot);
 			if (pmd_large(*start) || !pmd_present(*start)) {
-				prot = pmd_flags(*start);
-				note_page(m, st, __pgprot(prot), 4);
+				note_page(m, st, __pgprot(prot), eff, 4);
 			} else if (!kasan_page_table(m, st, pmd_start)) {
-				walk_pte_level(m, st, *start,
+				walk_pte_level(m, st, *start, eff,
 					       P + i * PMD_LEVEL_MULT);
 			}
 		} else
-			note_page(m, st, __pgprot(0), 4);
+			note_page(m, st, __pgprot(0), 0, 4);
 		start++;
 	}
 }
 
 #else
-#define walk_pmd_level(m,s,a,p) walk_pte_level(m,s,__pmd(pud_val(a)),p)
+#define walk_pmd_level(m,s,a,e,p) walk_pte_level(m,s,__pmd(pud_val(a)),e,p)
 #define pud_large(a) pmd_large(__pmd(pud_val(a)))
 #define pud_none(a)  pmd_none(__pmd(pud_val(a)))
 #endif
 
 #if PTRS_PER_PUD > 1
 
-static void walk_pud_level(struct seq_file *m, struct pg_state *st, p4d_t addr, unsigned long P)
+static void walk_pud_level(struct seq_file *m, struct pg_state *st, p4d_t addr,
+			   pgprotval_t eff_in, unsigned long P)
 {
 	int i;
 	pud_t *start, *pud_start;
-	pgprotval_t prot;
+	pgprotval_t prot, eff;
 	pud_t *prev_pud = NULL;
 
 	pud_start = start = (pud_t *)p4d_page_vaddr(addr);
@@ -378,15 +392,16 @@ static void walk_pud_level(struct seq_fi
 	for (i = 0; i < PTRS_PER_PUD; i++) {
 		st->current_address = normalize_addr(P + i * PUD_LEVEL_MULT);
 		if (!pud_none(*start)) {
+			prot = pud_flags(*start);
+			eff = effective_prot(eff_in, prot);
 			if (pud_large(*start) || !pud_present(*start)) {
-				prot = pud_flags(*start);
-				note_page(m, st, __pgprot(prot), 3);
+				note_page(m, st, __pgprot(prot), eff, 3);
 			} else if (!kasan_page_table(m, st, pud_start)) {
-				walk_pmd_level(m, st, *start,
+				walk_pmd_level(m, st, *start, eff,
 					       P + i * PUD_LEVEL_MULT);
 			}
 		} else
-			note_page(m, st, __pgprot(0), 3);
+			note_page(m, st, __pgprot(0), 0, 3);
 
 		prev_pud = start;
 		start++;
@@ -394,40 +409,42 @@ static void walk_pud_level(struct seq_fi
 }
 
 #else
-#define walk_pud_level(m,s,a,p) walk_pmd_level(m,s,__pud(p4d_val(a)),p)
+#define walk_pud_level(m,s,a,e,p) walk_pmd_level(m,s,__pud(p4d_val(a)),e,p)
 #define p4d_large(a) pud_large(__pud(p4d_val(a)))
 #define p4d_none(a)  pud_none(__pud(p4d_val(a)))
 #endif
 
 #if PTRS_PER_P4D > 1
 
-static void walk_p4d_level(struct seq_file *m, struct pg_state *st, pgd_t addr, unsigned long P)
+static void walk_p4d_level(struct seq_file *m, struct pg_state *st, pgd_t addr,
+			   pgprotval_t eff_in, unsigned long P)
 {
 	int i;
 	p4d_t *start, *p4d_start;
-	pgprotval_t prot;
+	pgprotval_t prot, eff;
 
 	p4d_start = start = (p4d_t *)pgd_page_vaddr(addr);
 
	for (i = 0; i < PTRS_PER_P4D; i++) {
 		st->current_address = normalize_addr(P + i * P4D_LEVEL_MULT);
 		if (!p4d_none(*start)) {
+			prot = p4d_flags(*start);
+			eff = effective_prot(eff_in, prot);
 			if (p4d_large(*start) || !p4d_present(*start)) {
-				prot = p4d_flags(*start);
-				note_page(m, st, __pgprot(prot), 2);
+				note_page(m, st, __pgprot(prot), eff, 2);
 			} else if (!kasan_page_table(m, st, p4d_start)) {
-				walk_pud_level(m, st, *start,
+				walk_pud_level(m, st, *start, eff,
 					       P + i * P4D_LEVEL_MULT);
 			}
 		} else
-			note_page(m, st, __pgprot(0), 2);
+			note_page(m, st, __pgprot(0), 0, 2);
 
 		start++;
 	}
 }
 
 #else
-#define walk_p4d_level(m,s,a,p) walk_pud_level(m,s,__p4d(pgd_val(a)),p)
+#define walk_p4d_level(m,s,a,e,p) walk_pud_level(m,s,__p4d(pgd_val(a)),e,p)
 #define pgd_large(a) p4d_large(__p4d(pgd_val(a)))
 #define pgd_none(a)  p4d_none(__p4d(pgd_val(a)))
 #endif
 
@@ -454,7 +471,7 @@ static void ptdump_walk_pgd_level_core(s
 #else
 	pgd_t *start = swapper_pg_dir;
 #endif
-	pgprotval_t prot;
+	pgprotval_t prot, eff;
 	int i;
 	struct pg_state st = {};
 
@@ -470,15 +487,20 @@ static void ptdump_walk_pgd_level_core(s
 	for (i = 0; i < PTRS_PER_PGD; i++) {
 		st.current_address = normalize_addr(i * PGD_LEVEL_MULT);
 		if (!pgd_none(*start) && !is_hypervisor_range(i)) {
+			prot = pgd_flags(*start);
+#ifdef CONFIG_X86_PAE
+			eff = _PAGE_USER | _PAGE_RW;
+#else
+			eff = prot;
+#endif
 			if (pgd_large(*start) || !pgd_present(*start)) {
-				prot = pgd_flags(*start);
-				note_page(m, &st, __pgprot(prot), 1);
+				note_page(m, &st, __pgprot(prot), eff, 1);
 			} else {
-				walk_p4d_level(m, &st, *start,
+				walk_p4d_level(m, &st, *start, eff,
 					       i * PGD_LEVEL_MULT);
 			}
 		} else
-			note_page(m, &st, __pgprot(0), 1);
+			note_page(m, &st, __pgprot(0), 0, 1);
 
 		cond_resched();
 		start++;
@@ -486,7 +508,7 @@ static void ptdump_walk_pgd_level_core(s
 
 	/* Flush out the last page */
 	st.current_address = normalize_addr(PTRS_PER_PGD*PGD_LEVEL_MULT);
-	note_page(m, &st, __pgprot(0), 0);
+	note_page(m, &st, __pgprot(0), 0, 0);
 	if (!checkwx)
 		return;
 	if (st.wx_pages)

^ permalink raw reply	[flat|nested] 19+ messages in thread
* Re: [PATCH 1/2] x86: consider effective protection attributes in W+X check
  2017-12-12 10:31 ` [PATCH 1/2] x86: consider effective protection attributes in W+X check Jan Beulich
@ 2017-12-12 10:36   ` Ingo Molnar
  2017-12-12 10:43     ` Jan Beulich
  0 siblings, 1 reply; 19+ messages in thread
From: Ingo Molnar @ 2017-12-12 10:36 UTC (permalink / raw)
  To: Jan Beulich
  Cc: mingo, tglx, hpa, xen-devel, Boris Ostrovsky, Juergen Gross, sds, linux-kernel

* Jan Beulich <JBeulich@suse.com> wrote:

> Using just the leaf page table entry flags would cause a false warning
> in case _PAGE_RW is clear or _PAGE_NX is set in a higher level entry.

Good find - I assume this bug can cause both false positives and false
negatives as well, right?

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 19+ messages in thread
* Re: [PATCH 1/2] x86: consider effective protection attributes in W+X check
  2017-12-12 10:36   ` Ingo Molnar
@ 2017-12-12 10:43     ` Jan Beulich
  0 siblings, 0 replies; 19+ messages in thread
From: Jan Beulich @ 2017-12-12 10:43 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: mingo, tglx, xen-devel, Boris Ostrovsky, Juergen Gross, sds, linux-kernel, hpa

>>> On 12.12.17 at 11:36, <mingo@kernel.org> wrote:
> * Jan Beulich <JBeulich@suse.com> wrote:
>
>> Using just the leaf page table entry flags would cause a false warning
>> in case _PAGE_RW is clear or _PAGE_NX is set in a higher level entry.
>
> Good find - I assume this bug can cause both false positives and false
> negatives as well, right?

Yes, albeit I'm not aware of any outside of Xen (with that other patch
applied).

Jan

^ permalink raw reply	[flat|nested] 19+ messages in thread
* [PATCH 2/2] x86-64/Xen: eliminate W+X mappings
  2017-12-12 10:24 [PATCH 0/2] x86: deal with remaining W+X pages on Xen Jan Beulich
  2017-12-12 10:31 ` [PATCH 1/2] x86: consider effective protection attributes in W+X check Jan Beulich
@ 2017-12-12 10:32 ` Jan Beulich
  2017-12-12 10:38   ` Ingo Molnar
                     ` (2 more replies)
  [not found] ` <5A2FBE0A0200007800196B4F@suse.com>
  2 siblings, 3 replies; 19+ messages in thread
From: Jan Beulich @ 2017-12-12 10:32 UTC (permalink / raw)
  To: Boris Ostrovsky, Juergen Gross; +Cc: mingo, tglx, xen-devel, linux-kernel, hpa

A few thousand such pages are usually left around due to the re-use of
L1 tables having been provided by the hypervisor (Dom0) or tool stack
(DomU). Set NX in the direct map variant, which needs to be done in L2
due to the dual use of the re-used L1s.

For x86_configure_nx() to actually do what it is supposed to do, call
get_cpu_cap() first. This was broken by commit 4763ed4d45 ("x86, mm:
Clean up and simplify NX enablement") when switching away from the
direct EFER read.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
While I certainly dislike the added header inclusion to obtain the
prototype for get_cpu_cap(), I couldn't find a better alternative. I'm
open to suggestions.
---
 arch/x86/xen/enlighten_pv.c |  3 +++
 arch/x86/xen/mmu_pv.c       | 10 ++++++++++
 2 files changed, 13 insertions(+)

--- 4.15-rc3/arch/x86/xen/enlighten_pv.c
+++ 4.15-rc3-x86_64-Xen-avoid-W+X/arch/x86/xen/enlighten_pv.c
@@ -88,6 +88,8 @@
 #include "multicalls.h"
 #include "pmu.h"
 
+#include "../kernel/cpu/cpu.h" /* get_cpu_cap() */
+
 void *xen_initial_gdt;
 
 static int xen_cpu_up_prepare_pv(unsigned int cpu);
@@ -1258,6 +1260,7 @@ asmlinkage __visible void __init xen_sta
 	__userpte_alloc_gfp &= ~__GFP_HIGHMEM;
 
 	/* Work out if we support NX */
+	get_cpu_cap(&boot_cpu_data);
 	x86_configure_nx();
 
 	/* Get mfn list */
--- 4.15-rc3/arch/x86/xen/mmu_pv.c
+++ 4.15-rc3-x86_64-Xen-avoid-W+X/arch/x86/xen/mmu_pv.c
@@ -1902,6 +1902,16 @@ void __init xen_setup_kernel_pagetable(p
 	/* Graft it onto L4[511][510] */
 	copy_page(level2_kernel_pgt, l2);
 
+	/* Zap execute permission from the ident map. Due to the sharing of
+	 * L1 entries we need to do this in the L2. */
+	if (__supported_pte_mask & _PAGE_NX)
+		for (i = 0; i < PTRS_PER_PMD; ++i) {
+			if (pmd_none(level2_ident_pgt[i]))
+				continue;
+			level2_ident_pgt[i] =
+				pmd_set_flags(level2_ident_pgt[i], _PAGE_NX);
+		}
+
 	/* Copy the initial P->M table mappings if necessary. */
 	i = pgd_index(xen_start_info->mfn_list);
 	if (i && i < pgd_index(__START_KERNEL_map))

^ permalink raw reply	[flat|nested] 19+ messages in thread
* Re: [PATCH 2/2] x86-64/Xen: eliminate W+X mappings
  2017-12-12 10:32 ` [PATCH 2/2] x86-64/Xen: eliminate W+X mappings Jan Beulich
@ 2017-12-12 10:38   ` Ingo Molnar
  2017-12-12 10:48     ` Jan Beulich
  [not found]     ` <5A2FC2280200007800196BB8@suse.com>
  2017-12-18 11:11   ` [PATCH v2] " Jan Beulich
  2017-12-18 16:37   ` [PATCH v3] " Jan Beulich
  2 siblings, 2 replies; 19+ messages in thread
From: Ingo Molnar @ 2017-12-12 10:38 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Boris Ostrovsky, Juergen Gross, mingo, tglx, xen-devel, linux-kernel, hpa

* Jan Beulich <JBeulich@suse.com> wrote:

> A few thousand such pages are usually left around due to the re-use of
> L1 tables having been provided by the hypervisor (Dom0) or tool stack
> (DomU). Set NX in the direct map variant, which needs to be done in L2
> due to the dual use of the re-used L1s.
> 
> For x86_configure_nx() to actually do what it is supposed to do, call
> get_cpu_cap() first. This was broken by commit 4763ed4d45 ("x86, mm:
> Clean up and simplify NX enablement") when switching away from the
> direct EFER read.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> While I certainly dislike the added header inclusion to obtain the
> prototype for get_cpu_cap(), I couldn't find a better alternative. I'm
> open to suggestions.
> ---
>  arch/x86/xen/enlighten_pv.c |  3 +++
>  arch/x86/xen/mmu_pv.c       | 10 ++++++++++
>  2 files changed, 13 insertions(+)
> 
> --- 4.15-rc3/arch/x86/xen/enlighten_pv.c
> +++ 4.15-rc3-x86_64-Xen-avoid-W+X/arch/x86/xen/enlighten_pv.c
> @@ -88,6 +88,8 @@
>  #include "multicalls.h"
>  #include "pmu.h"
>  
> +#include "../kernel/cpu/cpu.h" /* get_cpu_cap() */
> +
>  void *xen_initial_gdt;
>  
>  static int xen_cpu_up_prepare_pv(unsigned int cpu);
> @@ -1258,6 +1260,7 @@ asmlinkage __visible void __init xen_sta
>  	__userpte_alloc_gfp &= ~__GFP_HIGHMEM;
>  
>  	/* Work out if we support NX */
> +	get_cpu_cap(&boot_cpu_data);
>  	x86_configure_nx();
>  
>  	/* Get mfn list */
> --- 4.15-rc3/arch/x86/xen/mmu_pv.c
> +++ 4.15-rc3-x86_64-Xen-avoid-W+X/arch/x86/xen/mmu_pv.c
> @@ -1902,6 +1902,16 @@ void __init xen_setup_kernel_pagetable(p
>  	/* Graft it onto L4[511][510] */
>  	copy_page(level2_kernel_pgt, l2);
>  
> +	/* Zap execute permission from the ident map. Due to the sharing of
> +	 * L1 entries we need to do this in the L2. */

please use the customary (multi-line) comment style:

  /*
   * Comment .....
   * ...... goes here.
   */

specified in Documentation/CodingStyle.

> +	if (__supported_pte_mask & _PAGE_NX)
> +		for (i = 0; i < PTRS_PER_PMD; ++i) {
> +			if (pmd_none(level2_ident_pgt[i]))
> +				continue;
> +			level2_ident_pgt[i] =
> +				pmd_set_flags(level2_ident_pgt[i], _PAGE_NX);

So the line break here is quite distracting, especially considering how
similar it is to the alignment of the 'continue' statement. I.e. visually
it looks like control flow alignment.

Would be much better to just leave it a single line and ignore the
checkpatch warning here.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 19+ messages in thread
* Re: [PATCH 2/2] x86-64/Xen: eliminate W+X mappings
  2017-12-12 10:38   ` Ingo Molnar
@ 2017-12-12 10:48     ` Jan Beulich
  2017-12-12 10:54       ` Ingo Molnar
  [not found]       ` <5A2FC2280200007800196BB8@suse.com>
  1 sibling, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2017-12-12 10:48 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: mingo, tglx, xen-devel, Boris Ostrovsky, Juergen Gross, linux-kernel, hpa

>>> On 12.12.17 at 11:38, <mingo@kernel.org> wrote:
> * Jan Beulich <JBeulich@suse.com> wrote:
>> --- 4.15-rc3/arch/x86/xen/mmu_pv.c
>> +++ 4.15-rc3-x86_64-Xen-avoid-W+X/arch/x86/xen/mmu_pv.c
>> @@ -1902,6 +1902,16 @@ void __init xen_setup_kernel_pagetable(p
>>  	/* Graft it onto L4[511][510] */
>>  	copy_page(level2_kernel_pgt, l2);
>>  
>> +	/* Zap execute permission from the ident map. Due to the sharing of
>> +	 * L1 entries we need to do this in the L2. */
>
> please use the customary (multi-line) comment style:
>
>   /*
>    * Comment .....
>    * ...... goes here.
>    */
>
> specified in Documentation/CodingStyle.

I would have but didn't because all other comments in this function
use this (wrong) style. I've concluded that consistency is better
here than matching the style doc. If the Xen maintainers tell me
otherwise, I'll happily adjust the patch.

>> +	if (__supported_pte_mask & _PAGE_NX)
>> +		for (i = 0; i < PTRS_PER_PMD; ++i) {
>> +			if (pmd_none(level2_ident_pgt[i]))
>> +				continue;
>> +			level2_ident_pgt[i] =
>> +				pmd_set_flags(level2_ident_pgt[i], _PAGE_NX);
>
> So the line break here is quite distracting, especially considering how
> similar it is to the alignment of the 'continue' statement. I.e. visually
> it looks like control flow alignment.
>
> Would be much better to just leave it a single line and ignore the
> checkpatch warning here.

Again I'll wait to see what the Xen maintainers think. I too dislike
line splits like this one, but the line ended up quite a bit too long,
not just a character or two. I also wasn't sure whether splitting
between the function arguments would be okay, leaving the first
line just slightly too long.

Jan

^ permalink raw reply	[flat|nested] 19+ messages in thread
* Re: [PATCH 2/2] x86-64/Xen: eliminate W+X mappings
  2017-12-12 10:48     ` Jan Beulich
@ 2017-12-12 10:54       ` Ingo Molnar
  0 siblings, 0 replies; 19+ messages in thread
From: Ingo Molnar @ 2017-12-12 10:54 UTC (permalink / raw)
  To: Jan Beulich
  Cc: mingo, tglx, xen-devel, Boris Ostrovsky, Juergen Gross, linux-kernel, hpa

* Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 12.12.17 at 11:38, <mingo@kernel.org> wrote:
> > * Jan Beulich <JBeulich@suse.com> wrote:
> >> --- 4.15-rc3/arch/x86/xen/mmu_pv.c
> >> +++ 4.15-rc3-x86_64-Xen-avoid-W+X/arch/x86/xen/mmu_pv.c
> >> @@ -1902,6 +1902,16 @@ void __init xen_setup_kernel_pagetable(p
> >>  	/* Graft it onto L4[511][510] */
> >>  	copy_page(level2_kernel_pgt, l2);
> >>  
> >> +	/* Zap execute permission from the ident map. Due to the sharing of
> >> +	 * L1 entries we need to do this in the L2. */
> > 
> > please use the customary (multi-line) comment style:
> > 
> >   /*
> >    * Comment .....
> >    * ...... goes here.
> >    */
> > 
> > specified in Documentation/CodingStyle.
> 
> I would have but didn't because all other comments in this function
> use this (wrong) style. I've concluded that consistency is better
> here than matching the style doc. If the Xen maintainers tell me
> otherwise, I'll happily adjust the patch.

Then it should be cleaned up in a separate patch. The file is in arch/x86/
and both Documentation/CodingStyle and Linus's position is pretty
unambiguous on this, there's no special exceptions for ugliness in
arch/x86/ as far as I'm concerned.

Please guys fix this mess, NAK otherwise.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 19+ messages in thread
* Re: [PATCH 2/2] x86-64/Xen: eliminate W+X mappings
  [not found]       ` <5A2FC2280200007800196BB8@suse.com>
@ 2017-12-12 13:14         ` Juergen Gross
  0 siblings, 0 replies; 19+ messages in thread
From: Juergen Gross @ 2017-12-12 13:14 UTC (permalink / raw)
  To: Jan Beulich, Ingo Molnar
  Cc: mingo, tglx, xen-devel, Boris Ostrovsky, linux-kernel, hpa

On 12/12/17 11:48, Jan Beulich wrote:
>>>> On 12.12.17 at 11:38, <mingo@kernel.org> wrote:
>> * Jan Beulich <JBeulich@suse.com> wrote:
>>> --- 4.15-rc3/arch/x86/xen/mmu_pv.c
>>> +++ 4.15-rc3-x86_64-Xen-avoid-W+X/arch/x86/xen/mmu_pv.c
>>> @@ -1902,6 +1902,16 @@ void __init xen_setup_kernel_pagetable(p
>>>  	/* Graft it onto L4[511][510] */
>>>  	copy_page(level2_kernel_pgt, l2);
>>>  
>>> +	/* Zap execute permission from the ident map. Due to the sharing of
>>> +	 * L1 entries we need to do this in the L2. */
>>
>> please use the customary (multi-line) comment style:
>>
>>   /*
>>    * Comment .....
>>    * ...... goes here.
>>    */
>>
>> specified in Documentation/CodingStyle.
> 
> I would have but didn't because all other comments in this function
> use this (wrong) style. I've concluded that consistency is better
> here than matching the style doc. If the Xen maintainers tell me
> otherwise, I'll happily adjust the patch.

Yes, please use the correct style with new comments.

> 
>>> +	if (__supported_pte_mask & _PAGE_NX)
>>> +		for (i = 0; i < PTRS_PER_PMD; ++i) {
>>> +			if (pmd_none(level2_ident_pgt[i]))
>>> +				continue;
>>> +			level2_ident_pgt[i] =
>>> +				pmd_set_flags(level2_ident_pgt[i], _PAGE_NX);
>>
>> So the line break here is quite distracting, especially considering how
>> similar it is to the alignment of the 'continue' statement. I.e. visually
>> it looks like control flow alignment.
>>
>> Would be much better to just leave it a single line and ignore the
>> checkpatch warning here.
> 
> Again I'll wait to see what the Xen maintainers think. I too dislike
> line splits like this one, but the line ended up quite a bit too long,
> not just a character or two. I also wasn't sure whether splitting
> between the function arguments would be okay, leaving the first
> line just slightly too long.

That would result in an 80 character line, which IMHO is the best choice
here.


Juergen

^ permalink raw reply	[flat|nested] 19+ messages in thread
* [PATCH v2] x86-64/Xen: eliminate W+X mappings
  2017-12-12 10:32 ` [PATCH 2/2] x86-64/Xen: eliminate W+X mappings Jan Beulich
  2017-12-12 10:38   ` Ingo Molnar
@ 2017-12-18 11:11 ` Jan Beulich
  2017-12-18 12:28   ` Ingo Molnar
  2017-12-18 16:37 ` [PATCH v3] " Jan Beulich
  2 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2017-12-18 11:11 UTC (permalink / raw)
  To: Boris Ostrovsky, Juergen Gross; +Cc: mingo, tglx, xen-devel, linux-kernel, hpa

A few thousand such pages are usually left around due to the re-use of
L1 tables having been provided by the hypervisor (Dom0) or tool stack
(DomU). Set NX in the direct map variant, which needs to be done in L2
due to the dual use of the re-used L1s.

For x86_configure_nx() to actually do what it is supposed to do, call
get_cpu_cap() first. This was broken by commit 4763ed4d45 ("x86, mm:
Clean up and simplify NX enablement") when switching away from the
direct EFER read.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Adjust comment style and indentation.
---
While I certainly dislike the added header inclusion to obtain the
prototype for get_cpu_cap(), I couldn't find a better alternative. I'm
open to suggestions.
---
 arch/x86/xen/enlighten_pv.c |  3 +++
 arch/x86/xen/mmu_pv.c       | 10 ++++++++++
 2 files changed, 13 insertions(+)

--- 4.15-rc3/arch/x86/xen/enlighten_pv.c
+++ 4.15-rc3-x86_64-Xen-avoid-W+X/arch/x86/xen/enlighten_pv.c
@@ -88,6 +88,8 @@
 #include "multicalls.h"
 #include "pmu.h"
 
+#include "../kernel/cpu/cpu.h" /* get_cpu_cap() */
+
 void *xen_initial_gdt;
 
 static int xen_cpu_up_prepare_pv(unsigned int cpu);
@@ -1258,6 +1260,7 @@ asmlinkage __visible void __init xen_sta
 	__userpte_alloc_gfp &= ~__GFP_HIGHMEM;
 
 	/* Work out if we support NX */
+	get_cpu_cap(&boot_cpu_data);
 	x86_configure_nx();
 
 	/* Get mfn list */
--- 4.15-rc4/arch/x86/xen/mmu_pv.c
+++ 4.15-rc4-x86_64-Xen-avoid-W+X/arch/x86/xen/mmu_pv.c
@@ -1902,6 +1902,18 @@ void __init xen_setup_kernel_pagetable(p
 	/* Graft it onto L4[511][510] */
 	copy_page(level2_kernel_pgt, l2);
 
+	/*
+	 * Zap execute permission from the ident map. Due to the sharing of
+	 * L1 entries we need to do this in the L2.
+	 */
+	if (__supported_pte_mask & _PAGE_NX)
+		for (i = 0; i < PTRS_PER_PMD; ++i) {
+			if (pmd_none(level2_ident_pgt[i]))
+				continue;
+			level2_ident_pgt[i] = pmd_set_flags(level2_ident_pgt[i],
+							    _PAGE_NX);
+		}
+
 	/* Copy the initial P->M table mappings if necessary. */
 	i = pgd_index(xen_start_info->mfn_list);
 	if (i && i < pgd_index(__START_KERNEL_map))

^ permalink raw reply	[flat|nested] 19+ messages in thread
* Re: [PATCH v2] x86-64/Xen: eliminate W+X mappings
  2017-12-18 11:11 ` [PATCH v2] " Jan Beulich
@ 2017-12-18 12:28   ` Ingo Molnar
  2017-12-18 15:52     ` Joe Perches
  0 siblings, 1 reply; 19+ messages in thread
From: Ingo Molnar @ 2017-12-18 12:28 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Boris Ostrovsky, Juergen Gross, mingo, tglx, xen-devel, linux-kernel, hpa, Borislav Petkov

* Jan Beulich <JBeulich@suse.com> wrote:

> A few thousand such pages are usually left around due to the re-use of
> L1 tables having been provided by the hypervisor (Dom0) or tool stack
> (DomU). Set NX in the direct map variant, which needs to be done in L2
> due to the dual use of the re-used L1s.
> 
> For x86_configure_nx() to actually do what it is supposed to do, call
> get_cpu_cap() first. This was broken by commit 4763ed4d45 ("x86, mm:
> Clean up and simplify NX enablement") when switching away from the
> direct EFER read.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: Adjust comment style and indentation.
> ---
> While I certainly dislike the added header inclusion to obtain the
> prototype for get_cpu_cap(), I couldn't find a better alternative. I'm
> open to suggestions.
> ---
>  arch/x86/xen/enlighten_pv.c |  3 +++
>  arch/x86/xen/mmu_pv.c       | 10 ++++++++++
>  2 files changed, 13 insertions(+)
> 
> --- 4.15-rc3/arch/x86/xen/enlighten_pv.c
> +++ 4.15-rc3-x86_64-Xen-avoid-W+X/arch/x86/xen/enlighten_pv.c
> @@ -88,6 +88,8 @@
>  #include "multicalls.h"
>  #include "pmu.h"
>  
> +#include "../kernel/cpu/cpu.h" /* get_cpu_cap() */
> +
>  void *xen_initial_gdt;
>  
>  static int xen_cpu_up_prepare_pv(unsigned int cpu);
> @@ -1258,6 +1260,7 @@ asmlinkage __visible void __init xen_sta
>  	__userpte_alloc_gfp &= ~__GFP_HIGHMEM;
>  
>  	/* Work out if we support NX */
> +	get_cpu_cap(&boot_cpu_data);
>  	x86_configure_nx();
>  
>  	/* Get mfn list */
> --- 4.15-rc4/arch/x86/xen/mmu_pv.c
> +++ 4.15-rc4-x86_64-Xen-avoid-W+X/arch/x86/xen/mmu_pv.c
> @@ -1902,6 +1902,18 @@ void __init xen_setup_kernel_pagetable(p
>  	/* Graft it onto L4[511][510] */
>  	copy_page(level2_kernel_pgt, l2);
>  
> +	/*
> +	 * Zap execute permission from the ident map. Due to the sharing of
> +	 * L1 entries we need to do this in the L2.
> +	 */
> +	if (__supported_pte_mask & _PAGE_NX)
> +		for (i = 0; i < PTRS_PER_PMD; ++i) {
> +			if (pmd_none(level2_ident_pgt[i]))
> +				continue;
> +			level2_ident_pgt[i] = pmd_set_flags(level2_ident_pgt[i],
> +							    _PAGE_NX);
> +		}
> +

This chunk has two stylistic problems:

 - Curly braces need to be added
 - Line broken in an ugly fashion: just make it long and ignore the checkpatch col80 warning

looks good otherwise.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 19+ messages in thread
* Re: [PATCH v2] x86-64/Xen: eliminate W+X mappings
  2017-12-18 12:28   ` Ingo Molnar
@ 2017-12-18 15:52     ` Joe Perches
  0 siblings, 0 replies; 19+ messages in thread
From: Joe Perches @ 2017-12-18 15:52 UTC (permalink / raw)
  To: Ingo Molnar, Jan Beulich
  Cc: Boris Ostrovsky, Juergen Gross, mingo, tglx, xen-devel, linux-kernel, hpa, Borislav Petkov

On Mon, 2017-12-18 at 13:28 +0100, Ingo Molnar wrote:
> * Jan Beulich <JBeulich@suse.com> wrote:
> 
> > A few thousand such pages are usually left around due to the re-use of
> > L1 tables having been provided by the hypervisor (Dom0) or tool stack
> > (DomU). Set NX in the direct map variant, which needs to be done in L2
> > due to the dual use of the re-used L1s.
> > 
> > For x86_configure_nx() to actually do what it is supposed to do, call
> > get_cpu_cap() first. This was broken by commit 4763ed4d45 ("x86, mm:
> > Clean up and simplify NX enablement") when switching away from the
> > direct EFER read.
[]
> > --- 4.15-rc4/arch/x86/xen/mmu_pv.c
> > +++ 4.15-rc4-x86_64-Xen-avoid-W+X/arch/x86/xen/mmu_pv.c
> > @@ -1902,6 +1902,18 @@ void __init xen_setup_kernel_pagetable(p
> >  	/* Graft it onto L4[511][510] */
> >  	copy_page(level2_kernel_pgt, l2);
> >  
> > +	/*
> > +	 * Zap execute permission from the ident map. Due to the sharing of
> > +	 * L1 entries we need to do this in the L2.
> > +	 */
> > +	if (__supported_pte_mask & _PAGE_NX)
> > +		for (i = 0; i < PTRS_PER_PMD; ++i) {
> > +			if (pmd_none(level2_ident_pgt[i]))
> > +				continue;
> > +			level2_ident_pgt[i] = pmd_set_flags(level2_ident_pgt[i],
> > +							    _PAGE_NX);
> > +		}
> > +
> 
> This chunk has two stylistic problems:
> 
>  - Curly braces need to be added
>  - Line broken in an ugly fashion: just make it long and ignore the checkpatch col80 warning
> 
> looks good otherwise.

stylistic trivia:

Instead of repeating level2_ident_pgt[i] multiple times, it might be
nicer to use temporaries and not use i at all.

Something like:

	if (__supported_pte_mask & _PAGE_NX) {
		pmd_t *pmd = level2_ident_pgt;
		pmd_t *end = pmd + PTRS_PER_PMD;

		for (; pmd < end; pmd++) {
			if (!pmd_none(*pmd))
				*pmd = pmd_set_flags(*pmd, _PAGE_NX);
		}
	}

^ permalink raw reply	[flat|nested] 19+ messages in thread
* [PATCH v3] x86-64/Xen: eliminate W+X mappings
  2017-12-12 10:32 ` [PATCH 2/2] x86-64/Xen: eliminate W+X mappings Jan Beulich
  2017-12-12 10:38   ` Ingo Molnar
  2017-12-18 11:11 ` [PATCH v2] " Jan Beulich
@ 2017-12-18 16:37 ` Jan Beulich
  2 siblings, 0 replies; 19+ messages in thread
From: Jan Beulich @ 2017-12-18 16:37 UTC (permalink / raw)
  To: Boris Ostrovsky, Juergen Gross; +Cc: mingo, tglx, xen-devel, linux-kernel, hpa

A few thousand such pages are usually left around due to the re-use of
L1 tables having been provided by the hypervisor (Dom0) or tool stack
(DomU). Set NX in the direct map variant, which needs to be done in L2
due to the dual use of the re-used L1s.

For x86_configure_nx() to actually do what it is supposed to do, call
get_cpu_cap() first. This was broken by commit 4763ed4d45 ("x86, mm:
Clean up and simplify NX enablement") when switching away from the
direct EFER read.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
---
v3: More style adjustment.
v2: Adjust comment style and indentation.
---
 arch/x86/xen/enlighten_pv.c |  3 +++
 arch/x86/xen/mmu_pv.c       | 10 ++++++++++
 2 files changed, 13 insertions(+)

--- 4.15-rc3/arch/x86/xen/enlighten_pv.c
+++ 4.15-rc3-x86_64-Xen-avoid-W+X/arch/x86/xen/enlighten_pv.c
@@ -88,6 +88,8 @@
 #include "multicalls.h"
 #include "pmu.h"
 
+#include "../kernel/cpu/cpu.h" /* get_cpu_cap() */
+
 void *xen_initial_gdt;
 
 static int xen_cpu_up_prepare_pv(unsigned int cpu);
@@ -1258,6 +1260,7 @@ asmlinkage __visible void __init xen_sta
 	__userpte_alloc_gfp &= ~__GFP_HIGHMEM;
 
 	/* Work out if we support NX */
+	get_cpu_cap(&boot_cpu_data);
 	x86_configure_nx();
 
 	/* Get mfn list */
--- 4.15-rc4/arch/x86/xen/mmu_pv.c
+++ 4.15-rc4-x86_64-Xen-avoid-W+X/arch/x86/xen/mmu_pv.c
@@ -1902,6 +1902,18 @@ void __init xen_setup_kernel_pagetable(p
 	/* Graft it onto L4[511][510] */
 	copy_page(level2_kernel_pgt, l2);
 
+	/*
+	 * Zap execute permission from the ident map. Due to the sharing of
+	 * L1 entries we need to do this in the L2.
+	 */
+	if (__supported_pte_mask & _PAGE_NX) {
+		for (i = 0; i < PTRS_PER_PMD; ++i) {
+			if (pmd_none(level2_ident_pgt[i]))
+				continue;
+			level2_ident_pgt[i] = pmd_set_flags(level2_ident_pgt[i], _PAGE_NX);
+		}
+	}
+
 	/* Copy the initial P->M table mappings if necessary. */
 	i = pgd_index(xen_start_info->mfn_list);
 	if (i && i < pgd_index(__START_KERNEL_map))

^ permalink raw reply	[flat|nested] 19+ messages in thread
* Re: [PATCH 1/2] x86: consider effective protection attributes in W+X check
  [not found] ` <5A2FBE0A0200007800196B4F@suse.com>
@ 2017-12-14 14:04   ` Juergen Gross
  2017-12-14 14:12     ` Thomas Gleixner
  2017-12-14 14:15     ` Jan Beulich
  0 siblings, 2 replies; 19+ messages in thread
From: Juergen Gross @ 2017-12-14 14:04 UTC (permalink / raw)
To: Jan Beulich, mingo, Thomas Gleixner, hpa
Cc: xen-devel, Boris Ostrovsky, sds, linux-kernel

On 12/12/17 11:31, Jan Beulich wrote:
> Using just the leaf page table entry flags would cause a false warning
> in case _PAGE_RW is clear or _PAGE_NX is set in a higher level entry.
> Hand through both the current entry's flags as well as the accumulated
> effective value (the latter as pgprotval_t instead of pgprot_t, as it's
> not an actual entry's value).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
>  arch/x86/mm/dump_pagetables.c | 92 ++++++++++++++++++++++++++----------------
>  1 file changed, 57 insertions(+), 35 deletions(-)
> 
> --- 4.15-rc3/arch/x86/mm/dump_pagetables.c
> +++ 4.15-rc3-x86-dumppgt-effective-prot/arch/x86/mm/dump_pagetables.c
> @@ -29,6 +29,7 @@
>  struct pg_state {
>  	int level;
>  	pgprot_t current_prot;
> +	pgprotval_t effective_prot;
>  	unsigned long start_address;
>  	unsigned long current_address;
>  	const struct addr_marker *marker;
> @@ -202,9 +203,9 @@ static unsigned long normalize_addr(unsi
>   * print what we collected so far.
>   */
>  static void note_page(struct seq_file *m, struct pg_state *st,
> -		      pgprot_t new_prot, int level)
> +		      pgprot_t new_prot, pgprotval_t new_eff, int level)
>  {
> -	pgprotval_t prot, cur;
> +	pgprotval_t prot, cur, eff;
>  	static const char units[] = "BKMGTPE";
> 
>  	/*
> @@ -214,23 +215,24 @@ static void note_page(struct seq_file *m
>  	 */
>  	prot = pgprot_val(new_prot);
>  	cur = pgprot_val(st->current_prot);
> +	eff = st->effective_prot;
> 
>  	if (!st->level) {
>  		/* First entry */
>  		st->current_prot = new_prot;
> +		st->effective_prot = new_eff;
>  		st->level = level;
>  		st->marker = address_markers;
>  		st->lines = 0;
>  		pt_dump_seq_printf(m, st->to_dmesg, "---[ %s ]---\n",
>  				   st->marker->name);
> -	} else if (prot != cur || level != st->level ||
> +	} else if (prot != cur || new_eff != eff || level != st->level ||
>  		   st->current_address >= st->marker[1].start_address) {
>  		const char *unit = units;
>  		unsigned long delta;
>  		int width = sizeof(unsigned long) * 2;
> -		pgprotval_t pr = pgprot_val(st->current_prot);
> 
> -		if (st->check_wx && (pr & _PAGE_RW) && !(pr & _PAGE_NX)) {
> +		if (st->check_wx && (eff & _PAGE_RW) && !(eff & _PAGE_NX)) {
>  			WARN_ONCE(1,
>  				  "x86/mm: Found insecure W+X mapping at address %p/%pS\n",
>  				  (void *)st->start_address,
> @@ -284,21 +286,30 @@ static void note_page(struct seq_file *m
> 
>  		st->start_address = st->current_address;
>  		st->current_prot = new_prot;
> +		st->effective_prot = new_eff;
>  		st->level = level;
>  	}
>  }
> 
> -static void walk_pte_level(struct seq_file *m, struct pg_state *st, pmd_t addr, unsigned long P)
> +static inline pgprotval_t effective_prot(pgprotval_t prot1, pgprotval_t prot2)
> +{
> +	return (prot1 & prot2 & (_PAGE_USER | _PAGE_RW)) |
> +	       ((prot1 | prot2) & _PAGE_NX);
> +}
> +
> +static void walk_pte_level(struct seq_file *m, struct pg_state *st, pmd_t addr,
> +			   pgprotval_t eff_in, unsigned long P)
>  {
>  	int i;
>  	pte_t *start;
> -	pgprotval_t prot;
> +	pgprotval_t prot, eff;
> 
>  	start = (pte_t *)pmd_page_vaddr(addr);
>  	for (i = 0; i < PTRS_PER_PTE; i++) {
>  		prot = pte_flags(*start);
> +		eff = effective_prot(eff_in, prot);
>  		st->current_address = normalize_addr(P + i * PTE_LEVEL_MULT);
> -		note_page(m, st, __pgprot(prot), 5);
> +		note_page(m, st, __pgprot(prot), eff, 5);
>  		start++;
>  	}
>  }
> @@ -335,42 +346,45 @@ static inline bool kasan_page_table(stru
> 
>  #if PTRS_PER_PMD > 1
> 
> -static void walk_pmd_level(struct seq_file *m, struct pg_state *st, pud_t addr, unsigned long P)
> +static void walk_pmd_level(struct seq_file *m, struct pg_state *st, pud_t addr,
> +			   pgprotval_t eff_in, unsigned long P)
>  {
>  	int i;
>  	pmd_t *start, *pmd_start;
> -	pgprotval_t prot;
> +	pgprotval_t prot, eff;
> 
>  	pmd_start = start = (pmd_t *)pud_page_vaddr(addr);
>  	for (i = 0; i < PTRS_PER_PMD; i++) {
>  		st->current_address = normalize_addr(P + i * PMD_LEVEL_MULT);
>  		if (!pmd_none(*start)) {
> +			prot = pmd_flags(*start);
> +			eff = effective_prot(eff_in, prot);
>  			if (pmd_large(*start) || !pmd_present(*start)) {
> -				prot = pmd_flags(*start);
> -				note_page(m, st, __pgprot(prot), 4);
> +				note_page(m, st, __pgprot(prot), eff, 4);
>  			} else if (!kasan_page_table(m, st, pmd_start)) {
> -				walk_pte_level(m, st, *start,
> +				walk_pte_level(m, st, *start, eff,
>  					       P + i * PMD_LEVEL_MULT);
>  			}

You can drop the braces for both cases. Applies to similar
constructs below, too.

With that fixed you can add my:

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
* Re: [PATCH 1/2] x86: consider effective protection attributes in W+X check
  2017-12-14 14:04 ` [PATCH 1/2] x86: consider effective protection attributes in W+X check Juergen Gross
@ 2017-12-14 14:12   ` Thomas Gleixner
  0 siblings, 0 replies; 19+ messages in thread
From: Thomas Gleixner @ 2017-12-14 14:12 UTC (permalink / raw)
To: Juergen Gross
Cc: Jan Beulich, mingo, hpa, xen-devel, Boris Ostrovsky, sds, linux-kernel

On Thu, 14 Dec 2017, Juergen Gross wrote:
> On 12/12/17 11:31, Jan Beulich wrote:
> >  	for (i = 0; i < PTRS_PER_PMD; i++) {
> >  		st->current_address = normalize_addr(P + i * PMD_LEVEL_MULT);
> >  		if (!pmd_none(*start)) {
> > +			prot = pmd_flags(*start);
> > +			eff = effective_prot(eff_in, prot);
> >  			if (pmd_large(*start) || !pmd_present(*start)) {
> > -				prot = pmd_flags(*start);
> > -				note_page(m, st, __pgprot(prot), 4);
> > +				note_page(m, st, __pgprot(prot), eff, 4);
> > 			} else if (!kasan_page_table(m, st, pmd_start)) {
> > -				walk_pte_level(m, st, *start,
> > +				walk_pte_level(m, st, *start, eff,
> > 					       P + i * PMD_LEVEL_MULT);
> > 			}
> 
> You can drop the braces for both cases. Applies to similar
> constructs below, too.

No. See:

  https://marc.info/?l=linux-kernel&m=148467980905537

This is the same issue:

	if (foo)
		bla();
	else
		blurb(somestuff, morestuff, evenmorestuff,
		      crap);

vs.

	if (foo) {
		bla();
	} else {
		blurb(somestuff, morestuff, evenmorestuff,
		      crap);
	}

Thanks,

	tglx
* Re: [PATCH 1/2] x86: consider effective protection attributes in W+X check
  2017-12-14 14:04 ` [PATCH 1/2] x86: consider effective protection attributes in W+X check Juergen Gross
  2017-12-14 14:12   ` Thomas Gleixner
@ 2017-12-14 14:15   ` Jan Beulich
  2017-12-14 14:17     ` Thomas Gleixner
  2017-12-14 14:21     ` Juergen Gross
  1 sibling, 2 replies; 19+ messages in thread
From: Jan Beulich @ 2017-12-14 14:15 UTC (permalink / raw)
To: Juergen Gross
Cc: mingo, Thomas Gleixner, xen-devel, Boris Ostrovsky, sds, linux-kernel, hpa

>>> On 14.12.17 at 15:04, <jgross@suse.com> wrote:
> On 12/12/17 11:31, Jan Beulich wrote:
>> @@ -335,42 +346,45 @@ static inline bool kasan_page_table(stru
>> 
>>  #if PTRS_PER_PMD > 1
>> 
>> -static void walk_pmd_level(struct seq_file *m, struct pg_state *st, pud_t addr, unsigned long P)
>> +static void walk_pmd_level(struct seq_file *m, struct pg_state *st, pud_t addr,
>> +			   pgprotval_t eff_in, unsigned long P)
>>  {
>>  	int i;
>>  	pmd_t *start, *pmd_start;
>> -	pgprotval_t prot;
>> +	pgprotval_t prot, eff;
>> 
>>  	pmd_start = start = (pmd_t *)pud_page_vaddr(addr);
>>  	for (i = 0; i < PTRS_PER_PMD; i++) {
>>  		st->current_address = normalize_addr(P + i * PMD_LEVEL_MULT);
>>  		if (!pmd_none(*start)) {
>> +			prot = pmd_flags(*start);
>> +			eff = effective_prot(eff_in, prot);
>>  			if (pmd_large(*start) || !pmd_present(*start)) {
>> -				prot = pmd_flags(*start);
>> -				note_page(m, st, __pgprot(prot), 4);
>> +				note_page(m, st, __pgprot(prot), eff, 4);
>> 			} else if (!kasan_page_table(m, st, pmd_start)) {
>> -				walk_pte_level(m, st, *start,
>> +				walk_pte_level(m, st, *start, eff,
>> 					       P + i * PMD_LEVEL_MULT);
>> 			}
> 
> You can drop the braces for both cases. Applies to similar
> constructs below, too.

I did consider that, but decided against to allow the patch to show
more clearly what it is that is actually being changed.

> With that fixed you can add my:
> 
> Reviewed-by: Juergen Gross <jgross@suse.com>

Thanks. I'd like to wait for the x86 maintainer's opinion, and hence
won't add your R-b unless you tell me that's fine either way, or
unless they too would prefer resulting code cleanliness over patch
readability.

Jan
* Re: [PATCH 1/2] x86: consider effective protection attributes in W+X check
  2017-12-14 14:15 ` Jan Beulich
@ 2017-12-14 14:17   ` Thomas Gleixner
  0 siblings, 0 replies; 19+ messages in thread
From: Thomas Gleixner @ 2017-12-14 14:17 UTC (permalink / raw)
To: Jan Beulich
Cc: Juergen Gross, mingo, xen-devel, Boris Ostrovsky, sds, linux-kernel, hpa

On Thu, 14 Dec 2017, Jan Beulich wrote:
> >>> On 14.12.17 at 15:04, <jgross@suse.com> wrote:
> > On 12/12/17 11:31, Jan Beulich wrote:
> >> @@ -335,42 +346,45 @@ static inline bool kasan_page_table(stru
> >> 
> >>  #if PTRS_PER_PMD > 1
> >> 
> >> -static void walk_pmd_level(struct seq_file *m, struct pg_state *st, pud_t addr, unsigned long P)
> >> +static void walk_pmd_level(struct seq_file *m, struct pg_state *st, pud_t addr,
> >> +			   pgprotval_t eff_in, unsigned long P)
> >>  {
> >>  	int i;
> >>  	pmd_t *start, *pmd_start;
> >> -	pgprotval_t prot;
> >> +	pgprotval_t prot, eff;
> >> 
> >>  	pmd_start = start = (pmd_t *)pud_page_vaddr(addr);
> >>  	for (i = 0; i < PTRS_PER_PMD; i++) {
> >>  		st->current_address = normalize_addr(P + i * PMD_LEVEL_MULT);
> >>  		if (!pmd_none(*start)) {
> >> +			prot = pmd_flags(*start);
> >> +			eff = effective_prot(eff_in, prot);
> >>  			if (pmd_large(*start) || !pmd_present(*start)) {
> >> -				prot = pmd_flags(*start);
> >> -				note_page(m, st, __pgprot(prot), 4);
> >> +				note_page(m, st, __pgprot(prot), eff, 4);
> >> 			} else if (!kasan_page_table(m, st, pmd_start)) {
> >> -				walk_pte_level(m, st, *start,
> >> +				walk_pte_level(m, st, *start, eff,
> >> 					       P + i * PMD_LEVEL_MULT);
> >> 			}
> > 
> > You can drop the braces for both cases. Applies to similar
> > constructs below, too.
> 
> I did consider that, but decided against to allow the patch to show
> more clearly what it is that is actually being changed.
> 
> > With that fixed you can add my:
> > 
> > Reviewed-by: Juergen Gross <jgross@suse.com>
> 
> Thanks. I'd like to wait for the x86 maintainer's opinion, and hence
> won't add your R-b unless you tell me that's fine either way, or
> unless they too would prefer resulting code cleanliness over patch
> readability.

If you remove the braces the code readability degrades because it's not
a single line statement.

Thanks,

	tglx
* Re: [PATCH 1/2] x86: consider effective protection attributes in W+X check
  2017-12-14 14:15 ` Jan Beulich
  2017-12-14 14:17   ` Thomas Gleixner
@ 2017-12-14 14:21   ` Juergen Gross
  1 sibling, 0 replies; 19+ messages in thread
From: Juergen Gross @ 2017-12-14 14:21 UTC (permalink / raw)
To: Jan Beulich
Cc: mingo, Thomas Gleixner, xen-devel, Boris Ostrovsky, sds, linux-kernel, hpa

On 14/12/17 15:15, Jan Beulich wrote:
>>>> On 14.12.17 at 15:04, <jgross@suse.com> wrote:
>> On 12/12/17 11:31, Jan Beulich wrote:
>>> @@ -335,42 +346,45 @@ static inline bool kasan_page_table(stru
>>> 
>>>  #if PTRS_PER_PMD > 1
>>> 
>>> -static void walk_pmd_level(struct seq_file *m, struct pg_state *st, pud_t addr, unsigned long P)
>>> +static void walk_pmd_level(struct seq_file *m, struct pg_state *st, pud_t addr,
>>> +			   pgprotval_t eff_in, unsigned long P)
>>>  {
>>>  	int i;
>>>  	pmd_t *start, *pmd_start;
>>> -	pgprotval_t prot;
>>> +	pgprotval_t prot, eff;
>>> 
>>>  	pmd_start = start = (pmd_t *)pud_page_vaddr(addr);
>>>  	for (i = 0; i < PTRS_PER_PMD; i++) {
>>>  		st->current_address = normalize_addr(P + i * PMD_LEVEL_MULT);
>>>  		if (!pmd_none(*start)) {
>>> +			prot = pmd_flags(*start);
>>> +			eff = effective_prot(eff_in, prot);
>>> 			if (pmd_large(*start) || !pmd_present(*start)) {
>>> -				prot = pmd_flags(*start);
>>> -				note_page(m, st, __pgprot(prot), 4);
>>> +				note_page(m, st, __pgprot(prot), eff, 4);
>>> 			} else if (!kasan_page_table(m, st, pmd_start)) {
>>> -				walk_pte_level(m, st, *start,
>>> +				walk_pte_level(m, st, *start, eff,
>>> 					       P + i * PMD_LEVEL_MULT);
>>> 			}
>>
>> You can drop the braces for both cases. Applies to similar
>> constructs below, too.
>
> I did consider that, but decided against to allow the patch to show
> more clearly what it is that is actually being changed.
>
>> With that fixed you can add my:
>>
>> Reviewed-by: Juergen Gross <jgross@suse.com>
>
> Thanks. I'd like to wait for the x86 maintainer's opinion, and hence
> won't add your R-b unless you tell me that's fine either way, or
> unless they too would prefer resulting code cleanliness over patch
> readability.

I'm fine with the braces kept, too.


Juergen
* Re: [PATCH v2] x86-64/Xen: eliminate W+X mappings
  [not found] ` <5A37B0770200007800198130@suse.com>
@ 2017-12-18 11:50   ` Juergen Gross
  0 siblings, 0 replies; 19+ messages in thread
From: Juergen Gross @ 2017-12-18 11:50 UTC (permalink / raw)
To: Jan Beulich, Boris Ostrovsky; +Cc: mingo, tglx, xen-devel, linux-kernel, hpa

On 18/12/17 12:11, Jan Beulich wrote:
> A few thousand such pages are usually left around due to the re-use of
> L1 tables having been provided by the hypervisor (Dom0) or tool stack
> (DomU). Set NX in the direct map variant, which needs to be done in L2
> due to the dual use of the re-used L1s.
> 
> For x86_configure_nx() to actually do what it is supposed to do, call
> get_cpu_cap() first. This was broken by commit 4763ed4d45 ("x86, mm:
> Clean up and simplify NX enablement") when switching away from the
> direct EFER read.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: Adjust comment style and indentation.
> ---
> While I certainly dislike the added header inclusion to obtain the
> prototype for get_cpu_cap(), I couldn't find a better alternative. I'm
> open to suggestions.

Move the prototype to arch/x86/include/asm/cpu.h ?

With that fixed (or without it in case the x86 maintainers don't like
the prototype to be moved):

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
end of thread, other threads: [~2017-12-18 16:37 UTC | newest]

Thread overview: 19+ messages:
2017-12-12 10:24 [PATCH 0/2] x86: deal with remaining W+X pages on Xen Jan Beulich
2017-12-12 10:31 ` [PATCH 1/2] x86: consider effective protection attributes in W+X check Jan Beulich
2017-12-12 10:36 ` Ingo Molnar
2017-12-12 10:43 ` Jan Beulich
2017-12-12 10:32 ` [PATCH 2/2] x86-64/Xen: eliminate W+X mappings Jan Beulich
2017-12-12 10:38 ` Ingo Molnar
2017-12-12 10:48 ` Jan Beulich
2017-12-12 10:54 ` Ingo Molnar
[not found] ` <5A2FC2280200007800196BB8@suse.com>
2017-12-12 13:14 ` Juergen Gross
2017-12-18 11:11 ` [PATCH v2] " Jan Beulich
2017-12-18 12:28 ` Ingo Molnar
2017-12-18 15:52 ` Joe Perches
2017-12-18 16:37 ` [PATCH v3] " Jan Beulich
[not found] ` <5A2FBE0A0200007800196B4F@suse.com>
2017-12-14 14:04 ` [PATCH 1/2] x86: consider effective protection attributes in W+X check Juergen Gross
2017-12-14 14:12 ` Thomas Gleixner
2017-12-14 14:15 ` Jan Beulich
2017-12-14 14:17 ` Thomas Gleixner
2017-12-14 14:21 ` Juergen Gross
[not found] <5A2FAEB802000055000F9D66@suse.com>
[not found] ` <5A2FBE540200007800196B52@suse.com>
[not found] ` <5A37B0770200007800198130@suse.com>
2017-12-18 11:50 ` [PATCH v2] x86-64/Xen: eliminate W+X mappings Juergen Gross