* [PATCH v6 05/11] arm64/mm: Add support for XPFO
  From: Tycho Andersen @ 2017-09-07 17:36 UTC
  To: linux-arm-kernel

From: Juerg Haefliger <juerg.haefliger@canonical.com>

Enable support for eXclusive Page Frame Ownership (XPFO) for arm64 and
provide a hook for updating a single kernel page table entry (which is
required by the generic XPFO code).

v6: use flush_tlb_kernel_range() instead of __flush_tlb_one()

CC: linux-arm-kernel@lists.infradead.org
Signed-off-by: Juerg Haefliger <juerg.haefliger@canonical.com>
Signed-off-by: Tycho Andersen <tycho@docker.com>
---
 arch/arm64/Kconfig     |  1 +
 arch/arm64/mm/Makefile |  2 ++
 arch/arm64/mm/xpfo.c   | 58 ++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 61 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index dfd908630631..44fa44ef02ec 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -121,6 +121,7 @@ config ARM64
 	select SPARSE_IRQ
 	select SYSCTL_EXCEPTION_TRACE
 	select THREAD_INFO_IN_TASK
+	select ARCH_SUPPORTS_XPFO
 	help
 	  ARM 64-bit (AArch64) Linux support.

diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
index 9b0ba191e48e..22e5cab543d8 100644
--- a/arch/arm64/mm/Makefile
+++ b/arch/arm64/mm/Makefile
@@ -11,3 +11,5 @@ KASAN_SANITIZE_physaddr.o	+= n
 
 obj-$(CONFIG_KASAN)		+= kasan_init.o
 KASAN_SANITIZE_kasan_init.o	:= n
+
+obj-$(CONFIG_XPFO)		+= xpfo.o
diff --git a/arch/arm64/mm/xpfo.c b/arch/arm64/mm/xpfo.c
new file mode 100644
index 000000000000..678e2be848eb
--- /dev/null
+++ b/arch/arm64/mm/xpfo.c
@@ -0,0 +1,58 @@
+/*
+ * Copyright (C) 2017 Hewlett Packard Enterprise Development, L.P.
+ * Copyright (C) 2016 Brown University. All rights reserved.
+ *
+ * Authors:
+ *   Juerg Haefliger <juerg.haefliger@hpe.com>
+ *   Vasileios P. Kemerlis <vpk@cs.brown.edu>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ */
+
+#include <linux/mm.h>
+#include <linux/module.h>
+
+#include <asm/tlbflush.h>
+
+/*
+ * Lookup the page table entry for a virtual address and return a pointer to
+ * the entry. Based on x86 tree.
+ */
+static pte_t *lookup_address(unsigned long addr)
+{
+	pgd_t *pgd;
+	pud_t *pud;
+	pmd_t *pmd;
+
+	pgd = pgd_offset_k(addr);
+	if (pgd_none(*pgd))
+		return NULL;
+
+	pud = pud_offset(pgd, addr);
+	if (pud_none(*pud))
+		return NULL;
+
+	pmd = pmd_offset(pud, addr);
+	if (pmd_none(*pmd))
+		return NULL;
+
+	return pte_offset_kernel(pmd, addr);
+}
+
+/* Update a single kernel page table entry */
+inline void set_kpte(void *kaddr, struct page *page, pgprot_t prot)
+{
+	pte_t *pte = lookup_address((unsigned long)kaddr);
+
+	set_pte(pte, pfn_pte(page_to_pfn(page), prot));
+}
+
+inline void xpfo_flush_kernel_tlb(struct page *page, int order)
+{
+	unsigned long kaddr = (unsigned long)page_address(page);
+	unsigned long size = PAGE_SIZE;
+
+	flush_tlb_kernel_range(kaddr, kaddr + (1 << order) * size);
+}
-- 
2.11.0
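For context, the generic XPFO code earlier in this series drives the two
hooks above along these lines when a page is handed out to userspace. This
is a simplified sketch, not the literal generic-code patch: the wrapper
name xpfo_unmap_from_kernel() is made up here, and the real code also
tracks per-page state in page_ext.

	/* Sketch: drop an order-N allocation from the kernel's direct map. */
	static void xpfo_unmap_from_kernel(struct page *page, int order)
	{
		int i;

		/* Clear each page's direct-map PTE via the arch hook... */
		for (i = 0; i < (1 << order); i++)
			set_kpte(page_address(page + i), page + i,
				 __pgprot(0));

		/* ...then shoot down any stale kernel TLB entries. */
		xpfo_flush_kernel_tlb(page, order);
	}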
* [PATCH v6 05/11] arm64/mm: Add support for XPFO
  From: Christoph Hellwig @ 2017-09-08 07:53 UTC
  To: linux-arm-kernel

> +/*
> + * Lookup the page table entry for a virtual address and return a pointer to
> + * the entry. Based on x86 tree.
> + */
> +static pte_t *lookup_address(unsigned long addr)

Seems like this should be moved to common arm64 mm code and used by
kernel_page_present.
* [PATCH v6 05/11] arm64/mm: Add support for XPFO
  From: Tycho Andersen @ 2017-09-08 17:24 UTC
  To: linux-arm-kernel

On Fri, Sep 08, 2017 at 12:53:47AM -0700, Christoph Hellwig wrote:
> > +/*
> > + * Lookup the page table entry for a virtual address and return a pointer to
> > + * the entry. Based on x86 tree.
> > + */
> > +static pte_t *lookup_address(unsigned long addr)
>
> Seems like this should be moved to common arm64 mm code and used by
> kernel_page_present.

Sounds good, I'll include something like the patch below in the next
series.

Unfortunately, adding an implementation of lookup_address seems to be
slightly more complicated than necessary, because of the xen piece. We
have to define lookup_address() with the level parameter, but it's not
obvious to me how to name the page levels. So for now I've just left it
as a WARN() if someone supplies it.

It seems like xen still does need this to be defined, because if I
define it without level:

drivers/xen/xenbus/xenbus_client.c: In function 'xenbus_unmap_ring_vfree_pv':
drivers/xen/xenbus/xenbus_client.c:760:4: error: too many arguments to function 'lookup_address'
    lookup_address(addr, &level)).maddr;
    ^~~~~~~~~~~~~~
In file included from ./arch/arm64/include/asm/page.h:37:0,
                 from ./include/linux/mmzone.h:20,
                 from ./include/linux/gfp.h:5,
                 from ./include/linux/mm.h:9,
                 from drivers/xen/xenbus/xenbus_client.c:33:
./arch/arm64/include/asm/pgtable-types.h:67:15: note: declared here
 extern pte_t *lookup_address(unsigned long addr);
               ^~~~~~~~~~~~~~

I've cc-d the xen folks, maybe they can suggest a way to untangle it?
Alternatively, if someone can suggest a good naming scheme for the
page levels, I can just do that.

Cheers,

Tycho

From 0b3be95873e3e8caa39fa50efc0d06d57fc6eb5e Mon Sep 17 00:00:00 2001
From: Tycho Andersen <tycho@docker.com>
Date: Fri, 8 Sep 2017 10:43:26 -0600
Subject: [PATCH] arm64: add lookup_address()

Similarly to x86, let's add lookup_address() and use it in
kernel_page_present(). We'll use it in the next patch for the
implementation of XPFO as well.
Signed-off-by: Tycho Andersen <tycho@docker.com>
---
 arch/arm64/include/asm/pgtable-types.h |  2 ++
 arch/arm64/mm/pageattr.c               | 47 ++++++++++++++++++++-------------
 include/xen/arm/page.h                 | 10 --------
 3 files changed, 30 insertions(+), 29 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable-types.h b/arch/arm64/include/asm/pgtable-types.h
index 345a072b5856..fad3db5a673f 100644
--- a/arch/arm64/include/asm/pgtable-types.h
+++ b/arch/arm64/include/asm/pgtable-types.h
@@ -64,4 +64,6 @@ typedef struct { pteval_t pgprot; } pgprot_t;
 #include <asm-generic/5level-fixup.h>
 #endif
 
+extern pte_t *lookup_address(unsigned long addr, unsigned int *level);
+
 #endif	/* __ASM_PGTABLE_TYPES_H */
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index a682a0a2a0fa..437a12485873 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -138,6 +138,32 @@ int set_memory_valid(unsigned long addr, int numpages, int enable)
 					__pgprot(PTE_VALID));
 }
 
+pte_t *lookup_address(unsigned long addr, unsigned int *level)
+{
+	pgd_t *pgd;
+	pud_t *pud;
+	pmd_t *pmd;
+
+	if (unlikely(level)) {
+		WARN(1, "level unused on arm64\n");
+		*level = 0;
+	}
+
+	pgd = pgd_offset_k(addr);
+	if (pgd_none(*pgd))
+		return NULL;
+
+	pud = pud_offset(pgd, addr);
+	if (pud_none(*pud))
+		return NULL;
+
+	pmd = pmd_offset(pud, addr);
+	if (pmd_none(*pmd))
+		return NULL;
+
+	return pte_offset_kernel(pmd, addr);
+}
+
 #ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
@@ -156,29 +182,12 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
  */
 bool kernel_page_present(struct page *page)
 {
-	pgd_t *pgd;
-	pud_t *pud;
-	pmd_t *pmd;
-	pte_t *pte;
 	unsigned long addr = (unsigned long)page_address(page);
+	pte_t *pte = lookup_address(addr, NULL);
 
-	pgd = pgd_offset_k(addr);
-	if (pgd_none(*pgd))
-		return false;
-
-	pud = pud_offset(pgd, addr);
-	if (pud_none(*pud))
-		return false;
-	if (pud_sect(*pud))
-		return true;
-
-	pmd = pmd_offset(pud, addr);
-	if (pmd_none(*pmd))
+	if (!pte)
 		return false;
-	if (pmd_sect(*pmd))
-		return true;
 
-	pte = pte_offset_kernel(pmd, addr);
 	return pte_valid(*pte);
 }
 #endif /* CONFIG_HIBERNATION */
diff --git a/include/xen/arm/page.h b/include/xen/arm/page.h
index 415dbc6e43fd..6adc2a955340 100644
--- a/include/xen/arm/page.h
+++ b/include/xen/arm/page.h
@@ -84,16 +84,6 @@ static inline xmaddr_t arbitrary_virt_to_machine(void *vaddr)
 	BUG();
 }
 
-/* TODO: this shouldn't be here but it is because the frontend drivers
- * are using it (its rolled in headers) even though we won't hit the code path.
- * So for right now just punt with this.
- */
-static inline pte_t *lookup_address(unsigned long address, unsigned int *level)
-{
-	BUG();
-	return NULL;
-}
-
 extern int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
 				   struct gnttab_map_grant_ref *kmap_ops,
 				   struct page **pages, unsigned int count);
-- 
2.11.0
* [PATCH v6 05/11] arm64/mm: Add support for XPFO
  From: Julien Grall @ 2017-09-14 10:41 UTC
  To: linux-arm-kernel

Hi,

CC Juergen, Boris and Stefano.

On 08/09/17 18:24, Tycho Andersen wrote:
> On Fri, Sep 08, 2017 at 12:53:47AM -0700, Christoph Hellwig wrote:
>>> +/*
>>> + * Lookup the page table entry for a virtual address and return a pointer to
>>> + * the entry. Based on x86 tree.
>>> + */
>>> +static pte_t *lookup_address(unsigned long addr)
>>
>> Seems like this should be moved to common arm64 mm code and used by
>> kernel_page_present.
>
> Sounds good, I'll include something like the patch below in the next
> series.
>
> Unfortunately, adding an implementation of lookup_address seems to be
> slightly more complicated than necessary, because of the xen piece. We
> have to define lookup_address() with the level parameter, but it's not
> obvious to me how to name the page levels. So for now I've just left it
> as a WARN() if someone supplies it.
>
> It seems like xen still does need this to be defined, because if I
> define it without level:
>
> drivers/xen/xenbus/xenbus_client.c: In function 'xenbus_unmap_ring_vfree_pv':
> drivers/xen/xenbus/xenbus_client.c:760:4: error: too many arguments to function 'lookup_address'
>     lookup_address(addr, &level)).maddr;
>     ^~~~~~~~~~~~~~
> In file included from ./arch/arm64/include/asm/page.h:37:0,
>                  from ./include/linux/mmzone.h:20,
>                  from ./include/linux/gfp.h:5,
>                  from ./include/linux/mm.h:9,
>                  from drivers/xen/xenbus/xenbus_client.c:33:
> ./arch/arm64/include/asm/pgtable-types.h:67:15: note: declared here
>  extern pte_t *lookup_address(unsigned long addr);
>                ^~~~~~~~~~~~~~
>
> I've cc-d the xen folks, maybe they can suggest a way to untangle it?
> Alternatively, if someone can suggest a good naming scheme for the
> page levels, I can just do that.

The implementation of lookup_address(...) on ARM for Xen (see
include/xen/arm/page.h) is just a BUG(). This is because this code
should never be called (only used for x86 PV code).

Furthermore, xenbus client does not use the level at all. It is just to
cope with the x86 version of lookup_address.

So one way to solve the problem would be to introduce
xen_lookup_address(addr) that would be implemented as:
    - on x86
        unsigned int level;

        return lookup_address(addr, &level).maddr;
    - on ARM
        BUG();

With that there would be no prototype clash and avoid introducing a
level parameter.

Cheers,

-- 
Julien Grall
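A sketch of that suggestion (function name and placement assumed, not an
actual patch; the existing x86 caller in xenbus_client.c is
arbitrary_virt_to_machine(lookup_address(addr, &level)).maddr):

	/* x86: wrap the two-argument lookup_address() for xenbus. */
	static inline xmaddr_t xen_lookup_address(unsigned long addr)
	{
		unsigned int level;

		return arbitrary_virt_to_machine(lookup_address(addr, &level));
	}

	/* arm/arm64: this is a PV-only path that can never be reached. */
	static inline xmaddr_t xen_lookup_address(unsigned long addr)
	{
		BUG();
	}

The xenbus caller would then take .maddr of the returned xmaddr_t, and no
arm64 prototype for lookup_address() with a level parameter is needed.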
* [PATCH v6 05/11] arm64/mm: Add support for XPFO
  From: Juergen Gross @ 2017-09-14 11:29 UTC
  To: linux-arm-kernel

On 14/09/17 12:41, Julien Grall wrote:
> Hi,
>
> CC Juergen, Boris and Stefano.
>
> On 08/09/17 18:24, Tycho Andersen wrote:
>> On Fri, Sep 08, 2017 at 12:53:47AM -0700, Christoph Hellwig wrote:
>>>> +/*
>>>> + * Lookup the page table entry for a virtual address and return a
>>>> pointer to
>>>> + * the entry. Based on x86 tree.
>>>> + */
>>>> +static pte_t *lookup_address(unsigned long addr)
>>>
>>> Seems like this should be moved to common arm64 mm code and used by
>>> kernel_page_present.
>>
>> Sounds good, I'll include something like the patch below in the next
>> series.
>>
>> Unfortunately, adding an implementation of lookup_address seems to be
>> slightly more complicated than necessary, because of the xen piece. We
>> have to define lookup_address() with the level parameter, but it's not
>> obvious to me how to name the page levels. So for now I've just left it
>> as a WARN() if someone supplies it.
>>
>> It seems like xen still does need this to be defined, because if I
>> define it without level:
>>
>> drivers/xen/xenbus/xenbus_client.c: In function
>> 'xenbus_unmap_ring_vfree_pv':
>> drivers/xen/xenbus/xenbus_client.c:760:4: error: too many arguments to
>> function 'lookup_address'
>>     lookup_address(addr, &level)).maddr;
>>     ^~~~~~~~~~~~~~
>> In file included from ./arch/arm64/include/asm/page.h:37:0,
>>                  from ./include/linux/mmzone.h:20,
>>                  from ./include/linux/gfp.h:5,
>>                  from ./include/linux/mm.h:9,
>>                  from drivers/xen/xenbus/xenbus_client.c:33:
>> ./arch/arm64/include/asm/pgtable-types.h:67:15: note: declared here
>>  extern pte_t *lookup_address(unsigned long addr);
>>                ^~~~~~~~~~~~~~
>>
>> I've cc-d the xen folks, maybe they can suggest a way to untangle it?
>> Alternatively, if someone can suggest a good naming scheme for the
>> page levels, I can just do that.
>
> The implementation of lookup_address(...) on ARM for Xen (see
> include/xen/arm/page.h) is just a BUG(). This is because this code
> should never be called (only used for x86 PV code).
>
> Furthermore, xenbus client does not use the level at all. It is just to
> cope with the x86 version of lookup_address.
>
> So one way to solve the problem would be to introduce
> xen_lookup_address(addr) that would be implemented as:
>     - on x86
>         unsigned int level;
>
>         return lookup_address(addr, &level).maddr;
>     - on ARM
>         BUG();
>
> With that there would be no prototype clash and avoid introducing a
> level parameter.

I'd rather add some #ifdef CONFIG_XEN_PV and remove the *_pv functions
from ARM completely this way.

I'm sending a patch soon...

Juergen
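The shape of that change would be roughly the following (sketch only, not
Juergen's actual patch; the ring-ops names are the existing ones in
drivers/xen/xenbus/xenbus_client.c, while the exact guard and selection
logic are assumptions):

	/* Compile the PV-only ring ops only where they can run, so the
	 * ARM stub of lookup_address() is no longer needed at all.
	 */
	#ifdef CONFIG_XEN_PV
	static const struct xenbus_ring_ops ring_ops_pv = {
		.map = xenbus_map_ring_valloc_pv,
		.unmap = xenbus_unmap_ring_vfree_pv,
	};
	#endif

	void __init xenbus_ring_ops_init(void)
	{
	#ifdef CONFIG_XEN_PV
		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
			ring_ops = &ring_ops_pv;
			return;
		}
	#endif
		ring_ops = &ring_ops_hvm;
	}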
* [kernel-hardening] [PATCH v6 05/11] arm64/mm: Add support for XPFO
  From: Mark Rutland @ 2017-09-14 18:22 UTC
  To: linux-arm-kernel

Hi,

On Thu, Sep 07, 2017 at 11:36:03AM -0600, Tycho Andersen wrote:
> From: Juerg Haefliger <juerg.haefliger@canonical.com>
>
> Enable support for eXclusive Page Frame Ownership (XPFO) for arm64 and
> provide a hook for updating a single kernel page table entry (which is
> required by the generic XPFO code).
>
> v6: use flush_tlb_kernel_range() instead of __flush_tlb_one()
>
> CC: linux-arm-kernel@lists.infradead.org
> Signed-off-by: Juerg Haefliger <juerg.haefliger@canonical.com>
> Signed-off-by: Tycho Andersen <tycho@docker.com>
> ---
>  arch/arm64/Kconfig     |  1 +
>  arch/arm64/mm/Makefile |  2 ++
>  arch/arm64/mm/xpfo.c   | 58 ++++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 61 insertions(+)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index dfd908630631..44fa44ef02ec 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -121,6 +121,7 @@ config ARM64
>  	select SPARSE_IRQ
>  	select SYSCTL_EXCEPTION_TRACE
>  	select THREAD_INFO_IN_TASK
> +	select ARCH_SUPPORTS_XPFO

A bit of a nit, but this list is (intended to be) organised alphabetically.
Could you please try to retain that?

i.e. place this between ARCH_SUPPORTS_NUMA_BALANCING and
ARCH_WANT_COMPAT_IPC_PARSE_VERSION.

>  	help
>  	  ARM 64-bit (AArch64) Linux support.
>
> diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
> index 9b0ba191e48e..22e5cab543d8 100644
> --- a/arch/arm64/mm/Makefile
> +++ b/arch/arm64/mm/Makefile
> @@ -11,3 +11,5 @@ KASAN_SANITIZE_physaddr.o	+= n
>
>  obj-$(CONFIG_KASAN)		+= kasan_init.o
>  KASAN_SANITIZE_kasan_init.o	:= n
> +
> +obj-$(CONFIG_XPFO)		+= xpfo.o
> diff --git a/arch/arm64/mm/xpfo.c b/arch/arm64/mm/xpfo.c
> new file mode 100644
> index 000000000000..678e2be848eb
> --- /dev/null
> +++ b/arch/arm64/mm/xpfo.c
> @@ -0,0 +1,58 @@
> +/*
> + * Copyright (C) 2017 Hewlett Packard Enterprise Development, L.P.
> + * Copyright (C) 2016 Brown University. All rights reserved.
> + *
> + * Authors:
> + *   Juerg Haefliger <juerg.haefliger@hpe.com>
> + *   Vasileios P. Kemerlis <vpk@cs.brown.edu>
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms of the GNU General Public License version 2 as published by
> + * the Free Software Foundation.
> + */
> +
> +#include <linux/mm.h>
> +#include <linux/module.h>
> +
> +#include <asm/tlbflush.h>
> +
> +/*
> + * Lookup the page table entry for a virtual address and return a pointer to
> + * the entry. Based on x86 tree.
> + */

Is this intended for kernel VAs, user VAs, or both?

There are different constraints for fiddling with either (e.g. holding
mmap_sem), so we should be clear regarding the intended use-case.

> +static pte_t *lookup_address(unsigned long addr)
> +{
> +	pgd_t *pgd;
> +	pud_t *pud;
> +	pmd_t *pmd;
> +
> +	pgd = pgd_offset_k(addr);
> +	if (pgd_none(*pgd))
> +		return NULL;
> +
> +	pud = pud_offset(pgd, addr);
> +	if (pud_none(*pud))
> +		return NULL;

What if it's not none, but not a table?

I think we should check pud_sect() here, and/or pud_bad().

> +
> +	pmd = pmd_offset(pud, addr);
> +	if (pmd_none(*pmd))
> +		return NULL;

Likewise.
> +
> +	return pte_offset_kernel(pmd, addr);
> +}

Given this expects a pte, it might make more sense to call this
lookup_address_pte() to make that clear.

> +
> +/* Update a single kernel page table entry */
> +inline void set_kpte(void *kaddr, struct page *page, pgprot_t prot)
> +{
> +	pte_t *pte = lookup_address((unsigned long)kaddr);
> +
> +	set_pte(pte, pfn_pte(page_to_pfn(page), prot));

We can get NULL from lookup_address(), so this doesn't look right.

If NULL implies an error, drop a BUG_ON(!pte) before the set_pte.

> +}
> +
> +inline void xpfo_flush_kernel_tlb(struct page *page, int order)
> +{
> +	unsigned long kaddr = (unsigned long)page_address(page);
> +	unsigned long size = PAGE_SIZE;

	unsigned long size = PAGE_SIZE << order;

> +
> +	flush_tlb_kernel_range(kaddr, kaddr + (1 << order) * size);

... and this can be simpler.

I haven't brought myself back up to speed, so it might not be possible,
but I still think it would be preferable for XPFO to call
flush_tlb_kernel_range() directly.

Thanks,
Mark.
* [kernel-hardening] [PATCH v6 05/11] arm64/mm: Add support for XPFO
  From: Tycho Andersen @ 2017-09-18 21:27 UTC
  To: linux-arm-kernel

Hi Mark,

On Thu, Sep 14, 2017 at 07:22:08PM +0100, Mark Rutland wrote:
> Hi,
>
> On Thu, Sep 07, 2017 at 11:36:03AM -0600, Tycho Andersen wrote:
> > From: Juerg Haefliger <juerg.haefliger@canonical.com>
> >
> > Enable support for eXclusive Page Frame Ownership (XPFO) for arm64 and
> > provide a hook for updating a single kernel page table entry (which is
> > required by the generic XPFO code).
> >
> > v6: use flush_tlb_kernel_range() instead of __flush_tlb_one()
> >
> > CC: linux-arm-kernel@lists.infradead.org
> > Signed-off-by: Juerg Haefliger <juerg.haefliger@canonical.com>
> > Signed-off-by: Tycho Andersen <tycho@docker.com>
> > ---
> >  arch/arm64/Kconfig     |  1 +
> >  arch/arm64/mm/Makefile |  2 ++
> >  arch/arm64/mm/xpfo.c   | 58 ++++++++++++++++++++++++++++++++++++++++++++++
> >  3 files changed, 61 insertions(+)
> >
> > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > index dfd908630631..44fa44ef02ec 100644
> > --- a/arch/arm64/Kconfig
> > +++ b/arch/arm64/Kconfig
> > @@ -121,6 +121,7 @@ config ARM64
> >  	select SPARSE_IRQ
> >  	select SYSCTL_EXCEPTION_TRACE
> >  	select THREAD_INFO_IN_TASK
> > +	select ARCH_SUPPORTS_XPFO
>
> A bit of a nit, but this list is (intended to be) organised alphabetically.
> Could you please try to retain that?
>
> i.e. place this between ARCH_SUPPORTS_NUMA_BALANCING and
> ARCH_WANT_COMPAT_IPC_PARSE_VERSION.

Will do.

> >  	help
> >  	  ARM 64-bit (AArch64) Linux support.
> >
> > diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
> > index 9b0ba191e48e..22e5cab543d8 100644
> > --- a/arch/arm64/mm/Makefile
> > +++ b/arch/arm64/mm/Makefile
> > @@ -11,3 +11,5 @@ KASAN_SANITIZE_physaddr.o	+= n
> >
> >  obj-$(CONFIG_KASAN)		+= kasan_init.o
> >  KASAN_SANITIZE_kasan_init.o	:= n
> > +
> > +obj-$(CONFIG_XPFO)		+= xpfo.o
> > diff --git a/arch/arm64/mm/xpfo.c b/arch/arm64/mm/xpfo.c
> > new file mode 100644
> > index 000000000000..678e2be848eb
> > --- /dev/null
> > +++ b/arch/arm64/mm/xpfo.c
> > @@ -0,0 +1,58 @@
> > +/*
> > + * Copyright (C) 2017 Hewlett Packard Enterprise Development, L.P.
> > + * Copyright (C) 2016 Brown University. All rights reserved.
> > + *
> > + * Authors:
> > + *   Juerg Haefliger <juerg.haefliger@hpe.com>
> > + *   Vasileios P. Kemerlis <vpk@cs.brown.edu>
> > + *
> > + * This program is free software; you can redistribute it and/or modify it
> > + * under the terms of the GNU General Public License version 2 as published by
> > + * the Free Software Foundation.
> > + */
> > +
> > +#include <linux/mm.h>
> > +#include <linux/module.h>
> > +
> > +#include <asm/tlbflush.h>
> > +
> > +/*
> > + * Lookup the page table entry for a virtual address and return a pointer to
> > + * the entry. Based on x86 tree.
> > + */
>
> Is this intended for kernel VAs, user VAs, or both?
>
> There are different constraints for fiddling with either (e.g. holding
> mmap_sem), so we should be clear regarding the intended use-case.

kernel only; I can add a comment noting this.
> > +static pte_t *lookup_address(unsigned long addr)
> > +{
> > +	pgd_t *pgd;
> > +	pud_t *pud;
> > +	pmd_t *pmd;
> > +
> > +	pgd = pgd_offset_k(addr);
> > +	if (pgd_none(*pgd))
> > +		return NULL;
> > +
> > +	pud = pud_offset(pgd, addr);
> > +	if (pud_none(*pud))
> > +		return NULL;
>
> What if it's not none, but not a table?
>
> I think we should check pud_sect() here, and/or pud_bad().
>
> > +
> > +	pmd = pmd_offset(pud, addr);
> > +	if (pmd_none(*pmd))
> > +		return NULL;
>
> Likewise.

In principle pud_sect() should be okay, because we say that XPFO doesn't
support section mappings yet. I'll add a check for pud_bad(). However,
Christoph suggested that we move this to common code, and there it won't
be okay.

> > +
> > +	return pte_offset_kernel(pmd, addr);
> > +}
>
> Given this expects a pte, it might make more sense to call this
> lookup_address_pte() to make that clear.
>
> > +
> > +/* Update a single kernel page table entry */
> > +inline void set_kpte(void *kaddr, struct page *page, pgprot_t prot)
> > +{
> > +	pte_t *pte = lookup_address((unsigned long)kaddr);
> > +
> > +	set_pte(pte, pfn_pte(page_to_pfn(page), prot));
>
> We can get NULL from lookup_address(), so this doesn't look right.
>
> If NULL implies an error, drop a BUG_ON(!pte) before the set_pte.

It does, I'll add this (as a WARN() and then no-op), thanks.

> > +}
> > +
> > +inline void xpfo_flush_kernel_tlb(struct page *page, int order)
> > +{
> > +	unsigned long kaddr = (unsigned long)page_address(page);
> > +	unsigned long size = PAGE_SIZE;
>
> 	unsigned long size = PAGE_SIZE << order;
>
> > +
> > +	flush_tlb_kernel_range(kaddr, kaddr + (1 << order) * size);
>
> ... and this can be simpler.
>
> I haven't brought myself back up to speed, so it might not be possible,
> but I still think it would be preferable for XPFO to call
> flush_tlb_kernel_range() directly.

I don't think we can, since on x86 it uses smp functions, and in some
cases those aren't safe.

Cheers,

Tycho
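Putting those points together, the arm64 hooks would end up looking
something like the sketch below (the direction agreed above, not a posted
patch; the _pte suffix follows Mark's naming suggestion, and pud_bad()/
pmd_bad() reject section entries since XPFO forbids section mappings):

	static pte_t *lookup_address_pte(unsigned long addr)
	{
		pgd_t *pgd = pgd_offset_k(addr);
		pud_t *pud;
		pmd_t *pmd;

		if (pgd_none(*pgd))
			return NULL;

		pud = pud_offset(pgd, addr);
		/* a section entry here is not a table, so bail */
		if (pud_none(*pud) || pud_bad(*pud))
			return NULL;

		pmd = pmd_offset(pud, addr);
		if (pmd_none(*pmd) || pmd_bad(*pmd))
			return NULL;

		return pte_offset_kernel(pmd, addr);
	}

	inline void set_kpte(void *kaddr, struct page *page, pgprot_t prot)
	{
		pte_t *pte = lookup_address_pte((unsigned long)kaddr);

		/* warn and no-op rather than dereference NULL */
		if (WARN_ON(!pte))
			return;

		set_pte(pte, pfn_pte(page_to_pfn(page), prot));
	}

	inline void xpfo_flush_kernel_tlb(struct page *page, int order)
	{
		unsigned long kaddr = (unsigned long)page_address(page);

		flush_tlb_kernel_range(kaddr, kaddr + (PAGE_SIZE << order));
	}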
* [PATCH v6 07/11] arm64/mm, xpfo: temporarily map dcache regions
  From: Tycho Andersen @ 2017-09-07 17:36 UTC
  To: linux-arm-kernel

From: Juerg Haefliger <juerg.haefliger@canonical.com>

If the page is unmapped by XPFO, a data cache flush results in a fatal
page fault, so let's temporarily map the region, flush the cache, and then
unmap it.

v6: actually flush in the face of xpfo, and temporarily map the underlying
    memory so it can be flushed correctly

CC: linux-arm-kernel@lists.infradead.org
Signed-off-by: Juerg Haefliger <juerg.haefliger@canonical.com>
Signed-off-by: Tycho Andersen <tycho@docker.com>
---
 arch/arm64/mm/flush.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 21a8d828cbf4..e335e3fd4fca 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -20,6 +20,7 @@
 #include <linux/export.h>
 #include <linux/mm.h>
 #include <linux/pagemap.h>
+#include <linux/xpfo.h>
 
 #include <asm/cacheflush.h>
 #include <asm/cache.h>
@@ -28,9 +29,15 @@
 void sync_icache_aliases(void *kaddr, unsigned long len)
 {
 	unsigned long addr = (unsigned long)kaddr;
+	unsigned long num_pages = XPFO_NUM_PAGES(addr, len);
+	void *mapping[num_pages];
 
 	if (icache_is_aliasing()) {
+		xpfo_temp_map(kaddr, len, mapping,
+			      sizeof(mapping[0]) * num_pages);
 		__clean_dcache_area_pou(kaddr, len);
+		xpfo_temp_unmap(kaddr, len, mapping,
+				sizeof(mapping[0]) * num_pages);
 		__flush_icache_all();
 	} else {
 		flush_icache_range(addr, addr + len);
-- 
2.11.0
* [PATCH v6 07/11] arm64/mm, xpfo: temporarily map dcache regions
  From: Mark Rutland @ 2017-09-14 18:25 UTC
  To: linux-arm-kernel

On Thu, Sep 07, 2017 at 11:36:05AM -0600, Tycho Andersen wrote:
> From: Juerg Haefliger <juerg.haefliger@canonical.com>
>
> If the page is unmapped by XPFO, a data cache flush results in a fatal
> page fault, so let's temporarily map the region, flush the cache, and then
> unmap it.
>
> v6: actually flush in the face of xpfo, and temporarily map the underlying
>     memory so it can be flushed correctly
>
> CC: linux-arm-kernel@lists.infradead.org
> Signed-off-by: Juerg Haefliger <juerg.haefliger@canonical.com>
> Signed-off-by: Tycho Andersen <tycho@docker.com>
> ---
>  arch/arm64/mm/flush.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
> index 21a8d828cbf4..e335e3fd4fca 100644
> --- a/arch/arm64/mm/flush.c
> +++ b/arch/arm64/mm/flush.c
> @@ -20,6 +20,7 @@
>  #include <linux/export.h>
>  #include <linux/mm.h>
>  #include <linux/pagemap.h>
> +#include <linux/xpfo.h>
>
>  #include <asm/cacheflush.h>
>  #include <asm/cache.h>
> @@ -28,9 +29,15 @@
>  void sync_icache_aliases(void *kaddr, unsigned long len)
>  {
>  	unsigned long addr = (unsigned long)kaddr;
> +	unsigned long num_pages = XPFO_NUM_PAGES(addr, len);
> +	void *mapping[num_pages];
>
>  	if (icache_is_aliasing()) {
> +		xpfo_temp_map(kaddr, len, mapping,
> +			      sizeof(mapping[0]) * num_pages);
>  		__clean_dcache_area_pou(kaddr, len);
> +		xpfo_temp_unmap(kaddr, len, mapping,
> +				sizeof(mapping[0]) * num_pages);

Does this create the mapping in-place?

Can we not just kmap_atomic() an alias? Or is there a problem with that?

Thanks,
Mark.
* [PATCH v6 07/11] arm64/mm, xpfo: temporarily map dcache regions
  From: Tycho Andersen @ 2017-09-18 21:29 UTC
  To: linux-arm-kernel

On Thu, Sep 14, 2017 at 07:25:56PM +0100, Mark Rutland wrote:
> On Thu, Sep 07, 2017 at 11:36:05AM -0600, Tycho Andersen wrote:
> > From: Juerg Haefliger <juerg.haefliger@canonical.com>
> >
> > If the page is unmapped by XPFO, a data cache flush results in a fatal
> > page fault, so let's temporarily map the region, flush the cache, and then
> > unmap it.
> >
> > v6: actually flush in the face of xpfo, and temporarily map the underlying
> >     memory so it can be flushed correctly
> >
> > CC: linux-arm-kernel@lists.infradead.org
> > Signed-off-by: Juerg Haefliger <juerg.haefliger@canonical.com>
> > Signed-off-by: Tycho Andersen <tycho@docker.com>
> > ---
> >  arch/arm64/mm/flush.c | 7 +++++++
> >  1 file changed, 7 insertions(+)
> >
> > diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
> > index 21a8d828cbf4..e335e3fd4fca 100644
> > --- a/arch/arm64/mm/flush.c
> > +++ b/arch/arm64/mm/flush.c
> > @@ -20,6 +20,7 @@
> >  #include <linux/export.h>
> >  #include <linux/mm.h>
> >  #include <linux/pagemap.h>
> > +#include <linux/xpfo.h>
> >
> >  #include <asm/cacheflush.h>
> >  #include <asm/cache.h>
> > @@ -28,9 +29,15 @@
> >  void sync_icache_aliases(void *kaddr, unsigned long len)
> >  {
> >  	unsigned long addr = (unsigned long)kaddr;
> > +	unsigned long num_pages = XPFO_NUM_PAGES(addr, len);
> > +	void *mapping[num_pages];
> >
> >  	if (icache_is_aliasing()) {
> > +		xpfo_temp_map(kaddr, len, mapping,
> > +			      sizeof(mapping[0]) * num_pages);
> >  		__clean_dcache_area_pou(kaddr, len);
> > +		xpfo_temp_unmap(kaddr, len, mapping,
> > +				sizeof(mapping[0]) * num_pages);
>
> Does this create the mapping in-place?
>
> Can we not just kmap_atomic() an alias? Or is there a problem with that?

I think what we really want is something like vmap(); looking at
xpfo_temp_map() it seems like the implementation is completely wrong.

I wonder if what you mentioned at LSS is possible though: doing cache
management with userspace primitives instead of mapping the region just
to flush it.

Cheers,

Tycho
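A vmap()-based temporary alias would look roughly like this (an untested
sketch of the idea; the name xpfo_temp_map_vmap() is made up, the
single-return-value interface differs from the current array-based
xpfo_temp_map(), and since vmap() allocates and can sleep this only works
from non-atomic context):

	#include <linux/mm.h>
	#include <linux/slab.h>
	#include <linux/vmalloc.h>

	/*
	 * Map the (XPFO-unmapped) pages backing [addr, addr + size) at a
	 * temporary alias and return the alias address, or NULL on failure.
	 * The caller tears the alias down again with vunmap().
	 */
	static void *xpfo_temp_map_vmap(const void *addr, size_t size)
	{
		unsigned long offset = (unsigned long)addr & ~PAGE_MASK;
		unsigned long num_pages = XPFO_NUM_PAGES(addr, size);
		struct page **pages;
		void *vaddr;
		unsigned long i;

		pages = kmalloc_array(num_pages, sizeof(*pages), GFP_KERNEL);
		if (!pages)
			return NULL;

		for (i = 0; i < num_pages; i++)
			pages[i] = virt_to_page((void *)addr + i * PAGE_SIZE);

		vaddr = vmap(pages, num_pages, VM_MAP, PAGE_KERNEL);
		kfree(pages);

		return vaddr ? vaddr + offset : NULL;
	}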
* [PATCH v6 08/11] arm64/mm: Add support for XPFO to swiotlb
  From: Tycho Andersen @ 2017-09-07 17:36 UTC
  To: linux-arm-kernel

From: Juerg Haefliger <juerg.haefliger@canonical.com>

Pages that are unmapped by XPFO need to be mapped before and unmapped
again after (to restore the original state) the __dma_{map,unmap}_area()
operations to prevent fatal page faults.

v6: * use the hoisted out temporary mapping code instead

CC: linux-arm-kernel@lists.infradead.org
Signed-off-by: Juerg Haefliger <juerg.haefliger@canonical.com>
Signed-off-by: Tycho Andersen <tycho@docker.com>
---
 arch/arm64/include/asm/cacheflush.h | 11 +++++++++++
 arch/arm64/mm/dma-mapping.c         | 32 ++++++++++++++++----------------
 arch/arm64/mm/xpfo.c                | 18 ++++++++++++++++++
 include/linux/xpfo.h                |  2 ++
 4 files changed, 47 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index d74a284abdc2..b6a462e3b2f9 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -93,6 +93,17 @@ extern void __dma_map_area(const void *, size_t, int);
 extern void __dma_unmap_area(const void *, size_t, int);
 extern void __dma_flush_area(const void *, size_t);
 
+#ifdef CONFIG_XPFO
+#include <linux/xpfo.h>
+#define _dma_map_area(addr, size, dir) \
+	xpfo_dma_map_unmap_area(true, addr, size, dir)
+#define _dma_unmap_area(addr, size, dir) \
+	xpfo_dma_map_unmap_area(false, addr, size, dir)
+#else
+#define _dma_map_area(addr, size, dir) __dma_map_area(addr, size, dir)
+#define _dma_unmap_area(addr, size, dir) __dma_unmap_area(addr, size, dir)
+#endif
+
 /*
  * Copy user data from/to a page which is mapped into a different
  * processes address space.  Really, we want to allow our "user
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index f27d4dd04384..a79f200786ab 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -204,7 +204,7 @@ static dma_addr_t __swiotlb_map_page(struct device *dev, struct page *page,
 	dev_addr = swiotlb_map_page(dev, page, offset, size, dir, attrs);
 	if (!is_device_dma_coherent(dev) &&
 	    (attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
-		__dma_map_area(phys_to_virt(dma_to_phys(dev, dev_addr)), size, dir);
+		_dma_map_area(phys_to_virt(dma_to_phys(dev, dev_addr)), size, dir);
 
 	return dev_addr;
 }
@@ -216,7 +216,7 @@ static void __swiotlb_unmap_page(struct device *dev, dma_addr_t dev_addr,
 {
 	if (!is_device_dma_coherent(dev) &&
 	    (attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
-		__dma_unmap_area(phys_to_virt(dma_to_phys(dev, dev_addr)), size, dir);
+		_dma_unmap_area(phys_to_virt(dma_to_phys(dev, dev_addr)), size, dir);
 	swiotlb_unmap_page(dev, dev_addr, size, dir, attrs);
 }
@@ -231,8 +231,8 @@ static int __swiotlb_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
 	if (!is_device_dma_coherent(dev) &&
 	    (attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
 		for_each_sg(sgl, sg, ret, i)
-			__dma_map_area(phys_to_virt(dma_to_phys(dev, sg->dma_address)),
-				       sg->length, dir);
+			_dma_map_area(phys_to_virt(dma_to_phys(dev, sg->dma_address)),
+				      sg->length, dir);
 
 	return ret;
 }
@@ -248,8 +248,8 @@ static void __swiotlb_unmap_sg_attrs(struct device *dev,
 	if (!is_device_dma_coherent(dev) &&
 	    (attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
 		for_each_sg(sgl, sg, nelems, i)
-			__dma_unmap_area(phys_to_virt(dma_to_phys(dev, sg->dma_address)),
-					 sg->length, dir);
+			_dma_unmap_area(phys_to_virt(dma_to_phys(dev, sg->dma_address)),
+					sg->length, dir);
 	swiotlb_unmap_sg_attrs(dev, sgl, nelems, dir, attrs);
 }
@@ -258,7 +258,7 @@ static void __swiotlb_sync_single_for_cpu(struct device *dev,
 					  enum dma_data_direction dir)
 {
 	if (!is_device_dma_coherent(dev))
-		__dma_unmap_area(phys_to_virt(dma_to_phys(dev, dev_addr)), size, dir);
+		_dma_unmap_area(phys_to_virt(dma_to_phys(dev, dev_addr)), size, dir);
 	swiotlb_sync_single_for_cpu(dev, dev_addr, size, dir);
 }
@@ -268,7 +268,7 @@ static void __swiotlb_sync_single_for_device(struct device *dev,
 {
 	swiotlb_sync_single_for_device(dev, dev_addr, size, dir);
 	if (!is_device_dma_coherent(dev))
-		__dma_map_area(phys_to_virt(dma_to_phys(dev, dev_addr)), size, dir);
+		_dma_map_area(phys_to_virt(dma_to_phys(dev, dev_addr)), size, dir);
 }
 
 static void __swiotlb_sync_sg_for_cpu(struct device *dev,
@@ -280,8 +280,8 @@ static void __swiotlb_sync_sg_for_cpu(struct device *dev,
 
 	if (!is_device_dma_coherent(dev))
 		for_each_sg(sgl, sg, nelems, i)
-			__dma_unmap_area(phys_to_virt(dma_to_phys(dev, sg->dma_address)),
-					 sg->length, dir);
+			_dma_unmap_area(phys_to_virt(dma_to_phys(dev, sg->dma_address)),
+					sg->length, dir);
 	swiotlb_sync_sg_for_cpu(dev, sgl, nelems, dir);
 }
@@ -295,8 +295,8 @@ static void __swiotlb_sync_sg_for_device(struct device *dev,
 	swiotlb_sync_sg_for_device(dev, sgl, nelems, dir);
 	if (!is_device_dma_coherent(dev))
 		for_each_sg(sgl, sg, nelems, i)
-			__dma_map_area(phys_to_virt(dma_to_phys(dev, sg->dma_address)),
-				       sg->length, dir);
+			_dma_map_area(phys_to_virt(dma_to_phys(dev, sg->dma_address)),
+				      sg->length, dir);
 }
 
 static int __swiotlb_mmap_pfn(struct vm_area_struct *vma,
@@ -758,7 +758,7 @@ static void __iommu_sync_single_for_cpu(struct device *dev,
 		return;
 
 	phys = iommu_iova_to_phys(iommu_get_domain_for_dev(dev), dev_addr);
-	__dma_unmap_area(phys_to_virt(phys), size, dir);
+	_dma_unmap_area(phys_to_virt(phys), size, dir);
 }
 
 static void __iommu_sync_single_for_device(struct device *dev,
@@ -771,7 +771,7 @@ static void __iommu_sync_single_for_device(struct device *dev,
 		return;
 
 	phys = iommu_iova_to_phys(iommu_get_domain_for_dev(dev), dev_addr);
-	__dma_map_area(phys_to_virt(phys), size, dir);
+	_dma_map_area(phys_to_virt(phys), size, dir);
 }
 
 static dma_addr_t __iommu_map_page(struct device *dev, struct page *page,
@@ -811,7 +811,7 @@ static void __iommu_sync_sg_for_cpu(struct device *dev,
 		return;
 
 	for_each_sg(sgl, sg, nelems, i)
-		__dma_unmap_area(sg_virt(sg), sg->length, dir);
+		_dma_unmap_area(sg_virt(sg), sg->length, dir);
 }
 
 static void __iommu_sync_sg_for_device(struct device *dev,
@@ -825,7 +825,7 @@ static void __iommu_sync_sg_for_device(struct device *dev,
 		return;
 
 	for_each_sg(sgl, sg, nelems, i)
-		__dma_map_area(sg_virt(sg), sg->length, dir);
+		_dma_map_area(sg_virt(sg), sg->length, dir);
 }
 
 static int __iommu_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
diff --git a/arch/arm64/mm/xpfo.c b/arch/arm64/mm/xpfo.c
index 678e2be848eb..342a9ccb93c1 100644
--- a/arch/arm64/mm/xpfo.c
+++ b/arch/arm64/mm/xpfo.c
@@ -11,8 +11,10 @@
  * the Free Software Foundation.
  */
 
+#include <linux/highmem.h>
 #include <linux/mm.h>
 #include <linux/module.h>
+#include <linux/xpfo.h>
 
 #include <asm/tlbflush.h>
 
@@ -56,3 +58,19 @@ inline void xpfo_flush_kernel_tlb(struct page *page, int order)
 
 	flush_tlb_kernel_range(kaddr, kaddr + (1 << order) * size);
 }
+
+void xpfo_dma_map_unmap_area(bool map, const void *addr, size_t size,
+			     enum dma_data_direction dir)
+{
+	unsigned long num_pages = XPFO_NUM_PAGES(addr, size);
+	void *mapping[num_pages];
+
+	xpfo_temp_map(addr, size, mapping, sizeof(mapping[0]) * num_pages);
+
+	if (map)
+		__dma_map_area(addr, size, dir);
+	else
+		__dma_unmap_area(addr, size, dir);
+
+	xpfo_temp_unmap(addr, size, mapping, sizeof(mapping[0]) * num_pages);
+}
diff --git a/include/linux/xpfo.h b/include/linux/xpfo.h
index 304b104ec637..d37a06c9d62c 100644
--- a/include/linux/xpfo.h
+++ b/include/linux/xpfo.h
@@ -18,6 +18,8 @@
 
 #ifdef CONFIG_XPFO
 
+#include <linux/dma-mapping.h>
+
 extern struct page_ext_operations page_xpfo_ops;
 
 void set_kpte(void *kaddr, struct page *page, pgprot_t prot);
-- 
2.11.0
* [PATCH v6 09/11] arm64/mm: disable section/contiguous mappings if XPFO is enabled
  From: Tycho Andersen @ 2017-09-07 17:36 UTC
  To: linux-arm-kernel

XPFO doesn't support section/contiguous mappings yet, so let's disable it
if XPFO is turned on.

Thanks to Laura Abbott for the simplification from v5, and Mark Rutland
for pointing out we need NO_CONT_MAPPINGS too.

CC: linux-arm-kernel@lists.infradead.org
Signed-off-by: Tycho Andersen <tycho@docker.com>
---
 arch/arm64/mm/mmu.c  | 2 +-
 include/linux/xpfo.h | 4 ++++
 mm/xpfo.c            | 6 ++++++
 3 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index f1eb15e0e864..34bb95303cce 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -420,7 +420,7 @@ static void __init map_mem(pgd_t *pgd)
 	struct memblock_region *reg;
 	int flags = 0;
 
-	if (debug_pagealloc_enabled())
+	if (debug_pagealloc_enabled() || xpfo_enabled())
 		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
 	/*
diff --git a/include/linux/xpfo.h b/include/linux/xpfo.h
index d37a06c9d62c..1693af1a0293 100644
--- a/include/linux/xpfo.h
+++ b/include/linux/xpfo.h
@@ -43,6 +43,8 @@ void xpfo_temp_map(const void *addr, size_t size, void **mapping,
 void xpfo_temp_unmap(const void *addr, size_t size, void **mapping,
 		     size_t mapping_len);
 
+bool xpfo_enabled(void);
+
 #else /* !CONFIG_XPFO */
 
 static inline void xpfo_kmap(void *kaddr, struct page *page) { }
@@ -65,6 +67,8 @@ static inline void xpfo_temp_unmap(const void *addr, size_t size,
 }
 
 
+static inline bool xpfo_enabled(void) { return false; }
+
 #endif /* CONFIG_XPFO */
 
 #endif /* _LINUX_XPFO_H */
diff --git a/mm/xpfo.c b/mm/xpfo.c
index f79075bf7d65..25fba05d01bd 100644
--- a/mm/xpfo.c
+++ b/mm/xpfo.c
@@ -70,6 +70,12 @@ struct page_ext_operations page_xpfo_ops = {
 	.init = init_xpfo,
 };
 
+bool __init xpfo_enabled(void)
+{
+	return !xpfo_disabled;
+}
+EXPORT_SYMBOL(xpfo_enabled);
+
 static inline struct xpfo *lookup_xpfo(struct page *page)
 {
 	struct page_ext *page_ext = lookup_page_ext(page);
-- 
2.11.0
* [PATCH v6 09/11] arm64/mm: disable section/contiguous mappings if XPFO is enabled
  From: Laura Abbott @ 2017-09-09 15:38 UTC
  To: linux-arm-kernel

On 09/07/2017 10:36 AM, Tycho Andersen wrote:
> XPFO doesn't support section/contiguous mappings yet, so let's disable it
> if XPFO is turned on.
> 
> Thanks to Laura Abbott for the simplification from v5, and Mark Rutland
> for pointing out we need NO_CONT_MAPPINGS too.
> 
> CC: linux-arm-kernel@lists.infradead.org
> Signed-off-by: Tycho Andersen <tycho@docker.com>

This should just be folded into the initial arm64 patch since it
basically bugs out otherwise.

Thanks,
Laura

> ---
>  arch/arm64/mm/mmu.c  | 2 +-
>  include/linux/xpfo.h | 4 ++++
>  mm/xpfo.c            | 6 ++++++
>  3 files changed, 11 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index f1eb15e0e864..34bb95303cce 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -420,7 +420,7 @@ static void __init map_mem(pgd_t *pgd)
>  	struct memblock_region *reg;
>  	int flags = 0;
>  
> -	if (debug_pagealloc_enabled())
> +	if (debug_pagealloc_enabled() || xpfo_enabled())
>  		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>  
>  	/*
> diff --git a/include/linux/xpfo.h b/include/linux/xpfo.h
> index d37a06c9d62c..1693af1a0293 100644
> --- a/include/linux/xpfo.h
> +++ b/include/linux/xpfo.h
> @@ -43,6 +43,8 @@ void xpfo_temp_map(const void *addr, size_t size, void **mapping,
>  void xpfo_temp_unmap(const void *addr, size_t size, void **mapping,
>  		     size_t mapping_len);
>  
> +bool xpfo_enabled(void);
> +
>  #else /* !CONFIG_XPFO */
>  
>  static inline void xpfo_kmap(void *kaddr, struct page *page) { }
> @@ -65,6 +67,8 @@ static inline void xpfo_temp_unmap(const void *addr, size_t size,
>  }
>  
>  
> +static inline bool xpfo_enabled(void) { return false; }
> +
>  #endif /* CONFIG_XPFO */
>  
>  #endif /* _LINUX_XPFO_H */
> diff --git a/mm/xpfo.c b/mm/xpfo.c
> index f79075bf7d65..25fba05d01bd 100644
> --- a/mm/xpfo.c
> +++ b/mm/xpfo.c
> @@ -70,6 +70,12 @@ struct page_ext_operations page_xpfo_ops = {
>  	.init = init_xpfo,
>  };
>  
> +bool __init xpfo_enabled(void)
> +{
> +	return !xpfo_disabled;
> +}
> +EXPORT_SYMBOL(xpfo_enabled);
> +
>  static inline struct xpfo *lookup_xpfo(struct page *page)
>  {
>  	struct page_ext *page_ext = lookup_page_ext(page);
* [PATCH v6 10/11] mm: add a user_virt_to_phys symbol
  From: Tycho Andersen @ 2017-09-07 17:36 UTC
  To: linux-arm-kernel

We need something like this for testing XPFO. Since it's architecture
specific, putting it in the test code is slightly awkward, so let's make it
an arch-specific symbol and export it for use in LKDTM.

v6: * add a definition of user_virt_to_phys in the !CONFIG_XPFO case

CC: linux-arm-kernel@lists.infradead.org
CC: x86@kernel.org
Signed-off-by: Tycho Andersen <tycho@docker.com>
Tested-by: Marco Benatto <marco.antonio.780@gmail.com>
---
 arch/arm64/mm/xpfo.c | 51 ++++++++++++++++++++++++++++++++++++++++++++++
 arch/x86/mm/xpfo.c   | 57 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 include/linux/xpfo.h |  5 +++++
 3 files changed, 113 insertions(+)

diff --git a/arch/arm64/mm/xpfo.c b/arch/arm64/mm/xpfo.c
index 342a9ccb93c1..94a667d94e15 100644
--- a/arch/arm64/mm/xpfo.c
+++ b/arch/arm64/mm/xpfo.c
@@ -74,3 +74,54 @@ void xpfo_dma_map_unmap_area(bool map, const void *addr, size_t size,
 
 	xpfo_temp_unmap(addr, size, mapping, sizeof(mapping[0]) * num_pages);
 }
+
+/* Convert a user space virtual address to a physical address.
+ * Shamelessly copied from slow_virt_to_phys() and lookup_address() in
+ * arch/x86/mm/pageattr.c
+ */
+phys_addr_t user_virt_to_phys(unsigned long addr)
+{
+	phys_addr_t phys_addr;
+	unsigned long offset;
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+
+	pgd = pgd_offset(current->mm, addr);
+	if (pgd_none(*pgd))
+		return 0;
+
+	p4d = p4d_offset(pgd, addr);
+	if (p4d_none(*p4d))
+		return 0;
+
+	pud = pud_offset(p4d, addr);
+	if (pud_none(*pud))
+		return 0;
+
+	if (pud_sect(*pud) || !pud_present(*pud)) {
+		phys_addr = (unsigned long)pud_pfn(*pud) << PAGE_SHIFT;
+		offset = addr & ~PUD_MASK;
+		goto out;
+	}
+
+	pmd = pmd_offset(pud, addr);
+	if (pmd_none(*pmd))
+		return 0;
+
+	if (pmd_sect(*pmd) || !pmd_present(*pmd)) {
+		phys_addr = (unsigned long)pmd_pfn(*pmd) << PAGE_SHIFT;
+		offset = addr & ~PMD_MASK;
+		goto out;
+	}
+
+	pte = pte_offset_kernel(pmd, addr);
+	phys_addr = (phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT;
+	offset = addr & ~PAGE_MASK;
+
+out:
+	return (phys_addr_t)(phys_addr | offset);
+}
+EXPORT_SYMBOL(user_virt_to_phys);
diff --git a/arch/x86/mm/xpfo.c b/arch/x86/mm/xpfo.c
index 6794d6724ab5..d24cf2c600e8 100644
--- a/arch/x86/mm/xpfo.c
+++ b/arch/x86/mm/xpfo.c
@@ -112,3 +112,60 @@ inline void xpfo_flush_kernel_tlb(struct page *page, int order)
 
 	flush_tlb_kernel_range(kaddr, kaddr + (1 << order) * size);
 }
+
+/* Convert a user space virtual address to a physical address.
+ * Shamelessly copied from slow_virt_to_phys() and lookup_address() in
+ * arch/x86/mm/pageattr.c
+ */
+phys_addr_t user_virt_to_phys(unsigned long addr)
+{
+	phys_addr_t phys_addr;
+	unsigned long offset;
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+
+	pgd = pgd_offset(current->mm, addr);
+	if (pgd_none(*pgd))
+		return 0;
+
+	p4d = p4d_offset(pgd, addr);
+	if (p4d_none(*p4d))
+		return 0;
+
+	if (p4d_large(*p4d) || !p4d_present(*p4d)) {
+		phys_addr = (unsigned long)p4d_pfn(*p4d) << PAGE_SHIFT;
+		offset = addr & ~P4D_MASK;
+		goto out;
+	}
+
+	pud = pud_offset(p4d, addr);
+	if (pud_none(*pud))
+		return 0;
+
+	if (pud_large(*pud) || !pud_present(*pud)) {
+		phys_addr = (unsigned long)pud_pfn(*pud) << PAGE_SHIFT;
+		offset = addr & ~PUD_MASK;
+		goto out;
+	}
+
+	pmd = pmd_offset(pud, addr);
+	if (pmd_none(*pmd))
+		return 0;
+
+	if (pmd_large(*pmd) || !pmd_present(*pmd)) {
+		phys_addr = (unsigned long)pmd_pfn(*pmd) << PAGE_SHIFT;
+		offset = addr & ~PMD_MASK;
+		goto out;
+	}
+
+	pte = pte_offset_kernel(pmd, addr);
+	phys_addr = (phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT;
+	offset = addr & ~PAGE_MASK;
+
+out:
+	return (phys_addr_t)(phys_addr | offset);
+}
+EXPORT_SYMBOL(user_virt_to_phys);
diff --git a/include/linux/xpfo.h b/include/linux/xpfo.h
index 1693af1a0293..be72da5fba26 100644
--- a/include/linux/xpfo.h
+++ b/include/linux/xpfo.h
@@ -19,6 +19,7 @@
 #ifdef CONFIG_XPFO
 
 #include <linux/dma-mapping.h>
+#include <linux/types.h>
 
 extern struct page_ext_operations page_xpfo_ops;
 
@@ -45,6 +46,8 @@ void xpfo_temp_unmap(const void *addr, size_t size, void **mapping,
 
 bool xpfo_enabled(void);
 
+phys_addr_t user_virt_to_phys(unsigned long addr);
+
 #else /* !CONFIG_XPFO */
 
 static inline void xpfo_kmap(void *kaddr, struct page *page) { }
@@ -69,6 +72,8 @@ static inline void xpfo_temp_unmap(const void *addr, size_t size,
 
 static inline bool xpfo_enabled(void) { return false; }
 
+static inline phys_addr_t user_virt_to_phys(unsigned long addr) { return 0; }
+
 #endif /* CONFIG_XPFO */
 
 #endif /* _LINUX_XPFO_H */
-- 
2.11.0
* [PATCH v6 10/11] mm: add a user_virt_to_phys symbol
  From: Christoph Hellwig @ 2017-09-08 07:55 UTC
  To: linux-arm-kernel

On Thu, Sep 07, 2017 at 11:36:08AM -0600, Tycho Andersen wrote:
> We need something like this for testing XPFO. Since it's architecture
> specific, putting it in the test code is slightly awkward, so let's make it
> an arch-specific symbol and export it for use in LKDTM.

We really should not add an export for this.

I think you'll want to just open code it in your test module.
* [PATCH v6 10/11] mm: add a user_virt_to_phys symbol
  From: Kees Cook @ 2017-09-08 15:44 UTC
  To: linux-arm-kernel

On Fri, Sep 8, 2017 at 12:55 AM, Christoph Hellwig <hch@infradead.org> wrote:
> On Thu, Sep 07, 2017 at 11:36:08AM -0600, Tycho Andersen wrote:
>> We need something like this for testing XPFO. Since it's architecture
>> specific, putting it in the test code is slightly awkward, so let's make it
>> an arch-specific symbol and export it for use in LKDTM.
>
> We really should not add an export for this.
>
> I think you'll want to just open code it in your test module.

Isn't that going to be fragile? Why not an export?

-Kees

-- 
Kees Cook
Pixel Security
* [PATCH v6 10/11] mm: add a user_virt_to_phys symbol
  From: Christoph Hellwig @ 2017-09-11 07:36 UTC
  To: linux-arm-kernel

On Fri, Sep 08, 2017 at 08:44:22AM -0700, Kees Cook wrote:
> On Fri, Sep 8, 2017 at 12:55 AM, Christoph Hellwig <hch@infradead.org> wrote:
> > On Thu, Sep 07, 2017 at 11:36:08AM -0600, Tycho Andersen wrote:
> >> We need something like this for testing XPFO. Since it's architecture
> >> specific, putting it in the test code is slightly awkward, so let's make it
> >> an arch-specific symbol and export it for use in LKDTM.
> >
> > We really should not add an export for this.
> >
> > I think you'll want to just open code it in your test module.
>
> Isn't that going to be fragile? Why not an export?

It is a little fragile, but it is functionality not needed at all by the
kernel, so we should not add it to the kernel image and/or export it.
* [kernel-hardening] [PATCH v6 10/11] mm: add a user_virt_to_phys symbol
  From: Mark Rutland @ 2017-09-14 18:34 UTC
  To: linux-arm-kernel

On Thu, Sep 07, 2017 at 11:36:08AM -0600, Tycho Andersen wrote:
> We need something like this for testing XPFO. Since it's architecture
> specific, putting it in the test code is slightly awkward, so let's make it
> an arch-specific symbol and export it for use in LKDTM.
>
> v6: * add a definition of user_virt_to_phys in the !CONFIG_XPFO case
>
> CC: linux-arm-kernel@lists.infradead.org
> CC: x86@kernel.org
> Signed-off-by: Tycho Andersen <tycho@docker.com>
> Tested-by: Marco Benatto <marco.antonio.780@gmail.com>
> ---
>  arch/arm64/mm/xpfo.c | 51 ++++++++++++++++++++++++++++++++++++++++++++++
>  arch/x86/mm/xpfo.c   | 57 ++++++++++++++++++++++++++++++++++++++++++++++++++++
>  include/linux/xpfo.h |  5 +++++
>  3 files changed, 113 insertions(+)
>
> diff --git a/arch/arm64/mm/xpfo.c b/arch/arm64/mm/xpfo.c
> index 342a9ccb93c1..94a667d94e15 100644
> --- a/arch/arm64/mm/xpfo.c
> +++ b/arch/arm64/mm/xpfo.c
> @@ -74,3 +74,54 @@ void xpfo_dma_map_unmap_area(bool map, const void *addr, size_t size,
>
>  	xpfo_temp_unmap(addr, size, mapping, sizeof(mapping[0]) * num_pages);
>  }
> +
> +/* Convert a user space virtual address to a physical address.
> + * Shamelessly copied from slow_virt_to_phys() and lookup_address() in
> + * arch/x86/mm/pageattr.c
> + */

When can this be called? What prevents concurrent modification of the
user page tables?

i.e. must mmap_sem be held?

> +phys_addr_t user_virt_to_phys(unsigned long addr)

Does this really need to be architecture specific?

Core mm code manages to walk user page tables just fine...

> +{
> +	phys_addr_t phys_addr;
> +	unsigned long offset;
> +	pgd_t *pgd;
> +	p4d_t *p4d;
> +	pud_t *pud;
> +	pmd_t *pmd;
> +	pte_t *pte;
> +
> +	pgd = pgd_offset(current->mm, addr);
> +	if (pgd_none(*pgd))
> +		return 0;

Can we please separate the address and return value? e.g. pass the PA
by reference and return an error code.

AFAIK, zero is a valid PA, and even if the tables exist, they might
point there in the presence of an error.

> +
> +	p4d = p4d_offset(pgd, addr);
> +	if (p4d_none(*p4d))
> +		return 0;
> +
> +	pud = pud_offset(p4d, addr);
> +	if (pud_none(*pud))
> +		return 0;
> +
> +	if (pud_sect(*pud) || !pud_present(*pud)) {
> +		phys_addr = (unsigned long)pud_pfn(*pud) << PAGE_SHIFT;

Was there some problem with:

	phys_addr = pmd_page_paddr(*pud);

... and similar for the other levels?

... I'd rather introduce new helpers than more open-coded calculations.

Thanks,
Mark.
* [kernel-hardening] [PATCH v6 10/11] mm: add a user_virt_to_phys symbol
  From: Tycho Andersen @ 2017-09-18 20:56 UTC
  To: linux-arm-kernel

Hi Mark,

On Thu, Sep 14, 2017 at 07:34:02PM +0100, Mark Rutland wrote:
> On Thu, Sep 07, 2017 at 11:36:08AM -0600, Tycho Andersen wrote:
> > We need something like this for testing XPFO. Since it's architecture
> > specific, putting it in the test code is slightly awkward, so let's make it
> > an arch-specific symbol and export it for use in LKDTM.
> >
> > v6: * add a definition of user_virt_to_phys in the !CONFIG_XPFO case
> >
> > CC: linux-arm-kernel@lists.infradead.org
> > CC: x86@kernel.org
> > Signed-off-by: Tycho Andersen <tycho@docker.com>
> > Tested-by: Marco Benatto <marco.antonio.780@gmail.com>
> > ---
> >  arch/arm64/mm/xpfo.c | 51 ++++++++++++++++++++++++++++++++++++++++++++++
> >  arch/x86/mm/xpfo.c   | 57 ++++++++++++++++++++++++++++++++++++++++++++++++++++
> >  include/linux/xpfo.h |  5 +++++
> >  3 files changed, 113 insertions(+)
> >
> > diff --git a/arch/arm64/mm/xpfo.c b/arch/arm64/mm/xpfo.c
> > index 342a9ccb93c1..94a667d94e15 100644
> > --- a/arch/arm64/mm/xpfo.c
> > +++ b/arch/arm64/mm/xpfo.c
> > @@ -74,3 +74,54 @@ void xpfo_dma_map_unmap_area(bool map, const void *addr, size_t size,
> >
> >  	xpfo_temp_unmap(addr, size, mapping, sizeof(mapping[0]) * num_pages);
> >  }
> > +
> > +/* Convert a user space virtual address to a physical address.
> > + * Shamelessly copied from slow_virt_to_phys() and lookup_address() in
> > + * arch/x86/mm/pageattr.c
> > + */
>
> When can this be called? What prevents concurrent modification of the
> user page tables?
>
> i.e. must mmap_sem be held?

Yes, it should be. Since we're moving this back into the lkdtm test
code I think it's less important, since nothing should be modifying
the tables while the thread is doing the lookup, but I'll add it in
the next version.

> > +phys_addr_t user_virt_to_phys(unsigned long addr)
>
> Does this really need to be architecture specific?
>
> Core mm code manages to walk user page tables just fine...

I think so, since we don't support section mappings right now, so
p*d_sect() will always be false.

> > +{
> > +	phys_addr_t phys_addr;
> > +	unsigned long offset;
> > +	pgd_t *pgd;
> > +	p4d_t *p4d;
> > +	pud_t *pud;
> > +	pmd_t *pmd;
> > +	pte_t *pte;
> > +
> > +	pgd = pgd_offset(current->mm, addr);
> > +	if (pgd_none(*pgd))
> > +		return 0;
>
> Can we please separate the address and return value? e.g. pass the PA
> by reference and return an error code.
>
> AFAIK, zero is a valid PA, and even if the tables exist, they might
> point there in the presence of an error.

Sure, I'll rearrange this.

> > +
> > +	p4d = p4d_offset(pgd, addr);
> > +	if (p4d_none(*p4d))
> > +		return 0;
> > +
> > +	pud = pud_offset(p4d, addr);
> > +	if (pud_none(*pud))
> > +		return 0;
> > +
> > +	if (pud_sect(*pud) || !pud_present(*pud)) {
> > +		phys_addr = (unsigned long)pud_pfn(*pud) << PAGE_SHIFT;
>
> Was there some problem with:
>
> 	phys_addr = pmd_page_paddr(*pud);
>
> ... and similar for the other levels?
>
> ... I'd rather introduce new helpers than more open-coded calculations.

I wasn't aware of these; we could define a similar set of functions for
x86 and then make it not arch-specific. I wonder if we could also use
follow_page_pte(), since we know that the page is always present (given
that it's been allocated).
Unfortunately follow_page_pte() is not exported.

Tycho
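Folding that feedback in, the helper might end up shaped like the sketch
below (direction only, not a posted patch): the PA comes back by
reference so 0 is no longer overloaded as an error value, mmap_sem is
taken around the walk, and huge/section entries are simply treated as an
error, on the assumption that the LKDTM test's user buffer is mapped
with small pages.

	/* Sketch: error-returning user VA -> PA lookup for the XPFO test. */
	static int user_virt_to_phys(unsigned long addr, phys_addr_t *pa)
	{
		struct mm_struct *mm = current->mm;
		pgd_t *pgd;
		p4d_t *p4d;
		pud_t *pud;
		pmd_t *pmd;
		pte_t *pte;
		int ret = -EINVAL;

		down_read(&mm->mmap_sem);

		pgd = pgd_offset(mm, addr);
		if (pgd_none(*pgd) || pgd_bad(*pgd))
			goto out;

		p4d = p4d_offset(pgd, addr);
		if (p4d_none(*p4d) || p4d_bad(*p4d))
			goto out;

		pud = pud_offset(p4d, addr);
		if (pud_none(*pud) || pud_bad(*pud))	/* rejects huge puds */
			goto out;

		pmd = pmd_offset(pud, addr);
		if (pmd_none(*pmd) || pmd_bad(*pmd))	/* rejects huge pmds */
			goto out;

		pte = pte_offset_map(pmd, addr);
		if (pte_present(*pte)) {
			*pa = ((phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT) |
			      (addr & ~PAGE_MASK);
			ret = 0;
		}
		pte_unmap(pte);
	out:
		up_read(&mm->mmap_sem);
		return ret;
	}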