From: Steve Capper <Steve.Capper@arm.com>
To: Anshuman Khandual <Anshuman.Khandual@arm.com>
Cc: "crecklin@redhat.com" <crecklin@redhat.com>,
"ard.biesheuvel@linaro.org" <ard.biesheuvel@linaro.org>,
Marc Zyngier <Marc.Zyngier@arm.com>,
Catalin Marinas <Catalin.Marinas@arm.com>,
"bhsharma@redhat.com" <bhsharma@redhat.com>,
Will Deacon <Will.Deacon@arm.com>, nd <nd@arm.com>,
"linux-arm-kernel@lists.infradead.org"
<linux-arm-kernel@lists.infradead.org>
Subject: Re: [PATCH v3 01/10] arm64: mm: Flip kernel VA space
Date: Mon, 17 Jun 2019 16:08:42 +0000 [thread overview]
Message-ID: <20190617160831.GA1976@capper-debian.cambridge.arm.com> (raw)
In-Reply-To: <e8b68e70-1d01-6a7a-57af-28f7f9b0ae1f@arm.com>
Hi Anshuman,
On Fri, Jun 14, 2019 at 06:30:21PM +0530, Anshuman Khandual wrote:
> On 06/12/2019 10:56 PM, Steve Capper wrote:
> > Put the direct linear map in the lower addresses of the kernel VA range
> > and everything else in the higher ranges.
> >
[...]
> > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > index 8ffcf5a512bb..5cd2eb8cb424 100644
> > --- a/arch/arm64/include/asm/memory.h
> > +++ b/arch/arm64/include/asm/memory.h
> > @@ -49,9 +49,9 @@
> > */
> > #define VA_BITS (CONFIG_ARM64_VA_BITS)
> > #define VA_START (UL(0xffffffffffffffff) - \
> > - (UL(1) << VA_BITS) + 1)
> > -#define PAGE_OFFSET (UL(0xffffffffffffffff) - \
> > (UL(1) << (VA_BITS - 1)) + 1)
> > +#define PAGE_OFFSET (UL(0xffffffffffffffff) - \
> > + (UL(1) << VA_BITS) + 1)
>
> PAGE_OFFSET and VA_START swapped their positions.
>
In the file I think they haven't moved; would you like them to be
swapped? (Their values have changed.)
> There are many places with UL(0xffffffffffffffff). Time to define
> it as a constant ? Something like [KERNEL|TTBR1]_MAX_VADDR.
>
Could do; I am also tempted to use something like UL(~0) for the sake
of brevity.
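
For the sake of discussion, a minimal sketch of what that could look
like (TTBR1_MAX_VADDR is a hypothetical name borrowing your suggestion;
none of this is in the patch):

    #define TTBR1_MAX_VADDR	(UL(~0))

    #define VA_START	(TTBR1_MAX_VADDR - (UL(1) << (VA_BITS - 1)) + 1)
    #define PAGE_OFFSET	(TTBR1_MAX_VADDR - (UL(1) << VA_BITS) + 1)

That keeps the arithmetic identical to the patch while giving the top
of the TTBR1 range a name.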
> > #define KIMAGE_VADDR (MODULES_END)
> > #define BPF_JIT_REGION_START (VA_START + KASAN_SHADOW_SIZE)
> > #define BPF_JIT_REGION_SIZE (SZ_128M)
> > @@ -59,7 +59,7 @@
> > #define MODULES_END (MODULES_VADDR + MODULES_VSIZE)
> > #define MODULES_VADDR (BPF_JIT_REGION_END)
> > #define MODULES_VSIZE (SZ_128M)
> > -#define VMEMMAP_START (PAGE_OFFSET - VMEMMAP_SIZE)
> > +#define VMEMMAP_START (-VMEMMAP_SIZE)
> > #define PCI_IO_END (VMEMMAP_START - SZ_2M)
> > #define PCI_IO_START (PCI_IO_END - PCI_IO_SIZE)
> > #define FIXADDR_TOP (PCI_IO_START - SZ_2M)
> > @@ -238,7 +238,7 @@ extern u64 vabits_user;
> > * space. Testing the top bit for the start of the region is a
> > * sufficient check.
> > */
> > -#define __is_lm_address(addr) (!!((addr) & BIT(VA_BITS - 1)))
> > +#define __is_lm_address(addr) (!((addr) & BIT(VA_BITS - 1)))
>
> Should it be (!!((addr) & BIT(VA_BITS - 2))) instead for a positive validation
> for addresses in the lower half ?
>
I don't think so; we want to test which half of the kernel address
space our address is in, so we test BIT(VA_BITS - 1). Testing
BIT(VA_BITS - 2) would instead tell us which half of one of those
halves we are in.
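
As a quick standalone illustration (my own numbers, assuming 48-bit VAs
and the flipped layout; a userspace sanity check, not kernel code):

    #include <stdio.h>

    #define VA_BITS 48
    #define BIT(n)  (1UL << (n))
    /* same test as the patch: linear map addresses have bit 47 clear */
    #define __is_lm_address(addr) (!((addr) & BIT(VA_BITS - 1)))

    int main(void)
    {
    	/* 0xffff0000... - 0xffff7fff...: linear map, bit 47 clear */
    	printf("%d\n", __is_lm_address(0xffff000012345678UL)); /* 1 */
    	/* 0xffff8000... and above: vmalloc/modules/etc, bit 47 set */
    	printf("%d\n", __is_lm_address(0xffffa00012345678UL)); /* 0 */
    	return 0;
    }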
> >
> > #define __lm_to_phys(addr) (((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
> > #define __kimg_to_phys(addr) ((addr) - kimage_voffset)
> > diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> > index 2c41b04708fe..d0ab784304e9 100644
> > --- a/arch/arm64/include/asm/pgtable.h
> > +++ b/arch/arm64/include/asm/pgtable.h
> > @@ -32,7 +32,7 @@
> > * and fixed mappings
> > */
> > #define VMALLOC_START (MODULES_END)
> > -#define VMALLOC_END (PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
> > +#define VMALLOC_END (- PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
>
> (-VMEMMAP_SIZE) and (- PUD_SIZE - VMEMMAP_SIZE - SZ_64K) depends on implicit sign
> inversion. IMHO it might be better to add [KERNEL|TTBR1]_MAX_VADDR in the equation.
>
Hmmm, okay, I see what you mean. I will play with this.
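
For what it's worth, the wrap-around that is implicit in
(-VMEMMAP_SIZE) could be spelled out along these lines (hypothetical
constant name, reusing the sketch from above; same values as the
patch):

    #define TTBR1_MAX_VADDR	(UL(~0))

    #define VMEMMAP_START	(TTBR1_MAX_VADDR - VMEMMAP_SIZE + 1)
    #define VMALLOC_END	(TTBR1_MAX_VADDR - PUD_SIZE - VMEMMAP_SIZE \
    			 - SZ_64K + 1)

Both expand to the same values as the negated forms, since in unsigned
64-bit arithmetic -x == (2^64 - 1) - x + 1.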
> >
> > #define vmemmap ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
> >
> > diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
> > index 9859e1178e6b..6ffcc32f35dd 100644
> > --- a/arch/arm64/kernel/hibernate.c
> > +++ b/arch/arm64/kernel/hibernate.c
> > @@ -497,7 +497,7 @@ int swsusp_arch_resume(void)
> > rc = -ENOMEM;
> > goto out;
> > }
> > - rc = copy_page_tables(tmp_pg_dir, PAGE_OFFSET, 0);
> > + rc = copy_page_tables(tmp_pg_dir, PAGE_OFFSET, VA_START);
> > if (rc)
> > goto out;
> >
> > diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
> > index 14fe23cd5932..ee4e5bea8944 100644
> > --- a/arch/arm64/mm/dump.c
> > +++ b/arch/arm64/mm/dump.c
> > @@ -30,6 +30,8 @@
> > #include <asm/ptdump.h>
> >
> > static const struct addr_marker address_markers[] = {
> > + { PAGE_OFFSET, "Linear Mapping start" },
> > + { VA_START, "Linear Mapping end" },
> > #ifdef CONFIG_KASAN
> > { KASAN_SHADOW_START, "Kasan shadow start" },
> > { KASAN_SHADOW_END, "Kasan shadow end" },
> > @@ -43,10 +45,8 @@ static const struct addr_marker address_markers[] = {
> > { PCI_IO_START, "PCI I/O start" },
> > { PCI_IO_END, "PCI I/O end" },
> > #ifdef CONFIG_SPARSEMEM_VMEMMAP
> > - { VMEMMAP_START, "vmemmap start" },
> > - { VMEMMAP_START + VMEMMAP_SIZE, "vmemmap end" },
> > + { VMEMMAP_START, "vmemmap" },
>
> Vmemmap end got dropped ?
>
Yeah, IIRC this was because the marker never got printed; I will
re-test this.
> > #endif
> > - { PAGE_OFFSET, "Linear mapping" },
> > { -1, NULL },
> > };
> >
> > @@ -380,7 +380,7 @@ static void ptdump_initialize(void)
> > static struct ptdump_info kernel_ptdump_info = {
> > .mm = &init_mm,
> > .markers = address_markers,
> > - .base_addr = VA_START,
> > + .base_addr = PAGE_OFFSET,
> > };
> >
> > void ptdump_check_wx(void)
> > diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> > index d2adffb81b5d..574ed1d4be19 100644
> > --- a/arch/arm64/mm/init.c
> > +++ b/arch/arm64/mm/init.c
> > @@ -311,7 +311,7 @@ static void __init fdt_enforce_memory_region(void)
> >
> > void __init arm64_memblock_init(void)
> > {
> > - const s64 linear_region_size = -(s64)PAGE_OFFSET;
> > + const s64 linear_region_size = BIT(VA_BITS - 1);
> >
> > /* Handle linux,usable-memory-range property */
> > fdt_enforce_memory_region();
> > @@ -319,13 +319,6 @@ void __init arm64_memblock_init(void)
> > /* Remove memory above our supported physical address size */
> > memblock_remove(1ULL << PHYS_MASK_SHIFT, ULLONG_MAX);
> >
> > - /*
> > - * Ensure that the linear region takes up exactly half of the kernel
> > - * virtual address space. This way, we can distinguish a linear address
> > - * from a kernel/module/vmalloc address by testing a single bit.
> > - */
> > - BUILD_BUG_ON(linear_region_size != BIT(VA_BITS - 1));
> > -
> > /*
> > * Select a suitable value for the base of physical memory.
> > */
> > diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> > index 296de39ddee5..8066621052db 100644
> > --- a/arch/arm64/mm/kasan_init.c
> > +++ b/arch/arm64/mm/kasan_init.c
> > @@ -229,10 +229,10 @@ void __init kasan_init(void)
> > kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
> > early_pfn_to_nid(virt_to_pfn(lm_alias(_text))));
> >
> > - kasan_populate_early_shadow((void *)KASAN_SHADOW_START,
> > - (void *)mod_shadow_start);
> > + kasan_populate_early_shadow(kasan_mem_to_shadow((void *) VA_START),
> > + (void *)mod_shadow_start);
> > kasan_populate_early_shadow((void *)kimg_shadow_end,
> > - kasan_mem_to_shadow((void *)PAGE_OFFSET));
> > + (void *)KASAN_SHADOW_END);
> >
> > if (kimg_shadow_start > mod_shadow_end)
> > kasan_populate_early_shadow((void *)mod_shadow_end,
> > diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> > index a1bfc4413982..16063ff10c6d 100644
> > --- a/arch/arm64/mm/mmu.c
> > +++ b/arch/arm64/mm/mmu.c
> > @@ -409,7 +409,7 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
> > static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
> > phys_addr_t size, pgprot_t prot)
> > {
> > - if (virt < VMALLOC_START) {
> > + if ((virt >= VA_START) && (virt < VMALLOC_START)) {
> > pr_warn("BUG: not creating mapping for %pa at 0x%016lx - outside kernel range\n",
> > &phys, virt);
> > return;
> > @@ -436,7 +436,7 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
> > static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
> > phys_addr_t size, pgprot_t prot)
> > {
> > - if (virt < VMALLOC_START) {
> > + if ((virt >= VA_START) && (virt < VMALLOC_START)) {
> > pr_warn("BUG: not updating mapping for %pa at 0x%016lx - outside kernel range\n",
> > &phys, virt);
> > return;
> >
>
> Seems like adding (virt >= VA_START) is a semantics change here. In the previous
> scheme (virt < VMALLOC_START) included undefined addresses below VA_START as well
> which will be omitted here. Should not we add (virt < PAGE_OFFSET) to check those ?
Not quite: the original code would not warn if we passed an address in
the direct linear mapping. The conditional logic has been modified to
reflect the fact that the direct linear mapping has moved.
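
To spell the ranges out (my summary of the flipped layout, not text
from the patch):

    /*
     * virt <  VA_START                 : linear map (and anything
     *                                    below it) -- no warning
     * VA_START <= virt < VMALLOC_START : kasan/BPF/modules -- warn
     * virt >= VMALLOC_START            : vmalloc/fixmap/etc -- no warning
     */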
If the original behaviour was a bug then I'm happy to fix it :-).
Cheers,
--
Steve