* [PATCH v2 0/2] makedumpfile/arm64: Add support for ARMv8.2 extensions
@ 2019-02-16 20:24 Bhupesh Sharma
2019-02-16 20:24 ` [PATCH v2 1/2] makedumpfile/arm64: Add support for ARMv8.2-LPA (52-bit PA support) Bhupesh Sharma
2019-02-16 20:24 ` [PATCH v2 2/2] makedumpfile/arm64: Add support for ARMv8.2-LVA (52-bit user-space VA support) Bhupesh Sharma
0 siblings, 2 replies; 7+ messages in thread
From: Bhupesh Sharma @ 2019-02-16 20:24 UTC (permalink / raw)
To: kexec; +Cc: bhsharma, bhupesh.linux, k-hagio
Changes since v1:
----------------
- v1 was sent as two separate patches:
http://lists.infradead.org/pipermail/kexec/2019-February/022424.html
(ARMv8.2-LPA)
http://lists.infradead.org/pipermail/kexec/2019-February/022425.html
(ARMv8.2-LVA)
- v2 combines the two into a single patchset and also addresses Kazu's
review comments.
This patchset adds support for the ARMv8.2 extensions to the makedumpfile code.
It covers the following two cases:
- 48-bit VA + 52-bit PA (LPA)
- 48-bit kernel VA + 52-bit user-space VA (LVA)
This has been tested for the following use cases:
1. Calculating --mem-usage on the primary kernel,
2. Creating a dumpfile using /proc/vmcore, and
3. Post-processing a vmcore.
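The three tested flows map to makedumpfile invocations roughly as follows (a sketch for orientation only — the dump level, output paths, and vmlinux location are illustrative placeholders, not part of this patchset):

```shell
# 1. Estimate dumpable memory usage on the primary (first) kernel:
makedumpfile --mem-usage /proc/kcore

# 2. In the kdump kernel, capture a compressed dumpfile from /proc/vmcore
#    (dump level 31 filters free, zero, cache and user-space pages):
makedumpfile -c -d 31 /proc/vmcore /var/crash/vmcore.dump

# 3. Post-process a previously saved vmcore against its vmlinux:
makedumpfile -c -d 31 -x /usr/lib/debug/lib/modules/$(uname -r)/vmlinux \
        /var/crash/vmcore /var/crash/vmcore.filtered
```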
I have tested this patchset on the following platforms, with kernels
that do and do not support the ARMv8.2 features:
1. CPUs which don't support the ARMv8.2 features, e.g. Qualcomm Amberwing
and Ampere Osprey.
2. ARMv8 FVP model, which supports ARMv8.2 extensions.
Bhupesh Sharma (2):
makedumpfile/arm64: Add support for ARMv8.2-LPA (52-bit PA support)
makedumpfile/arm64: Add support for ARMv8.2-LVA (52-bit user-space VA
support)
arch/arm64.c | 411 +++++++++++++++++++++++++++++++++++++++++----------------
makedumpfile.c | 2 +
makedumpfile.h | 1 +
3 files changed, 298 insertions(+), 116 deletions(-)
--
2.7.4
_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec
^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH v2 1/2] makedumpfile/arm64: Add support for ARMv8.2-LPA (52-bit PA support)
  2019-02-16 20:24 [PATCH v2 0/2] makedumpfile/arm64: Add support for ARMv8.2 extensions Bhupesh Sharma
@ 2019-02-16 20:24 ` Bhupesh Sharma
  2019-02-21 15:35   ` Kazuhito Hagio
  2019-02-16 20:24 ` [PATCH v2 2/2] makedumpfile/arm64: Add support for ARMv8.2-LVA (52-bit user-space VA support) Bhupesh Sharma
  1 sibling, 1 reply; 7+ messages in thread
From: Bhupesh Sharma @ 2019-02-16 20:24 UTC (permalink / raw)
To: kexec; +Cc: bhsharma, bhupesh.linux, k-hagio

The ARMv8.2-LPA architecture extension (if available on the underlying
hardware) can support 52-bit physical addresses, while the kernel
virtual addresses remain 48-bit.

This patch is in accordance with the ARMv8 Architecture Reference
Manual, version D.a.

Make sure that we read the 52-bit PA capability from the
'MAX_PHYSMEM_BITS' variable (if available in vmcoreinfo), change the
pte_to_phys() mask values accordingly, and adjust the page-table walk
to match.

Also make sure that this works well on the existing 48-bit PA
platforms, and in environments which run newer kernels with 52-bit PA
support on hardware that is not ARMv8.2-LPA compliant.

I have sent a kernel patch upstream to add 'MAX_PHYSMEM_BITS' to
vmcoreinfo for arm64 (see [0]).

[0].
http://lists.infradead.org/pipermail/kexec/2019-February/022411.html Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com> --- arch/arm64.c | 310 ++++++++++++++++++++++++++++++++++++++++++----------------- 1 file changed, 221 insertions(+), 89 deletions(-) diff --git a/arch/arm64.c b/arch/arm64.c index 053519359cbc..29247a7fa7db 100644 --- a/arch/arm64.c +++ b/arch/arm64.c @@ -39,72 +39,199 @@ typedef struct { unsigned long pte; } pte_t; +#define __pte(x) ((pte_t) { (x) } ) +#define __pmd(x) ((pmd_t) { (x) } ) +#define __pud(x) ((pud_t) { (x) } ) +#define __pgd(x) ((pgd_t) { (x) } ) + +static int lpa_52_bit_support_available; static int pgtable_level; static int va_bits; static unsigned long kimage_voffset; -#define SZ_4K (4 * 1024) -#define SZ_16K (16 * 1024) -#define SZ_64K (64 * 1024) -#define SZ_128M (128 * 1024 * 1024) +#define SZ_4K 4096 +#define SZ_16K 16384 +#define SZ_64K 65536 -#define PAGE_OFFSET_36 ((0xffffffffffffffffUL) << 36) -#define PAGE_OFFSET_39 ((0xffffffffffffffffUL) << 39) -#define PAGE_OFFSET_42 ((0xffffffffffffffffUL) << 42) -#define PAGE_OFFSET_47 ((0xffffffffffffffffUL) << 47) -#define PAGE_OFFSET_48 ((0xffffffffffffffffUL) << 48) +#define PAGE_OFFSET_36 ((0xffffffffffffffffUL) << 36) +#define PAGE_OFFSET_39 ((0xffffffffffffffffUL) << 39) +#define PAGE_OFFSET_42 ((0xffffffffffffffffUL) << 42) +#define PAGE_OFFSET_47 ((0xffffffffffffffffUL) << 47) +#define PAGE_OFFSET_48 ((0xffffffffffffffffUL) << 48) +#define PAGE_OFFSET_52 ((0xffffffffffffffffUL) << 52) #define pgd_val(x) ((x).pgd) #define pud_val(x) (pgd_val((x).pgd)) #define pmd_val(x) (pud_val((x).pud)) #define pte_val(x) ((x).pte) -#define PAGE_MASK (~(PAGESIZE() - 1)) -#define PGDIR_SHIFT ((PAGESHIFT() - 3) * pgtable_level + 3) -#define PTRS_PER_PGD (1 << (va_bits - PGDIR_SHIFT)) -#define PUD_SHIFT get_pud_shift_arm64() -#define PUD_SIZE (1UL << PUD_SHIFT) -#define PUD_MASK (~(PUD_SIZE - 1)) -#define PTRS_PER_PTE (1 << (PAGESHIFT() - 3)) -#define PTRS_PER_PUD PTRS_PER_PTE -#define 
PMD_SHIFT ((PAGESHIFT() - 3) * 2 + 3) -#define PMD_SIZE (1UL << PMD_SHIFT) -#define PMD_MASK (~(PMD_SIZE - 1)) +/* See 'include/uapi/linux/const.h' for definitions below */ +#define __AC(X,Y) (X##Y) +#define _AC(X,Y) __AC(X,Y) +#define _AT(T,X) ((T)(X)) + +/* See 'include/asm/pgtable-types.h' for definitions below */ +typedef unsigned long pteval_t; +typedef unsigned long pmdval_t; +typedef unsigned long pudval_t; +typedef unsigned long pgdval_t; + +#define PAGE_SHIFT PAGESHIFT() + +/* See 'arch/arm64/include/asm/pgtable-hwdef.h' for definitions below */ + +/* + * Size mapped by an entry at level n ( 0 <= n <= 3) + * We map (PAGE_SHIFT - 3) at all translation levels and PAGE_SHIFT bits + * in the final page. The maximum number of translation levels supported by + * the architecture is 4. Hence, starting at at level n, we have further + * ((4 - n) - 1) levels of translation excluding the offset within the page. + * So, the total number of bits mapped by an entry at level n is : + * + * ((4 - n) - 1) * (PAGE_SHIFT - 3) + PAGE_SHIFT + * + * Rearranging it a bit we get : + * (4 - n) * (PAGE_SHIFT - 3) + 3 + */ +#define ARM64_HW_PGTABLE_LEVEL_SHIFT(n) ((PAGE_SHIFT - 3) * (4 - (n)) + 3) + +#define PTRS_PER_PTE (1 << (PAGE_SHIFT - 3)) + +/* + * PMD_SHIFT determines the size a level 2 page table entry can map. + */ +#define PMD_SHIFT ARM64_HW_PGTABLE_LEVEL_SHIFT(2) +#define PMD_SIZE (_AC(1, UL) << PMD_SHIFT) +#define PMD_MASK (~(PMD_SIZE-1)) #define PTRS_PER_PMD PTRS_PER_PTE -#define PAGE_PRESENT (1 << 0) +/* + * PUD_SHIFT determines the size a level 1 page table entry can map. + */ +#define PUD_SHIFT ARM64_HW_PGTABLE_LEVEL_SHIFT(1) +#define PUD_SIZE (_AC(1, UL) << PUD_SHIFT) +#define PUD_MASK (~(PUD_SIZE-1)) +#define PTRS_PER_PUD PTRS_PER_PTE + +/* + * PGDIR_SHIFT determines the size a top-level page table entry can map + * (depending on the configuration, this level can be 0, 1 or 2). 
+ */ +#define PGDIR_SHIFT ARM64_HW_PGTABLE_LEVEL_SHIFT(4 - (pgtable_level)) +#define PGDIR_SIZE (_AC(1, UL) << PGDIR_SHIFT) +#define PGDIR_MASK (~(PGDIR_SIZE-1)) +#define PTRS_PER_PGD (1 << ((va_bits) - PGDIR_SHIFT)) + +/* + * Section address mask and size definitions. + */ #define SECTIONS_SIZE_BITS 30 -/* Highest possible physical address supported */ -#define PHYS_MASK_SHIFT 48 -#define PHYS_MASK ((1UL << PHYS_MASK_SHIFT) - 1) + +/* + * Hardware page table definitions. + * + * Level 1 descriptor (PUD). + */ +#define PUD_TYPE_TABLE (_AT(pudval_t, 3) << 0) +#define PUD_TABLE_BIT (_AT(pudval_t, 1) << 1) +#define PUD_TYPE_MASK (_AT(pudval_t, 3) << 0) +#define PUD_TYPE_SECT (_AT(pudval_t, 1) << 0) + +/* + * Level 2 descriptor (PMD). + */ +#define PMD_TYPE_MASK (_AT(pmdval_t, 3) << 0) +#define PMD_TYPE_FAULT (_AT(pmdval_t, 0) << 0) +#define PMD_TYPE_TABLE (_AT(pmdval_t, 3) << 0) +#define PMD_TYPE_SECT (_AT(pmdval_t, 1) << 0) +#define PMD_TABLE_BIT (_AT(pmdval_t, 1) << 1) + /* - * Remove the highest order bits that are not a part of the - * physical address in a section + * Level 3 descriptor (PTE). */ -#define PMD_SECTION_MASK ((1UL << 40) - 1) +#define PTE_ADDR_LOW (((_AT(pteval_t, 1) << (48 - PAGE_SHIFT)) - 1) << PAGE_SHIFT) +#define PTE_ADDR_HIGH (_AT(pteval_t, 0xf) << 12) + +static inline unsigned long +get_pte_addr_mask_arm64(void) +{ + if (lpa_52_bit_support_available) + return (PTE_ADDR_LOW | PTE_ADDR_HIGH); + else + return PTE_ADDR_LOW; +} -#define PMD_TYPE_MASK 3 -#define PMD_TYPE_SECT 1 -#define PMD_TYPE_TABLE 3 +#define PTE_ADDR_MASK get_pte_addr_mask_arm64() -#define PUD_TYPE_MASK 3 -#define PUD_TYPE_SECT 1 -#define PUD_TYPE_TABLE 3 +#define PAGE_MASK (~(PAGESIZE() - 1)) +#define PAGE_PRESENT (1 << 0) + +/* Helper API to convert between a physical address and its placement + * in a page table entry, taking care of 52-bit addresses. 
+ */ +static inline unsigned long +__pte_to_phys(pte_t pte) +{ + if (lpa_52_bit_support_available) + return ((pte_val(pte) & PTE_ADDR_LOW) | ((pte_val(pte) & PTE_ADDR_HIGH) << 36)); + else + return (pte_val(pte) & PTE_ADDR_MASK); +} +/* Find an entry in a page-table-directory */ #define pgd_index(vaddr) (((vaddr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1)) -#define pgd_offset(pgdir, vaddr) ((pgd_t *)(pgdir) + pgd_index(vaddr)) -#define pte_index(vaddr) (((vaddr) >> PAGESHIFT()) & (PTRS_PER_PTE - 1)) -#define pmd_page_paddr(pmd) (pmd_val(pmd) & PHYS_MASK & (int32_t)PAGE_MASK) -#define pte_offset(dir, vaddr) ((pte_t*)pmd_page_paddr((*dir)) + pte_index(vaddr)) +static inline pte_t +pgd_pte(pgd_t pgd) +{ + return __pte(pgd_val(pgd)); +} -#define pmd_index(vaddr) (((vaddr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1)) -#define pud_page_paddr(pud) (pud_val(pud) & PHYS_MASK & (int32_t)PAGE_MASK) -#define pmd_offset_pgtbl_lvl_2(pud, vaddr) ((pmd_t *)pud) -#define pmd_offset_pgtbl_lvl_3(pud, vaddr) ((pmd_t *)pud_page_paddr((*pud)) + pmd_index(vaddr)) +#define __pgd_to_phys(pgd) __pte_to_phys(pgd_pte(pgd)) +#define pgd_offset(pgd, vaddr) ((pgd_t *)(pgd) + pgd_index(vaddr)) +static inline pte_t pud_pte(pud_t pud) +{ + return __pte(pud_val(pud)); +} + +static inline unsigned long +pgd_page_paddr(pgd_t pgd) +{ + return __pgd_to_phys(pgd); +} + +/* Find an entry in the first-level page table. */ #define pud_index(vaddr) (((vaddr) >> PUD_SHIFT) & (PTRS_PER_PUD - 1)) -#define pgd_page_paddr(pgd) (pgd_val(pgd) & PHYS_MASK & (int32_t)PAGE_MASK) +#define __pud_to_phys(pud) __pte_to_phys(pud_pte(pud)) + +static inline unsigned long +pud_page_paddr(pud_t pud) +{ + return __pud_to_phys(pud); +} + +/* Find an entry in the second-level page table. 
*/ +#define pmd_index(vaddr) (((vaddr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1)) +#define pmd_offset_pgtbl_lvl_2(dir, vaddr) ((pmd_t *)dir) +#define pmd_offset_pgtbl_lvl_3(dir, vaddr) (pud_page_paddr((*(dir))) + pmd_index(vaddr) * sizeof(pmd_t)) + +static inline pte_t pmd_pte(pmd_t pmd) +{ + return __pte(pmd_val(pmd)); +} + +#define __pmd_to_phys(pmd) __pte_to_phys(pmd_pte(pmd)) + +static inline unsigned long +pmd_page_paddr(pmd_t pmd) +{ + return __pmd_to_phys(pmd); +} + +/* Find an entry in the third-level page table. */ +#define pte_index(vaddr) (((vaddr) >> PAGESHIFT()) & (PTRS_PER_PTE - 1)) +#define pte_offset(dir, vaddr) (pmd_page_paddr((*dir)) + pte_index(vaddr) * sizeof(pte_t)) static unsigned long long __pa(unsigned long vaddr) @@ -116,34 +243,25 @@ __pa(unsigned long vaddr) return (vaddr - kimage_voffset); } -static int -get_pud_shift_arm64(void) +static pud_t * +pud_offset(pgd_t *pgda, pgd_t *pgdv, unsigned long vaddr) { - if (pgtable_level == 4) - return ((PAGESHIFT() - 3) * 3 + 3); + if (pgtable_level > 3) + return (pud_t *)(pgd_page_paddr(*pgdv) + pud_index(vaddr) * sizeof(pud_t)); else - return PGDIR_SHIFT; + return (pud_t *)(pgda); } static pmd_t * pmd_offset(pud_t *puda, pud_t *pudv, unsigned long vaddr) { - if (pgtable_level == 2) { - return pmd_offset_pgtbl_lvl_2(puda, vaddr); - } else { - return pmd_offset_pgtbl_lvl_3(pudv, vaddr); - } -} - -static pud_t * -pud_offset(pgd_t *pgda, pgd_t *pgdv, unsigned long vaddr) -{ - if (pgtable_level == 4) - return ((pud_t *)pgd_page_paddr((*pgdv)) + pud_index(vaddr)); + if (pgtable_level > 2) + return (pmd_t *)(pud_page_paddr(*pudv) + pmd_index(vaddr) * sizeof(pmd_t)); else - return (pud_t *)(pgda); + return (pmd_t*)(puda); } + static int calculate_plat_config(void) { /* derive pgtable_level as per arch/arm64/Kconfig */ @@ -287,6 +405,14 @@ get_stext_symbol(void) int get_machdep_info_arm64(void) { + /* Determine if the PA address range is 52-bits: ARMv8.2-LPA */ + if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER) 
{ + info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS); + if (info->max_physmem_bits == 52) + lpa_52_bit_support_available = 1; + } else + info->max_physmem_bits = 48; + /* Check if va_bits is still not initialized. If still 0, call * get_versiondep_info() to initialize the same. */ @@ -299,12 +425,11 @@ get_machdep_info_arm64(void) } kimage_voffset = NUMBER(kimage_voffset); - info->max_physmem_bits = PHYS_MASK_SHIFT; info->section_size_bits = SECTIONS_SIZE_BITS; DEBUG_MSG("kimage_voffset : %lx\n", kimage_voffset); - DEBUG_MSG("max_physmem_bits : %lx\n", info->max_physmem_bits); - DEBUG_MSG("section_size_bits: %lx\n", info->section_size_bits); + DEBUG_MSG("max_physmem_bits : %ld\n", info->max_physmem_bits); + DEBUG_MSG("section_size_bits: %ld\n", info->section_size_bits); return TRUE; } @@ -362,6 +487,19 @@ get_versiondep_info_arm64(void) return TRUE; } +/* 1GB section for Page Table level = 4 and Page Size = 4KB */ +static int +is_pud_sect(pud_t pud) +{ + return ((pud_val(pud) & PUD_TYPE_MASK) == PUD_TYPE_SECT); +} + +static int +is_pmd_sect(pmd_t pmd) +{ + return ((pmd_val(pmd) & PMD_TYPE_MASK) == PMD_TYPE_SECT); +} + /* * vaddr_to_paddr_arm64() - translate arbitrary virtual address to physical * @vaddr: virtual address to translate @@ -399,10 +537,9 @@ vaddr_to_paddr_arm64(unsigned long vaddr) return NOT_PADDR; } - if ((pud_val(pudv) & PUD_TYPE_MASK) == PUD_TYPE_SECT) { - /* 1GB section for Page Table level = 4 and Page Size = 4KB */ - paddr = (pud_val(pudv) & (PUD_MASK & PMD_SECTION_MASK)) - + (vaddr & (PUD_SIZE - 1)); + if (is_pud_sect(pudv)) { + paddr = (pud_page_paddr(pudv) & PUD_MASK) + + (vaddr & (PUD_SIZE - 1)); return paddr; } @@ -411,30 +548,25 @@ vaddr_to_paddr_arm64(unsigned long vaddr) ERRMSG("Can't read pmd\n"); return NOT_PADDR; } + + if (is_pmd_sect(pmdv)) { + paddr = (pmd_page_paddr(pmdv) & PMD_MASK) + + (vaddr & (PMD_SIZE - 1)); + return paddr; + } - switch (pmd_val(pmdv) & PMD_TYPE_MASK) { - case PMD_TYPE_TABLE: - ptea = pte_offset(&pmdv, 
vaddr); - /* 64k page */ - if (!readmem(PADDR, (unsigned long long)ptea, &ptev, sizeof(ptev))) { - ERRMSG("Can't read pte\n"); - return NOT_PADDR; - } - - if (!(pte_val(ptev) & PAGE_PRESENT)) { - ERRMSG("Can't get a valid pte.\n"); - return NOT_PADDR; - } else { + ptea = (pte_t *)pte_offset(&pmdv, vaddr); + if (!readmem(PADDR, (unsigned long long)ptea, &ptev, sizeof(ptev))) { + ERRMSG("Can't read pte\n"); + return NOT_PADDR; + } - paddr = (PAGEBASE(pte_val(ptev)) & PHYS_MASK) - + (vaddr & (PAGESIZE() - 1)); - } - break; - case PMD_TYPE_SECT: - /* 512MB section for Page Table level = 3 and Page Size = 64KB*/ - paddr = (pmd_val(pmdv) & (PMD_MASK & PMD_SECTION_MASK)) - + (vaddr & (PMD_SIZE - 1)); - break; + if (!(pte_val(ptev) & PAGE_PRESENT)) { + ERRMSG("Can't get a valid pte.\n"); + return NOT_PADDR; + } else { + paddr = __pte_to_phys(ptev) + + (vaddr & (PAGESIZE() - 1)); } return paddr; -- 2.7.4 _______________________________________________ kexec mailing list kexec@lists.infradead.org http://lists.infradead.org/mailman/listinfo/kexec ^ permalink raw reply related [flat|nested] 7+ messages in thread
* RE: [PATCH v2 1/2] makedumpfile/arm64: Add support for ARMv8.2-LPA (52-bit PA support) 2019-02-16 20:24 ` [PATCH v2 1/2] makedumpfile/arm64: Add support for ARMv8.2-LPA (52-bit PA support) Bhupesh Sharma @ 2019-02-21 15:35 ` Kazuhito Hagio 2019-02-22 4:50 ` Bhupesh Sharma 0 siblings, 1 reply; 7+ messages in thread From: Kazuhito Hagio @ 2019-02-21 15:35 UTC (permalink / raw) To: Bhupesh Sharma; +Cc: kexec@lists.infradead.org Hi Bhupesh, -----Original Message----- > ARMv8.2-LPA architecture extension (if available on underlying hardware) > can support 52-bit physical addresses, while the kernel virtual > addresses remain 48-bit. > > This patch is in accordance with ARMv8 Architecture Reference Manual > version D.a > > Make sure that we read the 52-bit PA address capability from > 'MAX_PHYSMEM_BITS' variable (if available in vmcoreinfo) and > accordingly change the pte_to_phy() mask values and also traverse > the page-table walk accordingly. > > Also make sure that it works well for the existing 48-bit PA address > platforms and also on environments which use newer kernels with 52-bit > PA support but hardware which is not ARM8.2-LPA compliant. > > I have sent a kernel patch upstream to add 'MAX_PHYSMEM_BITS' to > vmcoreinfo for arm64 (see [0]). > > [0]. http://lists.infradead.org/pipermail/kexec/2019-February/022411.html > > Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com> This patch looks good to me. For two slight things below, I will remove them when merging. > +/* > + * Size mapped by an entry at level n ( 0 <= n <= 3) > + * We map (PAGE_SHIFT - 3) at all translation levels and PAGE_SHIFT bits > + * in the final page. The maximum number of translation levels supported by > + * the architecture is 4. Hence, starting at at level n, we have further > + * ((4 - n) - 1) levels of translation excluding the offset within the page. 
> + * So, the total number of bits mapped by an entry at level n is : > + * > + * ((4 - n) - 1) * (PAGE_SHIFT - 3) + PAGE_SHIFT > + * > + * Rearranging it a bit we get : > + * (4 - n) * (PAGE_SHIFT - 3) + 3 > + */ Will remove this comment. > +#define pmd_offset_pgtbl_lvl_2(dir, vaddr) ((pmd_t *)dir) > +#define pmd_offset_pgtbl_lvl_3(dir, vaddr) (pud_page_paddr((*(dir))) + pmd_index(vaddr) * sizeof(pmd_t)) Will remove these two macros not in use. And, as I said on another thread, I'm thinking to merge the following patch after your patch 1/2, it tested OK with 48-bit and 52-bit PA without NUMBER(MAX_PHYSMEM_BITS) in vmcoreinfo. Do you think of any case that this will not work well? diff --git a/arch/arm64.c b/arch/arm64.c index 29247a7..c7e60e0 100644 --- a/arch/arm64.c +++ b/arch/arm64.c @@ -127,6 +127,9 @@ typedef unsigned long pgdval_t; */ #define SECTIONS_SIZE_BITS 30 +#define _MAX_PHYSMEM_BITS_48 48 +#define _MAX_PHYSMEM_BITS_52 52 + /* * Hardware page table definitions. * @@ -402,17 +405,27 @@ get_stext_symbol(void) return(found ? kallsym : FALSE); } +static int +set_max_physmem_bits_arm64(void) +{ + long array_len = ARRAY_LENGTH(mem_section); + + info->max_physmem_bits = _MAX_PHYSMEM_BITS_48; + if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME())) + || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT()))) + return TRUE; + + info->max_physmem_bits = _MAX_PHYSMEM_BITS_52; + if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME())) + || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT()))) + return TRUE; + + return FALSE; +} + int get_machdep_info_arm64(void) { - /* Determine if the PA address range is 52-bits: ARMv8.2-LPA */ - if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER) { - info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS); - if (info->max_physmem_bits == 52) - lpa_52_bit_support_available = 1; - } else - info->max_physmem_bits = 48; - /* Check if va_bits is still not initialized. 
If still 0, call * get_versiondep_info() to initialize the same. */ @@ -428,9 +441,24 @@ get_machdep_info_arm64(void) info->section_size_bits = SECTIONS_SIZE_BITS; DEBUG_MSG("kimage_voffset : %lx\n", kimage_voffset); - DEBUG_MSG("max_physmem_bits : %ld\n", info->max_physmem_bits); DEBUG_MSG("section_size_bits: %ld\n", info->section_size_bits); + /* Determine if the PA address range is 52-bits: ARMv8.2-LPA */ + if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER) { + info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS); + DEBUG_MSG("max_physmem_bits : %ld (vmcoreinfo)\n", + info->max_physmem_bits); + } else if (set_max_physmem_bits_arm64()) { + DEBUG_MSG("max_physmem_bits : %ld (detected)\n", + info->max_physmem_bits); + } else { + ERRMSG("Can't determine max_physmem_bits value\n"); + return FALSE; + } + + if (info->max_physmem_bits == 52) + lpa_52_bit_support_available = 1; + return TRUE; } Thanks, Kazu _______________________________________________ kexec mailing list kexec@lists.infradead.org http://lists.infradead.org/mailman/listinfo/kexec ^ permalink raw reply related [flat|nested] 7+ messages in thread
* Re: [PATCH v2 1/2] makedumpfile/arm64: Add support for ARMv8.2-LPA (52-bit PA support) 2019-02-21 15:35 ` Kazuhito Hagio @ 2019-02-22 4:50 ` Bhupesh Sharma 2019-02-22 16:24 ` Kazuhito Hagio 0 siblings, 1 reply; 7+ messages in thread From: Bhupesh Sharma @ 2019-02-22 4:50 UTC (permalink / raw) To: Kazuhito Hagio; +Cc: kexec@lists.infradead.org Hi Kazu, Thanks for the review. On 02/21/2019 09:05 PM, Kazuhito Hagio wrote: > Hi Bhupesh, > > -----Original Message----- >> ARMv8.2-LPA architecture extension (if available on underlying hardware) >> can support 52-bit physical addresses, while the kernel virtual >> addresses remain 48-bit. >> >> This patch is in accordance with ARMv8 Architecture Reference Manual >> version D.a >> >> Make sure that we read the 52-bit PA address capability from >> 'MAX_PHYSMEM_BITS' variable (if available in vmcoreinfo) and >> accordingly change the pte_to_phy() mask values and also traverse >> the page-table walk accordingly. >> >> Also make sure that it works well for the existing 48-bit PA address >> platforms and also on environments which use newer kernels with 52-bit >> PA support but hardware which is not ARM8.2-LPA compliant. >> >> I have sent a kernel patch upstream to add 'MAX_PHYSMEM_BITS' to >> vmcoreinfo for arm64 (see [0]). >> >> [0]. http://lists.infradead.org/pipermail/kexec/2019-February/022411.html >> >> Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com> > > This patch looks good to me. > For two slight things below, I will remove them when merging. > >> +/* >> + * Size mapped by an entry at level n ( 0 <= n <= 3) >> + * We map (PAGE_SHIFT - 3) at all translation levels and PAGE_SHIFT bits >> + * in the final page. The maximum number of translation levels supported by >> + * the architecture is 4. Hence, starting at at level n, we have further >> + * ((4 - n) - 1) levels of translation excluding the offset within the page. 
>> + * So, the total number of bits mapped by an entry at level n is : >> + * >> + * ((4 - n) - 1) * (PAGE_SHIFT - 3) + PAGE_SHIFT >> + * >> + * Rearranging it a bit we get : >> + * (4 - n) * (PAGE_SHIFT - 3) + 3 >> + */ > > Will remove this comment. Ok. >> +#define pmd_offset_pgtbl_lvl_2(dir, vaddr) ((pmd_t *)dir) >> +#define pmd_offset_pgtbl_lvl_3(dir, vaddr) (pud_page_paddr((*(dir))) + pmd_index(vaddr) * sizeof(pmd_t)) > > Will remove these two macros not in use. Ok. > > And, as I said on another thread, I'm thinking to merge the following > patch after your patch 1/2, it tested OK with 48-bit and 52-bit PA > without NUMBER(MAX_PHYSMEM_BITS) in vmcoreinfo. > Do you think of any case that this will not work well? > > diff --git a/arch/arm64.c b/arch/arm64.c > index 29247a7..c7e60e0 100644 > --- a/arch/arm64.c > +++ b/arch/arm64.c > @@ -127,6 +127,9 @@ typedef unsigned long pgdval_t; > */ > #define SECTIONS_SIZE_BITS 30 > > +#define _MAX_PHYSMEM_BITS_48 48 > +#define _MAX_PHYSMEM_BITS_52 52 > + > /* > * Hardware page table definitions. > * > @@ -402,17 +405,27 @@ get_stext_symbol(void) > return(found ? 
kallsym : FALSE); > } > > +static int > +set_max_physmem_bits_arm64(void) > +{ > + long array_len = ARRAY_LENGTH(mem_section); > + > + info->max_physmem_bits = _MAX_PHYSMEM_BITS_48; > + if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME())) > + || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT()))) > + return TRUE; > + > + info->max_physmem_bits = _MAX_PHYSMEM_BITS_52; > + if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME())) > + || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT()))) > + return TRUE; > + > + return FALSE; > +} > + > int > get_machdep_info_arm64(void) > { > - /* Determine if the PA address range is 52-bits: ARMv8.2-LPA */ > - if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER) { > - info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS); > - if (info->max_physmem_bits == 52) > - lpa_52_bit_support_available = 1; > - } else > - info->max_physmem_bits = 48; > - > /* Check if va_bits is still not initialized. If still 0, call > * get_versiondep_info() to initialize the same. 
> */ > @@ -428,9 +441,24 @@ get_machdep_info_arm64(void) > info->section_size_bits = SECTIONS_SIZE_BITS; > > DEBUG_MSG("kimage_voffset : %lx\n", kimage_voffset); > - DEBUG_MSG("max_physmem_bits : %ld\n", info->max_physmem_bits); > DEBUG_MSG("section_size_bits: %ld\n", info->section_size_bits); > > + /* Determine if the PA address range is 52-bits: ARMv8.2-LPA */ > + if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER) { > + info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS); > + DEBUG_MSG("max_physmem_bits : %ld (vmcoreinfo)\n", > + info->max_physmem_bits); > + } else if (set_max_physmem_bits_arm64()) { > + DEBUG_MSG("max_physmem_bits : %ld (detected)\n", > + info->max_physmem_bits); > + } else { > + ERRMSG("Can't determine max_physmem_bits value\n"); > + return FALSE; > + } > + > + if (info->max_physmem_bits == 52) > + lpa_52_bit_support_available = 1; > + > return TRUE; > } I have not tested the above suggestion on a real hardware or emulation model yet, but as we were discussing in the kernel patch review thread (see [0]), IMO, we don't need to carry the above hoops for 'MAX_PHYSMEM_BITS' calculation in makedumpfile code as it makes the code less portable for a newer kernel version and also since other user-space utilities (like crash) also need a mechanism to determine the PA_BITS supported by the underlying kernel, so we can use the same uniform method of using an exported 'MAX_PHYSMEM_BITS' value in the vmcoreinfo so that all user-land applications can use the same. I think Dave A. (crash utility maintainer) also pointed to a similar concern in the above thread. [0]. http://lists.infradead.org/pipermail/kexec/2019-February/022472.html Thanks, Bhupesh _______________________________________________ kexec mailing list kexec@lists.infradead.org http://lists.infradead.org/mailman/listinfo/kexec ^ permalink raw reply [flat|nested] 7+ messages in thread
* RE: [PATCH v2 1/2] makedumpfile/arm64: Add support for ARMv8.2-LPA (52-bit PA support) 2019-02-22 4:50 ` Bhupesh Sharma @ 2019-02-22 16:24 ` Kazuhito Hagio 2019-02-26 5:31 ` Bhupesh Sharma 0 siblings, 1 reply; 7+ messages in thread From: Kazuhito Hagio @ 2019-02-22 16:24 UTC (permalink / raw) To: Bhupesh Sharma; +Cc: kexec@lists.infradead.org Hi Bhupesh, -----Original Message----- > Hi Kazu, > > Thanks for the review. > > On 02/21/2019 09:05 PM, Kazuhito Hagio wrote: > > Hi Bhupesh, > > > > -----Original Message----- > >> ARMv8.2-LPA architecture extension (if available on underlying hardware) > >> can support 52-bit physical addresses, while the kernel virtual > >> addresses remain 48-bit. > >> > >> This patch is in accordance with ARMv8 Architecture Reference Manual > >> version D.a > >> > >> Make sure that we read the 52-bit PA address capability from > >> 'MAX_PHYSMEM_BITS' variable (if available in vmcoreinfo) and > >> accordingly change the pte_to_phy() mask values and also traverse > >> the page-table walk accordingly. > >> > >> Also make sure that it works well for the existing 48-bit PA address > >> platforms and also on environments which use newer kernels with 52-bit > >> PA support but hardware which is not ARM8.2-LPA compliant. > >> > >> I have sent a kernel patch upstream to add 'MAX_PHYSMEM_BITS' to > >> vmcoreinfo for arm64 (see [0]). > >> > >> [0]. http://lists.infradead.org/pipermail/kexec/2019-February/022411.html > >> > >> Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com> > > > > This patch looks good to me. > > For two slight things below, I will remove them when merging. > > > >> +/* > >> + * Size mapped by an entry at level n ( 0 <= n <= 3) > >> + * We map (PAGE_SHIFT - 3) at all translation levels and PAGE_SHIFT bits > >> + * in the final page. The maximum number of translation levels supported by > >> + * the architecture is 4. 
Hence, starting at at level n, we have further > >> + * ((4 - n) - 1) levels of translation excluding the offset within the page. > >> + * So, the total number of bits mapped by an entry at level n is : > >> + * > >> + * ((4 - n) - 1) * (PAGE_SHIFT - 3) + PAGE_SHIFT > >> + * > >> + * Rearranging it a bit we get : > >> + * (4 - n) * (PAGE_SHIFT - 3) + 3 > >> + */ > > > > Will remove this comment. > > Ok. > > >> +#define pmd_offset_pgtbl_lvl_2(dir, vaddr) ((pmd_t *)dir) > >> +#define pmd_offset_pgtbl_lvl_3(dir, vaddr) (pud_page_paddr((*(dir))) + pmd_index(vaddr) * > sizeof(pmd_t)) > > > > Will remove these two macros not in use. > > Ok. > > > > > And, as I said on another thread, I'm thinking to merge the following > > patch after your patch 1/2, it tested OK with 48-bit and 52-bit PA > > without NUMBER(MAX_PHYSMEM_BITS) in vmcoreinfo. > > Do you think of any case that this will not work well? > > > > diff --git a/arch/arm64.c b/arch/arm64.c > > index 29247a7..c7e60e0 100644 > > --- a/arch/arm64.c > > +++ b/arch/arm64.c > > @@ -127,6 +127,9 @@ typedef unsigned long pgdval_t; > > */ > > #define SECTIONS_SIZE_BITS 30 > > > > +#define _MAX_PHYSMEM_BITS_48 48 > > +#define _MAX_PHYSMEM_BITS_52 52 > > + > > /* > > * Hardware page table definitions. > > * > > @@ -402,17 +405,27 @@ get_stext_symbol(void) > > return(found ? 
kallsym : FALSE);
> > }
> >
> > +static int
> > +set_max_physmem_bits_arm64(void)
> > +{
> > +	long array_len = ARRAY_LENGTH(mem_section);
> > +
> > +	info->max_physmem_bits = _MAX_PHYSMEM_BITS_48;
> > +	if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> > +	    || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> > +		return TRUE;
> > +
> > +	info->max_physmem_bits = _MAX_PHYSMEM_BITS_52;
> > +	if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
> > +	    || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
> > +		return TRUE;
> > +
> > +	return FALSE;
> > +}
> > +
> >  int
> >  get_machdep_info_arm64(void)
> >  {
> > -	/* Determine if the PA address range is 52-bits: ARMv8.2-LPA */
> > -	if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER) {
> > -		info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
> > -		if (info->max_physmem_bits == 52)
> > -			lpa_52_bit_support_available = 1;
> > -	} else
> > -		info->max_physmem_bits = 48;
> > -
> >  	/* Check if va_bits is still not initialized. If still 0, call
> >  	 * get_versiondep_info() to initialize the same.
> >  	 */
> > @@ -428,9 +441,24 @@ get_machdep_info_arm64(void)
> >  	info->section_size_bits = SECTIONS_SIZE_BITS;
> >
> >  	DEBUG_MSG("kimage_voffset   : %lx\n", kimage_voffset);
> > -	DEBUG_MSG("max_physmem_bits : %ld\n", info->max_physmem_bits);
> >  	DEBUG_MSG("section_size_bits: %ld\n", info->section_size_bits);
> >
> > +	/* Determine if the PA address range is 52-bits: ARMv8.2-LPA */
> > +	if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER) {
> > +		info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
> > +		DEBUG_MSG("max_physmem_bits : %ld (vmcoreinfo)\n",
> > +				info->max_physmem_bits);
> > +	} else if (set_max_physmem_bits_arm64()) {
> > +		DEBUG_MSG("max_physmem_bits : %ld (detected)\n",
> > +				info->max_physmem_bits);
> > +	} else {
> > +		ERRMSG("Can't determine max_physmem_bits value\n");
> > +		return FALSE;
> > +	}
> > +
> > +	if (info->max_physmem_bits == 52)
> > +		lpa_52_bit_support_available = 1;
> > +
> >  	return TRUE;
> >  }
>
> I have not tested the above suggestion on a real hardware or emulation
> model yet, but as we were discussing in the kernel patch review thread
> (see [0]), IMO, we don't need to carry the above hoops for
> 'MAX_PHYSMEM_BITS' calculation in makedumpfile code as it makes the code
> less portable for a newer kernel version and also since other user-space
> utilities (like crash) also need a mechanism to determine the PA_BITS
> supported by the underlying kernel, so we can use the same uniform
> method of using an exported 'MAX_PHYSMEM_BITS' value in the vmcoreinfo
> so that all user-land applications can use the same.
>
> I think Dave A. (crash utility maintainer) also pointed to a similar
> concern in the above thread.

I see. As I replied [1], I also think ideally we should export it.
[1] http://lists.infradead.org/pipermail/kexec/2019-February/022474.html

Can we export it from kernel/crash_core.c?
I'd like to avoid repeating such a discussion every time we need it
for each architecture..

Thanks,
Kazu

>
> [0]. http://lists.infradead.org/pipermail/kexec/2019-February/022472.html
>
> Thanks,
> Bhupesh

_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec

^ permalink raw reply	[flat|nested] 7+ messages in thread
* Re: [PATCH v2 1/2] makedumpfile/arm64: Add support for ARMv8.2-LPA (52-bit PA support)
  2019-02-22 16:24       ` Kazuhito Hagio
@ 2019-02-26  5:31         ` Bhupesh Sharma
  0 siblings, 0 replies; 7+ messages in thread
From: Bhupesh Sharma @ 2019-02-26  5:31 UTC (permalink / raw)
  To: Kazuhito Hagio; +Cc: kexec@lists.infradead.org

Hi Kazu,

On 02/22/2019 09:54 PM, Kazuhito Hagio wrote:
> Hi Bhupesh,
>
> -----Original Message-----
>> Hi Kazu,
>>
>> Thanks for the review.
>>
>> On 02/21/2019 09:05 PM, Kazuhito Hagio wrote:
>>> Hi Bhupesh,
>>>
>>> -----Original Message-----
>>>> ARMv8.2-LPA architecture extension (if available on underlying hardware)
>>>> can support 52-bit physical addresses, while the kernel virtual
>>>> addresses remain 48-bit.
>>>>
>>>> This patch is in accordance with ARMv8 Architecture Reference Manual
>>>> version D.a
>>>>
>>>> Make sure that we read the 52-bit PA address capability from
>>>> 'MAX_PHYSMEM_BITS' variable (if available in vmcoreinfo) and
>>>> accordingly change the pte_to_phy() mask values and also traverse
>>>> the page-table walk accordingly.
>>>>
>>>> Also make sure that it works well for the existing 48-bit PA address
>>>> platforms and also on environments which use newer kernels with 52-bit
>>>> PA support but hardware which is not ARM8.2-LPA compliant.
>>>>
>>>> I have sent a kernel patch upstream to add 'MAX_PHYSMEM_BITS' to
>>>> vmcoreinfo for arm64 (see [0]).
>>>>
>>>> [0]. http://lists.infradead.org/pipermail/kexec/2019-February/022411.html
>>>>
>>>> Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com>
>>>
>>> This patch looks good to me.
>>> For two slight things below, I will remove them when merging.
>>>
>>>> +/*
>>>> + * Size mapped by an entry at level n ( 0 <= n <= 3)
>>>> + * We map (PAGE_SHIFT - 3) at all translation levels and PAGE_SHIFT bits
>>>> + * in the final page. The maximum number of translation levels supported by
>>>> + * the architecture is 4. Hence, starting at level n, we have further
>>>> + * ((4 - n) - 1) levels of translation excluding the offset within the page.
>>>> + * So, the total number of bits mapped by an entry at level n is :
>>>> + *
>>>> + * ((4 - n) - 1) * (PAGE_SHIFT - 3) + PAGE_SHIFT
>>>> + *
>>>> + * Rearranging it a bit we get :
>>>> + *   (4 - n) * (PAGE_SHIFT - 3) + 3
>>>> + */
>>>
>>> Will remove this comment.
>>
>> Ok.
>>
>>>> +#define pmd_offset_pgtbl_lvl_2(dir, vaddr) ((pmd_t *)dir)
>>>> +#define pmd_offset_pgtbl_lvl_3(dir, vaddr) (pud_page_paddr((*(dir))) + pmd_index(vaddr) * sizeof(pmd_t))
>>>
>>> Will remove these two macros not in use.
>>
>> Ok.
>>
>>> And, as I said on another thread, I'm thinking to merge the following
>>> patch after your patch 1/2, it tested OK with 48-bit and 52-bit PA
>>> without NUMBER(MAX_PHYSMEM_BITS) in vmcoreinfo.
>>> Do you think of any case that this will not work well?
>>>
>>> diff --git a/arch/arm64.c b/arch/arm64.c
>>> index 29247a7..c7e60e0 100644
>>> --- a/arch/arm64.c
>>> +++ b/arch/arm64.c
>>> @@ -127,6 +127,9 @@ typedef unsigned long pgdval_t;
>>>   */
>>>  #define SECTIONS_SIZE_BITS	30
>>>
>>> +#define _MAX_PHYSMEM_BITS_48	48
>>> +#define _MAX_PHYSMEM_BITS_52	52
>>> +
>>>  /*
>>>   * Hardware page table definitions.
>>>   *
>>> @@ -402,17 +405,27 @@ get_stext_symbol(void)
>>>  	return(found ? kallsym : FALSE);
>>>  }
>>>
>>> +static int
>>> +set_max_physmem_bits_arm64(void)
>>> +{
>>> +	long array_len = ARRAY_LENGTH(mem_section);
>>> +
>>> +	info->max_physmem_bits = _MAX_PHYSMEM_BITS_48;
>>> +	if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
>>> +	    || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
>>> +		return TRUE;
>>> +
>>> +	info->max_physmem_bits = _MAX_PHYSMEM_BITS_52;
>>> +	if ((array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT_EXTREME()))
>>> +	    || (array_len == (NR_MEM_SECTIONS() / _SECTIONS_PER_ROOT())))
>>> +		return TRUE;
>>> +
>>> +	return FALSE;
>>> +}
>>> +
>>>  int
>>>  get_machdep_info_arm64(void)
>>>  {
>>> -	/* Determine if the PA address range is 52-bits: ARMv8.2-LPA */
>>> -	if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER) {
>>> -		info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
>>> -		if (info->max_physmem_bits == 52)
>>> -			lpa_52_bit_support_available = 1;
>>> -	} else
>>> -		info->max_physmem_bits = 48;
>>> -
>>>  	/* Check if va_bits is still not initialized. If still 0, call
>>>  	 * get_versiondep_info() to initialize the same.
>>>  	 */
>>> @@ -428,9 +441,24 @@ get_machdep_info_arm64(void)
>>>  	info->section_size_bits = SECTIONS_SIZE_BITS;
>>>
>>>  	DEBUG_MSG("kimage_voffset   : %lx\n", kimage_voffset);
>>> -	DEBUG_MSG("max_physmem_bits : %ld\n", info->max_physmem_bits);
>>>  	DEBUG_MSG("section_size_bits: %ld\n", info->section_size_bits);
>>>
>>> +	/* Determine if the PA address range is 52-bits: ARMv8.2-LPA */
>>> +	if (NUMBER(MAX_PHYSMEM_BITS) != NOT_FOUND_NUMBER) {
>>> +		info->max_physmem_bits = NUMBER(MAX_PHYSMEM_BITS);
>>> +		DEBUG_MSG("max_physmem_bits : %ld (vmcoreinfo)\n",
>>> +				info->max_physmem_bits);
>>> +	} else if (set_max_physmem_bits_arm64()) {
>>> +		DEBUG_MSG("max_physmem_bits : %ld (detected)\n",
>>> +				info->max_physmem_bits);
>>> +	} else {
>>> +		ERRMSG("Can't determine max_physmem_bits value\n");
>>> +		return FALSE;
>>> +	}
>>> +
>>> +	if (info->max_physmem_bits == 52)
>>> +		lpa_52_bit_support_available = 1;
>>> +
>>>  	return TRUE;
>>>  }
>>
>> I have not tested the above suggestion on a real hardware or emulation
>> model yet, but as we were discussing in the kernel patch review thread
>> (see [0]), IMO, we don't need to carry the above hoops for
>> 'MAX_PHYSMEM_BITS' calculation in makedumpfile code as it makes the code
>> less portable for a newer kernel version and also since other user-space
>> utilities (like crash) also need a mechanism to determine the PA_BITS
>> supported by the underlying kernel, so we can use the same uniform
>> method of using an exported 'MAX_PHYSMEM_BITS' value in the vmcoreinfo
>> so that all user-land applications can use the same.
>>
>> I think Dave A. (crash utility maintainer) also pointed to a similar
>> concern in the above thread.
>
> I see. As I replied [1], I also think ideally we should export it.
> [1] http://lists.infradead.org/pipermail/kexec/2019-February/022474.html
>
> Can we export it from kernel/crash_core.c?
> I'd like to avoid repeating such a discussion every time we need it
> for each architecture..

Sure. I will try and convince the maintainers of other archs (x86 and
ppc) for a common export via 'kernel/crash_core.c'. I will continue this
discussion with the other maintainers on the kernel thread.

In the meanwhile, from a makedumpfile point of view, I will send a v3 to
address your concerns on the LVA patch (v1), which I seem to have missed
while sending out the v2.

Thanks,
Bhupesh

_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec

^ permalink raw reply	[flat|nested] 7+ messages in thread
* [PATCH v2 2/2] makedumpfile/arm64: Add support for ARMv8.2-LVA (52-bit user-space VA support)
  2019-02-16 20:24 [PATCH v2 0/2] makedumpfile/arm64: Add support for ARMv8.2 extensions Bhupesh Sharma
  2019-02-16 20:24 ` [PATCH v2 1/2] makedumpfile/arm64: Add support for ARMv8.2-LPA (52-bit PA support) Bhupesh Sharma
@ 2019-02-16 20:24 ` Bhupesh Sharma
  1 sibling, 0 replies; 7+ messages in thread
From: Bhupesh Sharma @ 2019-02-16 20:24 UTC (permalink / raw)
  To: kexec; +Cc: bhsharma, bhupesh.linux, k-hagio

With the ARMv8.2-LVA architecture extension available, arm64 hardware
which supports this extension can support up to 52-bit virtual
addresses. It is especially useful for having a 52-bit user-space
virtual address space while the kernel can still retain 48-bit virtual
addresses.

Since at the moment we enable the support of this extension in the
kernel via a CONFIG flag (CONFIG_ARM64_USER_VA_BITS_52), there is no
clear mechanism in user-space to determine this CONFIG flag value and
use it to determine the user-space VA address range values.

'makedumpfile' can instead use the 'MAX_USER_VA_BITS' value to determine
the maximum virtual address supported by user-space. If the
'MAX_USER_VA_BITS' value is greater than 'VA_BITS', then we are running
a use-case where user-space is 52-bit and the underlying kernel is still
48-bit.

The increased 'PTRS_PER_PGD' value for such cases can then be calculated
as is done by the underlying kernel (see the kernel file
'arch/arm64/include/asm/pgtable-hwdef.h' for details):

  #define PTRS_PER_PGD	(1 << (MAX_USER_VA_BITS - PGDIR_SHIFT))

I have sent a kernel patch upstream to add 'MAX_USER_VA_BITS' to
vmcoreinfo for arm64 (see [0]).

This patch is in accordance with ARMv8 Architecture Reference Manual
version D.a

[0]. http://lists.infradead.org/pipermail/kexec/2019-February/022411.html

Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com>
---
 arch/arm64.c   | 105 +++++++++++++++++++++++++++++++++++++++++----------------
 makedumpfile.c |   2 ++
 makedumpfile.h |   1 +
 3 files changed, 79 insertions(+), 29 deletions(-)

diff --git a/arch/arm64.c b/arch/arm64.c
index 29247a7fa7db..a9abfa74574c 100644
--- a/arch/arm64.c
+++ b/arch/arm64.c
@@ -47,6 +47,7 @@ typedef struct {
 static int lpa_52_bit_support_available;
 static int pgtable_level;
 static int va_bits;
+static int max_user_va_bits;
 static unsigned long kimage_voffset;

 #define SZ_4K			4096
@@ -120,7 +121,7 @@ typedef unsigned long pgdval_t;
 #define PGDIR_SHIFT		ARM64_HW_PGTABLE_LEVEL_SHIFT(4 - (pgtable_level))
 #define PGDIR_SIZE		(_AC(1, UL) << PGDIR_SHIFT)
 #define PGDIR_MASK		(~(PGDIR_SIZE-1))
-#define PTRS_PER_PGD		(1 << ((va_bits) - PGDIR_SHIFT))
+#define PTRS_PER_PGD		(1 << ((max_user_va_bits) - PGDIR_SHIFT))

 /*
  * Section address mask and size definitions.
@@ -402,6 +403,46 @@ get_stext_symbol(void)
 	return(found ? kallsym : FALSE);
 }

+static int
+get_va_bits_from_stext_arm64(void)
+{
+	ulong _stext;
+
+	_stext = get_stext_symbol();
+	if (!_stext) {
+		ERRMSG("Can't get the symbol of _stext.\n");
+		return FALSE;
+	}
+
+	/* Derive va_bits as per arch/arm64/Kconfig */
+	if ((_stext & PAGE_OFFSET_36) == PAGE_OFFSET_36) {
+		va_bits = 36;
+	} else if ((_stext & PAGE_OFFSET_39) == PAGE_OFFSET_39) {
+		va_bits = 39;
+	} else if ((_stext & PAGE_OFFSET_42) == PAGE_OFFSET_42) {
+		va_bits = 42;
+	} else if ((_stext & PAGE_OFFSET_47) == PAGE_OFFSET_47) {
+		va_bits = 47;
+	} else if ((_stext & PAGE_OFFSET_48) == PAGE_OFFSET_48) {
+		va_bits = 48;
+	} else {
+		ERRMSG("Cannot find a proper _stext for calculating VA_BITS\n");
+		return FALSE;
+	}
+
+	DEBUG_MSG("va_bits      : %d\n", va_bits);
+
+	return TRUE;
+}
+
+static void
+get_page_offset_arm64(void)
+{
+	info->page_offset = (0xffffffffffffffffUL) << (va_bits - 1);
+
+	DEBUG_MSG("page_offset  : %lx\n", info->page_offset);
+}
+
 int
 get_machdep_info_arm64(void)
 {
@@ -416,8 +457,37 @@ get_machdep_info_arm64(void)
 	/* Check if va_bits is still not initialized. If still 0, call
 	 * get_versiondep_info() to initialize the same.
 	 */
+	if (NUMBER(VA_BITS) != NOT_FOUND_NUMBER) {
+		va_bits = NUMBER(VA_BITS);
+		DEBUG_MSG("va_bits      : %d (vmcoreinfo)\n",
+				va_bits);
+	}
+
+	/* Check if va_bits is still not initialized. If still 0, call
+	 * get_versiondep_info() to initialize the same from _stext
+	 * symbol.
+	 */
 	if (!va_bits)
-		get_versiondep_info_arm64();
+		if (get_va_bits_from_stext_arm64() == ERROR)
+			return ERROR;
+
+	get_page_offset_arm64();
+
+	if (NUMBER(MAX_USER_VA_BITS) != NOT_FOUND_NUMBER) {
+		max_user_va_bits = NUMBER(MAX_USER_VA_BITS);
+		DEBUG_MSG("max_user_va_bits : %d (vmcoreinfo)\n",
+				max_user_va_bits);
+	}
+
+	/* Check if max_user_va_bits is still not initialized.
+	 * If still 0, its not available in vmcoreinfo and its
+	 * safe to initialize it with va_bits.
+	 */
+	if (!max_user_va_bits) {
+		max_user_va_bits = va_bits;
+		DEBUG_MSG("max_user_va_bits : %d (default = va_bits)\n",
+				max_user_va_bits);
+	}

 	if (!calculate_plat_config()) {
 		ERRMSG("Can't determine platform config values\n");
@@ -455,34 +525,11 @@ get_xen_info_arm64(void)
 int
 get_versiondep_info_arm64(void)
 {
-	ulong _stext;
-
-	_stext = get_stext_symbol();
-	if (!_stext) {
-		ERRMSG("Can't get the symbol of _stext.\n");
-		return FALSE;
-	}
-
-	/* Derive va_bits as per arch/arm64/Kconfig */
-	if ((_stext & PAGE_OFFSET_36) == PAGE_OFFSET_36) {
-		va_bits = 36;
-	} else if ((_stext & PAGE_OFFSET_39) == PAGE_OFFSET_39) {
-		va_bits = 39;
-	} else if ((_stext & PAGE_OFFSET_42) == PAGE_OFFSET_42) {
-		va_bits = 42;
-	} else if ((_stext & PAGE_OFFSET_47) == PAGE_OFFSET_47) {
-		va_bits = 47;
-	} else if ((_stext & PAGE_OFFSET_48) == PAGE_OFFSET_48) {
-		va_bits = 48;
-	} else {
-		ERRMSG("Cannot find a proper _stext for calculating VA_BITS\n");
-		return FALSE;
-	}
-
-	info->page_offset = (0xffffffffffffffffUL) << (va_bits - 1);
+	if (!va_bits)
+		if (get_va_bits_from_stext_arm64() == ERROR)
+			return ERROR;

-	DEBUG_MSG("va_bits      : %d\n", va_bits);
-	DEBUG_MSG("page_offset  : %lx\n", info->page_offset);
+	get_page_offset_arm64();

 	return TRUE;
 }
diff --git a/makedumpfile.c b/makedumpfile.c
index b7c1c01251bf..ccf7eb4408ba 100644
--- a/makedumpfile.c
+++ b/makedumpfile.c
@@ -2291,6 +2291,7 @@ write_vmcoreinfo_data(void)
 	WRITE_NUMBER("HUGETLB_PAGE_DTOR", HUGETLB_PAGE_DTOR);
 #ifdef __aarch64__
+	WRITE_NUMBER("MAX_USER_VA_BITS", MAX_USER_VA_BITS);
 	WRITE_NUMBER("VA_BITS", VA_BITS);
 	WRITE_NUMBER_UNSIGNED("PHYS_OFFSET", PHYS_OFFSET);
 	WRITE_NUMBER_UNSIGNED("kimage_voffset", kimage_voffset);
@@ -2694,6 +2695,7 @@ read_vmcoreinfo(void)
 	READ_NUMBER("PAGE_BUDDY_MAPCOUNT_VALUE", PAGE_BUDDY_MAPCOUNT_VALUE);
 	READ_NUMBER("phys_base", phys_base);
 #ifdef __aarch64__
+	READ_NUMBER("MAX_USER_VA_BITS", MAX_USER_VA_BITS);
 	READ_NUMBER("VA_BITS", VA_BITS);
 	READ_NUMBER_UNSIGNED("PHYS_OFFSET", PHYS_OFFSET);
 	READ_NUMBER_UNSIGNED("kimage_voffset", kimage_voffset);
diff --git a/makedumpfile.h b/makedumpfile.h
index d49f1f1fc1a9..3fa810e0bb40 100644
--- a/makedumpfile.h
+++ b/makedumpfile.h
@@ -1933,6 +1933,7 @@ struct number_table {
 	long	HUGETLB_PAGE_DTOR;
 	long	phys_base;
 #ifdef __aarch64__
+	long	MAX_USER_VA_BITS;
 	long	VA_BITS;
 	unsigned long	PHYS_OFFSET;
 	unsigned long	kimage_voffset;
--
2.7.4

_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec

^ permalink raw reply related	[flat|nested] 7+ messages in thread
end of thread, other threads:[~2019-02-26  5:31 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-02-16 20:24 [PATCH v2 0/2] makedumpfile/arm64: Add support for ARMv8.2 extensions Bhupesh Sharma
2019-02-16 20:24 ` [PATCH v2 1/2] makedumpfile/arm64: Add support for ARMv8.2-LPA (52-bit PA support) Bhupesh Sharma
2019-02-21 15:35   ` Kazuhito Hagio
2019-02-22  4:50     ` Bhupesh Sharma
2019-02-22 16:24       ` Kazuhito Hagio
2019-02-26  5:31         ` Bhupesh Sharma
2019-02-16 20:24 ` [PATCH v2 2/2] makedumpfile/arm64: Add support for ARMv8.2-LVA (52-bit user-space VA support) Bhupesh Sharma
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox