From: Thomas Garnier
Reply-To: kernel-hardening@lists.openwall.com
Date: Thu, 12 May 2016 09:08:58 -0700
Message-Id: <1463069340-117401-3-git-send-email-thgarnie@google.com>
In-Reply-To: <1463069340-117401-1-git-send-email-thgarnie@google.com>
References: <1463069340-117401-1-git-send-email-thgarnie@google.com>
Subject: [kernel-hardening] [PATCH v4 2/4] x86, boot: PUD VA support for physical mapping (x86_64)
To: "H. Peter Anvin", Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Andy Lutomirski, Thomas Garnier, Dmitry Vyukov, Paolo Bonzini,
	Dan Williams, Kees Cook, Stephen Smalley, Kefeng Wang,
	Jonathan Corbet, Matt Fleming, Toshi Kani, Alexander Kuleshov,
	Alexander Popov, Joerg Roedel, Dave Young, Baoquan He,
	Dave Hansen, Mark Salter, Boris Ostrovsky
Cc: x86@kernel.org, linux-kernel@vger.kernel.org,
	linux-doc@vger.kernel.org, gthelen@google.com,
	kernel-hardening@lists.openwall.com

Minor change that allows early boot physical mapping of PUD level virtual
addresses. The current implementation expects the virtual address to be
PUD aligned. For KASLR memory randomization, we need to be able to
randomize the offset used on the PUD table.

It has no impact on current usage.

Signed-off-by: Thomas Garnier
---
Based on next-20160511
---
 arch/x86/mm/init_64.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index bce2e5d..f205f39 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -454,10 +454,10 @@ phys_pud_init(pud_t *pud_page, unsigned long addr, unsigned long end,
 {
 	unsigned long pages = 0, next;
 	unsigned long last_map_addr = end;
-	int i = pud_index(addr);
+	int i = pud_index((unsigned long)__va(addr));
 
 	for (; i < PTRS_PER_PUD; i++, addr = next) {
-		pud_t *pud = pud_page + pud_index(addr);
+		pud_t *pud = pud_page + pud_index((unsigned long)__va(addr));
 		pmd_t *pmd;
 		pgprot_t prot = PAGE_KERNEL;
-- 
2.8.0.rc3.226.g39d4020