From: Ard Biesheuvel
Date: Mon, 19 Oct 2020 10:54:22 +0200
Subject: Re: [PATCH 4/5 v16] ARM: Initialize the mapping of KASan shadow memory
To: Linus Walleij
Cc: Florian Fainelli, Ahmad Fatoum, Arnd Bergmann, Abbott Liu,
 Russell King, kasan-dev, Mike Rapoport, Alexander Potapenko,
 Dmitry Vyukov, Andrey Ryabinin, Linux ARM
In-Reply-To: <20201019084140.4532-5-linus.walleij@linaro.org>
References: <20201019084140.4532-1-linus.walleij@linaro.org>
 <20201019084140.4532-5-linus.walleij@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

Hi Linus,

On Mon, 19 Oct 2020 at 10:42, Linus Walleij wrote:
>
> This patch initializes the KASan shadow region's page table and memory.
> There are two stages to KASan initialization:
>
> 1. At the early boot stage the whole shadow region is mapped to just
>    one physical page (kasan_zero_page). This is done by the function
>    kasan_early_init(), which is called by __mmap_switched
>    (arch/arm/kernel/head-common.S).
>
> 2. After paging_init() has run, we use kasan_zero_page as the zero
>    shadow for memory that KASan does not need to track, and we
>    allocate new shadow space for the memory that KASan does need to
>    track. This is done by the function kasan_init(), which is called
>    by setup_arch().
>
> When using KASan we also need to increase THREAD_SIZE_ORDER from 1
> to 2, as the extra calls for shadow memory use quite a bit of stack.
>
> As we need to make a temporary copy of the PGD when setting up
> shadow memory, we create a helpful PGD_SIZE definition for both
> LPAE and non-LPAE setups.
>
> The KASan core code unconditionally calls pud_populate(), so this
> needs to be changed from BUG() to do {} while (0) when building
> with KASan enabled.
>
> After the initial development by Andrey Ryabinin, several
> modifications have been made to this code:
>
> Abbott Liu
> - Add support for ARM LPAE: if LPAE is enabled, the KASan shadow
>   region's mapping table needs to be copied in the pgd_alloc()
>   function.
> - Change kasan_pte_populate, kasan_pmd_populate, kasan_pud_populate
>   and kasan_pgd_populate from the .meminit.text section to the
>   .init.text section. Reported by Florian Fainelli.
>
> Linus Walleij:
> - Drop the custom manipulation of TTBR0 and just use
>   cpu_switch_mm() to switch the pgd table.
> - Adapt to handle 4th level page table folding.
> - Rewrite the entire page directory and page entry initialization
>   sequence to be recursive, based on arm64's kasan_init.c.
>
> Ard Biesheuvel:
> - Necessary underlying fixes.
> - Crucial bug fixes to the memory set-up code.
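
(Background note, not part of the patch: generic KASAN maps every 8 bytes
of kernel address space to one shadow byte, which is why the BUILD_BUG_ON()
in kasan_early_init() below expects the shadow region to end 1 << 29 bytes
(4 GB / 8) above KASAN_SHADOW_OFFSET. The address translation is roughly:

    shadow_addr = (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
    /* KASAN_SHADOW_SCALE_SHIFT is 3 for generic KASAN */

Both the early scratch mapping and the proper shadow set up in kasan_init()
populate page tables for shadow addresses computed this way.)
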
>
> Cc: Alexander Potapenko
> Cc: Dmitry Vyukov
> Cc: kasan-dev@googlegroups.com
> Cc: Mike Rapoport
> Co-developed-by: Andrey Ryabinin
> Co-developed-by: Abbott Liu
> Co-developed-by: Ard Biesheuvel
> Acked-by: Mike Rapoport
> Reviewed-by: Ard Biesheuvel
> Tested-by: Ard Biesheuvel # QEMU/KVM/mach-virt/LPAE/8G
> Tested-by: Florian Fainelli # Brahma SoCs
> Tested-by: Ahmad Fatoum # i.MX6Q
> Reported-by: Russell King - ARM Linux
> Reported-by: Florian Fainelli
> Signed-off-by: Andrey Ryabinin
> Signed-off-by: Abbott Liu
> Signed-off-by: Florian Fainelli
> Signed-off-by: Ard Biesheuvel
> Signed-off-by: Linus Walleij
> ---

...

> diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
> new file mode 100644
> index 000000000000..8afd5c017b7f
> --- /dev/null
> +++ b/arch/arm/mm/kasan_init.c
> @@ -0,0 +1,292 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * This file contains kasan initialization code for ARM.
> + *
> + * Copyright (c) 2018 Samsung Electronics Co., Ltd.
> + * Author: Andrey Ryabinin
> + * Author: Linus Walleij
> + */
> +
> +#define pr_fmt(fmt) "kasan: " fmt
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +
> +#include "mm.h"
> +
> +static pgd_t tmp_pgd_table[PTRS_PER_PGD] __initdata __aligned(PGD_SIZE);
> +
> +pmd_t tmp_pmd_table[PTRS_PER_PMD] __page_aligned_bss;
> +
> +static __init void *kasan_alloc_block(size_t size)
> +{
> +        return memblock_alloc_try_nid(size, size, __pa(MAX_DMA_ADDRESS),
> +                                      MEMBLOCK_ALLOC_KASAN, NUMA_NO_NODE);
> +}
> +
> +static void __init kasan_pte_populate(pmd_t *pmdp, unsigned long addr,
> +                                      unsigned long end, bool early)
> +{
> +        unsigned long next;
> +        pte_t *ptep = pte_offset_kernel(pmdp, addr);
> +
> +        do {
> +                pte_t entry;
> +                void *p;
> +
> +                next = addr + PAGE_SIZE;
> +
> +                if (!early) {
> +                        if (!pte_none(READ_ONCE(*ptep)))
> +                                continue;
> +
> +                        p = kasan_alloc_block(PAGE_SIZE);
> +                        if (!p) {
> +                                panic("%s failed to allocate shadow page for address 0x%lx\n",
> +                                      __func__, addr);
> +                                return;
> +                        }
> +                        memset(p, KASAN_SHADOW_INIT, PAGE_SIZE);
> +                        entry = pfn_pte(virt_to_pfn(p),
> +                                        __pgprot(pgprot_val(PAGE_KERNEL)));
> +                } else if (pte_none(READ_ONCE(*ptep))) {
> +                        /*
> +                         * The early shadow memory is mapping all KASan
> +                         * operations to one and the same page in memory,
> +                         * "kasan_early_shadow_page" so that the instrumentation
> +                         * will work on a scratch area until we can set up the
> +                         * proper KASan shadow memory.
> +                         */
> +                        entry = pfn_pte(virt_to_pfn(kasan_early_shadow_page),
> +                                        __pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY | L_PTE_XN));
> +                } else {
> +                        /*
> +                         * Early shadow mappings are PMD_SIZE aligned, so if the
> +                         * first entry is already set, they must all be set.
> +                         */
> +                        return;
> +                }
> +
> +                set_pte_at(&init_mm, addr, ptep, entry);
> +        } while (ptep++, addr = next, addr != end);
> +}
> +
> +/*
> + * The pmd (page middle directory) is only used on LPAE
> + */
> +static void __init kasan_pmd_populate(pud_t *pudp, unsigned long addr,
> +                                      unsigned long end, bool early)
> +{
> +        unsigned long next;
> +        pmd_t *pmdp = pmd_offset(pudp, addr);
> +
> +        do {
> +                if (pmd_none(*pmdp)) {
> +                        /*
> +                         * We attempt to allocate a shadow block for the PMDs
> +                         * used by the PTEs for this address if it isn't already
> +                         * allocated.
> +                         */
> +                        void *p = early ? kasan_early_shadow_pte :
> +                                          kasan_alloc_block(PAGE_SIZE);
> +
> +                        if (!p) {
> +                                panic("%s failed to allocate shadow block for address 0x%lx\n",
> +                                      __func__, addr);
> +                                return;
> +                        }
> +                        pmd_populate_kernel(&init_mm, pmdp, p);
> +                        flush_pmd_entry(pmdp);
> +                }
> +
> +                next = pmd_addr_end(addr, end);
> +                kasan_pte_populate(pmdp, addr, next, early);
> +        } while (pmdp++, addr = next, addr != end);
> +}
> +
> +static void __init kasan_pgd_populate(unsigned long addr, unsigned long end,
> +                                      bool early)
> +{
> +        unsigned long next;
> +        pgd_t *pgdp;
> +        p4d_t *p4dp;
> +        pud_t *pudp;
> +
> +        pgdp = pgd_offset_k(addr);
> +
> +        do {
> +                /*
> +                 * Allocate and populate the shadow block of p4d folded into
> +                 * pud folded into pmd if it doesn't already exist
> +                 */
> +                if (!early && pgd_none(*pgdp)) {
> +                        void *p = kasan_alloc_block(PAGE_SIZE);
> +
> +                        if (!p) {
> +                                panic("%s failed to allocate shadow block for address 0x%lx\n",
> +                                      __func__, addr);
> +                                return;
> +                        }
> +                        pgd_populate(&init_mm, pgdp, p);
> +                }
> +
> +                next = pgd_addr_end(addr, end);
> +                /*
> +                 * We just immediately jump over the p4d and pud page
> +                 * directories since we believe ARM32 will never gain four
> +                 * nor five level page tables.
> +                 */
> +                p4dp = p4d_offset(pgdp, addr);
> +                pudp = pud_offset(p4dp, addr);
> +
> +                kasan_pmd_populate(pudp, addr, next, early);
> +        } while (pgdp++, addr = next, addr != end);
> +}
> +
> +extern struct proc_info_list *lookup_processor_type(unsigned int);
> +
> +void __init kasan_early_init(void)
> +{
> +        struct proc_info_list *list;
> +
> +        /*
> +         * locate processor in the list of supported processor
> +         * types.  The linker builds this table for us from the
> +         * entries in arch/arm/mm/proc-*.S
> +         */
> +        list = lookup_processor_type(read_cpuid_id());
> +        if (list) {
> +#ifdef MULTI_CPU
> +                processor = *list->proc;
> +#endif
> +        }
> +
> +        BUILD_BUG_ON((KASAN_SHADOW_END - (1UL << 29)) != KASAN_SHADOW_OFFSET);
> +        /*
> +         * We walk the page table and set all of the shadow memory to point
> +         * to the scratch page.
> +         */
> +        kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, true);
> +}
> +
> +static void __init clear_pgds(unsigned long start,
> +                              unsigned long end)
> +{
> +        for (; start && start < end; start += PMD_SIZE)
> +                pmd_clear(pmd_off_k(start));
> +}
> +
> +static int __init create_mapping(void *start, void *end)
> +{
> +        void *shadow_start, *shadow_end;
> +
> +        shadow_start = kasan_mem_to_shadow(start);
> +        shadow_end = kasan_mem_to_shadow(end);
> +
> +        pr_info("Mapping kernel virtual memory block: %px-%px at shadow: %px-%px\n",
> +                start, end, shadow_start, shadow_end);
> +
> +        kasan_pgd_populate((unsigned long)shadow_start & PAGE_MASK,
> +                           PAGE_ALIGN((unsigned long)shadow_end), false);
> +        return 0;
> +}
> +
> +void __init kasan_init(void)
> +{
> +        struct memblock_region *reg;
> +        int i;
> +
> +        /*
> +         * We are going to perform proper setup of shadow memory.
> +         *
> +         * At first we should unmap early shadow (clear_pgds() call below).
> +         * However, instrumented code can't execute without shadow memory.
> +         *
> +         * To keep the early shadow memory MMU tables around while setting up
> +         * the proper shadow memory, we copy swapper_pg_dir (the initial page
> +         * table) to tmp_pgd_table and use that to keep the early shadow memory
> +         * mapped until the full shadow setup is finished. Then we swap back
> +         * to the proper swapper_pg_dir.
> +         */
> +
> +        memcpy(tmp_pgd_table, swapper_pg_dir, sizeof(tmp_pgd_table));
> +#ifdef CONFIG_ARM_LPAE
> +        /* We need to be in the same PGD or this won't work */
> +        BUILD_BUG_ON(pgd_index(KASAN_SHADOW_START) !=
> +                     pgd_index(KASAN_SHADOW_END));
> +        memcpy(tmp_pmd_table,
> +               pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)),
> +               sizeof(tmp_pmd_table));
> +        set_pgd(&tmp_pgd_table[pgd_index(KASAN_SHADOW_START)],
> +                __pgd(__pa(tmp_pmd_table) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
> +#endif
> +        cpu_switch_mm(tmp_pgd_table, &init_mm);
> +        local_flush_tlb_all();
> +
> +        clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
> +
> +        kasan_populate_early_shadow(kasan_mem_to_shadow((void *)VMALLOC_START),
> +                                    kasan_mem_to_shadow((void *)-1UL) + 1);
> +
> +        for_each_memblock(memory, reg) {
> +                void *start = __va(reg->base);
> +                void *end = __va(reg->base + reg->size);
> +
> +                /* Do not attempt to shadow highmem */
> +                if (reg->base >= arm_lowmem_limit) {
> +                        pr_info("Skip highmem block %pap-%pap\n",
> +                                &reg->base, &reg->base + reg->size);

Adding reg->size to &reg->base is not going to produce the expected value
here. I think we can just drop it, and only keep the start address here
(same below)
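
I.e. something like this (untested sketch, and the exact message text is
just an example), printing only the start address:

        pr_info("Skip highmem block at %pap\n", &reg->base);

with the same kind of change to the "Truncating shadow" print below.
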
> +                        continue;
> +                }
> +                if (reg->base + reg->size > arm_lowmem_limit) {
> +                        pr_info("Truncating shadow for %pap-%pap to lowmem region\n",
> +                                &reg->base, &reg->base + reg->size);
> +                        end = __va(arm_lowmem_limit);
> +                }
> +                if (start >= end) {
> +                        pr_info("Skipping invalid memory block %px-%px\n",
> +                                start, end);
> +                        continue;
> +                }
> +
> +                create_mapping(start, end);
> +        }
> +
> +        /*
> +         * 1. The module global variables are in MODULES_VADDR ~ MODULES_END,
> +         *    so we need to map this area.
> +         * 2. PKMAP_BASE ~ PKMAP_BASE+PMD_SIZE's shadow and MODULES_VADDR
> +         *    ~ MODULES_END's shadow is in the same PMD_SIZE, so we can't
> +         *    use kasan_populate_zero_shadow.
> +         */
> +        create_mapping((void *)MODULES_VADDR, (void *)(PKMAP_BASE + PMD_SIZE));
> +
> +        /*
> +         * KAsan may reuse the contents of kasan_early_shadow_pte directly, so
> +         * we should make sure that it maps the zero page read-only.
> +         */
> +        for (i = 0; i < PTRS_PER_PTE; i++)
> +                set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE,
> +                           &kasan_early_shadow_pte[i],
> +                           pfn_pte(virt_to_pfn(kasan_early_shadow_page),
> +                                   __pgprot(pgprot_val(PAGE_KERNEL)
> +                                            | L_PTE_RDONLY)));
> +
> +        cpu_switch_mm(swapper_pg_dir, &init_mm);
> +        local_flush_tlb_all();
> +
> +        memset(kasan_early_shadow_page, 0, PAGE_SIZE);
> +        pr_info("Kernel address sanitizer initialized\n");
> +        init_task.kasan_depth = 0;
> +}
> diff --git a/arch/arm/mm/pgd.c b/arch/arm/mm/pgd.c
> index c5e1b27046a8..f8e9bc58a84f 100644
> --- a/arch/arm/mm/pgd.c
> +++ b/arch/arm/mm/pgd.c
> @@ -66,7 +66,21 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
>          new_pmd = pmd_alloc(mm, new_pud, 0);
>          if (!new_pmd)
>                  goto no_pmd;
> -#endif
> +#ifdef CONFIG_KASAN
> +        /*
> +         * Copy PMD table for KASAN shadow mappings.
> +         */
> +        init_pgd = pgd_offset_k(TASK_SIZE);
> +        init_p4d = p4d_offset(init_pgd, TASK_SIZE);
> +        init_pud = pud_offset(init_p4d, TASK_SIZE);
> +        init_pmd = pmd_offset(init_pud, TASK_SIZE);
> +        new_pmd = pmd_offset(new_pud, TASK_SIZE);
> +        memcpy(new_pmd, init_pmd,
> +               (pmd_index(MODULES_VADDR) - pmd_index(TASK_SIZE))
> +               * sizeof(pmd_t));
> +        clean_dcache_area(new_pmd, PTRS_PER_PMD * sizeof(pmd_t));
> +#endif /* CONFIG_KASAN */
> +#endif /* CONFIG_LPAE */
>
>          if (!vectors_high()) {
>                  /*
> --
> 2.26.2
>

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel