From: Alexander Potapenko <glider@google.com>
To: glider@google.com
Cc: Alexander Viro, Alexei Starovoitov, Andrew Morton, Andrey Konovalov,
	Andy Lutomirski, Arnd Bergmann, Borislav Petkov, Christoph Hellwig,
	Christoph Lameter, David Rientjes, Dmitry Vyukov, Eric Dumazet,
	Greg Kroah-Hartman, Herbert Xu, Ilya Leoshkevich, Ingo Molnar,
	Jens Axboe, Joonsoo Kim, Kees Cook, Marco Elver, Mark Rutland,
	Matthew Wilcox, "Michael S. Tsirkin", Pekka Enberg, Peter Zijlstra,
	Petr Mladek, Steven Rostedt, Thomas Gleixner, Vasily Gorbik,
	Vegard Nossum, Vlastimil Babka, kasan-dev@googlegroups.com,
	linux-mm@kvack.org, linux-arch@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH v5 09/44] x86: kmsan: pgtable: reduce vmalloc space
Date: Fri, 26 Aug 2022 17:07:32 +0200
Message-ID: <20220826150807.723137-10-glider@google.com>
In-Reply-To: <20220826150807.723137-1-glider@google.com>
References: <20220826150807.723137-1-glider@google.com>

KMSAN is going to use 3/4 of the existing vmalloc space to hold its
metadata; therefore we lower VMALLOC_END to make sure vmalloc() doesn't
allocate past the first 1/4.
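
To illustrate (this sketch is not part of the patch itself), the shadow
and origin metadata for a vmalloc address live at a fixed offset from
it. The helper names below are hypothetical, chosen only for this
example, and assume the KMSAN_VMALLOC_* macros introduced by the patch:

	/* Sketch only: locate the KMSAN metadata for a vmalloc address. */
	static inline void *vmalloc_shadow(void *addr)
	{
		/* 2nd quarter: shadow sits one quarter above the data. */
		return (void *)((unsigned long)addr + KMSAN_VMALLOC_SHADOW_OFFSET);
	}

	static inline void *vmalloc_origin(void *addr)
	{
		/* 3rd quarter: origins sit two quarters above the data. */
		return (void *)((unsigned long)addr + KMSAN_VMALLOC_ORIGIN_OFFSET);
	}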
Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org Content-Type: text/plain; charset="UTF-8" Precedence: bulk List-ID: X-Mailing-List: linux-arch@vger.kernel.org KMSAN is going to use 3/4 of existing vmalloc space to hold the metadata, therefore we lower VMALLOC_END to make sure vmalloc() doesn't allocate past the first 1/4. Signed-off-by: Alexander Potapenko --- v2: -- added x86: to the title v5: -- add comment for VMEMORY_END Link: https://linux-review.googlesource.com/id/I9d8b7f0a88a639f1263bc693cbd5c136626f7efd --- arch/x86/include/asm/pgtable_64_types.h | 47 ++++++++++++++++++++++++- arch/x86/mm/init_64.c | 2 +- 2 files changed, 47 insertions(+), 2 deletions(-) diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h index 70e360a2e5fb7..04f36063ad546 100644 --- a/arch/x86/include/asm/pgtable_64_types.h +++ b/arch/x86/include/asm/pgtable_64_types.h @@ -139,7 +139,52 @@ extern unsigned int ptrs_per_p4d; # define VMEMMAP_START __VMEMMAP_BASE_L4 #endif /* CONFIG_DYNAMIC_MEMORY_LAYOUT */ -#define VMALLOC_END (VMALLOC_START + (VMALLOC_SIZE_TB << 40) - 1) +/* + * End of the region for which vmalloc page tables are pre-allocated. + * For non-KMSAN builds, this is the same as VMALLOC_END. + * For KMSAN builds, VMALLOC_START..VMEMORY_END is 4 times bigger than + * VMALLOC_START..VMALLOC_END (see below). + */ +#define VMEMORY_END (VMALLOC_START + (VMALLOC_SIZE_TB << 40) - 1) + +#ifndef CONFIG_KMSAN +#define VMALLOC_END VMEMORY_END +#else +/* + * In KMSAN builds vmalloc area is four times smaller, and the remaining 3/4 + * are used to keep the metadata for virtual pages. The memory formerly + * belonging to vmalloc area is now laid out as follows: + * + * 1st quarter: VMALLOC_START to VMALLOC_END - new vmalloc area + * 2nd quarter: KMSAN_VMALLOC_SHADOW_START to + * VMALLOC_END+KMSAN_VMALLOC_SHADOW_OFFSET - vmalloc area shadow + * 3rd quarter: KMSAN_VMALLOC_ORIGIN_START to + * VMALLOC_END+KMSAN_VMALLOC_ORIGIN_OFFSET - vmalloc area origins + * 4th quarter: KMSAN_MODULES_SHADOW_START to KMSAN_MODULES_ORIGIN_START + * - shadow for modules, + * KMSAN_MODULES_ORIGIN_START to + * KMSAN_MODULES_ORIGIN_START + MODULES_LEN - origins for modules. + */ +#define VMALLOC_QUARTER_SIZE ((VMALLOC_SIZE_TB << 40) >> 2) +#define VMALLOC_END (VMALLOC_START + VMALLOC_QUARTER_SIZE - 1) + +/* + * vmalloc metadata addresses are calculated by adding shadow/origin offsets + * to vmalloc address. + */ +#define KMSAN_VMALLOC_SHADOW_OFFSET VMALLOC_QUARTER_SIZE +#define KMSAN_VMALLOC_ORIGIN_OFFSET (VMALLOC_QUARTER_SIZE << 1) + +#define KMSAN_VMALLOC_SHADOW_START (VMALLOC_START + KMSAN_VMALLOC_SHADOW_OFFSET) +#define KMSAN_VMALLOC_ORIGIN_START (VMALLOC_START + KMSAN_VMALLOC_ORIGIN_OFFSET) + +/* + * The shadow/origin for modules are placed one by one in the last 1/4 of + * vmalloc space. 
+ */
+#define KMSAN_MODULES_SHADOW_START	(VMALLOC_END + KMSAN_VMALLOC_ORIGIN_OFFSET + 1)
+#define KMSAN_MODULES_ORIGIN_START	(KMSAN_MODULES_SHADOW_START + MODULES_LEN)
+#endif /* CONFIG_KMSAN */
 
 #define MODULES_VADDR		(__START_KERNEL_map + KERNEL_IMAGE_SIZE)
 /* The module sections ends with the start of the fixmap */
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0fe690ebc269b..39b6bfcaa0ed4 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1287,7 +1287,7 @@ static void __init preallocate_vmalloc_pages(void)
 	unsigned long addr;
 	const char *lvl;
 
-	for (addr = VMALLOC_START; addr <= VMALLOC_END; addr = ALIGN(addr + 1, PGDIR_SIZE)) {
+	for (addr = VMALLOC_START; addr <= VMEMORY_END; addr = ALIGN(addr + 1, PGDIR_SIZE)) {
 		pgd_t *pgd = pgd_offset_k(addr);
 		p4d_t *p4d;
 		pud_t *pud;
-- 
2.37.2.672.g94769d06f0-goog