Date: Wed, 9 Aug 2023 18:32:37 +0200
From: Marco Elver
To: Steven Rostedt
Cc: Kees Cook, Andrew Morton, Guenter Roeck, Peter Zijlstra,
	Mark Rutland, Marc Zyngier, Oliver Upton, James Morse,
	Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon,
	Nathan Chancellor, Nick Desaulniers, Tom Rix, Miguel Ojeda,
	Sami Tolvanen, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
	llvm@lists.linux.dev, Dmitry Vyukov, Alexander Potapenko,
	kasan-dev@googlegroups.com, linux-toolchains@vger.kernel.org
Subject: Re: [PATCH v3 3/3] list_debug: Introduce CONFIG_DEBUG_LIST_MINIMAL
References: <20230808102049.465864-1-elver@google.com>
	<20230808102049.465864-3-elver@google.com>
	<202308081424.1DC7AA4AE3@keescook>
	<20230809113021.63e5ef66@gandalf.local.home>
In-Reply-To: <20230809113021.63e5ef66@gandalf.local.home>

On Wed, Aug 09, 2023 at 11:30AM -0400, Steven Rostedt wrote:
[...]
>
> I would actually prefer DEBUG_LIST to select HARDEN_LIST and not the other
> way around. It logically doesn't make sense that HARDEN_LIST would select
> DEBUG_LIST. That is, I could by default want HARDEN_LIST always on, but not
> DEBUG_LIST (because who knows, it may add other features I don't want). But
> then, I may have stumbled over something and want more info, and enable
> DEBUG_LIST (while still having HARDEN_LIST enabled).
>
> I think you are looking at this from an implementation perspective and not
> the normal developer one.
>
[...]
>
> That is, if DEBUG_LIST is enabled, we always call the
> __list_add_valid_or_report(), but if only HARDEN_LIST is enabled, then we
> do the shortcut.

Good point - I think this is better. See below for a tentative v4.

Kees: Does that also look more like what you had in mind?

Thanks,
-- Marco

------ >8 ------

From: Marco Elver
Date: Thu, 27 Jul 2023 22:19:02 +0200
Subject: [PATCH] list: Introduce CONFIG_HARDEN_LIST

Numerous production kernel configs (see [1, 2]) choose to enable
CONFIG_DEBUG_LIST, which is also recommended by KSPP for hardened
configs [3]. The motivation behind this is that the option can be used
as a security hardening feature (e.g. CVE-2019-2215 and CVE-2019-2025
are mitigated by the option [4]).

The feature, however, has never been designed with performance in
mind, yet common list manipulation happens across hot paths all over
the kernel.

Introduce CONFIG_HARDEN_LIST, which performs list pointer checking
inline, and only upon list corruption calls the reporting slow path.
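To make the fast-path/slow-path split concrete, here is a stand-alone
userspace sketch of the pattern (illustrative only: simplified names,
no likely()/IS_ENABLED(), and the hypothetical list_add_valid() stands
in for the real __list_add_valid() in the patch below):

  /* Build: cc -O2 demo.c -o demo */
  #include <stdbool.h>
  #include <stdio.h>

  struct list_head {
          struct list_head *next, *prev;
  };

  /* Rarely called slow path: full checks plus reporting. */
  static bool list_add_valid_or_report(struct list_head *new,
                                       struct list_head *prev,
                                       struct list_head *next)
  {
          if (next->prev != prev || prev->next != next ||
              new == prev || new == next) {
                  fprintf(stderr, "list corruption detected\n");
                  return false;
          }
          return true;
  }

  /* Inlined fast path: the common, uncorrupted case returns early and
   * never pays for the out-of-line call. */
  static inline bool list_add_valid(struct list_head *new,
                                    struct list_head *prev,
                                    struct list_head *next)
  {
          if (next->prev == prev && prev->next == next &&
              new != prev && new != next)
                  return true;
          /* A check failed: the slow path re-checks (its checks are a
           * superset of the inline ones), reports, and so always
           * returns false here. */
          return list_add_valid_or_report(new, prev, next);
  }

  int main(void)
  {
          struct list_head a = { &a, &a };        /* healthy empty list */
          struct list_head b = { &b, &b };        /* unrelated list */
          struct list_head n;

          printf("healthy: %d\n", list_add_valid(&n, &a, a.next)); /* 1 */
          printf("corrupt: %d\n", list_add_valid(&n, &a, &b));     /* 0 */
          return 0;
  }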
Since DEBUG_LIST is functionally a superset of HARDEN_LIST, the Kconfig
variables are designed to reflect that: DEBUG_LIST selects HARDEN_LIST,
whereas HARDEN_LIST itself has no dependency on DEBUG_LIST.

To generate optimal machine code with CONFIG_HARDEN_LIST:

  1. Elide checking for pointer values which upon dereference would
     result in an immediate access fault -- therefore "minimal" checks.
     The trade-off is lower-quality error reports.

  2. Use the newly introduced __preserve_most function attribute
     (available with Clang, but not yet with GCC) to minimize the code
     footprint for calling the reporting slow path: the function size
     of callers is reduced by avoiding saving registers before calling
     the rarely called reporting slow path (a sketch of the attribute
     follows after this list). Note that all TUs in lib/Makefile
     already disable function tracing, including list_debug.c, so
     __preserve_most's implied notrace has no effect in this case.

  3. Because the inline checks are a subset of the full set of checks
     in __list_*_valid_or_report(), always return false if the inline
     checks failed. This avoids a redundant compare and conditional
     branch right after the return from the slow path.

As a side-effect of the checks being inline, if the compiler can prove
some condition always holds, it can completely elide some checks.
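A rough sketch of what item 2 relies on is below; the exact
__preserve_most macro comes from the earlier compiler_types.h patch in
this series, so treat this definition as an approximation, and note
that report_list_corruption() is a made-up example, not a kernel API:

  /* Approximation of the kernel's __preserve_most wrapper; the real
   * definition (with notrace and arch guards) lives in the companion
   * compiler_types.h patch of this series. */
  #ifndef __has_attribute
  # define __has_attribute(x) 0
  #endif
  #if __has_attribute(__preserve_most__)
  # define __preserve_most __attribute__((__preserve_most__))
  #else
  # define __preserve_most /* unsupported: plain calling convention */
  #endif

  /* With preserve_most the callee saves/restores nearly all registers,
   * so each inlined check site can call this rarely taken reporting
   * function without spilling registers on its fast path.
   * (report_list_corruption() is hypothetical, for illustration.) */
  __preserve_most void report_list_corruption(void *entry);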
Running netperf with CONFIG_HARDEN_LIST (using a Clang compiler with
"preserve_most") shows throughput improvements, in my case of ~7% on
average (up to 20-30% on some test cases).

Link: https://r.android.com/1266735 [1]
Link: https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/blob/main/config [2]
Link: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings [3]
Link: https://googleprojectzero.blogspot.com/2019/11/bad-binder-android-in-wild-exploit.html [4]
Signed-off-by: Marco Elver
---
v4:
* Rename to CONFIG_HARDEN_LIST, which can be selected independently of
  CONFIG_DEBUG_LIST.

v3:
* Rename ___list_*_valid() to __list_*_valid_or_report().
* More comments.

v2:
* Note that lib/Makefile disables function tracing for everything and
  __preserve_most's implied notrace is a noop here.
---
 arch/arm64/kvm/hyp/nvhe/Makefile     |  2 +-
 arch/arm64/kvm/hyp/nvhe/list_debug.c |  2 ++
 include/linux/list.h                 | 64 +++++++++++++++++++++++++---
 lib/Kconfig.debug                    |  9 +++-
 lib/Makefile                         |  2 +-
 lib/list_debug.c                     |  5 ++-
 security/Kconfig.hardening           | 13 ++++++
 7 files changed, 86 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 9ddc025e4b86..c89c85a41ac4 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -25,7 +25,7 @@ hyp-obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o
 	 cache.o setup.o mm.o mem_protect.o sys_regs.o pkvm.o stacktrace.o ffa.o
 hyp-obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
 	 ../fpsimd.o ../hyp-entry.o ../exception.o ../pgtable.o
-hyp-obj-$(CONFIG_DEBUG_LIST) += list_debug.o
+hyp-obj-$(CONFIG_HARDEN_LIST) += list_debug.o
 hyp-obj-y += $(lib-objs)

 ##

diff --git a/arch/arm64/kvm/hyp/nvhe/list_debug.c b/arch/arm64/kvm/hyp/nvhe/list_debug.c
index 16266a939a4c..46a2d4f2b3c6 100644
--- a/arch/arm64/kvm/hyp/nvhe/list_debug.c
+++ b/arch/arm64/kvm/hyp/nvhe/list_debug.c
@@ -26,6 +26,7 @@ static inline __must_check bool nvhe_check_data_corruption(bool v)

 /* The predicates checked here are taken from lib/list_debug.c. */

+__list_valid_slowpath
 bool __list_add_valid_or_report(struct list_head *new, struct list_head *prev,
 				struct list_head *next)
 {
@@ -37,6 +38,7 @@ bool __list_add_valid_or_report(struct list_head *new, struct list_head *prev,
 	return true;
 }

+__list_valid_slowpath
 bool __list_del_entry_valid_or_report(struct list_head *entry)
 {
 	struct list_head *prev, *next;

diff --git a/include/linux/list.h b/include/linux/list.h
index 130c6a1bb45c..ef899c27c68b 100644
--- a/include/linux/list.h
+++ b/include/linux/list.h
@@ -38,39 +38,91 @@ static inline void INIT_LIST_HEAD(struct list_head *list)
 	WRITE_ONCE(list->prev, list);
 }

+#ifdef CONFIG_HARDEN_LIST
+
 #ifdef CONFIG_DEBUG_LIST
+# define __list_valid_slowpath
+#else
+# define __list_valid_slowpath __cold __preserve_most
+#endif
+
 /*
  * Performs the full set of list corruption checks before __list_add().
  * On list corruption reports a warning, and returns false.
  */
-extern bool __list_add_valid_or_report(struct list_head *new,
-				       struct list_head *prev,
-				       struct list_head *next);
+extern bool __list_valid_slowpath __list_add_valid_or_report(struct list_head *new,
+							     struct list_head *prev,
+							     struct list_head *next);

 /*
  * Performs list corruption checks before __list_add(). Returns false if a
  * corruption is detected, true otherwise.
+ *
+ * With CONFIG_HARDEN_LIST only, performs minimal list integrity checks
+ * (which do not result in a fault) inline, and only if a corruption is
+ * detected calls the reporting function __list_add_valid_or_report().
  */
 static __always_inline bool __list_add_valid(struct list_head *new,
 					     struct list_head *prev,
 					     struct list_head *next)
 {
-	return __list_add_valid_or_report(new, prev, next);
+	bool ret = true;
+
+	if (!IS_ENABLED(CONFIG_DEBUG_LIST)) {
+		/*
+		 * With the hardening version, elide checking if next and prev
+		 * are NULL, since the immediate dereference of them below
+		 * would result in a fault if NULL.
+		 *
+		 * With the reduced set of checks, we can afford to inline the
+		 * checks, which also gives the compiler a chance to elide
+		 * some of them completely if they can be proven at
+		 * compile-time. If one of the pre-conditions does not hold,
+		 * the slow-path report will show which pre-condition failed.
+		 */
+		if (likely(next->prev == prev && prev->next == next &&
+			   new != prev && new != next))
+			return true;
+		ret = false;
+	}
+
+	ret &= __list_add_valid_or_report(new, prev, next);
+	return ret;
 }

 /*
  * Performs the full set of list corruption checks before __list_del_entry().
  * On list corruption reports a warning, and returns false.
  */
-extern bool __list_del_entry_valid_or_report(struct list_head *entry);
+extern bool __list_valid_slowpath __list_del_entry_valid_or_report(struct list_head *entry);

 /*
  * Performs list corruption checks before __list_del_entry(). Returns false
  * if a corruption is detected, true otherwise.
+ *
+ * With CONFIG_HARDEN_LIST only, performs minimal list integrity checks
+ * (which do not result in a fault) inline, and only if a corruption is
+ * detected calls the reporting function __list_del_entry_valid_or_report().
  */
 static __always_inline bool __list_del_entry_valid(struct list_head *entry)
 {
-	return __list_del_entry_valid_or_report(entry);
+	bool ret = true;
+
+	if (!IS_ENABLED(CONFIG_DEBUG_LIST)) {
+		struct list_head *prev = entry->prev;
+		struct list_head *next = entry->next;
+
+		/*
+		 * With the hardening version, elide checking if next and prev
+		 * are NULL, LIST_POISON1 or LIST_POISON2, since the immediate
+		 * dereference of them below would result in a fault.
+		 */
+		if (likely(prev->next == entry && next->prev == entry))
+			return true;
+		ret = false;
+	}
+
+	ret &= __list_del_entry_valid_or_report(entry);
+	return ret;
 }
 #else
 static inline bool __list_add_valid(struct list_head *new,

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index fbc89baf7de6..9e38956d6f50 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1674,9 +1674,14 @@ menu "Debug kernel data structures"
 config DEBUG_LIST
 	bool "Debug linked list manipulation"
 	depends on DEBUG_KERNEL || BUG_ON_DATA_CORRUPTION
+	select HARDEN_LIST
 	help
-	  Enable this to turn on extended checks in the linked-list
-	  walking routines.
+	  Enable this to turn on extended checks in the linked-list walking
+	  routines.
+
+	  This option trades performance for better-quality error reports,
+	  and is more suitable for kernel debugging. If you care about
+	  performance, you should only enable CONFIG_HARDEN_LIST instead.

 	  If unsure, say N.

diff --git a/lib/Makefile b/lib/Makefile
index 42d307ade225..a7ebc9090f04 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -161,7 +161,7 @@ obj-$(CONFIG_BTREE) += btree.o
 obj-$(CONFIG_INTERVAL_TREE) += interval_tree.o
 obj-$(CONFIG_ASSOCIATIVE_ARRAY) += assoc_array.o
 obj-$(CONFIG_DEBUG_PREEMPT) += smp_processor_id.o
-obj-$(CONFIG_DEBUG_LIST) += list_debug.o
+obj-$(CONFIG_HARDEN_LIST) += list_debug.o
 obj-$(CONFIG_DEBUG_OBJECTS) += debugobjects.o
 obj-$(CONFIG_BITREVERSE) += bitrev.o

diff --git a/lib/list_debug.c b/lib/list_debug.c
index 2def33b1491f..38ddc7c01eab 100644
--- a/lib/list_debug.c
+++ b/lib/list_debug.c
@@ -2,7 +2,8 @@
  * Copyright 2006, Red Hat, Inc., Dave Jones
  * Released under the General Public License (GPL).
  *
- * This file contains the linked list validation for DEBUG_LIST.
+ * This file contains the linked list validation and error reporting for
+ * HARDEN_LIST and DEBUG_LIST.
  */

 #include <linux/export.h>
@@ -17,6 +18,7 @@
  * attempt).
  */

+__list_valid_slowpath
 bool __list_add_valid_or_report(struct list_head *new, struct list_head *prev,
 				struct list_head *next)
 {
@@ -39,6 +41,7 @@ bool __list_add_valid_or_report(struct list_head *new, struct list_head *prev,
 }
 EXPORT_SYMBOL(__list_add_valid_or_report);

+__list_valid_slowpath
 bool __list_del_entry_valid_or_report(struct list_head *entry)
 {
 	struct list_head *prev, *next;

diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
index 0f295961e773..19aa2b7d2f64 100644
--- a/security/Kconfig.hardening
+++ b/security/Kconfig.hardening
@@ -279,6 +279,19 @@ config ZERO_CALL_USED_REGS

 endmenu

+menu "Hardening of kernel data structures"
+
+config HARDEN_LIST
+	bool "Check integrity of linked list manipulation"
+	help
+	  Minimal integrity checking in the linked-list manipulation routines
+	  to catch memory corruptions that are not guaranteed to result in an
+	  immediate access fault.
+
+	  If unsure, say N.
+
+endmenu
+
 config CC_HAS_RANDSTRUCT
 	def_bool $(cc-option,-frandomize-layout-seed-file=/dev/null)
 	# Randstruct was first added in Clang 15, but it isn't safe to use until
--
2.41.0.640.ga95def55d0-goog