Date: Wed, 9 Aug 2023 11:30:21 -0400
From: Steven Rostedt
To: Marco Elver
Cc: Kees Cook, Andrew Morton, Guenter Roeck, Peter Zijlstra, Mark Rutland,
 Marc Zyngier, Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu,
 Catalin Marinas, Will Deacon, Nathan Chancellor, Nick Desaulniers, Tom Rix,
 Miguel Ojeda, Sami Tolvanen, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, llvm@lists.linux.dev,
 Dmitry Vyukov, Alexander Potapenko, kasan-dev@googlegroups.com,
 linux-toolchains@vger.kernel.org
Subject: Re: [PATCH v3 3/3] list_debug: Introduce CONFIG_DEBUG_LIST_MINIMAL
Message-ID: <20230809113021.63e5ef66@gandalf.local.home>
References: <20230808102049.465864-1-elver@google.com>
 <20230808102049.465864-3-elver@google.com>
 <202308081424.1DC7AA4AE3@keescook>

On Wed, 9 Aug 2023 11:57:19 +0200 Marco Elver wrote:

>  static __always_inline bool __list_add_valid(struct list_head *new,
> 					      struct list_head *prev,
> 					      struct list_head *next)
>  {
> -	return __list_add_valid_or_report(new, prev, next);
> +	bool ret = true;
> +
> +	if (IS_ENABLED(CONFIG_HARDEN_LIST)) {
> +		/*
> +		 * With the hardening version, elide checking if next and prev
> +		 * are NULL, since the immediate dereference of them below would
> +		 * result in a fault if NULL.
> +		 *
> +		 * With the reduced set of checks, we can afford to inline the
> +		 * checks, which also gives the compiler a chance to elide some
> +		 * of them completely if they can be proven at compile-time. If
> +		 * one of the pre-conditions does not hold, the slow-path will
> +		 * show a report which pre-condition failed.
> +		 */
> +		if (likely(next->prev == prev && prev->next == next &&
> +			   new != prev && new != next))
> +			return true;
> +		ret = false;
> +	}
> +
> +	ret &= __list_add_valid_or_report(new, prev, next);
> +	return ret;
>  }

I would actually prefer DEBUG_LIST to select HARDEN_LIST and not the other
way around. It logically doesn't make sense that HARDEN_LIST would select
DEBUG_LIST. That is, I could by default want HARDEN_LIST always on, but not
DEBUG_LIST (because who knows, it may add other features I don't want). But
then, I may have stumbled over something and want more info, so I enable
DEBUG_LIST (while still having HARDEN_LIST enabled).

I think you are looking at this from an implementation perspective and not
the normal developer one.

This would mean the above function should get enabled by CONFIG_HARDEN_LIST
(and CONFIG_DEBUG_LIST would select CONFIG_HARDEN_LIST) and would look more
like:

static __always_inline bool __list_add_valid(struct list_head *new,
					     struct list_head *prev,
					     struct list_head *next)
{
	bool ret = true;

	if (!IS_ENABLED(CONFIG_DEBUG_LIST)) {
		/*
		 * With the hardening version, elide checking if next and prev
		 * are NULL, since the immediate dereference of them below would
		 * result in a fault if NULL.
		 *
		 * With the reduced set of checks, we can afford to inline the
		 * checks, which also gives the compiler a chance to elide some
		 * of them completely if they can be proven at compile-time. If
		 * one of the pre-conditions does not hold, the slow-path will
		 * show a report which pre-condition failed.
		 */
		if (likely(next->prev == prev && prev->next == next &&
			   new != prev && new != next))
			return true;
		ret = false;
	}

	ret &= __list_add_valid_or_report(new, prev, next);
	return ret;
}

That is, if DEBUG_LIST is enabled, we always call
__list_add_valid_or_report(), but if only HARDEN_LIST is enabled, then we
take the shortcut.

-- Steve