From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sat, 28 Sep 2024 23:27:02 +0100
From: Al Viro
To: Philipp Zabel
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	kernel@pengutronix.de, Krzysztof Kozlowski, Linus Torvalds,
	Peter Zijlstra
Subject: [heads-up] Re: [PATCH] reset: Further simplify locking with guard()
Message-ID: <20240928222702.GX3550746@ZenIV>
References: <20240927-reset-guard-v1-1-293bf1302210@pengutronix.de>
In-Reply-To: <20240927-reset-guard-v1-1-293bf1302210@pengutronix.de>

On Fri, Sep 27, 2024 at 04:02:32PM +0200, Philipp Zabel wrote:
> Use guard(mutex) to automatically unlock mutexes when going out of
> scope. Simplify error paths by removing a goto and manual mutex
> unlocking in multiple places.

And that, folks, is a live example of the reasons why guard() is an
attractive nuisance.
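For anyone who hasn't looked under the hood: guard(mutex)(&m) from
cleanup.h boils down to a local variable carrying
__attribute__((cleanup(...))), whose cleanup function drops the lock when
the variable goes out of scope.  A rough userspace sketch of the same
mechanism - GUARD, guard_drop, demo() and the event log are invented for
illustration here, not the kernel API:

```c
#include <pthread.h>
#include <string.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static char events[128];

static void ev(const char *s)
{
	strcat(events, s);
	strcat(events, ";");
}

/* called automatically when the guarded variable leaves scope */
static void guard_drop(pthread_mutex_t **m)
{
	pthread_mutex_unlock(*m);
	ev("unlock");
}

/* crude analogue of the kernel's guard(mutex)(&m): lock now,
 * unlock whenever the variable goes out of scope */
#define GUARD(m) \
	__attribute__((cleanup(guard_drop), unused)) pthread_mutex_t *_guard = \
		(pthread_mutex_lock(m), (m))

const char *demo(void)
{
	events[0] = '\0';
	ev("enter");
	{
		GUARD(&lock);
		ev("locked");
	}	/* <- the unlock fires here, at the closing brace */
	ev("after");
	return events;
}
```

The point being: there is no explicit unlock call anywhere - the compiler
inserts one at every exit from the scope, which is exactly what makes the
scope boundaries matter so much.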
We really need a very loud warning on cleanup.h stuff - otherwise such
patches from well-meaning folks will keep coming.

> @@ -1041,29 +1036,27 @@ __of_reset_control_get(struct device_node *node, const char *id, int index,
> 		}
> 	}
> 
> -	mutex_lock(&reset_list_mutex);
> +	guard(mutex)(&reset_list_mutex);
> 	rcdev = __reset_find_rcdev(&args, gpio_fallback);
> 	if (!rcdev) {
> 		rstc = ERR_PTR(-EPROBE_DEFER);
> -		goto out_unlock;
> +		goto out_put;
> 	}
> 
> 	if (WARN_ON(args.args_count != rcdev->of_reset_n_cells)) {
> 		rstc = ERR_PTR(-EINVAL);
> -		goto out_unlock;
> +		goto out_put;
> 	}
> 
> 	rstc_id = rcdev->of_xlate(rcdev, &args);
> 	if (rstc_id < 0) {
> 		rstc = ERR_PTR(rstc_id);
> -		goto out_unlock;
> +		goto out_put;
> 	}
> 
> 	/* reset_list_mutex also protects the rcdev's reset_control list */
> 	rstc = __reset_control_get_internal(rcdev, rstc_id, shared, acquired);
> 
> -out_unlock:
> -	mutex_unlock(&reset_list_mutex);
> out_put:
> 	of_node_put(args.np);

Guess what happens if you take goto out_put prior to the entire thing, in

	ret = __reset_add_reset_gpio_device(&args);
	if (ret) {
		rstc = ERR_PTR(ret);
		goto out_put;
	}

That patch adds an implicit mutex_unlock() at every point where we leave
the scope of guard() - and that scope extends to the end of the function.
In other words, there is an implicit unlock downstream of out_put,
turning any goto out_put upstream of guard() into a bug.

What's worse, that bug is not caught by gcc - it quietly generates bogus
code that will go unnoticed until we get an error from the
__reset_add_reset_gpio_device() call.  At that point the cleanup will
look at the contents of an uninitialized variable and, if we are unlucky,
call mutex_unlock() on it (with hell knows what pointer passed to it -
not that mutex_unlock(&reset_list_mutex) would do us any good at that
point, since we hadn't locked it in the first place).

Folks, that kind of cleanup patch is bloody dangerous; even code that
currently avoids that trap can easily grow that kind of quiet breakage
later.
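If one insists on cleanup.h in a function like this, the guarded region
has to be made explicit - that's what the kernel's scoped_guard() is for:
the implicit unlock is confined to a block, and labels like out_put stay
outside it, so an early goto never jumps over the guard's initialization.
A userspace sketch of that shape - lookup(), fail_early and the trace log
are invented for illustration:

```c
#include <pthread.h>
#include <string.h>

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static char trace[128];

static void ev2(const char *s)
{
	strcat(trace, s);
	strcat(trace, ";");
}

/* runs automatically when the guarded variable leaves its block */
static void drop(pthread_mutex_t **m)
{
	pthread_mutex_unlock(*m);
	ev2("unlock");
}

const char *lookup(int fail_early)
{
	trace[0] = '\0';
	ev2("start");
	if (fail_early)
		goto out_put;	/* jumps PAST the block below, never into it */
	{
		/* guarded scope: the implicit unlock ends at this block's
		 * closing brace, not at the end of the function */
		__attribute__((cleanup(drop), unused)) pthread_mutex_t *g =
			(pthread_mutex_lock(&list_lock), &list_lock);
		ev2("locked");
	}
out_put:
	ev2("put");
	return trace;
}
```

With the guard scoped like that, the early-error path and the normal path
both hit out_put with the lock provably dropped (or never taken), and no
path ever crosses the guard's declaration - which is the whole difference
between this and the patch above.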