From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Matthew Wilcox, Jann Horn, stable@kernel.org, Linus Torvalds
Subject: [PATCH 5.0 005/101] mm: add try_get_page() helper function
Date: Thu, 2 May 2019 17:20:07 +0200
Message-Id: <20190502143339.868493580@linuxfoundation.org>
In-Reply-To: <20190502143339.434882399@linuxfoundation.org>
References: <20190502143339.434882399@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: Linus Torvalds

commit 88b1a17dfc3ed7728316478fae0f5ad508f50397 upstream.

This is the same as the traditional 'get_page()' function, but instead
of unconditionally incrementing the reference count of the page, it only
does so if the count was "safe".  It returns whether the reference count
was incremented (and is marked __must_check, since the caller obviously
has to be aware of it).

Also like 'get_page()', you can't use this function unless you already
had a reference to the page.  The intent is that you can use this
exactly like get_page(), but in situations where you want to limit the
maximum reference count.
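As a quick illustration of the intended calling convention (this sketch is
not part of the patch; the function name, surrounding context and error
value are invented for the example, and it assumes the definitions from
<linux/mm.h> added below):

static int grab_extra_page_ref(struct page *page)
{
	/* The caller already holds a reference, exactly as with get_page(). */
	if (!try_get_page(page))
		return -ENOMEM;		/* count was zero or negative: refuse */

	/* ... use the extra reference ... */

	put_page(page);			/* drop it again when done */
	return 0;
}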
The code currently does an unconditional WARN_ON_ONCE() if we ever hit
the reference count issues (either zero or negative), as a notification
that the conditional non-increment actually happened.

NOTE! The count access for the "safety" check is inherently racy, but
that doesn't matter since the buffer we use is basically half the range
of the reference count (ie we look at the sign of the count).

Acked-by: Matthew Wilcox
Cc: Jann Horn
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman

---
 include/linux/mm.h | 9 +++++++++
 1 file changed, 9 insertions(+)

--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -980,6 +980,15 @@ static inline void get_page(struct page
 	page_ref_inc(page);
 }
 
+static inline __must_check bool try_get_page(struct page *page)
+{
+	page = compound_head(page);
+	if (WARN_ON_ONCE(page_ref_count(page) <= 0))
+		return false;
+	page_ref_inc(page);
+	return true;
+}
+
 static inline void put_page(struct page *page)
 {
 	page = compound_head(page);
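The "sign of the count" remark can be modelled in plain userspace C.  The
snippet below is only an illustrative model with invented names, not
kernel code: a 32-bit signed counter is checked for being zero or
negative before the increment, and because the check looks only at the
sign there is a buffer of roughly half the counter's range between the
count going bad and any wraparound, which is why the racy read is
considered harmless.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int refcount = 1;		/* the caller's existing reference */

static bool try_get(void)
{
	/* Racy read: the count may change before the increment below. */
	if (atomic_load(&refcount) <= 0)
		return false;		/* zero or negative: refuse to pin */
	atomic_fetch_add(&refcount, 1);
	return true;
}

int main(void)
{
	if (try_get())
		printf("pinned, count is now %d\n", atomic_load(&refcount));
	return 0;
}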