From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Linus Torvalds, Jann Horn, stable@kernel.org, Sasha Levin, linux-mm@kvack.org
Subject: [PATCH AUTOSEL 4.14 31/32] mm: make page ref count overflow check tighter and more explicit
Date: Fri, 26 Apr 2019 21:42:22 -0400
Message-Id: <20190427014224.8274-31-sashal@kernel.org>
In-Reply-To: <20190427014224.8274-1-sashal@kernel.org>
References: <20190427014224.8274-1-sashal@kernel.org>

From: Linus Torvalds

[ Upstream commit f958d7b528b1b40c44cfda5eabe2d82760d868c3 ]

We have a VM_BUG_ON() to check that the page reference count doesn't
underflow (or get close to overflow) by checking the sign of the count.

That's all fine, but we actually want to allow people to use a "get page
ref unless it's already very high" helper function, and we want that one
to use the sign of the page ref (without triggering this VM_BUG_ON).

Change the VM_BUG_ON to only check for small underflows (or _very_ close
to overflowing), and ignore overflows which have strayed into negative
territory.
Acked-by: Matthew Wilcox
Cc: Jann Horn
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
---
 include/linux/mm.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 58f2263de4de..4023819837a6 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -824,6 +824,10 @@ static inline bool is_device_public_page(const struct page *page)
 #endif /* CONFIG_DEVICE_PRIVATE || CONFIG_DEVICE_PUBLIC */
 
+/* 127: arbitrary random number, small enough to assemble well */
+#define page_ref_zero_or_close_to_overflow(page) \
+	((unsigned int) page_ref_count(page) + 127u <= 127u)
+
 static inline void get_page(struct page *page)
 {
 	page = compound_head(page);
@@ -831,7 +835,7 @@ static inline void get_page(struct page *page)
 	/*
 	 * Getting a normal page or the head of a compound page
 	 * requires to already have an elevated page->_refcount.
 	 */
-	VM_BUG_ON_PAGE(page_ref_count(page) <= 0, page);
+	VM_BUG_ON_PAGE(page_ref_zero_or_close_to_overflow(page), page);
 	page_ref_inc(page);
 }
-- 
2.19.1