From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-2.4 required=3.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,USER_AGENT_SANE_1 autolearn=no autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 73AE8C2D0D1 for ; Thu, 19 Dec 2019 00:35:27 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 149F422314 for ; Thu, 19 Dec 2019 00:35:26 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=nvidia.com header.i=@nvidia.com header.b="cYi9HYXp" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 149F422314 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=nvidia.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id A38ED8E0146; Wed, 18 Dec 2019 19:35:25 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 9BFBB8E00F5; Wed, 18 Dec 2019 19:35:25 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 8AE0B8E0146; Wed, 18 Dec 2019 19:35:25 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0135.hostedemail.com [216.40.44.135]) by kanga.kvack.org (Postfix) with ESMTP id 728878E00F5 for ; Wed, 18 Dec 2019 19:35:25 -0500 (EST) Received: from smtpin07.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with SMTP id 283E7282A for ; Thu, 19 Dec 2019 00:35:25 +0000 (UTC) X-FDA: 76280022210.07.price45_696909bcad12d X-HE-Tag: price45_696909bcad12d X-Filterd-Recvd-Size: 6919 Received: from hqnvemgate26.nvidia.com (hqnvemgate26.nvidia.com [216.228.121.65]) by imf24.hostedemail.com (Postfix) with ESMTP for ; Thu, 19 Dec 2019 00:35:24 +0000 (UTC) Received: from hqpgpgate101.nvidia.com (Not Verified[216.228.121.13]) by hqnvemgate26.nvidia.com (using TLS: TLSv1.2, DES-CBC3-SHA) id ; Wed, 18 Dec 2019 16:35:11 -0800 Received: from hqmail.nvidia.com ([172.20.161.6]) by hqpgpgate101.nvidia.com (PGP Universal service); Wed, 18 Dec 2019 16:35:20 -0800 X-PGP-Universal: processed; by hqpgpgate101.nvidia.com on Wed, 18 Dec 2019 16:35:20 -0800 Received: from [10.2.165.11] (10.124.1.5) by HQMAIL107.nvidia.com (172.20.187.13) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Thu, 19 Dec 2019 00:35:19 +0000 Subject: Re: [PATCH v11 04/25] mm: devmap: refactor 1-based refcounting for ZONE_DEVICE pages To: "Kirill A. Shutemov" CC: Andrew Morton , Al Viro , Alex Williamson , Benjamin Herrenschmidt , =?UTF-8?B?QmrDtnJuIFTDtnBlbA==?= , Christoph Hellwig , Dan Williams , Daniel Vetter , Dave Chinner , David Airlie , "David S . Miller" , Ira Weiny , Jan Kara , Jason Gunthorpe , Jens Axboe , Jonathan Corbet , =?UTF-8?B?SsOpcsO0bWUgR2xpc3Nl?= , Magnus Karlsson , Mauro Carvalho Chehab , Michael Ellerman , Michal Hocko , Mike Kravetz , Paul Mackerras , Shuah Khan , Vlastimil Babka , , , , , , , , , , , , , LKML , Christoph Hellwig References: <20191216222537.491123-1-jhubbard@nvidia.com> <20191216222537.491123-5-jhubbard@nvidia.com> <20191218160420.gyt4c45e6zsnxqv6@box> From: John Hubbard X-Nvconfidentiality: public Message-ID: Date: Wed, 18 Dec 2019 16:32:28 -0800 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.3.0 MIME-Version: 1.0 In-Reply-To: <20191218160420.gyt4c45e6zsnxqv6@box> X-Originating-IP: [10.124.1.5] X-ClientProxiedBy: HQMAIL105.nvidia.com (172.20.187.12) To HQMAIL107.nvidia.com (172.20.187.13) Content-Type: text/plain; charset="utf-8"; format=flowed Content-Language: en-US Content-Transfer-Encoding: 7bit DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1576715711; bh=lLxcLcvWuEdaa9mbUu8pEb7NikbQGA+3x199V+Yf2Go=; h=X-PGP-Universal:Subject:To:CC:References:From:X-Nvconfidentiality: Message-ID:Date:User-Agent:MIME-Version:In-Reply-To: X-Originating-IP:X-ClientProxiedBy:Content-Type:Content-Language: Content-Transfer-Encoding; b=cYi9HYXpThVPsCQ2FAhPpBmvg9pzZEnocfVab3p1rbztjUsDI5b7umJJYuIEvZ0PR vx+h2sHA10T2/Q1X4aDK6z59gq6N+0cBNEv1SBLGL1VNlncyvmlNdNaO1iWAyTGswr udKbzxqm/wjq006CK108fXCeD0IesoZvzh/gx+LNePAFcn6r9jhacWQ3SpCYW8xspg AWQc+0t7+YICrLyj0yQoqsCoU6687MNUgeKhf8m6yAmv/OIxmxbjV85i5eQJrONL9B rit0zRnR4Y3ZhStASBSUCGicK22P7QWUhCF8hgj/JAlQLXg8wWjGsZLK//Xu0ERwbe cFLl9sQIYjnfg== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On 12/18/19 8:04 AM, Kirill A. Shutemov wrote: > On Mon, Dec 16, 2019 at 02:25:16PM -0800, John Hubbard wrote: >> An upcoming patch changes and complicates the refcounting and >> especially the "put page" aspects of it. In order to keep >> everything clean, refactor the devmap page release routines: >> >> * Rename put_devmap_managed_page() to page_is_devmap_managed(), >> and limit the functionality to "read only": return a bool, >> with no side effects. >> >> * Add a new routine, put_devmap_managed_page(), to handle checking >> what kind of page it is, and what kind of refcount handling it >> requires. >> >> * Rename __put_devmap_managed_page() to free_devmap_managed_page(), >> and limit the functionality to unconditionally freeing a devmap >> page. > > What the reason to separate put_devmap_managed_page() from > free_devmap_managed_page() if free_devmap_managed_page() has exacly one > caller? Is it preparation for the next patches? Yes. A later patch, #23, adds another caller: __unpin_devmap_managed_user_page(). ... >> @@ -971,7 +971,14 @@ static inline bool put_devmap_managed_page(struct page *page) >> return false; >> } >> >> +bool put_devmap_managed_page(struct page *page); >> + >> #else /* CONFIG_DEV_PAGEMAP_OPS */ >> +static inline bool page_is_devmap_managed(struct page *page) >> +{ >> + return false; >> +} >> + >> static inline bool put_devmap_managed_page(struct page *page) >> { >> return false; >> @@ -1028,8 +1035,10 @@ static inline void put_page(struct page *page) >> * need to inform the device driver through callback. See >> * include/linux/memremap.h and HMM for details. >> */ >> - if (put_devmap_managed_page(page)) >> + if (page_is_devmap_managed(page)) { >> + put_devmap_managed_page(page); > > put_devmap_managed_page() has yet another page_is_devmap_managed() check > inside. It looks strange. > Good point, it's an extra unnecessary check. So to clean it up, I'll note that the "if" check is required here in put_page(), in order to stay out of non-inlined function calls in the hot path (put_page()). So I'll do the following: * Leave the above code as it is here * Simplify put_devmap_managed_page(), it was trying to do two separate things, and those two things have different requirements. So change it to a void function, with a WARN_ON_ONCE to assert that page_is_devmap_managed() is true, * And change the other caller (release_pages()) to do that check. ... >> @@ -1102,3 +1102,27 @@ void __init swap_setup(void) >> * _really_ don't want to cluster much more >> */ >> } >> + >> +#ifdef CONFIG_DEV_PAGEMAP_OPS >> +bool put_devmap_managed_page(struct page *page) >> +{ >> + bool is_devmap = page_is_devmap_managed(page); >> + >> + if (is_devmap) { > > Reversing the condition would save you an indentation level. Yes. Done. I'll also git-reply with an updated patch so you can see what it looks like. thanks, -- John Hubbard NVIDIA