From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 5 Apr 2021 20:31:20 +0100
From: Matthew Wilcox
To: Jeff Layton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, linux-cachefs@redhat.com,
	linux-afs@lists.infradead.org
Subject: Re: [PATCH v6 00/27] Memory Folios
Message-ID: <20210405193120.GL2531743@casper.infradead.org>
References: <20210331184728.1188084-1-willy@infradead.org>
 <759cfbb63ca960b2893f2b879035c2a42c80462d.camel@kernel.org>
In-Reply-To: <759cfbb63ca960b2893f2b879035c2a42c80462d.camel@kernel.org>

On Mon, Apr 05, 2021 at 03:14:29PM -0400, Jeff Layton wrote:
> On Wed, 2021-03-31 at 19:47 +0100, Matthew Wilcox (Oracle)
> wrote:
> > Managing memory in 4KiB pages is a serious overhead.  Many benchmarks
> > exist which show the benefits of a larger "page size".  As an example,
> > an earlier iteration of this idea which used compound pages got a 7%
> > performance boost when compiling the kernel using kernbench without
> > any particular tuning.
> > 
> > Using compound pages or THPs exposes a serious weakness in our type
> > system.  Functions are often unprepared for compound pages to be
> > passed to them, and may only act on PAGE_SIZE chunks.  Even functions
> > which are aware of compound pages may expect a head page, and do the
> > wrong thing if passed a tail page.
> > 
> > There have been efforts to label function parameters as 'head' instead
> > of 'page' to indicate that the function expects a head page, but this
> > leaves us with runtime assertions instead of using the compiler to
> > prove that nobody has mistakenly passed a tail page.  Calling a struct
> > page 'head' is also inaccurate as they will work perfectly well on
> > base pages.  The term 'nottail' has not proven popular.
> > 
> > We also waste a lot of instructions ensuring that we're not looking at
> > a tail page.  Almost every call to PageFoo() contains one or more
> > hidden calls to compound_head().  This also happens for get_page(),
> > put_page() and many more functions.  There does not appear to be a way
> > to tell gcc that it can cache the result of compound_head(), nor is
> > there a way to tell it that compound_head() is idempotent.
> > 
> > This series introduces the 'struct folio' as a replacement for
> > head-or-base pages.  This initial set reduces the kernel size by
> > approximately 5kB by removing conversions from tail pages to head
> > pages.  The real purpose of this series is adding infrastructure to
> > enable further use of the folio.
> > 
> > The medium-term goal is to convert all filesystems and some device
> > drivers to work in terms of folios.
> > This series contains a lot of explicit conversions, but it's
> > important to realise it's removing a lot of implicit conversions in
> > some relatively hot paths.  There will be very few conversions from
> > folios when this work is completed; filesystems, the page cache, the
> > LRU and so on will generally only deal with folios.
> 
> I too am a little concerned about the amount of churn this is likely to
> cause, but this does seem like a fairly promising way forward for
> actually using THPs in the pagecache.  The set is fairly
> straightforward.
> 
> That said, there are few callers of these new functions in here.  Is
> this set enough to allow converting some subsystem to use folios?  It
> might be good to do that if possible, so we can get an idea of how much
> work we're in for.

It isn't enough to start converting much.  There needs to be a second
set of patches which add all the infrastructure for converting a
filesystem.  Then we can start working on the filesystems.  I have a
start at that here:

https://git.infradead.org/users/willy/pagecache.git/shortlog/refs/heads/folio

I don't know if it's exactly how I'll arrange it for submission.  It
might be better to convert all the filesystem implementations of
readpage to work on a folio, and then the big bang conversion of
->readpage to ->read_folio will look much more mechanical.

But if I can't convince people that a folio approach is what we need,
then I should stop working on it, and go back to fixing the endless
stream of bugs that the thp-based approach surfaces.