From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 30 May 2022 01:55:14 +0100
From: Matthew Wilcox
To: Muchun Song
Cc: bh1scw@gmail.com, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm/slub: replace alloc_pages with folio_alloc
References: <20220528161157.3934825-1-bh1scw@gmail.com>

On Sun, May 29, 2022 at 04:31:07AM +0100, Matthew Wilcox wrote:
> On Sun, May 29, 2022 at 10:58:18AM +0800, Muchun Song wrote:
> > On Sat, May 28, 2022 at 05:27:11PM +0100, Matthew Wilcox wrote:
> > > On Sun, May 29, 2022 at 12:11:58AM +0800, bh1scw@gmail.com wrote:
> > > > From: Fanjun Kong
> > > >
> > > > This patch will use folio allocation functions for allocating pages.
> > >
> > > That's not actually a good idea. folio_alloc() will do the
> > > prep_transhuge_page() step, which isn't needed for slab.
> >
> > You mean folio_alloc() is dedicated to THP allocation? That is a
> > little surprising to me. I thought folio_alloc() was just a variant
> > of alloc_page() that returns a folio struct instead of a page. Seems
> > like I was wrong. May I ask what made us decide to do this?
>
> Yeah, the naming isn't great here. The problem didn't really occur
> to me until I saw this patch, and I don't have a good solution yet.

OK, I have an idea. None of the sl*b allocators use the page refcount,
so the atomic operations on it are just a waste of time. If we add an
alloc_unref_page() to match our free_unref_page(), that'll be enough of
a difference to stop people from sending "helpful" patches. Also, it'll
be a (small?) performance improvement for slab.
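
To make the idea concrete, here is a minimal sketch. alloc_unref_page()
does not exist today, so its name and signature below are assumed by
analogy with the internal free_unref_page() helper in mm/page_alloc.c,
and the two slab-side functions are illustrative rather than the real
mm/slub.c entry points:

	/*
	 * Hypothetical sketch only.  The sl*b allocators never use the
	 * page refcount, so an allocation path that skips
	 * set_page_refcounted() and a free path that goes straight to
	 * free_unref_page() (no put_page_testzero(), hence no atomic
	 * decrement) avoid needless refcount work on every slab.
	 */
	struct page *alloc_unref_page(gfp_t gfp, unsigned int order);

	/* How SLUB's slab-page allocation might then look: */
	static struct page *alloc_slab_page(gfp_t flags, unsigned int order)
	{
		/* The returned page's refcount is left untouched. */
		return alloc_unref_page(flags | __GFP_COMP, order);
	}

	static void free_slab_page(struct page *page, unsigned int order)
	{
		/* Matching free: no refcount decrement on the way out. */
		free_unref_page(page, order);
	}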