Date: Mon, 24 May 2021 10:12:20 +0100
From: Mel Gorman
To: Dave Hansen
Cc: Linux-MM, Dave Hansen, Matthew Wilcox, Vlastimil Babka, Michal Hocko,
 Nicholas Piggin, LKML
Subject: Re: [PATCH 4/6] mm/page_alloc: Scale the number of pages that are batch freed
Message-ID: <20210524091220.GC30378@techsingularity.net>
References: <20210521102826.28552-1-mgorman@techsingularity.net>
 <20210521102826.28552-5-mgorman@techsingularity.net>
 <8646d3ad-345f-7ec7-fe4a-ada2680487a3@intel.com>
In-Reply-To: <8646d3ad-345f-7ec7-fe4a-ada2680487a3@intel.com>
User-Agent: Mutt/1.10.1 (2018-07-13)
On Fri, May 21, 2021 at 03:36:05PM -0700, Dave Hansen wrote:
> ...
> > +static int nr_pcp_free(struct per_cpu_pages *pcp, int high, int batch)
> > +{
> > +	int min_nr_free, max_nr_free;
> > +
> > +	/* Check for PCP disabled or boot pageset */
> > +	if (unlikely(high < batch))
> > +		return 1;
> > +
> > +	min_nr_free = batch;
> > +	max_nr_free = high - batch;
>
> I puzzled over this for a minute.  I *think* it means to say: "Leave at
> least one batch worth of pages in the pcp at all times so that the next
> allocation can still be satisfied from this pcp."
>

Yes, I added a comment.

> > +	batch <<= pcp->free_factor;
> > +	if (batch < max_nr_free)
> > +		pcp->free_factor++;
> > +	batch = clamp(batch, min_nr_free, max_nr_free);
> > +
> > +	return batch;
> > +}
> > +
> >  static void free_unref_page_commit(struct page *page, unsigned long pfn,
> >  				   int migratetype)
> >  {
> >  	struct zone *zone = page_zone(page);
> >  	struct per_cpu_pages *pcp;
> > +	int high;
> >
> >  	__count_vm_event(PGFREE);
> >  	pcp = this_cpu_ptr(zone->per_cpu_pageset);
> >  	list_add(&page->lru, &pcp->lists[migratetype]);
> >  	pcp->count++;
> > -	if (pcp->count >= READ_ONCE(pcp->high))
> > -		free_pcppages_bulk(zone, READ_ONCE(pcp->batch), pcp);
> > +	high = READ_ONCE(pcp->high);
> > +	if (pcp->count >= high) {
> > +		int batch = READ_ONCE(pcp->batch);
> > +
> > +		free_pcppages_bulk(zone, nr_pcp_free(pcp, high, batch), pcp);
> > +	}
> >  }
> >
> >  /*
> > @@ -3531,6 +3555,7 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
> >
> >  	local_lock_irqsave(&pagesets.lock, flags);
> >  	pcp = this_cpu_ptr(zone->per_cpu_pageset);
> > +	pcp->free_factor >>= 1;
> >  	list = &pcp->lists[migratetype];
> >  	page = __rmqueue_pcplist(zone, migratetype, alloc_flags, pcp, list);
> >  	local_unlock_irqrestore(&pagesets.lock, flags);
>
> A high-level description of the algorithm in the changelog would also be
> nice.  I *think* it's basically:
>
> After hitting the high pcp mark, free one pcp->batch at a time.  But, as
> subsequent pcp free operations occur, keep doubling the size of the
> freed batches.  Cap them so that they always leave at least one
> pcp->batch worth of pages.  Scale the size back down by half whenever an
> allocation that consumes a page from the pcp occurs.
>
> While I'd appreciate another comment or two, I do think this is worth
> doing, and the approach seems sound:
>
> Acked-by: Dave Hansen

Thanks, I added a few additional comments.

-- 
Mel Gorman
SUSE Labs
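
[Editor's illustration] To make the doubling/halving behaviour summarised above concrete, here is a minimal standalone C sketch that models the free-batch scaling in userspace. It is not the kernel implementation: the pcp_model struct, the CLAMP macro and the example values high = 512 and batch = 64 are assumptions chosen for demonstration; only the arithmetic mirrors the quoted nr_pcp_free() hunk.

#include <stdio.h>

/* Clamp helper, standing in for the kernel's clamp() macro. */
#define CLAMP(v, lo, hi) ((v) < (lo) ? (lo) : ((v) > (hi) ? (hi) : (v)))

/* Minimal stand-in for the free_factor state kept in struct per_cpu_pages. */
struct pcp_model {
	int free_factor;	/* doubling exponent, grows on repeated frees */
};

/* Mirrors the arithmetic of the quoted nr_pcp_free() hunk. */
static int model_nr_free(struct pcp_model *pcp, int high, int batch)
{
	int min_nr_free, max_nr_free;

	/* PCP disabled or boot pageset */
	if (high < batch)
		return 1;

	/* Always leave at least one batch worth of pages behind. */
	min_nr_free = batch;
	max_nr_free = high - batch;

	/* Double the freed batch on each successive drain, up to the cap. */
	batch <<= pcp->free_factor;
	if (batch < max_nr_free)
		pcp->free_factor++;

	return CLAMP(batch, min_nr_free, max_nr_free);
}

int main(void)
{
	struct pcp_model pcp = { .free_factor = 0 };
	int high = 512, batch = 64;	/* example values only */
	int i;

	/* Successive frees past the high mark: the batch doubles, then caps. */
	for (i = 0; i < 5; i++) {
		int nr = model_nr_free(&pcp, high, batch);

		printf("free burst %d: free %d pages\n", i, nr);
	}

	/* An allocation from the pcp halves the factor, as in rmqueue_pcplist(). */
	pcp.free_factor >>= 1;
	printf("after an allocation: free_factor = %d\n", pcp.free_factor);

	return 0;
}

With these example values the successive bursts free 64, 128, 256 and then a capped 448 pages (high - batch), and a single allocation halves free_factor again, matching the behaviour described in the review above.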