Date: Tue, 8 Feb 2022 01:33:49 -0700
From: Yu Zhao
To: Andrew Morton, Johannes Weiner, Mel Gorman, Michal Hocko
Cc: Andi Kleen, Aneesh Kumar, Barry Song <21cnbao@gmail.com>,
    Catalin Marinas, Dave Hansen, Hillf Danton, Jens Axboe, Jesse Barnes,
    Jonathan Corbet, Linus Torvalds, Matthew Wilcox, Michael Larabel,
    Mike Rapoport, Rik van Riel, Vlastimil Babka, Will Deacon, Ying Huang,
    linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    page-reclaim@google.com, x86@kernel.org, Brian Geffon,
    Jan Alexander Steffens, Oleksandr Natalenko, Steven Barrett,
    Suleiman Souhlal, Daniel Byrne, Donald Carr, Holger Hoffstätte,
    Konstantin Kharlamov, Shuang Zhai, Sofia Trinh
Subject: Re: [PATCH v7 05/12] mm: multigenerational LRU: minimal implementation
References: <20220208081902.3550911-1-yuzhao@google.com>
            <20220208081902.3550911-6-yuzhao@google.com>
In-Reply-To: <20220208081902.3550911-6-yuzhao@google.com>

On Tue, Feb 08, 2022 at 01:18:55AM -0700, Yu Zhao wrote:
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 3326ee3903f3..e899623d5df0 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -892,6 +892,50 @@ config ANON_VMA_NAME
>  	  area from being merged with adjacent virtual memory areas due to the
>  	  difference in their name.
> 
> +# multigenerational LRU {
> +config LRU_GEN
> +	bool "Multigenerational LRU"
> +	depends on MMU
> +	# the following options can use up the spare bits in page flags
> +	depends on !MAXSMP && (64BIT || !SPARSEMEM || SPARSEMEM_VMEMMAP)
> +	help
> +	  A high performance LRU implementation for memory overcommit. See
> +	  Documentation/admin-guide/mm/multigen_lru.rst and
> +	  Documentation/vm/multigen_lru.rst for details.
> +
> +config NR_LRU_GENS
> +	int "Max number of generations"
> +	depends on LRU_GEN
> +	range 4 31
> +	default 4
> +	help
> +	  Do not increase this value unless you plan to use working set
> +	  estimation and proactive reclaim to optimize job scheduling in data
> +	  centers.
> +
> +	  This option uses order_base_2(N+1) bits in page flags.
> +
> +config TIERS_PER_GEN
> +	int "Number of tiers per generation"
> +	depends on LRU_GEN
> +	range 2 4
> +	default 4
> +	help
> +	  Do not decrease this value unless you run out of spare bits in page
> +	  flags, i.e., you see the "Not enough bits in page flags" build error.
> +
> +	  This option uses N-2 bits in page flags.
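To make the bit accounting in the help text concrete: with the defaults
(NR_LRU_GENS=4, TIERS_PER_GEN=4), the generation counter takes
order_base_2(4+1) = 3 bits of spare page flags and the tiers take
4-2 = 2 bits. Below is a quick stand-alone sketch of that arithmetic
(userspace illustration only, not part of the patch; order_base_2() here
is a stand-in for the kernel macro of the same name):

#include <stdio.h>

/* userspace stand-in for the kernel's order_base_2(): ceil(log2(n)) */
static unsigned int order_base_2(unsigned int n)
{
	unsigned int bits = 0;

	while ((1u << bits) < n)
		bits++;
	return bits;
}

int main(void)
{
	unsigned int nr_lru_gens = 4;	/* CONFIG_NR_LRU_GENS default */
	unsigned int tiers_per_gen = 4;	/* CONFIG_TIERS_PER_GEN default */

	/* "This option uses order_base_2(N+1) bits in page flags." */
	printf("generation bits: %u\n", order_base_2(nr_lru_gens + 1));	/* 3 */
	/* "This option uses N-2 bits in page flags." */
	printf("tier bits:       %u\n", tiers_per_gen - 2);		/* 2 */
	return 0;
}

Raising NR_LRU_GENS to its maximum of 31 would grow the counter to
order_base_2(32) = 5 bits, which is why the help text warns against
increasing it without a reason to.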
Moved Kconfig to this patch as suggested by:
https://lore.kernel.org/linux-mm/Yd6uHYtjGfgqjDpw@dhcp22.suse.cz/

Added two new macros as requested here:
https://lore.kernel.org/linux-mm/87czkyzhfe.fsf@linux.ibm.com/

> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index d75a5738d1dc..5f0d92838712 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1285,9 +1285,11 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
> 
>  	if (PageSwapCache(page)) {
>  		swp_entry_t swap = { .val = page_private(page) };
> -		mem_cgroup_swapout(page, swap);
> +
> +		/* get a shadow entry before mem_cgroup_swapout() clears folio_memcg() */
>  		if (reclaimed && !mapping_exiting(mapping))
>  			shadow = workingset_eviction(page, target_memcg);
> +		mem_cgroup_swapout(page, swap);
>  		__delete_from_swap_cache(page, swap, shadow);
>  		xa_unlock_irq(&mapping->i_pages);
>  		put_swap_page(page, swap);
> @@ -2721,6 +2723,9 @@ static void prepare_scan_count(pg_data_t *pgdat, struct scan_control *sc)
>  	unsigned long file;
>  	struct lruvec *target_lruvec;
> 
> +	if (lru_gen_enabled())
> +		return;
> +
>  	target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
> 
>  	/*
> @@ -3042,15 +3047,47 @@ static bool can_age_anon_pages(struct pglist_data *pgdat,
> 
>  #ifdef CONFIG_LRU_GEN
> 
> +enum {
> +	TYPE_ANON,
> +	TYPE_FILE,
> +};

Added two new macros as requested here:
https://lore.kernel.org/linux-mm/87czkyzhfe.fsf@linux.ibm.com/

> +static void age_lruvec(struct lruvec *lruvec, struct scan_control *sc)
> +{
> +	bool need_aging;
> +	long nr_to_scan;
> +	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
> +	int swappiness = get_swappiness(memcg);
> +	DEFINE_MAX_SEQ(lruvec);
> +	DEFINE_MIN_SEQ(lruvec);
> +
> +	mem_cgroup_calculate_protection(NULL, memcg);
> +
> +	if (mem_cgroup_below_min(memcg))
> +		return;

Added mem_cgroup_calculate_protection() for readability as requested here:
https://lore.kernel.org/linux-mm/Ydf9RXPch5ddg%2FWC@dhcp22.suse.cz/
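For context on why the calculation comes before the check:
mem_cgroup_below_min() tests the effective protection that
mem_cgroup_calculate_protection() computes for the current reclaim pass,
so calling it first keeps the below-min test from reading a stale value.
A rough userspace sketch of that ordering (all names are simplified
stand-ins, not the kernel implementation):

#include <stdbool.h>
#include <stdio.h>

struct memcg {
	unsigned long usage;
	unsigned long min;	/* configured memory.min */
	unsigned long emin;	/* effective protection, derived */
};

/* simplified stand-in for mem_cgroup_calculate_protection() */
static void calculate_protection(struct memcg *memcg)
{
	/* the real code propagates protection down the cgroup hierarchy */
	memcg->emin = memcg->min;
}

/* simplified stand-in for mem_cgroup_below_min() */
static bool below_min(const struct memcg *memcg)
{
	return memcg->usage <= memcg->emin;
}

/* simplified stand-in for the entry checks in age_lruvec() */
static void age_lruvec(struct memcg *memcg)
{
	calculate_protection(memcg);	/* must run first, or emin is stale */

	if (below_min(memcg))
		return;			/* fully protected: skip aging */

	printf("aging lruvec of memcg with usage %lu\n", memcg->usage);
}

int main(void)
{
	struct memcg m = { .usage = 100, .min = 200, .emin = 0 };

	age_lruvec(&m);		/* skipped: usage is under memory.min */
	m.min = 50;
	age_lruvec(&m);		/* proceeds: usage exceeds protection */
	return 0;
}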