Date: Tue, 8 Feb 2022 01:39:27 -0700
From: Yu Zhao <yuzhao@google.com>
To: Andrew Morton, Johannes Weiner, Mel Gorman, Michal Hocko
Cc: Andi Kleen, Aneesh Kumar, Barry Song <21cnbao@gmail.com>,
	Catalin Marinas, Dave Hansen, Hillf Danton, Jens Axboe,
	Jesse Barnes, Jonathan Corbet, Linus Torvalds, Matthew Wilcox,
	Michael Larabel, Mike Rapoport, Rik van Riel, Vlastimil Babka,
	Will Deacon, Ying Huang,
	linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	page-reclaim@google.com, x86@kernel.org, Brian Geffon,
	Jan Alexander Steffens, Oleksandr Natalenko, Steven Barrett,
	Suleiman Souhlal, Daniel Byrne, Donald Carr, Holger Hoffstätte,
	Konstantin Kharlamov, Shuang Zhai, Sofia Trinh
Subject: Re: [PATCH v7 07/12] mm: multigenerational LRU: support page table walks
References: <20220208081902.3550911-1-yuzhao@google.com>
	<20220208081902.3550911-8-yuzhao@google.com>
In-Reply-To: <20220208081902.3550911-8-yuzhao@google.com>

On Tue, Feb 08, 2022 at 01:18:57AM -0700, Yu Zhao wrote:
> To avoid confusion, the term "iteration" specifically means the
> traversal of an entire mm_struct list; the term "walk" will be applied
> to page tables and the rmap, as usual.
>
> To further exploit spatial locality, the aging prefers to walk page
> tables to search for young PTEs and promote hot pages. A runtime
> switch will be added in the next patch to enable or disable this
> feature. Without it, the aging relies on the rmap only.

Clarified that page table scanning is optional, as requested here:
https://lore.kernel.org/linux-mm/YdxEqFPLDf+wI0xX@dhcp22.suse.cz/

> NB: this feature has nothing in common with the page table scanning
> in the 2.4 kernel [1], which searches page tables for old PTEs, adds
> cold pages to the swapcache and unmaps them.
>
> An mm_struct list is maintained for each memcg, and an mm_struct
> follows its owner task to the new memcg when this task is migrated.
> Given an lruvec, the aging iterates lruvec_memcg()->mm_list and calls
> walk_page_range() with each mm_struct on this list to promote hot
> pages before it increments max_seq.
>
> When multiple page table walkers (threads) iterate the same list, each
> of them gets a unique mm_struct; therefore they can run concurrently.
> Page table walkers ignore any misplaced pages, e.g., if an mm_struct
> was migrated, pages it left in the previous memcg won't be promoted
> when its current memcg is under reclaim. Similarly, page table walkers
> won't promote pages from nodes other than the one under reclaim.

Clarified the interaction between task migration and reclaim, as
requested here:
https://lore.kernel.org/linux-mm/YdxPEdsfl771Z7IX@dhcp22.suse.cz/
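To make the concurrency claim above a bit more concrete: here is a
minimal userspace sketch, not the kernel code, of walkers sharing one
list and each claiming a distinct entry under a lock. The names
mm_list, claim_next() and walker() are made up, and a plain pthread
mutex stands in for the kernel's locking.

  /* Build with: cc -o walkers walkers.c -lpthread */
  #include <pthread.h>
  #include <stdio.h>

  #define NR_MM 8

  static int mm_list[NR_MM] = { 0, 1, 2, 3, 4, 5, 6, 7 };
  static int next_mm;                   /* shared cursor into mm_list */
  static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

  /* Hand out the next unclaimed entry, or -1 when the iteration is done. */
  static int claim_next(void)
  {
          int mm = -1;

          pthread_mutex_lock(&list_lock);
          if (next_mm < NR_MM)
                  mm = mm_list[next_mm++];
          pthread_mutex_unlock(&list_lock);

          return mm;
  }

  static void *walker(void *arg)
  {
          int mm;

          /* Each walker "walks" only the entries it claimed itself. */
          while ((mm = claim_next()) >= 0)
                  printf("walker %ld walks mm %d\n", (long)arg, mm);

          return NULL;
  }

  int main(void)
  {
          pthread_t t[2];
          long i;

          for (i = 0; i < 2; i++)
                  pthread_create(&t[i], NULL, walker, (void *)i);
          for (i = 0; i < 2; i++)
                  pthread_join(t[i], NULL);

          return 0;
  }

The point is only that the lock covers handing out list entries, not
the walks themselves, so the actual page table walks can proceed in
parallel.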
> Server benchmark results:
>   Single workload:
>     fio (buffered I/O): no change
>
>   Single workload:
>     memcached (anon): +[5.5, 7.5]%
>                 Ops/sec    KB/sec
>       patch1-6: 1015292.83 39490.38
>       patch1-7: 1080856.82 42040.53
>
>   Configurations:
>     no change
>
> Client benchmark results:
>   kswapd profiles:
>     patch1-6
>       45.49%  lzo1x_1_do_compress (real work)
>        7.38%  page_vma_mapped_walk
>        7.24%  _raw_spin_unlock_irq
>        2.64%  ptep_clear_flush
>        2.31%  __zram_bvec_write
>        2.13%  do_raw_spin_lock
>        2.09%  lru_gen_look_around
>        1.89%  free_unref_page_list
>        1.85%  memmove
>        1.74%  obj_malloc
>
>     patch1-7
>       47.73%  lzo1x_1_do_compress (real work)
>        6.84%  page_vma_mapped_walk
>        6.14%  _raw_spin_unlock_irq
>        2.86%  walk_pte_range
>        2.79%  ptep_clear_flush
>        2.24%  __zram_bvec_write
>        2.10%  do_raw_spin_lock
>        1.94%  free_unref_page_list
>        1.80%  memmove
>        1.75%  obj_malloc
>
>   Configurations:
>     no change

Added benchmark results to show the difference between page table
scanning and no page table scanning, as requested here:
https://lore.kernel.org/linux-mm/Ye6xS6xUD1SORdHJ@dhcp22.suse.cz/

> +static void walk_mm(struct lruvec *lruvec, struct mm_struct *mm, struct lru_gen_mm_walk *walk)
> +{
> +	static const struct mm_walk_ops mm_walk_ops = {
> +		.test_walk = should_skip_vma,
> +		.p4d_entry = walk_pud_range,
> +	};
> +
> +	int err;
> +	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
> +
> +	walk->next_addr = FIRST_USER_ADDRESS;
> +
> +	do {
> +		err = -EBUSY;
> +
> +		/* folio_update_gen() requires stable folio_memcg() */
> +		if (!mem_cgroup_trylock_pages(memcg))
> +			break;

Added a comment on the stable folio_memcg() requirement, as requested
here:
https://lore.kernel.org/linux-mm/Yd6q0QdLVTS53vu4@dhcp22.suse.cz/

> +static struct lru_gen_mm_walk *alloc_mm_walk(void)
> +{
> +	if (current->reclaim_state && current->reclaim_state->mm_walk)
> +		return current->reclaim_state->mm_walk;
> +
> +	return kzalloc(sizeof(struct lru_gen_mm_walk),
> +		       __GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN);
> +}

Replaced kvzalloc() with kzalloc(), as requested here:
https://lore.kernel.org/linux-mm/Yd6tafG3CS7BoRYn@dhcp22.suse.cz/

Replaced GFP_KERNEL with __GFP_HIGH|__GFP_NOMEMALLOC|__GFP_NOWARN, as
requested here:
https://lore.kernel.org/linux-mm/YefddYm8FAfJalNa@dhcp22.suse.cz/
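For what it's worth, the allocation policy in alloc_mm_walk() (reuse a
preallocated per-context buffer when one exists, otherwise try a
best-effort allocation and let the caller fall back to the rmap-only
path) can be sketched in userspace roughly as below. This is only an
analogue under assumed names: walk_buf, get_walk_buf() and do_walk()
are hypothetical, and calloc() stands in for kzalloc() with the GFP
flags above.

  #include <stdio.h>
  #include <stdlib.h>

  struct walk_buf {
          char scratch[256];
  };

  /* Stands in for current->reclaim_state->mm_walk (kswapd's preallocated buffer). */
  static struct walk_buf *preallocated;

  static struct walk_buf *get_walk_buf(void)
  {
          if (preallocated)
                  return preallocated;

          /* Best effort, like __GFP_NOWARN: failure is tolerated, not reported. */
          return calloc(1, sizeof(struct walk_buf));
  }

  static void do_walk(void)
  {
          struct walk_buf *walk = get_walk_buf();

          if (!walk) {
                  puts("no buffer: fall back to the rmap-only path");
                  return;
          }

          printf("walking with %s buffer\n",
                 walk == preallocated ? "the preallocated" : "a freshly allocated");

          if (walk != preallocated)
                  free(walk);
  }

  int main(void)
  {
          static struct walk_buf kswapd_buf;

          do_walk();                      /* direct reclaim: allocates on demand */

          preallocated = &kswapd_buf;     /* kswapd: buffer set up once */
          do_walk();

          return 0;
  }

The real flags ask for a high-priority allocation that neither dips
into emergency reserves nor warns on failure; a failed allocation is
simply tolerated and the walk degrades gracefully.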