Date: Tue, 8 Feb 2022 01:39:27 -0700
From: Yu Zhao
To: Andrew Morton, Johannes Weiner, Mel Gorman, Michal Hocko
Cc: Andi Kleen, Aneesh Kumar, Barry Song <21cnbao@gmail.com>, Catalin Marinas,
 Dave Hansen, Hillf Danton, Jens Axboe, Jesse Barnes, Jonathan Corbet,
 Linus Torvalds, Matthew Wilcox, Michael Larabel, Mike Rapoport,
 Rik van Riel, Vlastimil Babka, Will Deacon, Ying Huang,
 linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, page-reclaim@google.com,
 x86@kernel.org, Brian Geffon, Jan Alexander Steffens, Oleksandr Natalenko,
 Steven Barrett, Suleiman Souhlal, Daniel Byrne, Donald Carr,
 Holger Hoffstätte, Konstantin Kharlamov, Shuang Zhai, Sofia Trinh
Subject: Re: [PATCH v7 07/12] mm: multigenerational LRU: support page table walks
References: <20220208081902.3550911-1-yuzhao@google.com> <20220208081902.3550911-8-yuzhao@google.com>
In-Reply-To: <20220208081902.3550911-8-yuzhao@google.com>

On Tue, Feb 08, 2022 at 01:18:57AM -0700, Yu Zhao wrote:
> To avoid confusion, the term "iteration" specifically means the
> traversal of an entire mm_struct list; the term "walk" applies to
> page tables and the rmap, as usual.
>
> To further exploit spatial locality, the aging prefers to walk page
> tables to search for young PTEs and promote hot pages. A runtime
> switch will be added in the next patch to enable or disable this
> feature. Without it, the aging relies on the rmap only.
Clarified that page table scanning is optional, as requested here:
https://lore.kernel.org/linux-mm/YdxEqFPLDf+wI0xX@dhcp22.suse.cz/

> NB: this feature bears no resemblance to the page table scanning in
> the 2.4 kernel [1], which searches page tables for old PTEs, adds
> cold pages to the swapcache and unmaps them.
>
> An mm_struct list is maintained for each memcg, and an mm_struct
> follows its owner task to the new memcg when this task is migrated.
> Given an lruvec, the aging iterates lruvec_memcg()->mm_list and calls
> walk_page_range() with each mm_struct on this list to promote hot
> pages before it increments max_seq.
>
> When multiple page table walkers (threads) iterate the same list,
> each of them gets a unique mm_struct; therefore they can run
> concurrently. Page table walkers ignore any misplaced pages, e.g., if
> an mm_struct was migrated, pages it left in the previous memcg won't
> be promoted when its current memcg is under reclaim. Similarly, page
> table walkers won't promote pages from nodes other than the one under
> reclaim.
Clarified the interaction between task migration and reclaim, as requested here:
https://lore.kernel.org/linux-mm/YdxPEdsfl771Z7IX@dhcp22.suse.cz/

> Server benchmark results:
>   Single workload:
>     fio (buffered I/O): no change
>
>   Single workload:
>     memcached (anon): +[5.5, 7.5]%
>                 Ops/sec     KB/sec
>     patch1-6:   1015292.83  39490.38
>     patch1-7:   1080856.82  42040.53
>
>   Configurations:
>     no change
>
> Client benchmark results:
>   kswapd profiles:
>     patch1-6
>       45.49% lzo1x_1_do_compress (real work)
>        7.38% page_vma_mapped_walk
>        7.24% _raw_spin_unlock_irq
>        2.64% ptep_clear_flush
>        2.31% __zram_bvec_write
>        2.13% do_raw_spin_lock
>        2.09% lru_gen_look_around
>        1.89% free_unref_page_list
>        1.85% memmove
>        1.74% obj_malloc
>
>     patch1-7
>       47.73% lzo1x_1_do_compress (real work)
>        6.84% page_vma_mapped_walk
>        6.14% _raw_spin_unlock_irq
>        2.86% walk_pte_range
>        2.79% ptep_clear_flush
>        2.24% __zram_bvec_write
>        2.10% do_raw_spin_lock
>        1.94% free_unref_page_list
>        1.80% memmove
>        1.75% obj_malloc
>
>   Configurations:
>     no change

Added benchmark results to show the difference between page table scanning and
no page table scanning, as requested here:
https://lore.kernel.org/linux-mm/Ye6xS6xUD1SORdHJ@dhcp22.suse.cz/

> +static void walk_mm(struct lruvec *lruvec, struct mm_struct *mm, struct lru_gen_mm_walk *walk)
> +{
> +	static const struct mm_walk_ops mm_walk_ops = {
> +		.test_walk = should_skip_vma,
> +		.p4d_entry = walk_pud_range,
> +	};
> +
> +	int err;
> +	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
> +
> +	walk->next_addr = FIRST_USER_ADDRESS;
> +
> +	do {
> +		err = -EBUSY;
> +
> +		/* folio_update_gen() requires stable folio_memcg() */
> +		if (!mem_cgroup_trylock_pages(memcg))
> +			break;

Added a comment on the stable folio_memcg() requirement as requested here:
https://lore.kernel.org/linux-mm/Yd6q0QdLVTS53vu4@dhcp22.suse.cz/

> +static struct lru_gen_mm_walk *alloc_mm_walk(void)
> +{
> +	if (current->reclaim_state && current->reclaim_state->mm_walk)
> +		return current->reclaim_state->mm_walk;
> +
> +	return kzalloc(sizeof(struct lru_gen_mm_walk),
> +		       __GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN);
> +}

Replaced kvzalloc() with kzalloc(), as requested here:
https://lore.kernel.org/linux-mm/Yd6tafG3CS7BoRYn@dhcp22.suse.cz/

Replaced GFP_KERNEL with __GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN, as
requested here:
https://lore.kernel.org/linux-mm/YefddYm8FAfJalNa@dhcp22.suse.cz/