Date: Sun, 23 Jan 2022 14:28:30 -0700
From: Yu Zhao <yuzhao@google.com>
To: Michal Hocko
Cc: Andrew Morton, Linus Torvalds, Andi Kleen, Catalin Marinas, Dave Hansen,
	Hillf Danton, Jens Axboe, Jesse Barnes, Johannes Weiner, Jonathan Corbet,
	Matthew Wilcox, Mel Gorman, Michael Larabel, Rik van Riel, Vlastimil Babka,
	Will Deacon, Ying Huang, Konstantin Kharlamov,
	linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	page-reclaim@google.com, x86@kernel.org
Subject: Re: [PATCH v6 6/9] mm: multigenerational lru: aging
References: <20220104202227.2903605-1-yuzhao@google.com> <20220104202227.2903605-7-yuzhao@google.com>

On Wed, Jan 19, 2022 at 10:42:47AM +0100, Michal Hocko wrote:
> On Wed 19-01-22 00:04:10, Yu Zhao wrote:
> > On Mon, Jan 10, 2022 at 11:54:42AM +0100, Michal Hocko wrote:
> > > On Sun 09-01-22 21:47:57, Yu Zhao wrote:
> > > > On Fri, Jan 07, 2022 at 03:44:50PM +0100, Michal Hocko wrote:
> > > > > On Tue 04-01-22 13:22:25, Yu Zhao wrote:
> > > > > [...]
> > > > > > +static void walk_mm(struct lruvec *lruvec, struct mm_struct *mm, struct lru_gen_mm_walk *walk)
> > > > > > +{
> > > > > > +	static const struct mm_walk_ops mm_walk_ops = {
> > > > > > +		.test_walk = should_skip_vma,
> > > > > > +		.p4d_entry = walk_pud_range,
> > > > > > +	};
> > > > > > +
> > > > > > +	int err;
> > > > > > +#ifdef CONFIG_MEMCG
> > > > > > +	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
> > > > > > +#endif
> > > > > > +
> > > > > > +	walk->next_addr = FIRST_USER_ADDRESS;
> > > > > > +
> > > > > > +	do {
> > > > > > +		unsigned long start = walk->next_addr;
> > > > > > +		unsigned long end = mm->highest_vm_end;
> > > > > > +
> > > > > > +		err = -EBUSY;
> > > > > > +
> > > > > > +		rcu_read_lock();
> > > > > > +#ifdef CONFIG_MEMCG
> > > > > > +		if (memcg && atomic_read(&memcg->moving_account))
> > > > > > +			goto contended;
> > > > > > +#endif
> > > > > > +		if (!mmap_read_trylock(mm))
> > > > > > +			goto contended;
> > > > > 
> > > > > Have you evaluated the behavior under mmap_sem contention? I mean what
> > > > > would be an effect of some mms being excluded from the walk? This path
> > > > > is called from direct reclaim and we do allocate with exclusive mmap_sem
> > > > > IIRC and the trylock can fail in a presence of pending writer if I am
> > > > > not mistaken so even the read lock holder (e.g. an allocation from the #PF)
> > > > > can bypass the walk.
> > > > 
> > > > You are right. Here it must be a trylock; otherwise it can deadlock.
> > > 
> > > Yeah, this is clear.
> > > 
> > > > I think there might be a misunderstanding: the aging doesn't
> > > > exclusively rely on page table walks to gather the accessed bit. It
> > > > prefers page table walks but it can also fallback to the rmap-based
> > > > function, i.e., lru_gen_look_around(), which only gathers the accessed
> > > > bit from at most 64 PTEs and therefore is less efficient. But it still
> > > > retains about 80% of the performance gains.
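(To make the fallback concrete: the bounded look-around can be modeled in
userspace C as below. The 64-entry window matches the "at most 64 PTEs"
mentioned above; everything else -- the names, the signature, the bool
array standing in for PTEs -- is illustrative, not the kernel code.)

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative model of a bounded "look-around": instead of walking the
 * whole address space, inspect at most 64 entries centered on the index
 * that just faulted, clamped to the table bounds. */
enum { LOOK_AROUND_MAX = 64 };

size_t look_around(const bool *accessed, size_t nr, size_t fault_idx,
		   size_t *hits, size_t max_hits)
{
	size_t half = LOOK_AROUND_MAX / 2;
	size_t start = fault_idx > half ? fault_idx - half : 0;
	size_t end = start + LOOK_AROUND_MAX;
	size_t found = 0;

	if (end > nr)
		end = nr;
	/* collect the indices whose accessed bit is set inside the window */
	for (size_t i = start; i < end && found < max_hits; i++)
		if (accessed[i])
			hits[found++] = i;
	return found;
}
```

The point of the bound is the one Yu states: each invocation does a small,
fixed amount of work, so the cost is spread across faults instead of paid
in one long scan.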
> > > I have to say that I really have a hard time understanding the runtime
> > > behavior depending on that interaction. How does the reclaim behave when
> > > the virtual scan is enabled, partially enabled and almost completely
> > > disabled due to different constraints? I do not see any such an
> > > evaluation described in changelogs and I consider this to be rather
> > > important information to judge the overall behavior.
> > 
> > It doesn't have (partially) enabled/disabled states nor does its
> > behavior change with different reclaim constraints. Having either
> > would make its design too complex to implement or benchmark.
> 
> Let me clarify. By "partially enabled" I really meant behavior depending
> on runtime conditions. Say mmap_sem cannot be locked for half of scanned
> tasks and/or allocation for the mm walker fails due to lack of memory.
> How is this going to affect reclaim efficiency?

Understood. This is not only possible -- it's the default for our ARM
hardware that doesn't support the accessed bit, i.e., CPUs that don't
automatically set the accessed bit.

In try_to_inc_max_seq(), we have:

	/*
	 * If the hardware doesn't automatically set the accessed bit, fallback
	 * to lru_gen_look_around(), which only clears the accessed bit in a
	 * handful of PTEs. Spreading the work out over a period of time usually
	 * is less efficient, but it avoids bursty page faults.
	 */
	if the accessed bit is not supported
		return

	if alloc_mm_walk() fails
		return

	walk_mm()
		if mmap_sem contended
			return

	scan page tables

We have a microbenchmark that specifically measures this worst case
scenario by entirely disabling page table scanning. Its results showed
that this still retains more than 90% of the optimal performance. I'll
share this microbenchmark in another email when answering Barry's
questions regarding the accessed bit.

Our profiling infra also indirectly confirms this: it collects data
from real users running on hardware with and without the accessed bit.
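(The bail-out order in that pseudocode can be restated as a pure decision
function; every branch that returns early leaves only the rmap-based
look-around doing the work. The names below are hypothetical, not kernel
APIs -- this is just the logic of the pseudocode in compilable form.)

```c
#include <stdbool.h>

/* Which mechanism ends up gathering the accessed bit for this mm. */
enum aging_path { PATH_LOOK_AROUND, PATH_PAGE_TABLE_SCAN };

enum aging_path choose_aging_path(bool hw_sets_accessed_bit,
				  bool walk_alloc_ok,
				  bool mmap_lock_contended)
{
	if (!hw_sets_accessed_bit)
		return PATH_LOOK_AROUND;	/* no hardware accessed bit */
	if (!walk_alloc_ok)
		return PATH_LOOK_AROUND;	/* mm walker allocation failed */
	if (mmap_lock_contended)
		return PATH_LOOK_AROUND;	/* trylock failed: skip this mm */
	return PATH_PAGE_TABLE_SCAN;		/* walk_mm() scans page tables */
}
```

Note that contention or allocation failure never disables reclaim; it only
degrades this mm to the less efficient path, which is exactly the worst
case the microbenchmark measures.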
Users running on hardware without the accessed bit indeed suffer a
small performance degradation, compared with users running on hardware
with it. But they still benefit almost as much, compared with users
running on the same hardware but without MGLRU.

> How does a user/admin
> know that the memory reclaim is in a "degraded" mode because of the
> contention?

As we previously discussed here:
https://lore.kernel.org/linux-mm/Ydu6fXg2FmrseQOn@google.com/
there used to be a counter measuring the contention, and it was deemed
unnecessary and removed in v4. But I don't have a problem if we want to
revive it.
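(For illustration, such a counter would amount to little more than a
relaxed atomic bumped on each failed trylock; an admin-visible number
then records how often reclaim ran in the degraded fallback mode. All
names here are hypothetical -- this is a userspace sketch of the idea,
not the counter that was removed in v4.)

```c
#include <stdatomic.h>
#include <stdbool.h>

/* How often the walker had to skip an mm because the lock was contended. */
static atomic_ulong mm_walk_contended;

bool try_walk_mm(bool lock_available)
{
	if (!lock_available) {
		/* trylock failed: count it and let the caller fall back */
		atomic_fetch_add_explicit(&mm_walk_contended, 1,
					  memory_order_relaxed);
		return false;
	}
	/* ... walk the page tables, then drop the lock ... */
	return true;
}

unsigned long contended_count(void)
{
	return atomic_load_explicit(&mm_walk_contended,
				    memory_order_relaxed);
}
```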