Date: Thu, 6 Jan 2022 14:27:52 -0700
From: Yu Zhao
To: Michal Hocko
Cc: Andrew Morton, Linus Torvalds, Andi Kleen, Catalin Marinas,
	Dave Hansen, Hillf Danton, Jens Axboe, Jesse Barnes,
	Johannes Weiner, Jonathan Corbet, Matthew Wilcox, Mel Gorman,
	Michael Larabel, Rik van Riel, Vlastimil Babka, Will Deacon,
	Ying Huang, linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	page-reclaim@google.com, x86@kernel.org, Konstantin Kharlamov
Subject: Re: [PATCH v6 6/9] mm: multigenerational lru: aging
References: <20220104202227.2903605-1-yuzhao@google.com>
 <20220104202227.2903605-7-yuzhao@google.com>

On Thu, Jan 06, 2022 at 05:06:42PM +0100, Michal Hocko wrote:
> I am still reading through the series. It is a lot of code and quite
> hard to wrap ones head around so these are mostly random things I have
> run into. More will likely follow up.
>
> On Tue 04-01-22 13:22:25, Yu Zhao wrote:
> [...]
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index aba18cd101db..028afdb81c10 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -1393,18 +1393,24 @@ mem_cgroup_print_oom_meminfo(struct mem_cgroup *memcg)
> >
> >  static inline void lock_page_memcg(struct page *page)
> >  {
> > +	/* to match folio_memcg_rcu() */
> > +	rcu_read_lock();
> >  }
> >
> >  static inline void unlock_page_memcg(struct page *page)
> >  {
> > +	rcu_read_unlock();
> >  }
> >
> >  static inline void folio_memcg_lock(struct folio *folio)
> >  {
> > +	/* to match folio_memcg_rcu() */
> > +	rcu_read_lock();
> >  }
> >
> >  static inline void folio_memcg_unlock(struct folio *folio)
> >  {
> > +	rcu_read_unlock();
> >  }
>
> This should go into a separate patch and merge it independently. I
> haven't really realized that !MEMCG configuration has a different
> locking scopes.

Consider it done.

> > diff --git a/include/linux/oom.h b/include/linux/oom.h
> > index 2db9a1432511..9c7a4fae0661 100644
> > --- a/include/linux/oom.h
> > +++ b/include/linux/oom.h
> > @@ -57,6 +57,22 @@ struct oom_control {
> >  extern struct mutex oom_lock;
> >  extern struct mutex oom_adj_mutex;
> >
> > +#ifdef CONFIG_MMU
> > +extern struct task_struct *oom_reaper_list;
> > +extern struct wait_queue_head oom_reaper_wait;
> > +
> > +static inline bool oom_reaping_in_progress(void)
> > +{
> > +	/* a racy check can be used to reduce the chance of overkilling */
> > +	return READ_ONCE(oom_reaper_list) || !waitqueue_active(&oom_reaper_wait);
> > +}
> > +#else
> > +static inline bool oom_reaping_in_progress(void)
> > +{
> > +	return false;
> > +}
> > +#endif
>
> I do not like this. These are internal oom reaper's and no code should
> really make any decisions based on that. oom_reaping_in_progress is not
> telling much anyway.

There is a perfectly legitimate reason for this.
If there is already an oom kill victim and the oom reaper is making
progress, the system may still be under memory pressure until the oom
reaping is done. The page reclaim has two choices in this transient
state: kill more processes or keep reclaiming (a few more) hot pages.

The first choice, AKA overkilling, is generally a bad one. The oom
reaper is single threaded and it can't go faster with additional
victims. Additional processes are sacrificed for nothing -- this is an
overcorrection by a system that tries to strike a balance between the
tendencies to relieve memory pressure and to improve memory
utilization.

> This is a global queue for oom reaper that can
> contain oom victims from different oom scopes (e.g. global OOM, memcg
> OOM or memory policy OOM).

True, but this is the wrong reason to draw the conclusion below. Oom
kill scopes do NOT matter; only the pool the freed memory goes into
does. And there is only one global pool of free pages.

> Your lru_gen_age_node uses this to decide whether to trigger
> out_of_memory and that is clearly wrong for the above reasons.

I hope my explanation above is clear enough. There is nothing wrong
with the purpose or the usage of oom_reaping_in_progress(), and it has
been well tested in the Arch Linux Zen kernel. Without it, overkills
can be easily reproduced by the following simple script. That is,
additional oom kills happen to processes other than "tail".

  # enable zram
  while true; do
      tail /dev/zero
  done

> out_of_memory is designed to skip over any action if there is an oom
> victim pending from the oom domain (have a look at oom_evaluate_task).

Where exactly? Point me to the code please. I don't see such logic
inside out_of_memory() or oom_evaluate_task().

Currently the only thing that could remotely prevent overkills is
oom_lock, but it's inadequate. This is the entire pipeline:

  low on memory -> out_of_memory() -> oom_reaper() -> free memory

To avoid overkills, we need to consider the latter half of it too.
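To illustrate the overkill dynamic, here is a tiny userspace model (all
names below are stand-ins, not the kernel's implementation; the reaper
is reduced to a single busy flag):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for kernel state; illustrative only. */
static bool reaper_busy;      /* models the oom reaper still freeing a victim */
static int victims_killed;

static bool oom_reaping_in_progress(void)
{
	return reaper_busy;   /* racy in the kernel; exact in this model */
}

/* One reclaim pass that failed to make progress. */
static void reclaim_pass(bool use_gate)
{
	if (use_gate && oom_reaping_in_progress())
		return;       /* wait for the reaper instead of overkilling */
	victims_killed++;     /* out_of_memory() picks another victim */
	reaper_busy = true;   /* the single-threaded reaper starts on it */
}

/* Run n failing reclaim passes; return how many victims were selected. */
static int simulate(int n, bool use_gate)
{
	reaper_busy = false;
	victims_killed = 0;
	for (int i = 0; i < n; i++)
		reclaim_pass(use_gate);
	return victims_killed;
}
```

With the gate, repeated failing passes select one victim and then wait
for the reaper; without it, every pass sacrifices another process, which
is exactly the overkill the check is meant to avoid.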
oom_reaping_in_progress() is exactly for this purpose.

> > +static bool age_lruvec(struct lruvec *lruvec, struct scan_control *sc,
> > +		       unsigned long min_ttl)
> > +{
> > +	bool need_aging;
> > +	long nr_to_scan;
> > +	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
> > +	int swappiness = get_swappiness(memcg);
> > +	DEFINE_MAX_SEQ(lruvec);
> > +	DEFINE_MIN_SEQ(lruvec);
> > +
> > +	if (mem_cgroup_below_min(memcg))
> > +		return false;
>
> mem_cgroup_below_min requires effective values to be calculated for the
> reclaimed hierarchy. Have a look at mem_cgroup_calculate_protection

I always keep that in mind, and age_lruvec() is called *after*
mem_cgroup_calculate_protection():

  balance_pgdat()
    memcgs_need_aging = 0
    do {
      lru_gen_age_node()
        if (!memcgs_need_aging) {
          memcgs_need_aging = 1
          return
        }
        age_lruvec()

      shrink_node_memcgs()
        mem_cgroup_calculate_protection()
        lru_gen_shrink_lruvec()
          if ...
            memcgs_need_aging = 0
    } while ...
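The ordering in that call graph can be modeled in a few lines of plain C
(again a sketch with stand-in names; the flag defers actual aging to the
next pass, so protection has always been computed at least once before
age_lruvec() runs):

```c
#include <assert.h>
#include <stdbool.h>

static bool memcgs_need_aging;
static int protection_calcs;  /* mem_cgroup_calculate_protection() calls */
static int agings;            /* age_lruvec() calls */

/* lru_gen_age_node(): only ages when the previous pass requested it */
static void lru_gen_age_node(void)
{
	if (!memcgs_need_aging) {
		memcgs_need_aging = true;  /* request aging for the next pass */
		return;
	}
	agings++;                          /* age_lruvec() for each memcg */
}

/* shrink_node_memcgs(): protection is computed before any shrinking */
static void shrink_node_memcgs(bool reclaimed)
{
	protection_calcs++;                /* mem_cgroup_calculate_protection() */
	if (reclaimed)
		memcgs_need_aging = false; /* lru_gen_shrink_lruvec() progressed */
}

/* One iteration of the balance_pgdat() loop above. */
static void kswapd_pass(bool reclaimed)
{
	lru_gen_age_node();
	shrink_node_memcgs(reclaimed);
}
```

On the first pass nothing is aged, but protection is calculated; by the
time aging actually happens on a later pass, mem_cgroup_below_min() sees
effective values from the preceding shrink_node_memcgs().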