Date: Mon, 11 Apr 2022 19:16:21 -0700
From: Andrew Morton
To: Yu Zhao
Cc: Stephen Rothwell, linux-mm@kvack.org, Andi Kleen, Aneesh Kumar, Barry Song <21cnbao@gmail.com>, Catalin Marinas, Dave Hansen, Hillf Danton, Jens Axboe, Jesse Barnes, Johannes Weiner, Jonathan Corbet, Linus Torvalds, Matthew Wilcox, Mel Gorman, Michael Larabel, Michal Hocko, Mike Rapoport, Rik van Riel
, Vlastimil Babka, Will Deacon, Ying Huang, linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, page-reclaim@google.com, x86@kernel.org, Brian Geffon, Jan Alexander Steffens, Oleksandr Natalenko, Steven Barrett, Suleiman Souhlal, Daniel Byrne, Donald Carr, Holger Hoffstätte, Konstantin Kharlamov, Shuang Zhai, Sofia Trinh, Vaibhav Jain
Subject: Re: [PATCH v10 08/14] mm: multi-gen LRU: support page table walks
Message-Id: <20220411191621.0378467ad99ebc822d5ad005@linux-foundation.org>
In-Reply-To: <20220407031525.2368067-9-yuzhao@google.com>
References: <20220407031525.2368067-1-yuzhao@google.com> <20220407031525.2368067-9-yuzhao@google.com>

On Wed, 6 Apr 2022 21:15:20 -0600 Yu Zhao wrote:

> ...
>
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -6083,6 +6083,29 @@ static void mem_cgroup_move_task(void)
>  }
>  #endif
>
> +#ifdef CONFIG_LRU_GEN
> +static void mem_cgroup_attach(struct cgroup_taskset *tset)
> +{
> +	struct cgroup_subsys_state *css;
> +	struct task_struct *task = NULL;
> +
> +	cgroup_taskset_for_each_leader(task, css, tset)
> +		break;

Does this actually do anything?

> +	if (!task)
> +		return;
> +
> +	task_lock(task);
> +	if (task->mm && task->mm->owner == task)
> +		lru_gen_migrate_mm(task->mm);
> +	task_unlock(task);
> +}
>
> ...
>
> +static void update_batch_size(struct lru_gen_mm_walk *walk, struct folio *folio,
> +			      int old_gen, int new_gen)
> +{
> +	int type = folio_is_file_lru(folio);
> +	int zone = folio_zonenum(folio);
> +	int delta = folio_nr_pages(folio);
> +
> +	VM_BUG_ON(old_gen >= MAX_NR_GENS);
> +	VM_BUG_ON(new_gen >= MAX_NR_GENS);

General rule: don't add new BUG_ONs, because they crash the kernel.
It's better to use WARN_ON or WARN_ON_ONCE, and then try to figure out a
way to keep the kernel limping along.  At least so the poor user can
gather logs.

> +	walk->batched++;
> +
> +	walk->nr_pages[old_gen][type][zone] -= delta;
> +	walk->nr_pages[new_gen][type][zone] += delta;
> +}
> +