From mboxrd@z Thu Jan  1 00:00:00 1970
Subject: Re: [PATCH] Revert "mm/vmscan: never demote for memcg reclaim"
From: "ying.huang@intel.com"
To: Johannes Weiner, Dave Hansen, Yang Shi, Andrew Morton
Cc: linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
 kernel-team@fb.com, Zi Yan, Michal Hocko, Shakeel Butt, Roman Gushchin,
 Tim Chen
Date: Thu, 19 May 2022 15:42:31 +0800
In-Reply-To: <20220518190911.82400-1-hannes@cmpxchg.org>
References: <20220518190911.82400-1-hannes@cmpxchg.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
On Wed, 2022-05-18 at 15:09 -0400, Johannes Weiner wrote:
> This reverts commit 3a235693d3930e1276c8d9cc0ca5807ef292cf0a.
>
> Its premise was that cgroup reclaim cares about freeing memory inside
> the cgroup, and demotion just moves pages around within the cgroup
> limit. Hence, pages from toptier nodes should be reclaimed directly.
>
> However, with NUMA balancing now doing tier promotions, demotion is
> part of the page aging process. Global reclaim demotes the coldest
> toptier pages to secondary memory, where their life continues and from
> which they have a chance to get promoted back. Essentially, tiered
> memory systems have an LRU order that spans multiple nodes.
>
> When cgroup reclaims pages coming off the toptier directly, there can
> be colder pages on lower tier nodes that were demoted by global
> reclaim. This is an aging inversion, not unlike if cgroups were to
> reclaim directly from the active lists while there are inactive pages.
>
> Proactive reclaim is another factor. The goal of that is to offload
> colder pages from expensive RAM to cheaper storage. When lower tier
> memory is available as an intermediate layer, we want offloading to
> take advantage of it instead of bypassing to storage.
>
> Revert the patch so that cgroups respect the LRU order spanning the
> memory hierarchy.
>
> Of note is a specific undercommit scenario, where all cgroup limits in
> the system add up to <= available toptier memory. In that case,
> shuffling pages out to lower tiers first to reclaim them from there is
> inefficient. This is something that could be optimized/short-circuited
> later on (although care must be taken not to accidentally recreate the
> aging inversion). Let's ensure correctness first.
>
> Signed-off-by: Johannes Weiner
> Cc: Dave Hansen
> Cc: "Huang, Ying"
> Cc: Yang Shi
> Cc: Zi Yan
> Cc: Michal Hocko
> Cc: Shakeel Butt
> Cc: Roman Gushchin

Reviewed-by: "Huang, Ying"

This is also required by Tim's DRAM partitioning among cgroups in a
tiered system.

Best Regards,
Huang, Ying

> ---
>  mm/vmscan.c | 9 ++-------
>  1 file changed, 2 insertions(+), 7 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index c6918fff06e1..7a4090712177 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -528,13 +528,8 @@ static bool can_demote(int nid, struct scan_control *sc)
>  {
>  	if (!numa_demotion_enabled)
>  		return false;
> -	if (sc) {
> -		if (sc->no_demotion)
> -			return false;
> -		/* It is pointless to do demotion in memcg reclaim */
> -		if (cgroup_reclaim(sc))
> -			return false;
> -	}
> +	if (sc && sc->no_demotion)
> +		return false;
>  	if (next_demotion_node(nid) == NUMA_NO_NODE)
>  		return false;
>
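For reference, with the revert applied can_demote() reduces to roughly
the sketch below. The hunk above stops before the end of the function,
so the tail is assumed here to simply return true; only sc->no_demotion
and the absence of a demotion target now veto demotion, regardless of
whether the reclaim is global or memcg-driven:

static bool can_demote(int nid, struct scan_control *sc)
{
	/* Demotion requires the NUMA tiering machinery to be enabled. */
	if (!numa_demotion_enabled)
		return false;
	/* Callers may still opt out explicitly; the memcg-reclaim bailout is gone. */
	if (sc && sc->no_demotion)
		return false;
	/* No lower-tier node to demote to from this node. */
	if (next_demotion_node(nid) == NUMA_NO_NODE)
		return false;

	return true;
}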