From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: RFC: Memory Tiering Kernel Interfaces (v2)
From: "ying.huang@intel.com"
To: Wei Xu
Cc: Andrew Morton, Greg Thelen, "Aneesh Kumar K.V", Yang Shi,
 Linux Kernel Mailing List, Jagdish Gediya, Michal Hocko, Tim C Chen,
 Dave Hansen, Alistair Popple, Baolin Wang, Feng Tang,
 Jonathan Cameron, Davidlohr Bueso, Dan Williams, David Rientjes,
 Linux MM, Brice Goglin, Hesham Almatary
Date: Fri, 13 May 2022 15:04:39 +0800
References: <69f2d063a15f8c4afb4688af7b7890f32af55391.camel@intel.com>
User-Agent: Evolution 3.38.3-1
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

On Thu, 2022-05-12 at 23:36 -0700, Wei Xu wrote:
> On Thu, May 12, 2022 at 8:25 PM ying.huang@intel.com
> wrote:
> >
> > On Wed, 2022-05-11 at 23:22 -0700, Wei Xu wrote:
> > >
> > > Memory Allocation for Demotion
> > > ==============================
> > >
> > > To allocate a new page as the demotion target for a page, the kernel
> > > calls the allocation function (__alloc_pages_nodemask) with the
> > > source page node as the preferred node and the union of all lower
> > > tier nodes as the allowed nodemask. The actual target node selection
> > > then follows the allocation fallback order that the kernel has
> > > already defined.
> > >
> > > The pseudo code looks like:
> > >
> > >     targets = NODE_MASK_NONE;
> > >     src_nid = page_to_nid(page);
> > >     src_tier = node_tier_map[src_nid];
> > >     for (i = src_tier + 1; i < MAX_MEMORY_TIERS; i++)
> > >             nodes_or(targets, targets, memory_tiers[i]);
> > >     new_page = __alloc_pages_nodemask(gfp, order, src_nid, targets);
> > >
> > > The mempolicy of the cpuset, vma and owner task of the source page
> > > can be set to refine the demotion target nodemask, e.g. to prevent
> > > demotion or to select a particular allowed node as the demotion
> > > target.
> >
> > Consider a system with 3 tiers. If we want to demote some pages from
> > tier 0, the desired behavior is:
> >
> > - Allocate pages from tier 1.
> > - If there are not enough free pages in tier 1, wake up the kswapd of
> >   tier 1 so that it demotes some pages from tier 1 to tier 2.
> > - If there are still not enough free pages in tier 1, allocate pages
> >   from tier 2.
> >
> > In this way, tier 0 will have the hottest pages, while tier 2 will
> > have the coldest pages.
>
> When we are already in the allocation path for the demotion of a page
> from tier 0, I think we'd better not block this allocation to wait for
> kswapd to demote pages from tier 1 to tier 2. Instead, we should
> directly allocate from tier 2. Meanwhile, this demotion can wake up
> kswapd to demote from tier 1 to tier 2 in the background.

Yes. That's what I want too. My original words may be misleading.

> > With your proposed method, the demotion behavior from tier 0 is:
> >
> > - Allocate pages from tier 1.
> > - If there are not enough free pages in tier 1, allocate pages from
> >   tier 2.
> >
> > The kswapd of tier 1 will not be woken up until there are not enough
> > free pages in tier 2. For quite a long time, there will not be much
> > hot/cold differentiation between tier 1 and tier 2.
>
> This is true with the current allocation code. But I think we can
> make some changes for demotion allocations. For example, we can add a
> GFP_DEMOTE flag and update the allocation function to wake up kswapd
> when this flag is set and we need to fall back to another node.
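
For example, combining your GFP_DEMOTE idea with the pseudo code above,
the demotion allocation path could look roughly like the sketch below.
__GFP_DEMOTE and wake_kswapd_for_tier() are hypothetical names, and
node_tier_map[], memory_tiers[] and MAX_MEMORY_TIERS are from your
pseudo code; in your proposal the kswapd wakeup would sit inside the
allocator's fallback path rather than in the caller as shown here.

    /* Sketch only, not a real kernel interface. */
    static struct page *alloc_demote_page(struct page *page, gfp_t gfp,
                                          unsigned int order)
    {
            int src_nid = page_to_nid(page);
            int src_tier = node_tier_map[src_nid];
            nodemask_t targets = NODE_MASK_NONE;
            struct page *new_page;
            int i;

            /* Allow all lower tiers as demotion targets. */
            for (i = src_tier + 1; i < MAX_MEMORY_TIERS; i++)
                    nodes_or(targets, targets, memory_tiers[i]);

            /* Don't block on kswapd; fall back to lower tiers directly. */
            new_page = __alloc_pages_nodemask(gfp | __GFP_DEMOTE, order,
                                              src_nid, &targets);

            /*
             * If the allocation had to skip the next lower tier, wake
             * that tier's kswapd in the background so that it can
             * demote its cold pages further down.
             */
            if (new_page && node_tier_map[page_to_nid(new_page)] > src_tier + 1)
                    wake_kswapd_for_tier(src_tier + 1);

            return new_page;
    }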
> > This isn't hard to fix: just call __alloc_pages_nodemask() for each
> > tier one by one, following the page allocation fallback order.
>
> That would have worked, except that there is an example earlier, in
> which it is actually preferred for some nodes to demote to their
> tier + 2, not tier + 1.
>
> More specifically, the example is:
>
>                    20
>   Node 0 (DRAM)  ------  Node 1 (DRAM)
>    |        \             /     |
>    |      30 \           / 120  |
>    |          v         v       |
>    |         Node 2 (PMEM)      |
>    | 100           |            | 100
>    |               | 100        |
>     \              v           /
>      `-----> Node 3 (Large Mem) <-'
>
> Node distances:
> node    0    1    2    3
>    0   10   20   30  100
>    1   20   10  120  100
>    2   30  120   10  100
>    3  100  100  100   10
>
> 3 memory tiers are defined:
> tier 0: nodes 0-1
> tier 1: node 2
> tier 2: node 3
>
> The demotion fallback order is:
> node 0: 2, 3
> node 1: 3, 2
> node 2: 3
> node 3: empty
>
> Note that even though node 3 is in tier 2 and node 2 is in tier 1,
> node 1 (tier 0) still prefers node 3 as its first demotion target,
> not node 2.

Yes. I understand that we need to support this use case. We can use
the tier order from the allocation fallback list instead of always
going from the smallest tier number to the largest. That is, for
node 1, the tier order for demotion is tier 2, then tier 1.
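
A minimal sketch of that idea, assuming hypothetical per-node tables
tier_fallback_order[][] and nr_fallback_tiers[] built from the existing
allocation fallback lists:

    /* Try each tier in the source node's preferred tier order. */
    new_page = NULL;
    for (i = 0; i < nr_fallback_tiers[src_nid]; i++) {
            int tier = tier_fallback_order[src_nid][i];

            new_page = __alloc_pages_nodemask(gfp, order, src_nid,
                                              &memory_tiers[tier]);
            if (new_page)
                    break;
    }

For node 1 in your example, tier_fallback_order[1] would be { 2, 1 },
so node 3 is tried before node 2.

Best Regards,
Huang, Ying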