From mboxrd@z Thu Jan 1 00:00:00 1970
From: Alistair Popple
To: Wei Xu
CC: Andrew Morton, Dave Hansen, Huang Ying, "Dan Williams", Yang Shi,
	Linux MM, Greg Thelen, Aneesh Kumar K.V, Jagdish Gediya,
	"Linux Kernel Mailing List", Davidlohr Bueso, Michal Hocko,
	Baolin Wang, Brice Goglin, "Feng Tang"
Subject: Re: RFC: Memory Tiering Kernel Interfaces
Date: Fri, 6 May 2022 10:25:42 +1000
Message-ID: <2915539.lHT9kPPJqt@nvdebian>
In-Reply-To: <87v8ujh6ow.fsf@nvdebian.thelocal>
References: <87v8ujh6ow.fsf@nvdebian.thelocal>
MIME-Version: 1.0
Content-Transfer-Encoding: 7Bit
Content-Type: text/plain; charset="us-ascii"
Please ignore this one, apologies for the noise.

On Friday, 6 May 2022 9:57:54 AM AEST Alistair Popple wrote:
> Wei Xu writes:
> > The current kernel has the basic memory tiering support: Inactive
> > pages on a higher tier NUMA node can be migrated (demoted) to a lower
> > tier NUMA node to make room for new allocations on the higher tier
> > NUMA node. Frequently accessed pages on a lower tier NUMA node can be
> > migrated (promoted) to a higher tier NUMA node to improve the
> > performance.
> >
> > A tiering relationship between NUMA nodes in the form of demotion path
> > is created during the kernel initialization and updated when a NUMA
> > node is hot-added or hot-removed. The current implementation puts all
> > nodes with CPU into the top tier, and then builds the tiering hierarchy
> > tier-by-tier by establishing the per-node demotion targets based on
> > the distances between nodes.
> >
> > The current memory tiering interface needs to be improved to address
> > several important use cases:
> >
> > * The current tiering initialization code always initializes
> >   each memory-only NUMA node into a lower tier. But a memory-only
> >   NUMA node may have a high performance memory device (e.g. a DRAM
> >   device attached via CXL.mem or a DRAM-backed memory-only node on
> >   a virtual machine) and should be put into the top tier.
> >
> > * The current tiering hierarchy always puts CPU nodes into the top
> >   tier. But on a system with HBM (e.g. GPU memory) devices, these
> >   memory-only HBM NUMA nodes should be in the top tier, and DRAM nodes
> >   with CPUs are better to be placed into the next lower tier.
> >
> > * Also because the current tiering hierarchy always puts CPU nodes
> >   into the top tier, when a CPU is hot-added (or hot-removed) and
> >   triggers a memory node from CPU-less into a CPU node (or vice
> >   versa), the memory tiering hierarchy gets changed, even though no
> >   memory node is added or removed. This can make the tiering
> >   hierarchy much less stable.
> >
> > * A higher tier node can only be demoted to selected nodes on the
> >   next lower tier, not any other node from the next lower tier. This
> >   strict, hard-coded demotion order does not work in all use cases
> >   (e.g. some use cases may want to allow cross-socket demotion to
> >   another node in the same demotion tier as a fallback when the
> >   preferred demotion node is out of space), and has resulted in the
> >   feature request for an interface to override the system-wide,
> >   per-node demotion order from the userspace.
> >
> > * There are no interfaces for the userspace to learn about the memory
> >   tiering hierarchy in order to optimize its memory allocations.
> >
> > I'd like to propose revised memory tiering kernel interfaces based on
> > the discussions in the threads:
> >
> > -
> > -
> >
> >
> > Sysfs Interfaces
> > `=============='
> >
> > * /sys/devices/system/node/memory_tiers
> >
> >   Format: node list (one tier per line, in the tier order)
> >
> >   When read, list memory nodes by tiers.
> >
> >   When written (one tier per line), take the user-provided node-tier
> >   assignment as the new tiering hierarchy and rebuild the per-node
> >   demotion order. It is allowed to only override the top tiers, in
> >   which cases, the kernel will establish the lower tiers automatically.
> >
> >
> > Kernel Representation
> > `==================='
> >
> > * nodemask_t node_states[N_TOPTIER_MEMORY]
> >
> >   Store all top-tier memory nodes.
> >
> > * nodemask_t memory_tiers[MAX_TIERS]
> >
> >   Store memory nodes by tiers.
> >
> > * struct demotion_nodes node_demotion[]
> >
> >   where: struct demotion_nodes { nodemask_t preferred; nodemask_t allowed; }
> >
> >   For a node N:
> >
> >   node_demotion[N].preferred lists all preferred demotion targets;
> >
> >   node_demotion[N].allowed lists all allowed demotion targets
> >   (initialized to be all the nodes in the same demotion tier).
> >
> >
> > Tiering Hierarchy Initialization
> > `=============================='
> >
> > By default, all memory nodes are in the top tier (N_TOPTIER_MEMORY).
> >
> > A device driver can remove its memory nodes from the top tier, e.g.
> > a dax driver can remove PMEM nodes from the top tier.
> >
> > The kernel builds the memory tiering hierarchy and per-node demotion
> > order tier-by-tier starting from N_TOPTIER_MEMORY. For a node N, the
> > best distance nodes in the next lower tier are assigned to
> > node_demotion[N].preferred and all the nodes in the next lower tier
> > are assigned to node_demotion[N].allowed.
> >
> > node_demotion[N].preferred can be empty if no preferred demotion node
> > is available for node N.
> >
> > If the userspace overrides the tiers via the memory_tiers sysfs
> > interface, the kernel then only rebuilds the per-node demotion order
> > accordingly.
> >
> > Memory tiering hierarchy is rebuilt upon hot-add or hot-remove of a
> > memory node, but is NOT rebuilt upon hot-add or hot-remove of a CPU
> > node.
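As an aside, purely for illustration, the tier-by-tier construction described
above might look roughly like the sketch below. This is not existing kernel
code and not necessarily what the RFC intends: establish_demotion_targets()
is a made-up name, memory_tiers[], MAX_TIERS and node_demotion[] are just the
structures proposed above, and memory_tiers[0] is assumed to be the top tier.

/*
 * Rough sketch only: walk the tiers from the top down and, for each node,
 * record the whole next lower tier as the allowed demotion targets and the
 * best-distance nodes within it as the preferred targets.
 */
static void establish_demotion_targets(void)
{
        int tier, node, target;

        for (tier = 0; tier < MAX_TIERS - 1; tier++) {
                for_each_node_mask(node, memory_tiers[tier]) {
                        int best_dist = INT_MAX;

                        /* Every node in the next lower tier is allowed. */
                        node_demotion[node].allowed = memory_tiers[tier + 1];
                        nodes_clear(node_demotion[node].preferred);

                        /* Find the smallest distance to the next lower tier. */
                        for_each_node_mask(target, memory_tiers[tier + 1])
                                best_dist = min(best_dist,
                                                node_distance(node, target));

                        /* Prefer every target at that best distance. */
                        for_each_node_mask(target, memory_tiers[tier + 1])
                                if (node_distance(node, target) == best_dist)
                                        node_set(target,
                                                 node_demotion[node].preferred);
                }
        }
}

If the next lower tier is empty, node_demotion[node].preferred simply stays
empty, matching the note above. A userspace override of the memory_tiers
sysfs file would then amount to repopulating memory_tiers[] and re-running
something like this to rebuild the per-node demotion order.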
> >
> >
> > Memory Allocation for Demotion
> > `============================'
> >
> > When allocating a new demotion target page, both a preferred node
> > and the allowed nodemask are provided to the allocation function.
> > The default kernel allocation fallback order is used to allocate the
> > page from the specified node and nodemask.
> >
> > The mempolicy of cpuset, vma and owner task of the source page can
> > be set to refine the demotion nodemask, e.g. to prevent demotion or
> > select a particular allowed node as the demotion target.
> >
> >
> > Examples
> > `======'
> >
> > * Example 1:
> >   Node 0 & 1 are DRAM nodes, node 2 & 3 are PMEM nodes.
> >
> >   Node 0 has node 2 as the preferred demotion target and can also
> >   fallback demotion to node 3.
> >
> >   Node 1 has node 3 as the preferred demotion target and can also
> >   fallback demotion to node 2.
> >
> >   Set mempolicy to prevent cross-socket demotion and memory access,
> >   e.g. cpuset.mems=0,2
> >
> >   node distances:
> >   node   0    1    2    3
> >      0  10   20   30   40
> >      1  20   10   40   30
> >      2  30   40   10   40
> >      3  40   30   40   10
> >
> >   /sys/devices/system/node/memory_tiers
> >   0-1
> >   2-3
> >
> >   N_TOPTIER_MEMORY: 0-1
> >
> >   node_demotion[]:
> >     0: [2], [2-3]
> >     1: [3], [2-3]
> >     2: [], []
> >     3: [], []
> >
> > * Example 2:
> >   Node 0 & 1 are DRAM nodes.
> >   Node 2 is a PMEM node and closer to node 0.
> >
> >   Node 0 has node 2 as the preferred and only demotion target.
> >
> >   Node 1 has no preferred demotion target, but can still demote
> >   to node 2.
> >
> >   Set mempolicy to prevent cross-socket demotion and memory access,
> >   e.g. cpuset.mems=0,2
> >
> >   node distances:
> >   node   0    1    2
> >      0  10   20   30
> >      1  20   10   40
> >      2  30   40   10
> >
> >   /sys/devices/system/node/memory_tiers
> >   0-1
> >   2
> >
> >   N_TOPTIER_MEMORY: 0-1
> >
> >   node_demotion[]:
> >     0: [2], [2]
> >     1: [], [2]
> >     2: [], []
> >
> >
> > * Example 3:
> >   Node 0 & 1 are DRAM nodes.
> >   Node 2 is a PMEM node and has the same distance to node 0 & 1.
> >
> >   Node 0 has node 2 as the preferred and only demotion target.
> >
> >   Node 1 has node 2 as the preferred and only demotion target.
> >
> >   node distances:
> >   node   0    1    2
> >      0  10   20   30
> >      1  20   10   30
> >      2  30   30   10
> >
> >   /sys/devices/system/node/memory_tiers
> >   0-1
> >   2
> >
> >   N_TOPTIER_MEMORY: 0-1
> >
> >   node_demotion[]:
> >     0: [2], [2]
> >     1: [2], [2]
> >     2: [], []
> >
> >
> > * Example 4:
> >   Node 0 & 1 are DRAM nodes, Node 2 is a memory-only DRAM node.
> >
> >   All nodes are top-tier.
> >
> >   node distances:
> >   node   0    1    2
> >      0  10   20   30
> >      1  20   10   30
> >      2  30   30   10
> >
> >   /sys/devices/system/node/memory_tiers
> >   0-2
> >
> >   N_TOPTIER_MEMORY: 0-2
> >
> >   node_demotion[]:
> >     0: [], []
> >     1: [], []
> >     2: [], []
> >
> >
> > * Example 5:
> >   Node 0 is a DRAM node with CPU.
> >   Node 1 is a HBM node.
> >   Node 2 is a PMEM node.
> >
> >   With userspace override, node 1 is the top tier and has node 0 as
> >   the preferred and only demotion target.
> >
> >   Node 0 is in the second tier, tier 1, and has node 2 as the
> >   preferred and only demotion target.
> >
> >   Node 2 is in the lowest tier, tier 2, and has no demotion targets.
> >
> >   node distances:
> >   node   0    1    2
> >      0  10   21   30
> >      1  21   10   40
> >      2  30   40   10
> >
> >   /sys/devices/system/node/memory_tiers (userspace override)
> >   1
> >   0
> >   2
> >
> >   N_TOPTIER_MEMORY: 1
> >
> >   node_demotion[]:
> >     0: [2], [2]
> >     1: [0], [0]
> >     2: [], []
> >
> > -- Wei
>
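One more illustrative note on the "Memory Allocation for Demotion" section
above: a minimal sketch of what handing both a preferred node and the allowed
nodemask to the page allocator could look like. alloc_demote_page() is a
hypothetical helper, and the GFP flags are an assumption rather than what an
actual implementation would necessarily use; __alloc_pages() is the regular
allocator entry point.

/*
 * Minimal sketch only: allocate a demotion target page, trying the
 * preferred node first and then falling back to the allowed nodemask
 * in the default allocation fallback order.
 */
static struct page *alloc_demote_page(struct page *page, int preferred_nid)
{
        nodemask_t *allowed = &node_demotion[page_to_nid(page)].allowed;
        struct page *target;

        /* Try only the preferred node first, with no fallback. */
        target = __alloc_pages(GFP_NOWAIT | __GFP_THISNODE | __GFP_NOWARN,
                               0, preferred_nid, NULL);
        if (target)
                return target;

        /* Fall back to any node in the allowed demotion nodemask. */
        return __alloc_pages(GFP_NOWAIT | __GFP_NOWARN, 0, preferred_nid,
                             allowed);
}

The first call restricts the allocation to the preferred node via
__GFP_THISNODE; the second drops that restriction and lets the default
fallback order pick any node in the allowed mask, matching the behaviour
described above.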