Date: Tue, 5 Nov 2024 10:16:10 -0500
From: Gregory Price <gourry@gourry.net>
To: "Huang, Ying"
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 akpm@linux-foundation.org, david@redhat.com, nphamcs@gmail.com,
 nehagholkar@meta.com, abhishekd@meta.com, Johannes Weiner, Feng Tang
Subject: Re: [PATCH 0/3] mm,TPP: Enable promotion of unmapped pagecache
References: <20240803094715.23900-1-gourry@gourry.net>
 <875xrxhs5j.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <87ikvefswp.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <87jzdi782s.fsf@yhuang6-desk2.ccr.corp.intel.com>
In-Reply-To: <87jzdi782s.fsf@yhuang6-desk2.ccr.corp.intel.com>
On Tue, Nov 05, 2024 at 10:00:59AM +0800, Huang, Ying wrote:
> Hi, Gregory,
>
> Gregory Price writes:
>
> > My observations across these 3 proposals:
> >
> > - The page-lock state is complex while trying to interpose in
> >   mark_folio_accessed, meaning inline promotion inside that interface
> >   is a non-starter.
> >
> >   We found one deadlock during task exit due to the PTL being held.
> >
> > This worries me more generally, but we did find some success changing
> > certain calls to mark_folio_accessed to mark_folio_accessed_and_promote -
> > rather than modifying mark_folio_accessed itself. This ends up changing
> > code in similar places to your hook - but catches more conditions that
> > mark a page accessed.
> >
> > - For Keith's proposal, promotions via LRU require memory pressure on
> >   the lower tier to cause a shrink and therefore promotions. I'm not
> >   well versed in LRU semantics, but it seems we could try proactive
> >   reclaim here.
> >
> >   Doing promote-reclaim and demote/swap/evict reclaim on the same
> >   triggers seems counter-intuitive.
>
> IIUC, in the TPP paper (https://arxiv.org/abs/2206.02878), a similar
> method is proposed for page promotion. I guess that it works together
> with proactive reclaiming.
>

Each process is responsible for doing page table scanning for NUMA hint
faults and producing a promotion. Since the structure used there is the
page tables themselves, there isn't an existing recording mechanism for
us to piggy-back on to defer migrations until later.

> > - Doing promotions inline with access creates overhead. I've seen some
> >   research suggesting 60us+ per migration - so aggressiveness could
> >   harm performance.
> >
> >   Doing it async would alleviate inline access overheads - but it
> >   could also make promotion pointless if time-to-promote is too far
> >   from the liveliness of the pages.
>
> Async promotion needs to deal with the resource (CPU/memory) charging
> too. You do some work for a task, so you need to charge the consumed
> resource to the task.
>

This is a good point, and it would heavily complicate things. Simple is
better; let's avoid that.

> > - Doing async promotion may also require something like PG_PROMOTABLE
> >   (as proposed by Keith's patch), which will obviously be a very
> >   contentious topic.
>
> Some additional data structure can be used to record pages.
>

I have an idea inspired by these three sets; I'll bumble my way through
a prototype.

> > Reading more into the code surrounding this and other migration logic,
> > I also think we should explore an optimization to mempolicy that tries
> > to aggressively keep certain classes of memory on the local node (RX
> > memory and stack, for example).
> >
> > Other areas of reclaim try to actively prevent demoting this type of
> > memory, so we should try not to allocate it there in the first place.
>
> We have already used a DRAM-first allocation policy. So, we need to
> measure its effect first.
>

Yes, but as the weighted interleave patch set demonstrated, it can be
beneficial to distribute allocations from the outset - however,
distributing all allocations led to less reliable performance than just
distributing the heap. Another topic for another thread.

~Gregory