Date: Wed, 9 Apr 2025 19:19:30 +0300
From: Mike Rapoport <rppt@kernel.org>
To: Jason Gunthorpe
Cc: Pratyush Yadav, Changyuan Lyu, linux-kernel@vger.kernel.org,
	graf@amazon.com, akpm@linux-foundation.org, luto@kernel.org,
	anthony.yznaga@oracle.com, arnd@arndb.de, ashish.kalra@amd.com,
	benh@kernel.crashing.org, bp@alien8.de, catalin.marinas@arm.com,
	dave.hansen@linux.intel.com, dwmw2@infradead.org, ebiederm@xmission.com,
	mingo@redhat.com, jgowans@amazon.com, corbet@lwn.net, krzk@kernel.org,
	mark.rutland@arm.com, pbonzini@redhat.com, pasha.tatashin@soleen.com,
	hpa@zytor.com, peterz@infradead.org, robh+dt@kernel.org, robh@kernel.org,
	saravanak@google.com, skinsburskii@linux.microsoft.com,
	rostedt@goodmis.org, tglx@linutronix.de, thomas.lendacky@amd.com,
	usama.arif@bytedance.com, will@kernel.org, devicetree@vger.kernel.org,
	kexec@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
	linux-doc@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org
Subject: Re: [PATCH v5 09/16] kexec: enable KHO support for memory preservation
In-Reply-To: <20250409153714.GK1778492@nvidia.com>
References: <20250404143031.GB1336818@nvidia.com>
 <20250407141626.GB1557073@nvidia.com>
 <20250407170305.GI1557073@nvidia.com>
 <20250409125630.GI1778492@nvidia.com>
 <20250409153714.GK1778492@nvidia.com>
On Wed, Apr 09, 2025 at 12:37:14PM -0300, Jason Gunthorpe wrote:
> On Wed, Apr 09, 2025 at 04:58:16PM +0300, Mike Rapoport wrote:
> > >
> > > I think we still don't really know what will be needed, so I'd stick
> > > with folio only as that allows building the memfd and a potential slab
> > > preservation system.
> >
> > void * seems to me much more reasonable than a folio one as the starting
> > point because it allows preserving folios with the right order but it's
> > not limited to it.
>
> It would just call kho_preserve_folio() under the covers though.

How would that work for memblock and 1G pages?

> > I don't mind having kho_preserve_folio() from day 1 and even stretching
> > the use case we have right now to use it to preserve FDT memory.
> >
> > But kho_preserve_folio() does not make sense for reserve_mem and it won't
> > make sense for vmalloc.
>
> It does for vmalloc too, just stop thinking about it as a
> folio-for-pagecache and instead as an arbitrary-order handle to buddy
> allocator memory that will someday be changed to a memdesc :|

But we have a memdesc today: it's struct page. It will be shrunk and maybe
renamed, and it will contain a pointer rather than data, but that's what the
basic memdesc is. And once the data structure that a memdesc points to is
allocated separately, folios won't make sense for order-0 allocations.

> > The weird games slab does with casting back and forth to folio also seem
> > to me transitional, and there won't be folios in slab later.
>
> Yes transitional, but we are at the transitional point and KHO should
> fit in.
>
> The lowest allocator primitive returns folios, which can represent any
> order, and the caller casts to their own memdesc.

The lowest allocation primitive returns pages:

struct folio *__folio_alloc_noprof(gfp_t gfp, unsigned int order,
		int preferred_nid, nodemask_t *nodemask)
{
	struct page *page = __alloc_pages_noprof(gfp | __GFP_COMP, order,
			preferred_nid, nodemask);

	return page_rmappable_folio(page);
}
EXPORT_SYMBOL(__folio_alloc_noprof);

And page_rmappable_folio() hints at folio-for-pagecache very clearly. I
don't think folio will be the lowest primitive buddy returns any time soon,
if ever.

> Jason
>

-- 
Sincerely yours,
Mike.