Date: Wed, 29 May 2024 13:33:24 +0100
From: Matthew Wilcox <willy@infradead.org>
To: Chris Li
Cc: Karim Manaouil, Jan Kara, Chuanhua Han, linux-mm,
	lsf-pc@lists.linux-foundation.org, ryan.roberts@arm.com,
	21cnbao@gmail.com, david@redhat.com
Subject: Re: [Lsf-pc] [LSF/MM/BPF TOPIC] Swap Abstraction "the pony"
References: <039190fb-81da-c9b3-3f33-70069cdb27b0@oppo.com>
	<20240307140344.4wlumk6zxustylh6@quack3>
On Tue, May 28, 2024 at 11:50:47PM -0700, Chris Li wrote:
> On Tue, May 28, 2024 at 8:57 PM Matthew Wilcox wrote:
> >
> > On Tue, May 21, 2024 at 01:40:56PM -0700, Chris Li wrote:
> > > > Filesystems already implemented a lot of solutions for
> > > > fragmentation avoidance that are more appropriate for slow
> > > > storage media.
> > >
> > > Swap and file systems have very different requirements, usage
> > > patterns and IO patterns.
> >
> > Should they, though?  Filesystems noticed that handling pages in LRU
> > order was inefficient and so they stopped doing that (see the removal
> > of aops->writepage in favour of ->writepages, along with where each
> > is called from).  Maybe it's time for swap to start doing writes in
> > the order of virtual addresses within a VMA, instead of LRU order.
>
> Well, swap has one fundamental difference from a file system: a dirty
> page in the file system cache must eventually be written to its file
> backing at least once, otherwise the data is lost when the machine
> reboots.

Yes, that's why we write back data from the page cache every 30 seconds
or so.  It's still important not to write back too early, otherwise you
end up writing the same block multiple times.  The differences aren't
as stark as you're implying.

> In the anonymous memory case, a dirty page does not have to be
> written to swap.  It is optional, so which page you choose to swap
> out is critical: you want to swap out the coldest page, the page
> least likely to be swapped back in.  Therefore, the LRU makes sense.

Disagree.  There are two things you want, and the LRU serves neither
particularly well.  One is that when you want to reclaim memory, you
want to find some memory that is unlikely to be accessed in the next
few seconds/minutes/hours.  It doesn't need to be the coldest, just in
(say) the coldest 10% or so of memory.  And it needs to already be
clean, otherwise you have to wait for writeback, and you can't afford
that.  The second thing you need to be able to do is find pages which
are already dirty and not likely to be written to soon, and write those
back so they join the pool of clean pages eligible for reclaim.  Again,
the LRU isn't really the best tool for the job.
> In VMA swap-out, the question is: which VMA do you choose first?  To
> make things more complicated, the same page can be mapped into
> different processes through more than one VMA as well.

This is why we have the anon_vma: to handle the same pages mapped from
multiple VMAs.

> > Indeed, if we're open to radical ideas, the LRU sucks.  A physical
> > scan is 40x faster:
> > https://lore.kernel.org/linux-mm/ZTc7SHQ4RbPkD3eZ@casper.infradead.org/
>
> That simulation assumes the page struct already has the access
> information.  At the physical CPU level, the accessed bit lives in the
> PTE.  If you scan in physical page order, you need to use rmap to find
> the PTE to check the accessed bit.  It is not a simple pfn-order page
> walk.  You need to scan the PTEs first, then move the access
> information into the page struct.

We already maintain the dirty bit on the folio when we take a write
fault for file memory.  If we do that for anon memory as well, we don't
need to do an rmap walk at scan time.

> > > One challenging aspect is that the current swap back end has a
> > > very low per-swap-entry memory overhead.  It is about 1 byte
> > > (swap_map), 2 bytes (swap cgroup), 8 bytes (swap cache pointer).
> > > The inode struct is more than 64 bytes per file.  That is a big
> > > jump if you map a swap entry to a file.  If you map more than one
> > > swap entry to a file, then you need to track the mapping of file
> > > offset to swap entry, and the reverse lookup of swap entry to a
> > > file with offset.  Whichever way you cut it, it will significantly
> > > increase the per-swap-entry memory overhead.
> >
> > Not necessarily, no.  If your workload uses a lot of order-2,
> > order-4 and order-9 folios, then the current scheme is using 11
> > bytes per page, so 44 bytes per order-2 folio, 176 per order-4 folio
> > and 5632 per order-9 folio.  That's a lot of bytes we can use for an
> > extent-based scheme.
>
> Yes, if we allow dynamic allocation of swap entries, the 24B option.
> Then sub-entries inside the compound swap entry structure can share
> the same compound swap struct pointer.
>
> > Also, why would you compare the size of an inode to the size of a
> > swap entry?  An inode is ~equivalent to an anon_vma, not to a swap
> > entry.
>
> I am not assigning an inode to one swap entry.  That is covered in my
> description of "if you map more than one swap entry to a file".  If
> you want to map each inode to an anon_vma, you need a way to map an
> inode and file offset into the swap entry encoding.  In your
> anon_vma-as-inode world, how do you deal with two different VMAs
> containing the same page?  Once we have more detail of the swap entry
> mapping scheme, we can analyse the pros and cons.

Are you confused between an anon_vma (the struct) and an anon VMA (an
anonymous mapping)?  The naming in this area is terrible.  Maybe we
should call it an mnode instead of an anon_vma.  The parallel with
inode would be more obvious ...