From: Chengming Zhou <chengming.zhou@linux.dev>
Date: Tue, 5 Mar 2024 16:15:47 +0800
Message-ID: <63cdfad3-c27c-4232-8bca-9cdb3ec0c6f5@linux.dev>
Subject: Re: [LSF/MM/BPF TOPIC] Swap Abstraction "the pony"
To: Chris Li
Cc: Matthew Wilcox, Nhat Pham, lsf-pc@lists.linux-foundation.org,
    linux-mm, ryan.roberts@arm.com, David Hildenbrand,
    Barry Song <21cnbao@gmail.com>, Chuanhua Han

On 2024/3/5 15:44, Chris Li wrote:
> On Mon, Mar 4, 2024 at 7:24 PM Chengming Zhou wrote:
>>
>> On 2024/3/5 06:58, Matthew Wilcox wrote:
>>> On Fri, Mar 01, 2024 at 04:53:43PM +0700, Nhat Pham wrote:
>>>> IMHO, one thing this new abstraction should support is seamless
>>>> transfer/migration of pages from one backend to another (perhaps
>>>> from high- to low-priority backends, i.e. writeback).
>>>>
>>>> I think this will require some careful redesign. The closest thing
>>>> we have right now is zswap -> backing swapfile. But it is currently
>>>> handled in a rather peculiar manner - the underlying swap slot has
>>>> already been reserved for the zswap entry. There are a couple of
>>>> problems with this:
>>>>
>>>> a) This is wasteful. We essentially have the same piece of data
>>>>    occupying space in two levels of the hierarchy.
>>>> b) How do we generalize to a multi-tier hierarchy?
>>>> c) This is a bit too backend-specific. It'd be nice if we could
>>>>    make this as backend-agnostic as possible (if possible).
>>>>
>>>> Motivation: I'm currently working on/thinking about decoupling
>>>> zswap and swap, and this is one of the more challenging aspects
>>>> (I can't seem to find a precedent in the swap world for migrating
>>>> pages between swap backends), especially with respect to
>>>> concurrent loads (and swapcache interactions).
>>>
>>> Have you considered (and already rejected?) the opposite approach --
>>> coupling zswap and swap more tightly? That is, we always write out
>>> the original pages today. Why don't we write out the compressed
>>> pages instead? For the same amount of I/O, we'd free up more
>>> memory! That sounds like a win to me.
>
> I have considered that as well; it goes further than writing from one
> swap device to another. The current swap device can't accept writes
> at non-page-aligned offsets. If we allow byte-aligned writeout sizes,
> the whole swap entry offset scheme needs some heavy changes.
>
> If we write out 4K pages and the compression ratio is lower than 50%,
> a combination of two compressed pages can't fit into one page, which
> means some of the data read back will need to overflow into another
> page. We kind of need a small file system to keep track of how the
> compressed data is stored, because it is not page-aligned in size
> any more.
>
> We can write out zsmalloc blocks of data as they are, however there
> is no guarantee the data in zsmalloc blocks have the same LRU order.

Right, so we should choose which objects to write out based on the LRU
order in zswap, but not decompress them: write the compressed data
directly to the swap file.
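
Roughly something like the sketch below, just to show the flow I have
in mind. zswap_lru_peek(), zswap_lru_del(), zswap_entry_data(),
swap_write_cluster() and the cluster_offset field are all made-up
names here, not existing APIs:

/*
 * Sketch: walk the zswap LRU from the coldest end, pack the compressed
 * objects as-is into one page-aligned buffer, then submit that buffer
 * to the swap device in a single page-sized write.  No decompression
 * and no per-object page allocation.
 */
static int zswap_writeback_compressed(struct zswap_pool *pool)
{
	void *buf = (void *)__get_free_page(GFP_KERNEL);
	size_t used = 0;
	struct zswap_entry *entry;
	int ret;

	if (!buf)
		return -ENOMEM;

	while ((entry = zswap_lru_peek(pool))) {	/* coldest first */
		if (used + entry->length > PAGE_SIZE)
			break;			/* buffer full, flush it */

		/* copy the compressed bytes directly, no decompression */
		memcpy(buf + used, zswap_entry_data(entry), entry->length);
		entry->cluster_offset = used;	/* where it lands on disk */
		used += entry->length;

		zswap_lru_del(entry);		/* packed, take it off the LRU */
	}

	/* one page-aligned write covers several compressed objects */
	ret = swap_write_cluster(buf, PAGE_SIZE);

	free_page((unsigned long)buf);
	return ret;
}

The read-back side would then need the per-object offset/length to
locate the compressed data inside the cluster, which is the "small
file system" problem you mention above.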
>
> It makes more sense when writing higher-order (order > 0) swap pages,
> e.g. writing 64K in one buffer; then we can write out the compressed
> data page-aligned and in page-sized units, accepting the waste on the
> last compressed page, which might not be filled up completely.
>
>>
>> Right, I have also thought about this direction for some time.
>> Apart from less IO, there are more advantages we can see:
>>
>> 1. No need to allocate a page when writing out compressed data.
>>    The current method actually has its own problems[1]: it allocates
>>    a new page, puts it on the LRU list, and waits for writeback and
>>    reclaim. If we write out the compressed data directly, no page
>>    allocation is needed and these problems can be avoided.
>
> Does it go through the swap cache at all? If not, there will be some
> interesting synchronization issues when someone else races to swap
> in the page and modify it.

No, it wouldn't, so right, we have to handle those races. (Maybe we
can leave a "shadow" entry in zswap, which can be used for
synchronization.)

>
>> 2. No need to decompress when writing out compressed data.
>
> Yes.
>
>> [1] https://lore.kernel.org/all/20240209115950.3885183-1-chengming.zhou@linux.dev/
>>
>>> I'm sure it'd be a big redesign, but that seems to be what we're
>>> talking about anyway.
>>
>> Yes, we need to make modifications in several parts:
>>
>> 1. zsmalloc: compressed objects can be migrated at any time, so we
>>    need to support pinning.
>
> Or use a bounce buffer to read it out.

Yeah, that's also an option if pinning is not easy to implement :)

>
>> 2. swapout: need to support non-folio writeout.
>
> Yes. Non-page-aligned writeout will change the swap backend design
> dramatically.
>
>> 3. zswap: zswap needs to handle synchronization between compressed
>>    writeout and swapin, since they share the same swap entry.
>
> Exactly. Same for ZRAM as well.
>
> Chris
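
One more note on point 3, since that looks like the trickiest part to
me: the "shadow" entry idea could look roughly like this on the swapin
side. Again only a sketch, every name below is made up:

/*
 * Sketch: when a compressed object is written out to the swap device,
 * leave a small shadow entry in the zswap tree instead of erasing the
 * slot, so a concurrent swapin has something to synchronize against.
 */
struct zswap_shadow {
	bool writeback_in_flight;	/* compressed writeout not done yet */
	pgoff_t disk_offset;		/* where the compressed data lives */
	unsigned int length;		/* compressed length to read back */
};

/* swapin side (hypothetical helpers, tree lock held by the caller) */
static int zswap_load_compressed(struct zswap_tree *tree, pgoff_t offset,
				 struct page *page)
{
	struct zswap_shadow *shadow = zswap_lookup_shadow(tree, offset);

	if (!shadow)
		return -ENOENT;		/* take the normal swapin path */

	/*
	 * If the writeout is still in flight, wait for it (or steal the
	 * in-memory copy back) before reading from the swap device.
	 */
	if (shadow->writeback_in_flight)
		zswap_wait_for_writeback(shadow);

	/* read the compressed bytes back and decompress them into @page */
	return zswap_read_and_decompress(shadow, page);
}

Whether ZRAM could share the same shadow scheme is another question,
but the synchronization problem is indeed the same.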