From: Hao Jia
Date: Wed, 13 May 2026 16:04:21 +0800
Subject: Re: [PATCH 2/3] mm/zswap: Implement proactive writeback
To: Nhat Pham
Cc: Yosry Ahmed, akpm@linux-foundation.org, tj@kernel.org,
 hannes@cmpxchg.org, shakeel.butt@linux.dev, mhocko@kernel.org,
 mkoutny@suse.com, chengming.zhou@linux.dev, muchun.song@linux.dev,
 roman.gushchin@linux.dev, cgroups@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, Hao Jia,
 Alexandre Ghiti
Message-ID: <6fc7fdf0-368c-5129-038e-623f9db2aa88@gmail.com>
References: <20260511105149.75584-1-jiahao.kernel@gmail.com>
 <20260511105149.75584-3-jiahao.kernel@gmail.com>
 <12e4784e-2add-d849-7e54-bde8abfa6e78@gmail.com>

On 2026/5/12 23:47, Nhat Pham wrote:
> On Tue, May 12, 2026 at 2:32 AM Hao Jia wrote:
>>
>> On 2026/5/12 03:57, Yosry Ahmed wrote:
>>> On Mon, May 11, 2026 at 12:49 PM Nhat Pham wrote:
>>>>
>>>> On Mon, May 11, 2026 at 3:52 AM Hao Jia wrote:
>>>>>
>>>>> From: Hao Jia
>>>>>
>>>>> Zswap currently writes back pages to backing swap devices
>>>>> reactively, triggered either by memory pressure via the shrinker
>>>>> or by the pool reaching its size limit. This reactive approach
>>>>> offers no precise control over when writeback happens, which can
>>>>> disturb latency-sensitive workloads, and it cannot direct
>>>>> writeback at a specific memory cgroup. However, there are
>>>>> scenarios where users might want to proactively write back cold
>>>>> pages from zswap to the backing swap device, for example to free
>>>>> up memory for other applications or to prepare for upcoming
>>>>> memory-intensive workloads.
>>>>>
>>>>> Therefore, implement a proactive writeback mechanism for zswap by
>>>>> adding a new cgroup interface file
>>>>> memory.zswap.proactive_writeback within the memory controller.
>>
>> Thanks Nhat and Yosry, let me address both comments together.
>>
>>>> We already have memory.reclaim, no? Would that not work to create
>>>> headroom generally for your use case? Is there a reason why we are
>>>> treating zswap memory as special here?
>>
>> Apologies for the lack of detailed explanation in the patch
>> description, which led to the confusion.
>>
>> While we already use memory.reclaim, it does not fully address our
>> requirements.
>>
>> Our deployment runs a userspace proactive reclaimer that drives
>> memory.reclaim based on the system's runtime state (memory/CPU/IO
>> pressure, refault rate, ...) and workload-specific policy. That
>> first stage compresses cold anon pages into zswap. Entries that then
>> remain in zswap past a policy-defined age threshold are considered
>> "twice cold", and the reclaimer wants to write them back to the
>> backing swap device at a moment of its own choosing, to further
>> reclaim the DRAM still held by the compressed data.
>>
>> This is the "second-level offloading" pattern described in Meta's
>> TMO paper [1]. zswap proactive writeback is what this series
>> introduces to address that second-level offloading stage.
>>
>> [1] https://www.pdl.cmu.edu/ftp/NVM/tmo_asplos22.pdf
>
> Yeah, that's what we've been trying to work on as well :) We are
> working on a couple of improvements to the mechanism side of this
> path (cc Alex) - hopefully they will help your use case too!
>
> Anyway, back to my original inquiry: I understand your use case. It's
> pretty similar to our goal. What I'm not getting is why
> memory.reclaim (which you already use) is not sufficient for
> zswap -> disk swap offloading too.
>
> Zswap objects are organized into an LRU and exposed to the shrinker
> interface. Echoing to memory.reclaim should also offload some zswap
> entries, correct? Are there still cold zswap entries that somehow
> escape this?

Yes, the memory.reclaim path does drive some zswap writeback, but it
is not enough for our case.

1. For a memcg that has reached steady state (a common case being when
memory.current is below the policy target), the userspace reclaimer
may not invoke memory.reclaim on it for a long time, so no
second-level offloading happens through memory.reclaim. In this state
we want memory.zswap.proactive_writeback to write back entries that
have sat in zswap past an age threshold, to further reclaim the DRAM
still held by the compressed data.

2. Even when memory.reclaim is running, the fraction of zswap
residency that actually reaches the backing swap device is still very
small for many of our workloads, and the userspace reclaimer has no
way to participate in or control the granularity of zswap writeback.

So in our deployment we prefer to leave the zswap shrinker disabled,
decouple LRU -> zswap from zswap -> swap, and use a dedicated
proactive-writeback interface that lifts the writeback policy into
userspace, where it can evolve independently of the kernel. A rough
sketch of the intended flow is below.
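To make the two stages concrete (illustrative only: the cgroup path
is hypothetical, and the byte-count argument to
memory.zswap.proactive_writeback is an assumption made for symmetry
with memory.reclaim; see this patch for the actual interface
semantics):

  # Stage 1: uncompressed LRU -> zswap. Amount and timing are chosen
  # by the userspace reclaimer's policy.
  echo "256M" > /sys/fs/cgroup/workload/memory.reclaim

  # ... policy-defined aging interval passes; entries still resident
  # in zswap are now considered "twice cold" ...

  # Stage 2: second-level offloading, zswap -> backing swap device,
  # again at a moment of the reclaimer's choosing (assumed byte-count
  # interface, hypothetical path).
  echo "128M" > /sys/fs/cgroup/workload/memory.zswap.proactive_writeback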
Thanks,
Hao

> Furthermore, we already have a way to detect the "twice cold"
> entries you mentioned: the referenced bit. This is analogous to the
> way we treat uncompressed pages.
>
>>> +1, why do we need to specifically proactively reclaim the
>>> compressed memory?
>>>
>>> Also, if we do need to minimize the compressed memory and force
>>> higher writeback rates, we can do so with memory.zswap.max, right?
>>
>> Here are a few reasons why memory.zswap.max is not enough:
>>
>> 1. Writing memory.zswap.max itself does not trigger any writeback
>> immediately. For a memcg that has reached steady state (on which
>> the userspace reclaimer is no longer invoking memory.reclaim),
>> after enough time has passed, the reclaimer has no good way to
>> trigger proactive writeback for second-level offloading by lowering
>> memory.zswap.max, because in steady state nothing drives the
>> zswap_store() -> shrink_memcg() path. The userspace reclaimer still
>> has no control over when proactive writeback happens.
>>
>> 2. memory.zswap.max currently triggers zswap writeback via
>> zswap_store() -> shrink_memcg(), and each over-limit event can
>> write back at most NR_NODES entries. If zswap residency is far
>> above memory.zswap.max, converging to the target size requires at
>> least O(over-limit pages / NR_NODES) zswap_store() events, with no
>> batching, so proactive writeback has significant latency.
>>
>> 3. memory.zswap.max is a stateful interface. If the userspace
>> reclaimer crashes for any reason mid-operation, it may leave
>> memory.zswap.max at some set value, putting the application in a
>> persistently throttled bad state.
>>
>> 4. Once the userspace reclaimer has lowered memory.zswap.max, if
>> the workload is rapidly expanding and triggers memory reclaim via
>> memory.high / kswapd / etc., the actual amount written back can
>> exceed what was intended.
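To put rough numbers on point 2 above (all figures illustrative,
assuming 4 KiB entries and a two-node machine):

  2 GiB over-limit residency / 4 KiB per entry = 524,288 entries
  at most NR_NODES = 2 entries written back per over-limit event
  => at least 262,144 zswap_store() events must hit the limit before
     the pool converges, and each such event only occurs when the
     workload happens to store another page

Convergence is therefore paced by the workload's store rate, not by
anything the userspace reclaimer controls.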
> One more reason: IIRC, when you set memory.zswap.max to a value
> other than 0 or "max", every zswap store incurs a pretty expensive
> check (obj_cgroup_may_zswap()), which does a forced stats flush
> (__mem_cgroup_flush_stats()). That was pretty expensive the last
> time some of our internal services played with it. So yeah, it's
> not ideal...
>
> (If you're using this, you might want to profile it as well.)
>
>> Thanks,
>> Hao