From: Nhat Pham <nphamcs@gmail.com>
To: kasong@tencent.com
Cc: Liam.Howlett@oracle.com, akpm@linux-foundation.org, apopple@nvidia.com, axelrasmussen@google.com, baohua@kernel.org, baolin.wang@linux.alibaba.com, bhe@redhat.com, byungchul@sk.com, cgroups@vger.kernel.org, chengming.zhou@linux.dev, chrisl@kernel.org, corbet@lwn.net, david@kernel.org, dev.jain@arm.com, gourry@gourry.net, hannes@cmpxchg.org, hughd@google.com, jannh@google.com, joshua.hahnjy@gmail.com, lance.yang@linux.dev, lenb@kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-pm@vger.kernel.org, lorenzo.stoakes@oracle.com, matthew.brost@intel.com, mhocko@suse.com, muchun.song@linux.dev, npache@redhat.com, nphamcs@gmail.com, pavel@kernel.org, peterx@redhat.com, peterz@infradead.org, pfalcato@suse.de, rafael@kernel.org, rakie.kim@sk.com, roman.gushchin@linux.dev, rppt@kernel.org, ryan.roberts@arm.com, shakeel.butt@linux.dev, shikemeng@huaweicloud.com, surenb@google.com, tglx@kernel.org, vbabka@suse.cz, weixugc@google.com, ying.huang@linux.alibaba.com, yosry.ahmed@linux.dev, yuanchu@google.com, zhengqi.arch@bytedance.com, ziy@nvidia.com, kernel-team@meta.com, riel@surriel.com, haowenchao22@gmail.com
Subject: [PATCH v6 08/22] zswap: prepare zswap for swap virtualization
Date: Tue, 5 May 2026 08:38:37 -0700
Message-ID: <20260505153854.1612033-9-nphamcs@gmail.com>
X-Mailer: git-send-email 2.52.0
In-Reply-To: <20260505153854.1612033-1-nphamcs@gmail.com>
References: <20260505153854.1612033-1-nphamcs@gmail.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The zswap tree code, specifically the range partition logic, can no
longer easily be reused for the new virtual swap space design. Use a
simple unified zswap tree in the new implementation for now.

Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
 include/linux/zswap.h |  7 -----
 mm/swapfile.c         |  9 +-----
 mm/zswap.c            | 69 +++++++------------------------------------
 3 files changed, 11 insertions(+), 74 deletions(-)

diff --git a/include/linux/zswap.h b/include/linux/zswap.h
index 30c193a1207e..1a04caf283dc 100644
--- a/include/linux/zswap.h
+++ b/include/linux/zswap.h
@@ -28,8 +28,6 @@ unsigned long zswap_total_pages(void);
 bool zswap_store(struct folio *folio);
 int zswap_load(struct folio *folio);
 void zswap_invalidate(swp_entry_t swp);
-int zswap_swapon(int type, unsigned long nr_pages);
-void zswap_swapoff(int type);
 void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg);
 void zswap_lruvec_state_init(struct lruvec *lruvec);
 void zswap_folio_swapin(struct folio *folio);
@@ -50,11 +48,6 @@ static inline int zswap_load(struct folio *folio)
 }
 
 static inline void zswap_invalidate(swp_entry_t swp) {}
-static inline int zswap_swapon(int type, unsigned long nr_pages)
-{
-	return 0;
-}
-static inline void zswap_swapoff(int type) {}
 static inline void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg) {}
 static inline void zswap_lruvec_state_init(struct lruvec *lruvec) {}
 static inline void zswap_folio_swapin(struct folio *folio) {}
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 6b155471941c..0372062743ef 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -2972,7 +2972,6 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
 	spin_unlock(&p->lock);
 	spin_unlock(&swap_lock);
 	arch_swap_invalidate_area(p->type);
-	zswap_swapoff(p->type);
 	mutex_unlock(&swapon_mutex);
 	kfree(p->global_cluster);
 	p->global_cluster = NULL;
@@ -3615,10 +3614,6 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 		}
 	}
 
-	error = zswap_swapon(si->type, maxpages);
-	if (error)
-		goto bad_swap_unlock_inode;
-
 	/*
 	 * Flush any pending IO and dirty mappings before we start using this
 	 * swap device.
@@ -3627,7 +3622,7 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 	error = inode_drain_writes(inode);
 	if (error) {
 		inode->i_flags &= ~S_SWAPFILE;
-		goto free_swap_zswap;
+		goto bad_swap_unlock_inode;
 	}
 
 	mutex_lock(&swapon_mutex);
@@ -3650,8 +3645,6 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 	error = 0;
 	goto out;
 
-free_swap_zswap:
-	zswap_swapoff(si->type);
 bad_swap_unlock_inode:
 	inode_unlock(inode);
 bad_swap:
diff --git a/mm/zswap.c b/mm/zswap.c
index a5a3f068bd1a..f7313261673f 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -197,8 +197,6 @@ struct zswap_entry {
 	struct list_head lru;
 };
 
-static struct xarray *zswap_trees[MAX_SWAPFILES];
-static unsigned int nr_zswap_trees[MAX_SWAPFILES];
 
 /* RCU-protected iteration */
 static LIST_HEAD(zswap_pools);
@@ -225,45 +223,35 @@ static bool zswap_has_pool;
 * helpers and fwd declarations
 **********************************/
 
-/* One swap address space for each 64M swap space */
-#define ZSWAP_ADDRESS_SPACE_SHIFT 14
-#define ZSWAP_ADDRESS_SPACE_PAGES (1 << ZSWAP_ADDRESS_SPACE_SHIFT)
-static inline struct xarray *swap_zswap_tree(swp_entry_t swp)
-{
-	return &zswap_trees[swp_type(swp)][swp_offset(swp)
-		>> ZSWAP_ADDRESS_SPACE_SHIFT];
-}
+static DEFINE_XARRAY(zswap_tree);
+
+#define zswap_tree_index(entry) (entry.val)
 
 static inline void *zswap_entry_store(swp_entry_t swpentry,
 		struct zswap_entry *entry)
 {
-	struct xarray *tree = swap_zswap_tree(swpentry);
-	pgoff_t offset = swp_offset(swpentry);
+	pgoff_t offset = zswap_tree_index(swpentry);
 
-	return xa_store(tree, offset, entry, GFP_KERNEL);
+	return xa_store(&zswap_tree, offset, entry, GFP_KERNEL);
 }
 
 static inline void *zswap_entry_load(swp_entry_t swpentry)
 {
-	struct xarray *tree = swap_zswap_tree(swpentry);
-	pgoff_t offset = swp_offset(swpentry);
+	pgoff_t offset = zswap_tree_index(swpentry);
 
-	return xa_load(tree, offset);
+	return xa_load(&zswap_tree, offset);
 }
 
 static inline void *zswap_entry_erase(swp_entry_t swpentry)
 {
-	struct xarray *tree = swap_zswap_tree(swpentry);
-	pgoff_t offset = swp_offset(swpentry);
+	pgoff_t offset = zswap_tree_index(swpentry);
 
-	return xa_erase(tree, offset);
+	return xa_erase(&zswap_tree, offset);
 }
 
 static inline bool zswap_empty(swp_entry_t swpentry)
 {
-	struct xarray *tree = swap_zswap_tree(swpentry);
-
-	return xa_empty(tree);
+	return xa_empty(&zswap_tree);
 }
 
 #define zswap_pool_debug(msg, p)					\
@@ -1691,43 +1679,6 @@ void zswap_invalidate(swp_entry_t swp)
 	zswap_entry_free(entry);
 }
 
-int zswap_swapon(int type, unsigned long nr_pages)
-{
-	struct xarray *trees, *tree;
-	unsigned int nr, i;
-
-	nr = DIV_ROUND_UP(nr_pages, ZSWAP_ADDRESS_SPACE_PAGES);
-	trees = kvcalloc(nr, sizeof(*tree), GFP_KERNEL);
-	if (!trees) {
-		pr_err("alloc failed, zswap disabled for swap type %d\n", type);
-		return -ENOMEM;
-	}
-
-	for (i = 0; i < nr; i++)
-		xa_init(trees + i);
-
-	nr_zswap_trees[type] = nr;
-	zswap_trees[type] = trees;
-	return 0;
-}
-
-void zswap_swapoff(int type)
-{
-	struct xarray *trees = zswap_trees[type];
-	unsigned int i;
-
-	if (!trees)
-		return;
-
-	/* try_to_unuse() invalidated all the entries already */
-	for (i = 0; i < nr_zswap_trees[type]; i++)
-		WARN_ON_ONCE(!xa_empty(trees + i));
-
-	kvfree(trees);
-	nr_zswap_trees[type] = 0;
-	zswap_trees[type] = NULL;
-}
-
 /*********************************
 * debugfs functions
 **********************************/
-- 
2.52.0