linux-mm.kvack.org archive mirror
* [PATCH v4 00/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool
@ 2024-07-29 11:25 alexs
  2024-07-29 11:25 ` [PATCH v4 01/22] " alexs
                   ` (22 more replies)
  0 siblings, 23 replies; 50+ messages in thread
From: alexs @ 2024-07-29 11:25 UTC (permalink / raw)
  To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs
  Cc: Alex Shi

From: Alex Shi <alexs@kernel.org>

According to Matthew's plan, the page descriptor will eventually be replaced
by an 8-byte mem_desc:
https://lore.kernel.org/lkml/YvV1KTyzZ+Jrtj9x@casper.infradead.org/

Here is an implementation for zsmalloc that replaces the page descriptor
with 'zpdesc'. It still overlays struct page for now, but it is a step
toward that destination.

The struct is named zpdesc rather than zsdesc because there are still 3
zpools under zswap: zbud, z3fold and zsmalloc (z3fold may be removed
soon), and zpdesc can easily be extended to the other zswap.zpool
backends as needed.

All zswap.zpools use single pages, since they are typically used under
memory pressure. So converting via the folio family of helpers is better
than the page variants, as it saves compound_head() checks.

For now, all zpools use some struct page members: page.flags for
PG_private/PG_locked, plus list_head lru and page.mapping for page
migration.

This patchset does not increase the descriptor size nor introduce any
functional changes, and it shrinks zsmalloc.o by about 122 Kbytes.
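
The overlay technique this series relies on can be sketched in plain C: a
smaller, purpose-named struct is laid over the existing one, with every
shared field's offset verified at compile time. The field names below are
illustrative stand-ins, not the real struct page layout:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for struct page (hypothetical layout). */
struct fake_page {
	unsigned long flags;
	unsigned long field_a;	/* e.g. page->private */
	unsigned long field_b;	/* e.g. page->index */
};

/* Typed view over the same memory, like struct zpdesc over struct page. */
struct fake_desc {
	unsigned long flags;
	unsigned long zspage;	/* overlays field_a */
	unsigned long handle;	/* overlays field_b */
};

/* Compile-time layout checks, mirroring the ZPDESC_MATCH() idea. */
#define DESC_MATCH(pg, zp) \
	static_assert(offsetof(struct fake_page, pg) == \
		      offsetof(struct fake_desc, zp), "layout mismatch")
DESC_MATCH(flags, flags);
DESC_MATCH(field_a, zspage);
DESC_MATCH(field_b, handle);
#undef DESC_MATCH
static_assert(sizeof(struct fake_desc) <= sizeof(struct fake_page),
	      "descriptor must not outgrow the page struct");

/* Conversions are plain pointer casts, as zpdesc_page()/page_zpdesc() are. */
static inline struct fake_desc *page_to_desc(struct fake_page *p)
{
	return (struct fake_desc *)p;
}

static inline struct fake_page *desc_to_page(struct fake_desc *d)
{
	return (struct fake_page *)d;
}
```

Until struct page actually shrinks to a mem_desc, the descriptor is only a
typed view over the same memory; no layout changes.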

Thanks
Alex

---
v3->v4:
- rebase on akpm/mm-unstable Jul 21
- fixed a build warning reported by LKP
- Add a comment update for struct page to zpdesc change

v2->v3:
- Fix LKP reported build issue
- Update the Usage of struct zpdesc fields.
- Rebase onto latest mm-unstable commit 2073cda629a4

v1->v2: 
- Take Yosry and Yoo's suggestion to add more members in zpdesc,
- Rebase on latest mm-unstable commit 31334cf98dbd
---

Alex Shi (11):
  mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool
  mm/zsmalloc: use zpdesc in trylock_zspage/lock_zspage
  mm/zsmalloc: convert create_page_chain() and its users to use zpdesc
  mm/zsmalloc: rename reset_page to reset_zpdesc and use zpdesc in it
  mm/zsmalloc: convert SetZsPageMovable and remove unused funcs
  mm/zsmalloc: convert get/set_first_obj_offset() to take zpdesc
  mm/zsmalloc: introduce __zpdesc_clear_movable
  mm/zsmalloc: introduce __zpdesc_clear_zsmalloc
  mm/zsmalloc: introduce __zpdesc_set_zsmalloc()
  mm/zsmalloc: fix build warning from lkp testing
  mm/zsmalloc: update comments for page->zpdesc changes

Hyeonggon Yoo (11):
  mm/zsmalloc: convert __zs_map_object/__zs_unmap_object to use zpdesc
  mm/zsmalloc: add and use pfn/zpdesc seeking funcs
  mm/zsmalloc: convert obj_malloc() to use zpdesc
  mm/zsmalloc: convert obj_allocated() and related helpers to use zpdesc
  mm/zsmalloc: convert init_zspage() to use zpdesc
  mm/zsmalloc: convert obj_to_page() and zs_free() to use zpdesc
  mm/zsmalloc: add zpdesc_is_isolated/zpdesc_zone helper for
    zs_page_migrate
  mm/zsmalloc: convert __free_zspage() to use zpdesc
  mm/zsmalloc: convert location_to_obj() to take zpdesc
  mm/zsmalloc: convert migrate_zspage() to use zpdesc
  mm/zsmalloc: convert get_zspage() to take zpdesc

 mm/zpdesc.h   | 146 ++++++++++++++++
 mm/zsmalloc.c | 466 +++++++++++++++++++++++++++-----------------------
 2 files changed, 401 insertions(+), 211 deletions(-)
 create mode 100644 mm/zpdesc.h

-- 
2.43.0



^ permalink raw reply	[flat|nested] 50+ messages in thread

* [PATCH v4 01/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool
  2024-07-29 11:25 [PATCH v4 00/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool alexs
@ 2024-07-29 11:25 ` alexs
  2024-08-02 18:52   ` Vishal Moola
  2024-08-02 19:30   ` Matthew Wilcox
  2024-07-29 11:25 ` [PATCH v4 02/22] mm/zsmalloc: use zpdesc in trylock_zspage/lock_zspage alexs
                   ` (21 subsequent siblings)
  22 siblings, 2 replies; 50+ messages in thread
From: alexs @ 2024-07-29 11:25 UTC (permalink / raw)
  To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs
  Cc: Alex Shi

From: Alex Shi <alexs@kernel.org>

This first patch introduces the new memory descriptor zpdesc and renames
zspage.first_page to zspage.first_zpdesc; no functional change.

PG_owner_priv_1 is dropped, since the huge-page marker was moved into
zspage by commit a41ec880aa7b ("zsmalloc: move huge compressed obj from
page to zspage").

The memcg_data member is kept, since as Yosry pointed out:
"When the pages are freed, put_page() -> folio_put() -> __folio_put() will call
mem_cgroup_uncharge(). The latter will call folio_memcg() (which reads
folio->memcg_data) to figure out if uncharging needs to be done.

There are also other similar code paths that will check
folio->memcg_data. It is currently expected to be present for all
folios. So until we have custom code paths per-folio type for
allocation/freeing/etc, we need to keep folio->memcg_data present and
properly initialized."
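
The zpdesc_page()/page_zpdesc() macros added in the diff below use C11
_Generic so that converting a const pointer yields a const pointer. A
standalone illustration of that pattern, using hypothetical two-field types:

```c
#include <assert.h>

struct pg { int x; };
struct desc { int x; };

/*
 * _Generic selects the cast based on the argument's qualified type,
 * so constness survives the conversion, as in zpdesc_page().
 */
#define desc_pg(d)	(_Generic((d),				\
	const struct desc *:	(const struct pg *)(d),		\
	struct desc *:		(struct pg *)(d)))

#define pg_desc(p)	(_Generic((p),				\
	const struct pg *:	(const struct desc *)(p),	\
	struct pg *:		(struct desc *)(p)))
```

A plain cast macro would silently discard const; the _Generic form makes
that a compile error instead.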

Originally-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Alex Shi <alexs@kernel.org>
---
 mm/zpdesc.h   | 66 +++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/zsmalloc.c | 21 ++++++++--------
 2 files changed, 76 insertions(+), 11 deletions(-)
 create mode 100644 mm/zpdesc.h

diff --git a/mm/zpdesc.h b/mm/zpdesc.h
new file mode 100644
index 000000000000..2dbef231f616
--- /dev/null
+++ b/mm/zpdesc.h
@@ -0,0 +1,66 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* zpdesc.h: zswap.zpool memory descriptor
+ *
+ * Written by Alex Shi <alexs@kernel.org>
+ *	      Hyeonggon Yoo <42.hyeyoo@gmail.com>
+ */
+#ifndef __MM_ZPDESC_H__
+#define __MM_ZPDESC_H__
+
+/*
+ * struct zpdesc -	Memory descriptor for zpool memory, now is for zsmalloc
+ * @flags:		Page flags, PG_private: identifies the first component page
+ * @lru:		Indirectly used by page migration
+ * @mops:		Used by page migration
+ * @next:		Next zpdesc in a zspage in zsmalloc zpool
+ * @handle:		For huge zspage in zsmalloc zpool
+ * @zspage:		Pointer to zspage in zsmalloc
+ * @memcg_data:		Memory Control Group data.
+ *
+ * This struct overlays struct page for now. Do not modify without a good
+ * understanding of the issues.
+ */
+struct zpdesc {
+	unsigned long flags;
+	struct list_head lru;
+	struct movable_operations *mops;
+	union {
+		/* Next zpdescs in a zspage in zsmalloc zpool */
+		struct zpdesc *next;
+		/* For huge zspage in zsmalloc zpool */
+		unsigned long handle;
+	};
+	struct zspage *zspage;
+	unsigned long _zp_pad_1;
+#ifdef CONFIG_MEMCG
+	unsigned long memcg_data;
+#endif
+};
+#define ZPDESC_MATCH(pg, zp) \
+	static_assert(offsetof(struct page, pg) == offsetof(struct zpdesc, zp))
+
+ZPDESC_MATCH(flags, flags);
+ZPDESC_MATCH(lru, lru);
+ZPDESC_MATCH(mapping, mops);
+ZPDESC_MATCH(index, next);
+ZPDESC_MATCH(index, handle);
+ZPDESC_MATCH(private, zspage);
+#ifdef CONFIG_MEMCG
+ZPDESC_MATCH(memcg_data, memcg_data);
+#endif
+#undef ZPDESC_MATCH
+static_assert(sizeof(struct zpdesc) <= sizeof(struct page));
+
+#define zpdesc_page(zp)			(_Generic((zp),			\
+	const struct zpdesc *:		(const struct page *)(zp),	\
+	struct zpdesc *:		(struct page *)(zp)))
+
+#define zpdesc_folio(zp)		(_Generic((zp),			\
+	const struct zpdesc *:		(const struct folio *)(zp),	\
+	struct zpdesc *:		(struct folio *)(zp)))
+
+#define page_zpdesc(p)			(_Generic((p),			\
+	const struct page *:		(const struct zpdesc *)(p),	\
+	struct page *:			(struct zpdesc *)(p)))
+
+#endif
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 5d6581ab7c07..a532851025f9 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -13,20 +13,18 @@
 
 /*
  * Following is how we use various fields and flags of underlying
- * struct page(s) to form a zspage.
+ * struct zpdesc(page) to form a zspage.
  *
- * Usage of struct page fields:
- *	page->private: points to zspage
- *	page->index: links together all component pages of a zspage
+ * Usage of struct zpdesc fields:
+ *	zpdesc->zspage: points to zspage
+ *	zpdesc->next: links together all component pages of a zspage
  *		For the huge page, this is always 0, so we use this field
  *		to store handle.
  *	page->page_type: PG_zsmalloc, lower 16 bit locate the first object
  *		offset in a subpage of a zspage
  *
- * Usage of struct page flags:
+ * Usage of struct zpdesc(page) flags:
  *	PG_private: identifies the first component page
- *	PG_owner_priv_1: identifies the huge component page
- *
  */
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
@@ -64,6 +62,7 @@
 #include <linux/pagemap.h>
 #include <linux/fs.h>
 #include <linux/local_lock.h>
+#include "zpdesc.h"
 
 #define ZSPAGE_MAGIC	0x58
 
@@ -253,7 +252,7 @@ struct zspage {
 	};
 	unsigned int inuse;
 	unsigned int freeobj;
-	struct page *first_page;
+	struct zpdesc *first_zpdesc;
 	struct list_head list; /* fullness list */
 	struct zs_pool *pool;
 	rwlock_t lock;
@@ -448,7 +447,7 @@ static inline void mod_zspage_inuse(struct zspage *zspage, int val)
 
 static inline struct page *get_first_page(struct zspage *zspage)
 {
-	struct page *first_page = zspage->first_page;
+	struct page *first_page = zpdesc_page(zspage->first_zpdesc);
 
 	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
 	return first_page;
@@ -948,7 +947,7 @@ static void create_page_chain(struct size_class *class, struct zspage *zspage,
 		set_page_private(page, (unsigned long)zspage);
 		page->index = 0;
 		if (i == 0) {
-			zspage->first_page = page;
+			zspage->first_zpdesc = page_zpdesc(page);
 			SetPagePrivate(page);
 			if (unlikely(class->objs_per_zspage == 1 &&
 					class->pages_per_zspage == 1))
@@ -1324,7 +1323,7 @@ static unsigned long obj_malloc(struct zs_pool *pool,
 		link->handle = handle | OBJ_ALLOCATED_TAG;
 	else
 		/* record handle to page->index */
-		zspage->first_page->index = handle | OBJ_ALLOCATED_TAG;
+		zspage->first_zpdesc->handle = handle | OBJ_ALLOCATED_TAG;
 
 	kunmap_atomic(vaddr);
 	mod_zspage_inuse(zspage, 1);
-- 
2.43.0




* [PATCH v4 02/22] mm/zsmalloc: use zpdesc in trylock_zspage/lock_zspage
  2024-07-29 11:25 [PATCH v4 00/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool alexs
  2024-07-29 11:25 ` [PATCH v4 01/22] " alexs
@ 2024-07-29 11:25 ` alexs
  2024-08-02 19:02   ` Vishal Moola
  2024-07-29 11:25 ` [PATCH v4 03/22] mm/zsmalloc: convert __zs_map_object/__zs_unmap_object to use zpdesc alexs
                   ` (20 subsequent siblings)
  22 siblings, 1 reply; 50+ messages in thread
From: alexs @ 2024-07-29 11:25 UTC (permalink / raw)
  To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs
  Cc: Alex Shi

From: Alex Shi <alexs@kernel.org>

To use zpdesc in the trylock_zspage()/lock_zspage() functions, add a
couple of helpers: zpdesc_lock()/zpdesc_unlock()/zpdesc_trylock()/
zpdesc_wait_locked() and zpdesc_get()/zpdesc_put().

These helpers use the folio functions internally, for two reasons: first,
each zswap.zpool only gets single pages, so using folios saves some
compound_head() checking; second, folio_put() bypasses the devmap
checking that we don't need.
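
The trylock_zspage() conversion in the diff below keeps its all-or-nothing
locking walk: try-lock each page in the chain and, on any failure, unlock
everything taken so far. A userspace sketch of that pattern with pthread
mutexes (names are illustrative, not kernel API):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

struct node {
	pthread_mutex_t lock;
	struct node *next;
};

/*
 * Mirror of the trylock_zspage() shape: returns 1 with the whole chain
 * locked, or 0 with nothing left locked.
 */
static int trylock_chain(struct node *head)
{
	struct node *cursor, *fail;

	for (cursor = head; cursor != NULL; cursor = cursor->next) {
		if (pthread_mutex_trylock(&cursor->lock)) {
			fail = cursor;
			goto unlock;
		}
	}
	return 1;

unlock:
	/* Roll back: unlock every node locked before the failure point. */
	for (cursor = head; cursor != fail; cursor = cursor->next)
		pthread_mutex_unlock(&cursor->lock);
	return 0;
}

static void unlock_chain(struct node *head)
{
	for (struct node *cursor = head; cursor; cursor = cursor->next)
		pthread_mutex_unlock(&cursor->lock);
}
```

The rollback loop reuses the same chain walk and stops at the node that
failed, which is exactly how the kernel version avoids unlocking a page it
never locked.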

Originally-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Alex Shi <alexs@kernel.org>
---
 mm/zpdesc.h   | 30 ++++++++++++++++++++++++
 mm/zsmalloc.c | 64 ++++++++++++++++++++++++++++++++++-----------------
 2 files changed, 73 insertions(+), 21 deletions(-)

diff --git a/mm/zpdesc.h b/mm/zpdesc.h
index 2dbef231f616..3b04197cec9d 100644
--- a/mm/zpdesc.h
+++ b/mm/zpdesc.h
@@ -63,4 +63,34 @@ static_assert(sizeof(struct zpdesc) <= sizeof(struct page));
 	const struct page *:		(const struct zpdesc *)(p),	\
 	struct page *:			(struct zpdesc *)(p)))
 
+static inline void zpdesc_lock(struct zpdesc *zpdesc)
+{
+	folio_lock(zpdesc_folio(zpdesc));
+}
+
+static inline bool zpdesc_trylock(struct zpdesc *zpdesc)
+{
+	return folio_trylock(zpdesc_folio(zpdesc));
+}
+
+static inline void zpdesc_unlock(struct zpdesc *zpdesc)
+{
+	folio_unlock(zpdesc_folio(zpdesc));
+}
+
+static inline void zpdesc_wait_locked(struct zpdesc *zpdesc)
+{
+	folio_wait_locked(zpdesc_folio(zpdesc));
+}
+
+static inline void zpdesc_get(struct zpdesc *zpdesc)
+{
+	folio_get(zpdesc_folio(zpdesc));
+}
+
+static inline void zpdesc_put(struct zpdesc *zpdesc)
+{
+	folio_put(zpdesc_folio(zpdesc));
+}
+
 #endif
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index a532851025f9..243677a9c6d2 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -433,13 +433,17 @@ static __maybe_unused int is_first_page(struct page *page)
 	return PagePrivate(page);
 }
 
+static int is_first_zpdesc(struct zpdesc *zpdesc)
+{
+	return PagePrivate(zpdesc_page(zpdesc));
+}
+
 /* Protected by class->lock */
 static inline int get_zspage_inuse(struct zspage *zspage)
 {
 	return zspage->inuse;
 }
 
-
 static inline void mod_zspage_inuse(struct zspage *zspage, int val)
 {
 	zspage->inuse += val;
@@ -453,6 +457,14 @@ static inline struct page *get_first_page(struct zspage *zspage)
 	return first_page;
 }
 
+static struct zpdesc *get_first_zpdesc(struct zspage *zspage)
+{
+	struct zpdesc *first_zpdesc = zspage->first_zpdesc;
+
+	VM_BUG_ON_PAGE(!is_first_zpdesc(first_zpdesc), zpdesc_page(first_zpdesc));
+	return first_zpdesc;
+}
+
 #define FIRST_OBJ_PAGE_TYPE_MASK	0xffff
 
 static inline void reset_first_obj_offset(struct page *page)
@@ -745,6 +757,16 @@ static struct page *get_next_page(struct page *page)
 	return (struct page *)page->index;
 }
 
+static struct zpdesc *get_next_zpdesc(struct zpdesc *zpdesc)
+{
+	struct zspage *zspage = get_zspage(zpdesc_page(zpdesc));
+
+	if (unlikely(ZsHugePage(zspage)))
+		return NULL;
+
+	return zpdesc->next;
+}
+
 /**
  * obj_to_location - get (<page>, <obj_idx>) from encoded object value
  * @obj: the encoded object value
@@ -815,11 +837,11 @@ static void reset_page(struct page *page)
 
 static int trylock_zspage(struct zspage *zspage)
 {
-	struct page *cursor, *fail;
+	struct zpdesc *cursor, *fail;
 
-	for (cursor = get_first_page(zspage); cursor != NULL; cursor =
-					get_next_page(cursor)) {
-		if (!trylock_page(cursor)) {
+	for (cursor = get_first_zpdesc(zspage); cursor != NULL; cursor =
+					get_next_zpdesc(cursor)) {
+		if (!zpdesc_trylock(cursor)) {
 			fail = cursor;
 			goto unlock;
 		}
@@ -827,9 +849,9 @@ static int trylock_zspage(struct zspage *zspage)
 
 	return 1;
 unlock:
-	for (cursor = get_first_page(zspage); cursor != fail; cursor =
-					get_next_page(cursor))
-		unlock_page(cursor);
+	for (cursor = get_first_zpdesc(zspage); cursor != fail; cursor =
+					get_next_zpdesc(cursor))
+		zpdesc_unlock(cursor);
 
 	return 0;
 }
@@ -1658,7 +1680,7 @@ static int putback_zspage(struct size_class *class, struct zspage *zspage)
  */
 static void lock_zspage(struct zspage *zspage)
 {
-	struct page *curr_page, *page;
+	struct zpdesc *curr_zpdesc, *zpdesc;
 
 	/*
 	 * Pages we haven't locked yet can be migrated off the list while we're
@@ -1670,24 +1692,24 @@ static void lock_zspage(struct zspage *zspage)
 	 */
 	while (1) {
 		migrate_read_lock(zspage);
-		page = get_first_page(zspage);
-		if (trylock_page(page))
+		zpdesc = get_first_zpdesc(zspage);
+		if (zpdesc_trylock(zpdesc))
 			break;
-		get_page(page);
+		zpdesc_get(zpdesc);
 		migrate_read_unlock(zspage);
-		wait_on_page_locked(page);
-		put_page(page);
+		zpdesc_wait_locked(zpdesc);
+		zpdesc_put(zpdesc);
 	}
 
-	curr_page = page;
-	while ((page = get_next_page(curr_page))) {
-		if (trylock_page(page)) {
-			curr_page = page;
+	curr_zpdesc = zpdesc;
+	while ((zpdesc = get_next_zpdesc(curr_zpdesc))) {
+		if (zpdesc_trylock(zpdesc)) {
+			curr_zpdesc = zpdesc;
 		} else {
-			get_page(page);
+			zpdesc_get(zpdesc);
 			migrate_read_unlock(zspage);
-			wait_on_page_locked(page);
-			put_page(page);
+			zpdesc_wait_locked(zpdesc);
+			zpdesc_put(zpdesc);
 			migrate_read_lock(zspage);
 		}
 	}
-- 
2.43.0




* [PATCH v4 03/22] mm/zsmalloc: convert __zs_map_object/__zs_unmap_object to use zpdesc
  2024-07-29 11:25 [PATCH v4 00/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool alexs
  2024-07-29 11:25 ` [PATCH v4 01/22] " alexs
  2024-07-29 11:25 ` [PATCH v4 02/22] mm/zsmalloc: use zpdesc in trylock_zspage/lock_zspage alexs
@ 2024-07-29 11:25 ` alexs
  2024-07-30  9:38   ` Sergey Senozhatsky
  2024-07-29 11:25 ` [PATCH v4 04/22] mm/zsmalloc: add and use pfn/zpdesc seeking funcs alexs
                   ` (19 subsequent siblings)
  22 siblings, 1 reply; 50+ messages in thread
From: alexs @ 2024-07-29 11:25 UTC (permalink / raw)
  To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs
  Cc: Alex Shi

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>

These two functions take a pointer to an array of struct page. Introduce
zpdesc_kmap_atomic() and make __zs_{map,unmap}_object() take a pointer
to an array of zpdesc instead of page.

Add type casts at the call sites for now; they will be removed later in
the series.
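
For reference, __zs_map_object() handles an object that straddles a page
boundary by copying sizes[0] = PAGE_SIZE - off bytes from the first page
and the remainder from the second into one contiguous buffer. A minimal
userspace sketch of that split copy (PG_SIZE stands in for PAGE_SIZE):

```c
#include <assert.h>
#include <string.h>

#define PG_SIZE 4096	/* stand-in for PAGE_SIZE */

/*
 * Copy 'size' bytes starting at offset 'off' in pages[0], spilling into
 * pages[1], to the contiguous buffer 'buf' (mirrors __zs_map_object()).
 */
static void map_split_object(char *buf, char *pages[2], int off, int size)
{
	int sizes[2];

	sizes[0] = PG_SIZE - off;	/* bytes left in the first page */
	sizes[1] = size - sizes[0];	/* remainder in the second page */

	memcpy(buf, pages[0] + off, sizes[0]);
	memcpy(buf + sizes[0], pages[1], sizes[1]);
}
```

__zs_unmap_object() is the same arithmetic with source and destination
swapped.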

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Alex Shi <alexs@kernel.org>
---
 mm/zsmalloc.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 243677a9c6d2..68a39c233d34 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -243,6 +243,11 @@ struct zs_pool {
 	atomic_t compaction_in_progress;
 };
 
+static inline void *zpdesc_kmap_atomic(struct zpdesc *zpdesc)
+{
+	return kmap_atomic(zpdesc_page(zpdesc));
+}
+
 struct zspage {
 	struct {
 		unsigned int huge:HUGE_BITS;
@@ -1061,7 +1066,7 @@ static inline void __zs_cpu_down(struct mapping_area *area)
 }
 
 static void *__zs_map_object(struct mapping_area *area,
-			struct page *pages[2], int off, int size)
+			struct zpdesc *zpdescs[2], int off, int size)
 {
 	int sizes[2];
 	void *addr;
@@ -1078,10 +1083,10 @@ static void *__zs_map_object(struct mapping_area *area,
 	sizes[1] = size - sizes[0];
 
 	/* copy object to per-cpu buffer */
-	addr = kmap_atomic(pages[0]);
+	addr = zpdesc_kmap_atomic(zpdescs[0]);
 	memcpy(buf, addr + off, sizes[0]);
 	kunmap_atomic(addr);
-	addr = kmap_atomic(pages[1]);
+	addr = zpdesc_kmap_atomic(zpdescs[1]);
 	memcpy(buf + sizes[0], addr, sizes[1]);
 	kunmap_atomic(addr);
 out:
@@ -1089,7 +1094,7 @@ static void *__zs_map_object(struct mapping_area *area,
 }
 
 static void __zs_unmap_object(struct mapping_area *area,
-			struct page *pages[2], int off, int size)
+			struct zpdesc *zpdescs[2], int off, int size)
 {
 	int sizes[2];
 	void *addr;
@@ -1108,10 +1113,10 @@ static void __zs_unmap_object(struct mapping_area *area,
 	sizes[1] = size - sizes[0];
 
 	/* copy per-cpu buffer to object */
-	addr = kmap_atomic(pages[0]);
+	addr = zpdesc_kmap_atomic(zpdescs[0]);
 	memcpy(addr + off, buf, sizes[0]);
 	kunmap_atomic(addr);
-	addr = kmap_atomic(pages[1]);
+	addr = zpdesc_kmap_atomic(zpdescs[1]);
 	memcpy(addr, buf + sizes[0], sizes[1]);
 	kunmap_atomic(addr);
 
@@ -1252,7 +1257,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	pages[1] = get_next_page(page);
 	BUG_ON(!pages[1]);
 
-	ret = __zs_map_object(area, pages, off, class->size);
+	ret = __zs_map_object(area, (struct zpdesc **)pages, off, class->size);
 out:
 	if (likely(!ZsHugePage(zspage)))
 		ret += ZS_HANDLE_SIZE;
@@ -1287,7 +1292,7 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 		pages[1] = get_next_page(page);
 		BUG_ON(!pages[1]);
 
-		__zs_unmap_object(area, pages, off, class->size);
+		__zs_unmap_object(area, (struct zpdesc **)pages, off, class->size);
 	}
 	local_unlock(&zs_map_area.lock);
 
-- 
2.43.0




* [PATCH v4 04/22] mm/zsmalloc: add and use pfn/zpdesc seeking funcs
  2024-07-29 11:25 [PATCH v4 00/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool alexs
                   ` (2 preceding siblings ...)
  2024-07-29 11:25 ` [PATCH v4 03/22] mm/zsmalloc: convert __zs_map_object/__zs_unmap_object to use zpdesc alexs
@ 2024-07-29 11:25 ` alexs
  2024-07-29 11:25 ` [PATCH v4 05/22] mm/zsmalloc: convert obj_malloc() to use zpdesc alexs
                   ` (18 subsequent siblings)
  22 siblings, 0 replies; 50+ messages in thread
From: alexs @ 2024-07-29 11:25 UTC (permalink / raw)
  To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs
  Cc: Alex Shi

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Add the pfn_zpdesc() conversion, convert obj_to_location() to take
zpdesc, and convert its users to use zpdesc as well.
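
obj_to_location() unpacks a handle that stores the PFN in the high bits
and the object index in the low OBJ_INDEX_BITS. A minimal encode/decode
sketch (the bit width here is illustrative; zsmalloc computes the real
one from the handle size):

```c
#include <assert.h>

/* Illustrative width; zsmalloc derives the actual value at init time. */
#define OBJ_INDEX_BITS	16
#define OBJ_INDEX_MASK	((1UL << OBJ_INDEX_BITS) - 1)

/* Pack a page frame number and an in-page object index into one word. */
static unsigned long encode_obj(unsigned long pfn, unsigned int obj_idx)
{
	return (pfn << OBJ_INDEX_BITS) | (obj_idx & OBJ_INDEX_MASK);
}

/* Mirror of obj_to_location(): split the word back into its parts. */
static void decode_obj(unsigned long obj, unsigned long *pfn,
		       unsigned int *obj_idx)
{
	*pfn = obj >> OBJ_INDEX_BITS;
	*obj_idx = obj & OBJ_INDEX_MASK;
}
```

The zpdesc conversion only changes what the decoded PFN is translated
into (pfn_zpdesc() instead of pfn_to_page()); the bit layout is untouched.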

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Alex Shi <alexs@kernel.org>
---
 mm/zpdesc.h   |  9 +++++++
 mm/zsmalloc.c | 75 ++++++++++++++++++++++++++-------------------------
 2 files changed, 47 insertions(+), 37 deletions(-)

diff --git a/mm/zpdesc.h b/mm/zpdesc.h
index 3b04197cec9d..79ec40b03956 100644
--- a/mm/zpdesc.h
+++ b/mm/zpdesc.h
@@ -93,4 +93,13 @@ static inline void zpdesc_put(struct zpdesc *zpdesc)
 	folio_put(zpdesc_folio(zpdesc));
 }
 
+static inline unsigned long zpdesc_pfn(struct zpdesc *zpdesc)
+{
+	return page_to_pfn(zpdesc_page(zpdesc));
+}
+
+static inline struct zpdesc *pfn_zpdesc(unsigned long pfn)
+{
+	return page_zpdesc(pfn_to_page(pfn));
+}
 #endif
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 68a39c233d34..149fe2b332cb 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -773,15 +773,15 @@ static struct zpdesc *get_next_zpdesc(struct zpdesc *zpdesc)
 }
 
 /**
- * obj_to_location - get (<page>, <obj_idx>) from encoded object value
+ * obj_to_location - get (<zpdesc>, <obj_idx>) from encoded object value
  * @obj: the encoded object value
- * @page: page object resides in zspage
+ * @zpdesc: zpdesc object resides in zspage
  * @obj_idx: object index
  */
-static void obj_to_location(unsigned long obj, struct page **page,
+static void obj_to_location(unsigned long obj, struct zpdesc **zpdesc,
 				unsigned int *obj_idx)
 {
-	*page = pfn_to_page(obj >> OBJ_INDEX_BITS);
+	*zpdesc = pfn_zpdesc(obj >> OBJ_INDEX_BITS);
 	*obj_idx = (obj & OBJ_INDEX_MASK);
 }
 
@@ -1208,13 +1208,13 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 			enum zs_mapmode mm)
 {
 	struct zspage *zspage;
-	struct page *page;
+	struct zpdesc *zpdesc;
 	unsigned long obj, off;
 	unsigned int obj_idx;
 
 	struct size_class *class;
 	struct mapping_area *area;
-	struct page *pages[2];
+	struct zpdesc *zpdescs[2];
 	void *ret;
 
 	/*
@@ -1227,8 +1227,8 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	/* It guarantees it can get zspage from handle safely */
 	read_lock(&pool->migrate_lock);
 	obj = handle_to_obj(handle);
-	obj_to_location(obj, &page, &obj_idx);
-	zspage = get_zspage(page);
+	obj_to_location(obj, &zpdesc, &obj_idx);
+	zspage = get_zspage(zpdesc_page(zpdesc));
 
 	/*
 	 * migration cannot move any zpages in this zspage. Here, class->lock
@@ -1247,17 +1247,17 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	area->vm_mm = mm;
 	if (off + class->size <= PAGE_SIZE) {
 		/* this object is contained entirely within a page */
-		area->vm_addr = kmap_atomic(page);
+		area->vm_addr = zpdesc_kmap_atomic(zpdesc);
 		ret = area->vm_addr + off;
 		goto out;
 	}
 
 	/* this object spans two pages */
-	pages[0] = page;
-	pages[1] = get_next_page(page);
-	BUG_ON(!pages[1]);
+	zpdescs[0] = zpdesc;
+	zpdescs[1] = get_next_zpdesc(zpdesc);
+	BUG_ON(!zpdescs[1]);
 
-	ret = __zs_map_object(area, (struct zpdesc **)pages, off, class->size);
+	ret = __zs_map_object(area, zpdescs, off, class->size);
 out:
 	if (likely(!ZsHugePage(zspage)))
 		ret += ZS_HANDLE_SIZE;
@@ -1269,7 +1269,7 @@ EXPORT_SYMBOL_GPL(zs_map_object);
 void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 {
 	struct zspage *zspage;
-	struct page *page;
+	struct zpdesc *zpdesc;
 	unsigned long obj, off;
 	unsigned int obj_idx;
 
@@ -1277,8 +1277,8 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 	struct mapping_area *area;
 
 	obj = handle_to_obj(handle);
-	obj_to_location(obj, &page, &obj_idx);
-	zspage = get_zspage(page);
+	obj_to_location(obj, &zpdesc, &obj_idx);
+	zspage = get_zspage(zpdesc_page(zpdesc));
 	class = zspage_class(pool, zspage);
 	off = offset_in_page(class->size * obj_idx);
 
@@ -1286,13 +1286,13 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 	if (off + class->size <= PAGE_SIZE)
 		kunmap_atomic(area->vm_addr);
 	else {
-		struct page *pages[2];
+		struct zpdesc *zpdescs[2];
 
-		pages[0] = page;
-		pages[1] = get_next_page(page);
-		BUG_ON(!pages[1]);
+		zpdescs[0] = zpdesc;
+		zpdescs[1] = get_next_zpdesc(zpdesc);
+		BUG_ON(!zpdescs[1]);
 
-		__zs_unmap_object(area, (struct zpdesc **)pages, off, class->size);
+		__zs_unmap_object(area, zpdescs, off, class->size);
 	}
 	local_unlock(&zs_map_area.lock);
 
@@ -1434,23 +1434,24 @@ static void obj_free(int class_size, unsigned long obj)
 {
 	struct link_free *link;
 	struct zspage *zspage;
-	struct page *f_page;
+	struct zpdesc *f_zpdesc;
 	unsigned long f_offset;
 	unsigned int f_objidx;
 	void *vaddr;
 
-	obj_to_location(obj, &f_page, &f_objidx);
+
+	obj_to_location(obj, &f_zpdesc, &f_objidx);
 	f_offset = offset_in_page(class_size * f_objidx);
-	zspage = get_zspage(f_page);
+	zspage = get_zspage(zpdesc_page(f_zpdesc));
 
-	vaddr = kmap_atomic(f_page);
+	vaddr = zpdesc_kmap_atomic(f_zpdesc);
 	link = (struct link_free *)(vaddr + f_offset);
 
 	/* Insert this object in containing zspage's freelist */
 	if (likely(!ZsHugePage(zspage)))
 		link->next = get_freeobj(zspage) << OBJ_TAG_BITS;
 	else
-		f_page->index = 0;
+		f_zpdesc->next = NULL;
 	set_freeobj(zspage, f_objidx);
 
 	kunmap_atomic(vaddr);
@@ -1495,7 +1496,7 @@ EXPORT_SYMBOL_GPL(zs_free);
 static void zs_object_copy(struct size_class *class, unsigned long dst,
 				unsigned long src)
 {
-	struct page *s_page, *d_page;
+	struct zpdesc *s_zpdesc, *d_zpdesc;
 	unsigned int s_objidx, d_objidx;
 	unsigned long s_off, d_off;
 	void *s_addr, *d_addr;
@@ -1504,8 +1505,8 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
 
 	s_size = d_size = class->size;
 
-	obj_to_location(src, &s_page, &s_objidx);
-	obj_to_location(dst, &d_page, &d_objidx);
+	obj_to_location(src, &s_zpdesc, &s_objidx);
+	obj_to_location(dst, &d_zpdesc, &d_objidx);
 
 	s_off = offset_in_page(class->size * s_objidx);
 	d_off = offset_in_page(class->size * d_objidx);
@@ -1516,8 +1517,8 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
 	if (d_off + class->size > PAGE_SIZE)
 		d_size = PAGE_SIZE - d_off;
 
-	s_addr = kmap_atomic(s_page);
-	d_addr = kmap_atomic(d_page);
+	s_addr = zpdesc_kmap_atomic(s_zpdesc);
+	d_addr = zpdesc_kmap_atomic(d_zpdesc);
 
 	while (1) {
 		size = min(s_size, d_size);
@@ -1542,17 +1543,17 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
 		if (s_off >= PAGE_SIZE) {
 			kunmap_atomic(d_addr);
 			kunmap_atomic(s_addr);
-			s_page = get_next_page(s_page);
-			s_addr = kmap_atomic(s_page);
-			d_addr = kmap_atomic(d_page);
+			s_zpdesc = get_next_zpdesc(s_zpdesc);
+			s_addr = zpdesc_kmap_atomic(s_zpdesc);
+			d_addr = zpdesc_kmap_atomic(d_zpdesc);
 			s_size = class->size - written;
 			s_off = 0;
 		}
 
 		if (d_off >= PAGE_SIZE) {
 			kunmap_atomic(d_addr);
-			d_page = get_next_page(d_page);
-			d_addr = kmap_atomic(d_page);
+			d_zpdesc = get_next_zpdesc(d_zpdesc);
+			d_addr = zpdesc_kmap_atomic(d_zpdesc);
 			d_size = class->size - written;
 			d_off = 0;
 		}
@@ -1791,7 +1792,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	struct zs_pool *pool;
 	struct size_class *class;
 	struct zspage *zspage;
-	struct page *dummy;
+	struct zpdesc *dummy;
 	void *s_addr, *d_addr, *addr;
 	unsigned int offset;
 	unsigned long handle;
-- 
2.43.0




* [PATCH v4 05/22] mm/zsmalloc: convert obj_malloc() to use zpdesc
  2024-07-29 11:25 [PATCH v4 00/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool alexs
                   ` (3 preceding siblings ...)
  2024-07-29 11:25 ` [PATCH v4 04/22] mm/zsmalloc: add and use pfn/zpdesc seeking funcs alexs
@ 2024-07-29 11:25 ` alexs
  2024-07-29 11:25 ` [PATCH v4 06/22] mm/zsmalloc: convert create_page_chain() and its users " alexs
                   ` (17 subsequent siblings)
  22 siblings, 0 replies; 50+ messages in thread
From: alexs @ 2024-07-29 11:25 UTC (permalink / raw)
  To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs
  Cc: Alex Shi

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Use get_first_zpdesc()/get_next_zpdesc() to replace
get_first_page()/get_next_page(). No functional change.
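
obj_malloc() locates the object's page by pure arithmetic: offset =
obj * class->size, then nr_page = offset >> PAGE_SHIFT pages to walk
from the first page, and offset_in_page() for the in-page offset. A
sketch of that arithmetic (4 KiB pages assumed):

```c
#include <assert.h>

#define PG_SHIFT 12
#define PG_SIZE  (1UL << PG_SHIFT)

/*
 * Given a free-object index and the class object size, compute which
 * page of the zspage holds it and the offset within that page
 * (mirrors the arithmetic at the top of obj_malloc()).
 */
static void obj_position(unsigned int obj, unsigned int class_size,
			 unsigned int *nr_page, unsigned int *m_offset)
{
	unsigned long offset = (unsigned long)obj * class_size;

	*nr_page = offset >> PG_SHIFT;		/* pages to skip */
	*m_offset = offset & (PG_SIZE - 1);	/* offset in that page */
}
```

The patch only changes the walk itself from get_next_page() to
get_next_zpdesc(); the index math is identical.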

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Alex Shi <alexs@kernel.org>
---
 mm/zsmalloc.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 149fe2b332cb..bbc165cb587d 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1322,12 +1322,12 @@ EXPORT_SYMBOL_GPL(zs_huge_class_size);
 static unsigned long obj_malloc(struct zs_pool *pool,
 				struct zspage *zspage, unsigned long handle)
 {
-	int i, nr_page, offset;
+	int i, nr_zpdesc, offset;
 	unsigned long obj;
 	struct link_free *link;
 	struct size_class *class;
 
-	struct page *m_page;
+	struct zpdesc *m_zpdesc;
 	unsigned long m_offset;
 	void *vaddr;
 
@@ -1335,14 +1335,14 @@ static unsigned long obj_malloc(struct zs_pool *pool,
 	obj = get_freeobj(zspage);
 
 	offset = obj * class->size;
-	nr_page = offset >> PAGE_SHIFT;
+	nr_zpdesc = offset >> PAGE_SHIFT;
 	m_offset = offset_in_page(offset);
-	m_page = get_first_page(zspage);
+	m_zpdesc = get_first_zpdesc(zspage);
 
-	for (i = 0; i < nr_page; i++)
-		m_page = get_next_page(m_page);
+	for (i = 0; i < nr_zpdesc; i++)
+		m_zpdesc = get_next_zpdesc(m_zpdesc);
 
-	vaddr = kmap_atomic(m_page);
+	vaddr = zpdesc_kmap_atomic(m_zpdesc);
 	link = (struct link_free *)vaddr + m_offset / sizeof(*link);
 	set_freeobj(zspage, link->next >> OBJ_TAG_BITS);
 	if (likely(!ZsHugePage(zspage)))
@@ -1355,7 +1355,7 @@ static unsigned long obj_malloc(struct zs_pool *pool,
 	kunmap_atomic(vaddr);
 	mod_zspage_inuse(zspage, 1);
 
-	obj = location_to_obj(m_page, obj);
+	obj = location_to_obj(zpdesc_page(m_zpdesc), obj);
 	record_obj(handle, obj);
 
 	return obj;
-- 
2.43.0




* [PATCH v4 06/22] mm/zsmalloc: convert create_page_chain() and its users to use zpdesc
  2024-07-29 11:25 [PATCH v4 00/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool alexs
                   ` (4 preceding siblings ...)
  2024-07-29 11:25 ` [PATCH v4 05/22] mm/zsmalloc: convert obj_malloc() to use zpdesc alexs
@ 2024-07-29 11:25 ` alexs
  2024-08-02 19:09   ` Vishal Moola
  2024-07-29 11:25 ` [PATCH v4 07/22] mm/zsmalloc: convert obj_allocated() and related helpers " alexs
                   ` (16 subsequent siblings)
  22 siblings, 1 reply; 50+ messages in thread
From: alexs @ 2024-07-29 11:25 UTC (permalink / raw)
  To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs
  Cc: Alex Shi

From: Alex Shi <alexs@kernel.org>

Introduce a few helper functions to convert create_page_chain() to use
zpdesc, then use zpdesc in replace_sub_page() as well.

Originally-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Alex Shi <alexs@kernel.org>
---
 mm/zpdesc.h   |   6 +++
 mm/zsmalloc.c | 115 +++++++++++++++++++++++++++++++++-----------------
 2 files changed, 82 insertions(+), 39 deletions(-)

diff --git a/mm/zpdesc.h b/mm/zpdesc.h
index 79ec40b03956..2293453f5d57 100644
--- a/mm/zpdesc.h
+++ b/mm/zpdesc.h
@@ -102,4 +102,10 @@ static inline struct zpdesc *pfn_zpdesc(unsigned long pfn)
 {
 	return page_zpdesc(pfn_to_page(pfn));
 }
+
+static inline void __zpdesc_set_movable(struct zpdesc *zpdesc,
+					const struct movable_operations *mops)
+{
+	__SetPageMovable(zpdesc_page(zpdesc), mops);
+}
 #endif
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index bbc165cb587d..a8f390beeab8 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -248,6 +248,41 @@ static inline void *zpdesc_kmap_atomic(struct zpdesc *zpdesc)
 	return kmap_atomic(zpdesc_page(zpdesc));
 }
 
+static inline void zpdesc_set_zspage(struct zpdesc *zpdesc,
+				     struct zspage *zspage)
+{
+	zpdesc->zspage = zspage;
+}
+
+static inline void zpdesc_set_first(struct zpdesc *zpdesc)
+{
+	SetPagePrivate(zpdesc_page(zpdesc));
+}
+
+static inline void zpdesc_inc_zone_page_state(struct zpdesc *zpdesc)
+{
+	inc_zone_page_state(zpdesc_page(zpdesc), NR_ZSPAGES);
+}
+
+static inline void zpdesc_dec_zone_page_state(struct zpdesc *zpdesc)
+{
+	dec_zone_page_state(zpdesc_page(zpdesc), NR_ZSPAGES);
+}
+
+static inline struct zpdesc *alloc_zpdesc(gfp_t gfp)
+{
+	struct page *page = alloc_page(gfp);
+
+	return page_zpdesc(page);
+}
+
+static inline void free_zpdesc(struct zpdesc *zpdesc)
+{
+	struct page *page = zpdesc_page(zpdesc);
+
+	__free_page(page);
+}
+
 struct zspage {
 	struct {
 		unsigned int huge:HUGE_BITS;
@@ -954,35 +989,35 @@ static void init_zspage(struct size_class *class, struct zspage *zspage)
 }
 
 static void create_page_chain(struct size_class *class, struct zspage *zspage,
-				struct page *pages[])
+				struct zpdesc *zpdescs[])
 {
 	int i;
-	struct page *page;
-	struct page *prev_page = NULL;
-	int nr_pages = class->pages_per_zspage;
+	struct zpdesc *zpdesc;
+	struct zpdesc *prev_zpdesc = NULL;
+	int nr_zpdescs = class->pages_per_zspage;
 
 	/*
 	 * Allocate individual pages and link them together as:
-	 * 1. all pages are linked together using page->index
-	 * 2. each sub-page point to zspage using page->private
+	 * 1. all pages are linked together using zpdesc->next
+	 * 2. each sub-page point to zspage using zpdesc->zspage
 	 *
-	 * we set PG_private to identify the first page (i.e. no other sub-page
+	 * we set PG_private to identify the first zpdesc (i.e. no other zpdesc
 	 * has this flag set).
 	 */
-	for (i = 0; i < nr_pages; i++) {
-		page = pages[i];
-		set_page_private(page, (unsigned long)zspage);
-		page->index = 0;
+	for (i = 0; i < nr_zpdescs; i++) {
+		zpdesc = zpdescs[i];
+		zpdesc_set_zspage(zpdesc, zspage);
+		zpdesc->next = NULL;
 		if (i == 0) {
-			zspage->first_zpdesc = page_zpdesc(page);
-			SetPagePrivate(page);
+			zspage->first_zpdesc = zpdesc;
+			zpdesc_set_first(zpdesc);
 			if (unlikely(class->objs_per_zspage == 1 &&
 					class->pages_per_zspage == 1))
 				SetZsHugePage(zspage);
 		} else {
-			prev_page->index = (unsigned long)page;
+			prev_zpdesc->next = zpdesc;
 		}
-		prev_page = page;
+		prev_zpdesc = zpdesc;
 	}
 }
 
@@ -994,7 +1029,7 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 					gfp_t gfp)
 {
 	int i;
-	struct page *pages[ZS_MAX_PAGES_PER_ZSPAGE];
+	struct zpdesc *zpdescs[ZS_MAX_PAGES_PER_ZSPAGE];
 	struct zspage *zspage = cache_alloc_zspage(pool, gfp);
 
 	if (!zspage)
@@ -1004,25 +1039,25 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 	migrate_lock_init(zspage);
 
 	for (i = 0; i < class->pages_per_zspage; i++) {
-		struct page *page;
+		struct zpdesc *zpdesc;
 
-		page = alloc_page(gfp);
-		if (!page) {
+		zpdesc = alloc_zpdesc(gfp);
+		if (!zpdesc) {
 			while (--i >= 0) {
-				dec_zone_page_state(pages[i], NR_ZSPAGES);
-				__ClearPageZsmalloc(pages[i]);
-				__free_page(pages[i]);
+				zpdesc_dec_zone_page_state(zpdescs[i]);
+				__ClearPageZsmalloc(zpdesc_page(zpdescs[i]));
+				free_zpdesc(zpdescs[i]);
 			}
 			cache_free_zspage(pool, zspage);
 			return NULL;
 		}
-		__SetPageZsmalloc(page);
+		__SetPageZsmalloc(zpdesc_page(zpdesc));
 
-		inc_zone_page_state(page, NR_ZSPAGES);
-		pages[i] = page;
+		zpdesc_inc_zone_page_state(zpdesc);
+		zpdescs[i] = zpdesc;
 	}
 
-	create_page_chain(class, zspage, pages);
+	create_page_chain(class, zspage, zpdescs);
 	init_zspage(class, zspage);
 	zspage->pool = pool;
 	zspage->class = class->index;
@@ -1753,26 +1788,28 @@ static void migrate_write_unlock(struct zspage *zspage)
 static const struct movable_operations zsmalloc_mops;
 
 static void replace_sub_page(struct size_class *class, struct zspage *zspage,
-				struct page *newpage, struct page *oldpage)
+				struct zpdesc *newzpdesc, struct zpdesc *oldzpdesc)
 {
-	struct page *page;
-	struct page *pages[ZS_MAX_PAGES_PER_ZSPAGE] = {NULL, };
+	struct zpdesc *zpdesc;
+	struct zpdesc *zpdescs[ZS_MAX_PAGES_PER_ZSPAGE] = {NULL, };
+	unsigned int first_obj_offset;
 	int idx = 0;
 
-	page = get_first_page(zspage);
+	zpdesc = get_first_zpdesc(zspage);
 	do {
-		if (page == oldpage)
-			pages[idx] = newpage;
+		if (zpdesc == oldzpdesc)
+			zpdescs[idx] = newzpdesc;
 		else
-			pages[idx] = page;
+			zpdescs[idx] = zpdesc;
 		idx++;
-	} while ((page = get_next_page(page)) != NULL);
+	} while ((zpdesc = get_next_zpdesc(zpdesc)) != NULL);
 
-	create_page_chain(class, zspage, pages);
-	set_first_obj_offset(newpage, get_first_obj_offset(oldpage));
+	create_page_chain(class, zspage, zpdescs);
+	first_obj_offset = get_first_obj_offset(zpdesc_page(oldzpdesc));
+	set_first_obj_offset(zpdesc_page(newzpdesc), first_obj_offset);
 	if (unlikely(ZsHugePage(zspage)))
-		newpage->index = oldpage->index;
-	__SetPageMovable(newpage, &zsmalloc_mops);
+		newzpdesc->handle = oldzpdesc->handle;
+	__zpdesc_set_movable(newzpdesc, &zsmalloc_mops);
 }
 
 static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
@@ -1845,7 +1882,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	}
 	kunmap_atomic(s_addr);
 
-	replace_sub_page(class, zspage, newpage, page);
+	replace_sub_page(class, zspage, page_zpdesc(newpage), page_zpdesc(page));
 	/*
 	 * Since we complete the data copy and set up new zspage structure,
 	 * it's okay to release migration_lock.
-- 
2.43.0



^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v4 07/22] mm/zsmalloc: convert obj_allocated() and related helpers to use zpdesc
  2024-07-29 11:25 [PATCH v4 00/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool alexs
                   ` (5 preceding siblings ...)
  2024-07-29 11:25 ` [PATCH v4 06/22] mm/zsmalloc: convert create_page_chain() and its users " alexs
@ 2024-07-29 11:25 ` alexs
  2024-07-29 11:25 ` [PATCH v4 08/22] mm/zsmalloc: convert init_zspage() " alexs
                   ` (15 subsequent siblings)
  22 siblings, 0 replies; 50+ messages in thread
From: alexs @ 2024-07-29 11:25 UTC (permalink / raw)
  To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs
  Cc: Alex Shi

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Convert obj_allocated() and related helpers to take zpdesc. Also make
their callers cast (struct page *) to (struct zpdesc *) when calling them.
The remaining users will be converted gradually, as there are many.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Alex Shi <alexs@kernel.org>
---
 mm/zsmalloc.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index a8f390beeab8..29b9fa5baa46 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -845,15 +845,15 @@ static unsigned long handle_to_obj(unsigned long handle)
 	return *(unsigned long *)handle;
 }
 
-static inline bool obj_allocated(struct page *page, void *obj,
+static inline bool obj_allocated(struct zpdesc *zpdesc, void *obj,
 				 unsigned long *phandle)
 {
 	unsigned long handle;
-	struct zspage *zspage = get_zspage(page);
+	struct zspage *zspage = get_zspage(zpdesc_page(zpdesc));
 
 	if (unlikely(ZsHugePage(zspage))) {
-		VM_BUG_ON_PAGE(!is_first_page(page), page);
-		handle = page->index;
+		VM_BUG_ON_PAGE(!is_first_zpdesc(zpdesc), zpdesc_page(zpdesc));
+		handle = zpdesc->handle;
 	} else
 		handle = *(unsigned long *)obj;
 
@@ -1603,18 +1603,18 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
  * return handle.
  */
 static unsigned long find_alloced_obj(struct size_class *class,
-				      struct page *page, int *obj_idx)
+				      struct zpdesc *zpdesc, int *obj_idx)
 {
 	unsigned int offset;
 	int index = *obj_idx;
 	unsigned long handle = 0;
-	void *addr = kmap_atomic(page);
+	void *addr = zpdesc_kmap_atomic(zpdesc);
 
-	offset = get_first_obj_offset(page);
+	offset = get_first_obj_offset(zpdesc_page(zpdesc));
 	offset += class->size * index;
 
 	while (offset < PAGE_SIZE) {
-		if (obj_allocated(page, addr + offset, &handle))
+		if (obj_allocated(zpdesc, addr + offset, &handle))
 			break;
 
 		offset += class->size;
@@ -1638,7 +1638,7 @@ static void migrate_zspage(struct zs_pool *pool, struct zspage *src_zspage,
 	struct size_class *class = pool->size_class[src_zspage->class];
 
 	while (1) {
-		handle = find_alloced_obj(class, s_page, &obj_idx);
+		handle = find_alloced_obj(class, page_zpdesc(s_page), &obj_idx);
 		if (!handle) {
 			s_page = get_next_page(s_page);
 			if (!s_page)
@@ -1871,7 +1871,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 
 	for (addr = s_addr + offset; addr < s_addr + PAGE_SIZE;
 					addr += class->size) {
-		if (obj_allocated(page, addr, &handle)) {
+		if (obj_allocated(page_zpdesc(page), addr, &handle)) {
 
 			old_obj = handle_to_obj(handle);
 			obj_to_location(old_obj, &dummy, &obj_idx);
-- 
2.43.0



^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v4 08/22] mm/zsmalloc: convert init_zspage() to use zpdesc
  2024-07-29 11:25 [PATCH v4 00/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool alexs
                   ` (6 preceding siblings ...)
  2024-07-29 11:25 ` [PATCH v4 07/22] mm/zsmalloc: convert obj_allocated() and related helpers " alexs
@ 2024-07-29 11:25 ` alexs
  2024-07-29 11:25 ` [PATCH v4 09/22] mm/zsmalloc: convert obj_to_page() and zs_free() " alexs
                   ` (14 subsequent siblings)
  22 siblings, 0 replies; 50+ messages in thread
From: alexs @ 2024-07-29 11:25 UTC (permalink / raw)
  To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs
  Cc: Alex Shi

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Replace the get_first/next_page() function series and kmap_atomic() with
the new zpdesc helpers; no functional change.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Alex Shi <alexs@kernel.org>
---
 mm/zsmalloc.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 29b9fa5baa46..d3558f3f8bc3 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -948,16 +948,16 @@ static void init_zspage(struct size_class *class, struct zspage *zspage)
 {
 	unsigned int freeobj = 1;
 	unsigned long off = 0;
-	struct page *page = get_first_page(zspage);
+	struct zpdesc *zpdesc = get_first_zpdesc(zspage);
 
-	while (page) {
-		struct page *next_page;
+	while (zpdesc) {
+		struct zpdesc *next_zpdesc;
 		struct link_free *link;
 		void *vaddr;
 
-		set_first_obj_offset(page, off);
+		set_first_obj_offset(zpdesc_page(zpdesc), off);
 
-		vaddr = kmap_atomic(page);
+		vaddr = zpdesc_kmap_atomic(zpdesc);
 		link = (struct link_free *)vaddr + off / sizeof(*link);
 
 		while ((off += class->size) < PAGE_SIZE) {
@@ -970,8 +970,8 @@ static void init_zspage(struct size_class *class, struct zspage *zspage)
 		 * page, which must point to the first object on the next
 		 * page (if present)
 		 */
-		next_page = get_next_page(page);
-		if (next_page) {
+		next_zpdesc = get_next_zpdesc(zpdesc);
+		if (next_zpdesc) {
 			link->next = freeobj++ << OBJ_TAG_BITS;
 		} else {
 			/*
@@ -981,7 +981,7 @@ static void init_zspage(struct size_class *class, struct zspage *zspage)
 			link->next = -1UL << OBJ_TAG_BITS;
 		}
 		kunmap_atomic(vaddr);
-		page = next_page;
+		zpdesc = next_zpdesc;
 		off %= PAGE_SIZE;
 	}
 
-- 
2.43.0



^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v4 09/22] mm/zsmalloc: convert obj_to_page() and zs_free() to use zpdesc
  2024-07-29 11:25 [PATCH v4 00/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool alexs
                   ` (7 preceding siblings ...)
  2024-07-29 11:25 ` [PATCH v4 08/22] mm/zsmalloc: convert init_zspage() " alexs
@ 2024-07-29 11:25 ` alexs
  2024-07-29 11:25 ` [PATCH v4 10/22] mm/zsmalloc: add zpdesc_is_isolated/zpdesc_zone helper for zs_page_migrate alexs
                   ` (13 subsequent siblings)
  22 siblings, 0 replies; 50+ messages in thread
From: alexs @ 2024-07-29 11:25 UTC (permalink / raw)
  To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs
  Cc: Alex Shi

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Rename obj_to_page() to obj_to_zpdesc() and also convert it and
its user zs_free() to use zpdesc.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Alex Shi <alexs@kernel.org>
---
 mm/zsmalloc.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index d3558f3f8bc3..7aa4a4acaec9 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -820,9 +820,9 @@ static void obj_to_location(unsigned long obj, struct zpdesc **zpdesc,
 	*obj_idx = (obj & OBJ_INDEX_MASK);
 }
 
-static void obj_to_page(unsigned long obj, struct page **page)
+static void obj_to_zpdesc(unsigned long obj, struct zpdesc **zpdesc)
 {
-	*page = pfn_to_page(obj >> OBJ_INDEX_BITS);
+	*zpdesc = pfn_zpdesc(obj >> OBJ_INDEX_BITS);
 }
 
 /**
@@ -1496,7 +1496,7 @@ static void obj_free(int class_size, unsigned long obj)
 void zs_free(struct zs_pool *pool, unsigned long handle)
 {
 	struct zspage *zspage;
-	struct page *f_page;
+	struct zpdesc *f_zpdesc;
 	unsigned long obj;
 	struct size_class *class;
 	int fullness;
@@ -1510,8 +1510,8 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	 */
 	read_lock(&pool->migrate_lock);
 	obj = handle_to_obj(handle);
-	obj_to_page(obj, &f_page);
-	zspage = get_zspage(f_page);
+	obj_to_zpdesc(obj, &f_zpdesc);
+	zspage = get_zspage(zpdesc_page(f_zpdesc));
 	class = zspage_class(pool, zspage);
 	spin_lock(&class->lock);
 	read_unlock(&pool->migrate_lock);
-- 
2.43.0



^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v4 10/22] mm/zsmalloc: add zpdesc_is_isolated/zpdesc_zone helper for zs_page_migrate
  2024-07-29 11:25 [PATCH v4 00/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool alexs
                   ` (8 preceding siblings ...)
  2024-07-29 11:25 ` [PATCH v4 09/22] mm/zsmalloc: convert obj_to_page() and zs_free() " alexs
@ 2024-07-29 11:25 ` alexs
  2024-07-29 11:25 ` [PATCH v4 11/22] mm/zsmalloc: rename reset_page to reset_zpdesc and use zpdesc in it alexs
                   ` (12 subsequent siblings)
  22 siblings, 0 replies; 50+ messages in thread
From: alexs @ 2024-07-29 11:25 UTC (permalink / raw)
  To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs
  Cc: Alex Shi

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>

To convert page to zpdesc in zs_page_migrate(), add the
zpdesc_is_isolated()/zpdesc_zone() helpers. No functional change.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Alex Shi <alexs@kernel.org>
---
 mm/zpdesc.h   | 11 +++++++++++
 mm/zsmalloc.c | 30 ++++++++++++++++--------------
 2 files changed, 27 insertions(+), 14 deletions(-)

diff --git a/mm/zpdesc.h b/mm/zpdesc.h
index 2293453f5d57..ad04c8337cae 100644
--- a/mm/zpdesc.h
+++ b/mm/zpdesc.h
@@ -108,4 +108,15 @@ static inline void __zpdesc_set_movable(struct zpdesc *zpdesc,
 {
 	__SetPageMovable(zpdesc_page(zpdesc), mops);
 }
+
+static inline bool zpdesc_is_isolated(struct zpdesc *zpdesc)
+{
+	return PageIsolated(zpdesc_page(zpdesc));
+}
+
+static inline struct zone *zpdesc_zone(struct zpdesc *zpdesc)
+{
+	return page_zone(zpdesc_page(zpdesc));
+}
+
 #endif
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 7aa4a4acaec9..9bc9b14187ed 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1830,19 +1830,21 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	struct size_class *class;
 	struct zspage *zspage;
 	struct zpdesc *dummy;
+	struct zpdesc *newzpdesc = page_zpdesc(newpage);
+	struct zpdesc *zpdesc = page_zpdesc(page);
 	void *s_addr, *d_addr, *addr;
 	unsigned int offset;
 	unsigned long handle;
 	unsigned long old_obj, new_obj;
 	unsigned int obj_idx;
 
-	VM_BUG_ON_PAGE(!PageIsolated(page), page);
+	VM_BUG_ON_PAGE(!zpdesc_is_isolated(zpdesc), zpdesc_page(zpdesc));
 
 	/* We're committed, tell the world that this is a Zsmalloc page. */
-	__SetPageZsmalloc(newpage);
+	__SetPageZsmalloc(zpdesc_page(newzpdesc));
 
 	/* The page is locked, so this pointer must remain valid */
-	zspage = get_zspage(page);
+	zspage = get_zspage(zpdesc_page(zpdesc));
 	pool = zspage->pool;
 
 	/*
@@ -1859,30 +1861,30 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	/* the migrate_write_lock protects zpage access via zs_map_object */
 	migrate_write_lock(zspage);
 
-	offset = get_first_obj_offset(page);
-	s_addr = kmap_atomic(page);
+	offset = get_first_obj_offset(zpdesc_page(zpdesc));
+	s_addr = zpdesc_kmap_atomic(zpdesc);
 
 	/*
 	 * Here, any user cannot access all objects in the zspage so let's move.
 	 */
-	d_addr = kmap_atomic(newpage);
+	d_addr = zpdesc_kmap_atomic(newzpdesc);
 	copy_page(d_addr, s_addr);
 	kunmap_atomic(d_addr);
 
 	for (addr = s_addr + offset; addr < s_addr + PAGE_SIZE;
 					addr += class->size) {
-		if (obj_allocated(page_zpdesc(page), addr, &handle)) {
+		if (obj_allocated(zpdesc, addr, &handle)) {
 
 			old_obj = handle_to_obj(handle);
 			obj_to_location(old_obj, &dummy, &obj_idx);
-			new_obj = (unsigned long)location_to_obj(newpage,
+			new_obj = (unsigned long)location_to_obj(zpdesc_page(newzpdesc),
 								obj_idx);
 			record_obj(handle, new_obj);
 		}
 	}
 	kunmap_atomic(s_addr);
 
-	replace_sub_page(class, zspage, page_zpdesc(newpage), page_zpdesc(page));
+	replace_sub_page(class, zspage, newzpdesc, zpdesc);
 	/*
 	 * Since we complete the data copy and set up new zspage structure,
 	 * it's okay to release migration_lock.
@@ -1891,14 +1893,14 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	spin_unlock(&class->lock);
 	migrate_write_unlock(zspage);
 
-	get_page(newpage);
-	if (page_zone(newpage) != page_zone(page)) {
-		dec_zone_page_state(page, NR_ZSPAGES);
-		inc_zone_page_state(newpage, NR_ZSPAGES);
+	zpdesc_get(newzpdesc);
+	if (zpdesc_zone(newzpdesc) != zpdesc_zone(zpdesc)) {
+		zpdesc_dec_zone_page_state(zpdesc);
+		zpdesc_inc_zone_page_state(newzpdesc);
 	}
 
 	reset_page(page);
-	put_page(page);
+	zpdesc_put(zpdesc);
 
 	return MIGRATEPAGE_SUCCESS;
 }
-- 
2.43.0



^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v4 11/22] mm/zsmalloc: rename reset_page to reset_zpdesc and use zpdesc in it
  2024-07-29 11:25 [PATCH v4 00/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool alexs
                   ` (9 preceding siblings ...)
  2024-07-29 11:25 ` [PATCH v4 10/22] mm/zsmalloc: add zpdesc_is_isolated/zpdesc_zone helper for zs_page_migrate alexs
@ 2024-07-29 11:25 ` alexs
  2024-07-29 11:25 ` [PATCH v4 12/22] mm/zsmalloc: convert __free_zspage() to use zpdesc alexs
                   ` (11 subsequent siblings)
  22 siblings, 0 replies; 50+ messages in thread
From: alexs @ 2024-07-29 11:25 UTC (permalink / raw)
  To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs
  Cc: Alex Shi

From: Alex Shi <alexs@kernel.org>

zpdesc.zspage matches page.private, and zpdesc.next matches page.index.
Both are reset in reset_page(), which is called prior to freeing the base
pages of a zspage.
Use zpdesc in place of struct page and rename the function to
reset_zpdesc(); a few page helpers are still left since they are used too
widely.

Signed-off-by: Alex Shi <alexs@kernel.org>
---
 mm/zsmalloc.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 9bc9b14187ed..6d1971836391 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -865,12 +865,14 @@ static inline bool obj_allocated(struct zpdesc *zpdesc, void *obj,
 	return true;
 }
 
-static void reset_page(struct page *page)
+static void reset_zpdesc(struct zpdesc *zpdesc)
 {
+	struct page *page = zpdesc_page(zpdesc);
+
 	__ClearPageMovable(page);
 	ClearPagePrivate(page);
-	set_page_private(page, 0);
-	page->index = 0;
+	zpdesc->zspage = NULL;
+	zpdesc->next = NULL;
 	reset_first_obj_offset(page);
 	__ClearPageZsmalloc(page);
 }
@@ -910,7 +912,7 @@ static void __free_zspage(struct zs_pool *pool, struct size_class *class,
 	do {
 		VM_BUG_ON_PAGE(!PageLocked(page), page);
 		next = get_next_page(page);
-		reset_page(page);
+		reset_zpdesc(page_zpdesc(page));
 		unlock_page(page);
 		dec_zone_page_state(page, NR_ZSPAGES);
 		put_page(page);
@@ -1899,7 +1901,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 		zpdesc_inc_zone_page_state(newzpdesc);
 	}
 
-	reset_page(page);
+	reset_zpdesc(zpdesc);
 	zpdesc_put(zpdesc);
 
 	return MIGRATEPAGE_SUCCESS;
-- 
2.43.0



^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v4 12/22] mm/zsmalloc: convert __free_zspage() to use zpdesc
  2024-07-29 11:25 [PATCH v4 00/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool alexs
                   ` (10 preceding siblings ...)
  2024-07-29 11:25 ` [PATCH v4 11/22] mm/zsmalloc: rename reset_page to reset_zpdesc and use zpdesc in it alexs
@ 2024-07-29 11:25 ` alexs
  2024-07-29 11:25 ` [PATCH v4 13/22] mm/zsmalloc: convert location_to_obj() to take zpdesc alexs
                   ` (10 subsequent siblings)
  22 siblings, 0 replies; 50+ messages in thread
From: alexs @ 2024-07-29 11:25 UTC (permalink / raw)
  To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs
  Cc: Alex Shi

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Introduce zpdesc_is_locked() and convert __free_zspage() to use zpdesc.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Alex Shi <alexs@kernel.org>
---
 mm/zpdesc.h   |  4 ++++
 mm/zsmalloc.c | 20 ++++++++++----------
 2 files changed, 14 insertions(+), 10 deletions(-)

diff --git a/mm/zpdesc.h b/mm/zpdesc.h
index ad04c8337cae..72c8c072b4c8 100644
--- a/mm/zpdesc.h
+++ b/mm/zpdesc.h
@@ -119,4 +119,8 @@ static inline struct zone *zpdesc_zone(struct zpdesc *zpdesc)
 	return page_zone(zpdesc_page(zpdesc));
 }
 
+static inline bool zpdesc_is_locked(struct zpdesc *zpdesc)
+{
+	return PageLocked(zpdesc_page(zpdesc));
+}
 #endif
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 6d1971836391..68fdea7b6e0d 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -901,23 +901,23 @@ static int trylock_zspage(struct zspage *zspage)
 static void __free_zspage(struct zs_pool *pool, struct size_class *class,
 				struct zspage *zspage)
 {
-	struct page *page, *next;
+	struct zpdesc *zpdesc, *next;
 
 	assert_spin_locked(&class->lock);
 
 	VM_BUG_ON(get_zspage_inuse(zspage));
 	VM_BUG_ON(zspage->fullness != ZS_INUSE_RATIO_0);
 
-	next = page = get_first_page(zspage);
+	next = zpdesc = get_first_zpdesc(zspage);
 	do {
-		VM_BUG_ON_PAGE(!PageLocked(page), page);
-		next = get_next_page(page);
-		reset_zpdesc(page_zpdesc(page));
-		unlock_page(page);
-		dec_zone_page_state(page, NR_ZSPAGES);
-		put_page(page);
-		page = next;
-	} while (page != NULL);
+		VM_BUG_ON_PAGE(!zpdesc_is_locked(zpdesc), zpdesc_page(zpdesc));
+		next = get_next_zpdesc(zpdesc);
+		reset_zpdesc(zpdesc);
+		zpdesc_unlock(zpdesc);
+		zpdesc_dec_zone_page_state(zpdesc);
+		zpdesc_put(zpdesc);
+		zpdesc = next;
+	} while (zpdesc != NULL);
 
 	cache_free_zspage(pool, zspage);
 
-- 
2.43.0



^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v4 13/22] mm/zsmalloc: convert location_to_obj() to take zpdesc
  2024-07-29 11:25 [PATCH v4 00/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool alexs
                   ` (11 preceding siblings ...)
  2024-07-29 11:25 ` [PATCH v4 12/22] mm/zsmalloc: convert __free_zspage() to use zpdesc alexs
@ 2024-07-29 11:25 ` alexs
  2024-07-29 11:25 ` [PATCH v4 14/22] mm/zsmalloc: convert migrate_zspage() to use zpdesc alexs
                   ` (9 subsequent siblings)
  22 siblings, 0 replies; 50+ messages in thread
From: alexs @ 2024-07-29 11:25 UTC (permalink / raw)
  To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs
  Cc: Alex Shi

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>

As all users of location_to_obj() now use zpdesc, convert
location_to_obj() to take zpdesc.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Alex Shi <alexs@kernel.org>
---
 mm/zsmalloc.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 68fdea7b6e0d..e291c7319485 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -826,15 +826,15 @@ static void obj_to_zpdesc(unsigned long obj, struct zpdesc **zpdesc)
 }
 
 /**
- * location_to_obj - get obj value encoded from (<page>, <obj_idx>)
- * @page: page object resides in zspage
+ * location_to_obj - get obj value encoded from (<zpdesc>, <obj_idx>)
+ * @zpdesc: zpdesc object resides in zspage
  * @obj_idx: object index
  */
-static unsigned long location_to_obj(struct page *page, unsigned int obj_idx)
+static unsigned long location_to_obj(struct zpdesc *zpdesc, unsigned int obj_idx)
 {
 	unsigned long obj;
 
-	obj = page_to_pfn(page) << OBJ_INDEX_BITS;
+	obj = zpdesc_pfn(zpdesc) << OBJ_INDEX_BITS;
 	obj |= obj_idx & OBJ_INDEX_MASK;
 
 	return obj;
@@ -1392,7 +1392,7 @@ static unsigned long obj_malloc(struct zs_pool *pool,
 	kunmap_atomic(vaddr);
 	mod_zspage_inuse(zspage, 1);
 
-	obj = location_to_obj(zpdesc_page(m_zpdesc), obj);
+	obj = location_to_obj(m_zpdesc, obj);
 	record_obj(handle, obj);
 
 	return obj;
@@ -1879,8 +1879,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 
 			old_obj = handle_to_obj(handle);
 			obj_to_location(old_obj, &dummy, &obj_idx);
-			new_obj = (unsigned long)location_to_obj(zpdesc_page(newzpdesc),
-								obj_idx);
+			new_obj = (unsigned long)location_to_obj(newzpdesc, obj_idx);
 			record_obj(handle, new_obj);
 		}
 	}
-- 
2.43.0



^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v4 14/22] mm/zsmalloc: convert migrate_zspage() to use zpdesc
  2024-07-29 11:25 [PATCH v4 00/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool alexs
                   ` (12 preceding siblings ...)
  2024-07-29 11:25 ` [PATCH v4 13/22] mm/zsmalloc: convert location_to_obj() to take zpdesc alexs
@ 2024-07-29 11:25 ` alexs
  2024-07-29 11:25 ` [PATCH v4 15/22] mm/zsmalloc: convert get_zspage() to take zpdesc alexs
                   ` (8 subsequent siblings)
  22 siblings, 0 replies; 50+ messages in thread
From: alexs @ 2024-07-29 11:25 UTC (permalink / raw)
  To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs
  Cc: Alex Shi

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Use get_first_zpdesc()/get_next_zpdesc() to replace
get_first_page()/get_next_page(). No functional change.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Alex Shi <alexs@kernel.org>
---
 mm/zsmalloc.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index e291c7319485..7d039b0c66db 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1636,14 +1636,14 @@ static void migrate_zspage(struct zs_pool *pool, struct zspage *src_zspage,
 	unsigned long used_obj, free_obj;
 	unsigned long handle;
 	int obj_idx = 0;
-	struct page *s_page = get_first_page(src_zspage);
+	struct zpdesc *s_zpdesc = get_first_zpdesc(src_zspage);
 	struct size_class *class = pool->size_class[src_zspage->class];
 
 	while (1) {
-		handle = find_alloced_obj(class, page_zpdesc(s_page), &obj_idx);
+		handle = find_alloced_obj(class, s_zpdesc, &obj_idx);
 		if (!handle) {
-			s_page = get_next_page(s_page);
-			if (!s_page)
+			s_zpdesc = get_next_zpdesc(s_zpdesc);
+			if (!s_zpdesc)
 				break;
 			obj_idx = 0;
 			continue;
-- 
2.43.0



^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v4 15/22] mm/zsmalloc: convert get_zspage() to take zpdesc
  2024-07-29 11:25 [PATCH v4 00/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool alexs
                   ` (13 preceding siblings ...)
  2024-07-29 11:25 ` [PATCH v4 14/22] mm/zsmalloc: convert migrate_zspage() to use zpdesc alexs
@ 2024-07-29 11:25 ` alexs
  2024-07-29 11:25 ` [PATCH v4 16/22] mm/zsmalloc: convert SetZsPageMovable and remove unused funcs alexs
                   ` (7 subsequent siblings)
  22 siblings, 0 replies; 50+ messages in thread
From: alexs @ 2024-07-29 11:25 UTC (permalink / raw)
  To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs
  Cc: Alex Shi

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Now that all users except get_next_page() (which will be removed in
later patch) use zpdesc, convert get_zspage() to take zpdesc instead
of page.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Alex Shi <alexs@kernel.org>
---
 mm/zsmalloc.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 7d039b0c66db..458ad696b473 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -779,9 +779,9 @@ static int fix_fullness_group(struct size_class *class, struct zspage *zspage)
 	return newfg;
 }
 
-static struct zspage *get_zspage(struct page *page)
+static struct zspage *get_zspage(struct zpdesc *zpdesc)
 {
-	struct zspage *zspage = (struct zspage *)page_private(page);
+	struct zspage *zspage = zpdesc->zspage;
 
 	BUG_ON(zspage->magic != ZSPAGE_MAGIC);
 	return zspage;
@@ -789,7 +789,7 @@ static struct zspage *get_zspage(struct page *page)
 
 static struct page *get_next_page(struct page *page)
 {
-	struct zspage *zspage = get_zspage(page);
+	struct zspage *zspage = get_zspage(page_zpdesc(page));
 
 	if (unlikely(ZsHugePage(zspage)))
 		return NULL;
@@ -799,7 +799,7 @@ static struct page *get_next_page(struct page *page)
 
 static struct zpdesc *get_next_zpdesc(struct zpdesc *zpdesc)
 {
-	struct zspage *zspage = get_zspage(zpdesc_page(zpdesc));
+	struct zspage *zspage = get_zspage(zpdesc);
 
 	if (unlikely(ZsHugePage(zspage)))
 		return NULL;
@@ -849,7 +849,7 @@ static inline bool obj_allocated(struct zpdesc *zpdesc, void *obj,
 				 unsigned long *phandle)
 {
 	unsigned long handle;
-	struct zspage *zspage = get_zspage(zpdesc_page(zpdesc));
+	struct zspage *zspage = get_zspage(zpdesc);
 
 	if (unlikely(ZsHugePage(zspage))) {
 		VM_BUG_ON_PAGE(!is_first_zpdesc(zpdesc), zpdesc_page(zpdesc));
@@ -1265,7 +1265,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	read_lock(&pool->migrate_lock);
 	obj = handle_to_obj(handle);
 	obj_to_location(obj, &zpdesc, &obj_idx);
-	zspage = get_zspage(zpdesc_page(zpdesc));
+	zspage = get_zspage(zpdesc);
 
 	/*
 	 * migration cannot move any zpages in this zspage. Here, class->lock
@@ -1315,7 +1315,7 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 
 	obj = handle_to_obj(handle);
 	obj_to_location(obj, &zpdesc, &obj_idx);
-	zspage = get_zspage(zpdesc_page(zpdesc));
+	zspage = get_zspage(zpdesc);
 	class = zspage_class(pool, zspage);
 	off = offset_in_page(class->size * obj_idx);
 
@@ -1479,7 +1479,7 @@ static void obj_free(int class_size, unsigned long obj)
 
 	obj_to_location(obj, &f_zpdesc, &f_objidx);
 	f_offset = offset_in_page(class_size * f_objidx);
-	zspage = get_zspage(zpdesc_page(f_zpdesc));
+	zspage = get_zspage(f_zpdesc);
 
 	vaddr = zpdesc_kmap_atomic(f_zpdesc);
 	link = (struct link_free *)(vaddr + f_offset);
@@ -1513,7 +1513,7 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	read_lock(&pool->migrate_lock);
 	obj = handle_to_obj(handle);
 	obj_to_zpdesc(obj, &f_zpdesc);
-	zspage = get_zspage(zpdesc_page(f_zpdesc));
+	zspage = get_zspage(f_zpdesc);
 	class = zspage_class(pool, zspage);
 	spin_lock(&class->lock);
 	read_unlock(&pool->migrate_lock);
@@ -1846,7 +1846,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	__SetPageZsmalloc(zpdesc_page(newzpdesc));
 
 	/* The page is locked, so this pointer must remain valid */
-	zspage = get_zspage(zpdesc_page(zpdesc));
+	zspage = get_zspage(zpdesc);
 	pool = zspage->pool;
 
 	/*
-- 
2.43.0



^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v4 16/22] mm/zsmalloc: convert SetZsPageMovable and remove unused funcs
  2024-07-29 11:25 [PATCH v4 00/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool alexs
                   ` (14 preceding siblings ...)
  2024-07-29 11:25 ` [PATCH v4 15/22] mm/zsmalloc: convert get_zspage() to take zpdesc alexs
@ 2024-07-29 11:25 ` alexs
  2024-07-29 11:25 ` [PATCH v4 17/22] mm/zsmalloc: convert get/set_first_obj_offset() to take zpdesc alexs
                   ` (6 subsequent siblings)
  22 siblings, 0 replies; 50+ messages in thread
From: alexs @ 2024-07-29 11:25 UTC (permalink / raw)
  To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs
  Cc: Alex Shi

From: Alex Shi <alexs@kernel.org>

Convert SetZsPageMovable() to use zpdesc, and then remove the now-unused
helpers: get_next_page(), get_first_page() and is_first_page().
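
For illustration, the chain walk that SetZsPageMovable() performs can be
sketched in plain userspace C; the struct and function names below are
hypothetical stand-ins for this sketch, not the kernel's API:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical miniature of a zspage's zpdesc chain (illustrative only). */
struct zpdesc_sketch {
	struct zpdesc_sketch *next;  /* stands in for get_next_zpdesc() */
	int movable;                 /* stands in for the movable state */
};

/* Walk every zpdesc in the chain and mark it movable, mirroring the
 * do/while loop in the converted SetZsPageMovable(). */
static void set_chain_movable(struct zpdesc_sketch *zp)
{
	do {
		zp->movable = 1;
	} while ((zp = zp->next) != NULL);
}
```

The real helper additionally takes each zpdesc's lock and registers the
zsmalloc movable_operations before unlocking, as the diff below shows.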

Originally-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Alex Shi <alexs@kernel.org>
---
 mm/zsmalloc.c | 33 +++++----------------------------
 1 file changed, 5 insertions(+), 28 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 458ad696b473..8b713ac03902 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -468,11 +468,6 @@ static DEFINE_PER_CPU(struct mapping_area, zs_map_area) = {
 	.lock	= INIT_LOCAL_LOCK(lock),
 };
 
-static __maybe_unused int is_first_page(struct page *page)
-{
-	return PagePrivate(page);
-}
-
 static int is_first_zpdesc(struct zpdesc *zpdesc)
 {
 	return PagePrivate(zpdesc_page(zpdesc));
@@ -489,14 +484,6 @@ static inline void mod_zspage_inuse(struct zspage *zspage, int val)
 	zspage->inuse += val;
 }
 
-static inline struct page *get_first_page(struct zspage *zspage)
-{
-	struct page *first_page = zpdesc_page(zspage->first_zpdesc);
-
-	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
-	return first_page;
-}
-
 static struct zpdesc *get_first_zpdesc(struct zspage *zspage)
 {
 	struct zpdesc *first_zpdesc = zspage->first_zpdesc;
@@ -787,16 +774,6 @@ static struct zspage *get_zspage(struct zpdesc *zpdesc)
 	return zspage;
 }
 
-static struct page *get_next_page(struct page *page)
-{
-	struct zspage *zspage = get_zspage(page_zpdesc(page));
-
-	if (unlikely(ZsHugePage(zspage)))
-		return NULL;
-
-	return (struct page *)page->index;
-}
-
 static struct zpdesc *get_next_zpdesc(struct zpdesc *zpdesc)
 {
 	struct zspage *zspage = get_zspage(zpdesc);
@@ -1970,13 +1947,13 @@ static void init_deferred_free(struct zs_pool *pool)
 
 static void SetZsPageMovable(struct zs_pool *pool, struct zspage *zspage)
 {
-	struct page *page = get_first_page(zspage);
+	struct zpdesc *zpdesc = get_first_zpdesc(zspage);
 
 	do {
-		WARN_ON(!trylock_page(page));
-		__SetPageMovable(page, &zsmalloc_mops);
-		unlock_page(page);
-	} while ((page = get_next_page(page)) != NULL);
+		WARN_ON(!zpdesc_trylock(zpdesc));
+		__zpdesc_set_movable(zpdesc, &zsmalloc_mops);
+		zpdesc_unlock(zpdesc);
+	} while ((zpdesc = get_next_zpdesc(zpdesc)) != NULL);
 }
 #else
 static inline void zs_flush_migration(struct zs_pool *pool) { }
-- 
2.43.0




* [PATCH v4 17/22] mm/zsmalloc: convert get/set_first_obj_offset() to take zpdesc
  2024-07-29 11:25 [PATCH v4 00/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool alexs
                   ` (15 preceding siblings ...)
  2024-07-29 11:25 ` [PATCH v4 16/22] mm/zsmalloc: convert SetZsPageMovable and remove unused funcs alexs
@ 2024-07-29 11:25 ` alexs
  2024-07-29 11:25 ` [PATCH v4 18/22] mm/zsmalloc: introduce __zpdesc_clear_movable alexs
                   ` (5 subsequent siblings)
  22 siblings, 0 replies; 50+ messages in thread
From: alexs @ 2024-07-29 11:25 UTC (permalink / raw)
  To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs
  Cc: Alex Shi

From: Alex Shi <alexs@kernel.org>

Now that all callers of get/set_first_obj_offset() have been converted
to use zpdesc, convert these functions to take a zpdesc directly.
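
As background on what these helpers do, the 16-bit offset packing can be
sketched in userspace C; the struct here is a hypothetical miniature for
illustration, not the kernel's struct zpdesc:

```c
#include <assert.h>

/* Illustrative stand-in for the low 16 bits of first_obj_offset. */
#define FIRST_OBJ_PAGE_TYPE_MASK 0xffffu

struct zpdesc_sketch {               /* hypothetical, not the kernel struct */
	unsigned int first_obj_offset;
};

static void set_first_obj_offset(struct zpdesc_sketch *zp, unsigned int off)
{
	/* Only the low 16 bits hold the offset; the upper bits are preserved. */
	zp->first_obj_offset &= ~FIRST_OBJ_PAGE_TYPE_MASK;
	zp->first_obj_offset |= off & FIRST_OBJ_PAGE_TYPE_MASK;
}

static unsigned int get_first_obj_offset(const struct zpdesc_sketch *zp)
{
	return zp->first_obj_offset & FIRST_OBJ_PAGE_TYPE_MASK;
}
```

In the kernel, first_obj_offset overlays page->page_type, and the 16 usable
bits are why the conversion keeps BUILD_BUG_ON(PAGE_SIZE > SZ_64K).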

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Alex Shi <alexs@kernel.org>
---
 mm/zpdesc.h   |  7 ++++++-
 mm/zsmalloc.c | 36 ++++++++++++++++++------------------
 2 files changed, 24 insertions(+), 19 deletions(-)

diff --git a/mm/zpdesc.h b/mm/zpdesc.h
index 72c8c072b4c8..f64e813f4847 100644
--- a/mm/zpdesc.h
+++ b/mm/zpdesc.h
@@ -15,6 +15,8 @@
  * @next:		Next zpdesc in a zspage in zsmalloc zpool
  * @handle:		For huge zspage in zsmalloc zpool
  * @zspage:		Pointer to zspage in zsmalloc
+ * @first_obj_offset:	First object offset in zsmalloc zpool
+ * @_refcount:		Indirectly use by page migration
  * @memcg_data:		Memory Control Group data.
  *
  * This struct overlays struct page for now. Do not modify without a good
@@ -31,7 +33,8 @@ struct zpdesc {
 		unsigned long handle;
 	};
 	struct zspage *zspage;
-	unsigned long _zp_pad_1;
+	unsigned int first_obj_offset;
+	atomic_t _refcount;
 #ifdef CONFIG_MEMCG
 	unsigned long memcg_data;
 #endif
@@ -45,6 +48,8 @@ ZPDESC_MATCH(mapping, mops);
 ZPDESC_MATCH(index, next);
 ZPDESC_MATCH(index, handle);
 ZPDESC_MATCH(private, zspage);
+ZPDESC_MATCH(page_type, first_obj_offset);
+ZPDESC_MATCH(_refcount, _refcount);
 #ifdef CONFIG_MEMCG
 ZPDESC_MATCH(memcg_data, memcg_data);
 #endif
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 8b713ac03902..bb8b5f13a966 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -20,8 +20,8 @@
  *	zpdesc->next: links together all component pages of a zspage
  *		For the huge page, this is always 0, so we use this field
  *		to store handle.
- *	page->page_type: PG_zsmalloc, lower 16 bit locate the first object
- *		offset in a subpage of a zspage
+ *	zpdesc->first_obj_offset: PG_zsmalloc, lower 16 bit locate the first
+ *		object offset in a subpage of a zspage
  *
  * Usage of struct zpdesc(page) flags:
  *	PG_private: identifies the first component page
@@ -494,26 +494,26 @@ static struct zpdesc *get_first_zpdesc(struct zspage *zspage)
 
 #define FIRST_OBJ_PAGE_TYPE_MASK	0xffff
 
-static inline void reset_first_obj_offset(struct page *page)
+static inline void reset_first_obj_offset(struct zpdesc *zpdesc)
 {
-	VM_WARN_ON_ONCE(!PageZsmalloc(page));
-	page->page_type |= FIRST_OBJ_PAGE_TYPE_MASK;
+	VM_WARN_ON_ONCE(!PageZsmalloc(zpdesc_page(zpdesc)));
+	zpdesc->first_obj_offset |= FIRST_OBJ_PAGE_TYPE_MASK;
 }
 
-static inline unsigned int get_first_obj_offset(struct page *page)
+static inline unsigned int get_first_obj_offset(struct zpdesc *zpdesc)
 {
-	VM_WARN_ON_ONCE(!PageZsmalloc(page));
-	return page->page_type & FIRST_OBJ_PAGE_TYPE_MASK;
+	VM_WARN_ON_ONCE(!PageZsmalloc(zpdesc_page(zpdesc)));
+	return zpdesc->first_obj_offset & FIRST_OBJ_PAGE_TYPE_MASK;
 }
 
-static inline void set_first_obj_offset(struct page *page, unsigned int offset)
+static inline void set_first_obj_offset(struct zpdesc *zpdesc, unsigned int offset)
 {
 	/* With 16 bit available, we can support offsets into 64 KiB pages. */
 	BUILD_BUG_ON(PAGE_SIZE > SZ_64K);
-	VM_WARN_ON_ONCE(!PageZsmalloc(page));
+	VM_WARN_ON_ONCE(!PageZsmalloc(zpdesc_page(zpdesc)));
 	VM_WARN_ON_ONCE(offset & ~FIRST_OBJ_PAGE_TYPE_MASK);
-	page->page_type &= ~FIRST_OBJ_PAGE_TYPE_MASK;
-	page->page_type |= offset & FIRST_OBJ_PAGE_TYPE_MASK;
+	zpdesc->first_obj_offset &= ~FIRST_OBJ_PAGE_TYPE_MASK;
+	zpdesc->first_obj_offset |= offset & FIRST_OBJ_PAGE_TYPE_MASK;
 }
 
 static inline unsigned int get_freeobj(struct zspage *zspage)
@@ -850,7 +850,7 @@ static void reset_zpdesc(struct zpdesc *zpdesc)
 	ClearPagePrivate(page);
 	zpdesc->zspage = NULL;
 	zpdesc->next = NULL;
-	reset_first_obj_offset(page);
+	reset_first_obj_offset(zpdesc);
 	__ClearPageZsmalloc(page);
 }
 
@@ -934,7 +934,7 @@ static void init_zspage(struct size_class *class, struct zspage *zspage)
 		struct link_free *link;
 		void *vaddr;
 
-		set_first_obj_offset(zpdesc_page(zpdesc), off);
+		set_first_obj_offset(zpdesc, off);
 
 		vaddr = zpdesc_kmap_atomic(zpdesc);
 		link = (struct link_free *)vaddr + off / sizeof(*link);
@@ -1589,7 +1589,7 @@ static unsigned long find_alloced_obj(struct size_class *class,
 	unsigned long handle = 0;
 	void *addr = zpdesc_kmap_atomic(zpdesc);
 
-	offset = get_first_obj_offset(zpdesc_page(zpdesc));
+	offset = get_first_obj_offset(zpdesc);
 	offset += class->size * index;
 
 	while (offset < PAGE_SIZE) {
@@ -1784,8 +1784,8 @@ static void replace_sub_page(struct size_class *class, struct zspage *zspage,
 	} while ((zpdesc = get_next_zpdesc(zpdesc)) != NULL);
 
 	create_page_chain(class, zspage, zpdescs);
-	first_obj_offset = get_first_obj_offset(zpdesc_page(oldzpdesc));
-	set_first_obj_offset(zpdesc_page(newzpdesc), first_obj_offset);
+	first_obj_offset = get_first_obj_offset(oldzpdesc);
+	set_first_obj_offset(newzpdesc, first_obj_offset);
 	if (unlikely(ZsHugePage(zspage)))
 		newzpdesc->handle = oldzpdesc->handle;
 	__zpdesc_set_movable(newzpdesc, &zsmalloc_mops);
@@ -1840,7 +1840,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	/* the migrate_write_lock protects zpage access via zs_map_object */
 	migrate_write_lock(zspage);
 
-	offset = get_first_obj_offset(zpdesc_page(zpdesc));
+	offset = get_first_obj_offset(zpdesc);
 	s_addr = zpdesc_kmap_atomic(zpdesc);
 
 	/*
-- 
2.43.0




* [PATCH v4 18/22] mm/zsmalloc: introduce __zpdesc_clear_movable
  2024-07-29 11:25 [PATCH v4 00/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool alexs
                   ` (16 preceding siblings ...)
  2024-07-29 11:25 ` [PATCH v4 17/22] mm/zsmalloc: convert get/set_first_obj_offset() to take zpdesc alexs
@ 2024-07-29 11:25 ` alexs
  2024-07-30  9:34   ` Sergey Senozhatsky
  2024-07-29 11:25 ` [PATCH v4 19/22] mm/zsmalloc: introduce __zpdesc_clear_zsmalloc alexs
                   ` (4 subsequent siblings)
  22 siblings, 1 reply; 50+ messages in thread
From: alexs @ 2024-07-29 11:25 UTC (permalink / raw)
  To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs
  Cc: Alex Shi

From: Alex Shi <alexs@kernel.org>

Add a helper __zpdesc_clear_movable() for __ClearPageMovable(), and use it
in callers to make the code clearer.

Signed-off-by: Alex Shi <alexs@kernel.org>
---
 mm/zpdesc.h   | 5 +++++
 mm/zsmalloc.c | 2 +-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/mm/zpdesc.h b/mm/zpdesc.h
index f64e813f4847..5db4fbe2d139 100644
--- a/mm/zpdesc.h
+++ b/mm/zpdesc.h
@@ -114,6 +114,11 @@ static inline void __zpdesc_set_movable(struct zpdesc *zpdesc,
 	__SetPageMovable(zpdesc_page(zpdesc), mops);
 }
 
+static inline void __zpdesc_clear_movable(struct zpdesc *zpdesc)
+{
+	__ClearPageMovable(zpdesc_page(zpdesc));
+}
+
 static inline bool zpdesc_is_isolated(struct zpdesc *zpdesc)
 {
 	return PageIsolated(zpdesc_page(zpdesc));
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index bb8b5f13a966..e1d3ad50538c 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -846,7 +846,7 @@ static void reset_zpdesc(struct zpdesc *zpdesc)
 {
 	struct page *page = zpdesc_page(zpdesc);
 
-	__ClearPageMovable(page);
+	__zpdesc_clear_movable(zpdesc);
 	ClearPagePrivate(page);
 	zpdesc->zspage = NULL;
 	zpdesc->next = NULL;
-- 
2.43.0




* [PATCH v4 19/22] mm/zsmalloc: introduce __zpdesc_clear_zsmalloc
  2024-07-29 11:25 [PATCH v4 00/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool alexs
                   ` (17 preceding siblings ...)
  2024-07-29 11:25 ` [PATCH v4 18/22] mm/zsmalloc: introduce __zpdesc_clear_movable alexs
@ 2024-07-29 11:25 ` alexs
  2024-07-29 11:25 ` [PATCH v4 20/22] mm/zsmalloc: introduce __zpdesc_set_zsmalloc() alexs
                   ` (3 subsequent siblings)
  22 siblings, 0 replies; 50+ messages in thread
From: alexs @ 2024-07-29 11:25 UTC (permalink / raw)
  To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs
  Cc: Alex Shi

From: Alex Shi <alexs@kernel.org>

Add a helper __zpdesc_clear_zsmalloc() for __ClearPageZsmalloc(), and use
it in callers to make the code clearer.

Signed-off-by: Alex Shi <alexs@kernel.org>
---
 mm/zpdesc.h   | 5 +++++
 mm/zsmalloc.c | 4 ++--
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/mm/zpdesc.h b/mm/zpdesc.h
index 5db4fbe2d139..05def4d45265 100644
--- a/mm/zpdesc.h
+++ b/mm/zpdesc.h
@@ -119,6 +119,11 @@ static inline void __zpdesc_clear_movable(struct zpdesc *zpdesc)
 	__ClearPageMovable(zpdesc_page(zpdesc));
 }
 
+static inline void __zpdesc_clear_zsmalloc(struct zpdesc *zpdesc)
+{
+	__ClearPageZsmalloc(zpdesc_page(zpdesc));
+}
+
 static inline bool zpdesc_is_isolated(struct zpdesc *zpdesc)
 {
 	return PageIsolated(zpdesc_page(zpdesc));
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index e1d3ad50538c..d88602fb0233 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -851,7 +851,7 @@ static void reset_zpdesc(struct zpdesc *zpdesc)
 	zpdesc->zspage = NULL;
 	zpdesc->next = NULL;
 	reset_first_obj_offset(zpdesc);
-	__ClearPageZsmalloc(page);
+	__zpdesc_clear_zsmalloc(zpdesc);
 }
 
 static int trylock_zspage(struct zspage *zspage)
@@ -1024,7 +1024,7 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 		if (!zpdesc) {
 			while (--i >= 0) {
 				zpdesc_dec_zone_page_state(zpdescs[i]);
-				__ClearPageZsmalloc(zpdesc_page(zpdescs[i]));
+				__zpdesc_clear_zsmalloc(zpdescs[i]);
 				free_zpdesc(zpdescs[i]);
 			}
 			cache_free_zspage(pool, zspage);
-- 
2.43.0




* [PATCH v4 20/22] mm/zsmalloc: introduce __zpdesc_set_zsmalloc()
  2024-07-29 11:25 [PATCH v4 00/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool alexs
                   ` (18 preceding siblings ...)
  2024-07-29 11:25 ` [PATCH v4 19/22] mm/zsmalloc: introduce __zpdesc_clear_zsmalloc alexs
@ 2024-07-29 11:25 ` alexs
  2024-08-02 19:11   ` Vishal Moola
  2024-07-29 11:25 ` [PATCH v4 21/22] mm/zsmalloc: fix build warning from lkp testing alexs
                   ` (2 subsequent siblings)
  22 siblings, 1 reply; 50+ messages in thread
From: alexs @ 2024-07-29 11:25 UTC (permalink / raw)
  To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs
  Cc: Alex Shi

From: Alex Shi <alexs@kernel.org>

Add a helper __zpdesc_set_zsmalloc() for __SetPageZsmalloc(), and use
it in callers to make the code clearer.

Signed-off-by: Alex Shi <alexs@kernel.org>
---
 mm/zpdesc.h   | 5 +++++
 mm/zsmalloc.c | 4 ++--
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/mm/zpdesc.h b/mm/zpdesc.h
index 05def4d45265..06371ce60cd1 100644
--- a/mm/zpdesc.h
+++ b/mm/zpdesc.h
@@ -119,6 +119,11 @@ static inline void __zpdesc_clear_movable(struct zpdesc *zpdesc)
 	__ClearPageMovable(zpdesc_page(zpdesc));
 }
 
+static inline void __zpdesc_set_zsmalloc(struct zpdesc *zpdesc)
+{
+	__SetPageZsmalloc(zpdesc_page(zpdesc));
+}
+
 static inline void __zpdesc_clear_zsmalloc(struct zpdesc *zpdesc)
 {
 	__ClearPageZsmalloc(zpdesc_page(zpdesc));
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index d88602fb0233..7f8e02df4e3e 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1030,7 +1030,7 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 			cache_free_zspage(pool, zspage);
 			return NULL;
 		}
-		__SetPageZsmalloc(zpdesc_page(zpdesc));
+		__zpdesc_set_zsmalloc(zpdesc);
 
 		zpdesc_inc_zone_page_state(zpdesc);
 		zpdescs[i] = zpdesc;
@@ -1820,7 +1820,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	VM_BUG_ON_PAGE(!zpdesc_is_isolated(zpdesc), zpdesc_page(zpdesc));
 
 	/* We're committed, tell the world that this is a Zsmalloc page. */
-	__SetPageZsmalloc(zpdesc_page(newzpdesc));
+	__zpdesc_set_zsmalloc(newzpdesc);
 
 	/* The page is locked, so this pointer must remain valid */
 	zspage = get_zspage(zpdesc);
-- 
2.43.0




* [PATCH v4 21/22] mm/zsmalloc: fix build warning from lkp testing
  2024-07-29 11:25 [PATCH v4 00/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool alexs
                   ` (19 preceding siblings ...)
  2024-07-29 11:25 ` [PATCH v4 20/22] mm/zsmalloc: introduce __zpdesc_set_zsmalloc() alexs
@ 2024-07-29 11:25 ` alexs
  2024-08-02 19:13   ` Vishal Moola
  2024-07-29 11:25 ` [PATCH v4 22/22] mm/zsmalloc: update comments for page->zpdesc changes alexs
  2024-07-30 12:31 ` [PATCH 23/23] mm/zsmalloc: introduce zpdesc_clear_first() helper alexs
  22 siblings, 1 reply; 50+ messages in thread
From: alexs @ 2024-07-29 11:25 UTC (permalink / raw)
  To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs
  Cc: Alex Shi, kernel test robot

From: Alex Shi <alexs@kernel.org>

LKP reported the following warning when building without CONFIG_DEBUG_VM:
	mm/zsmalloc.c:471:12: warning: function 'is_first_zpdesc' is not
	needed and will not be emitted [-Wunneeded-internal-declaration]
To suppress this warning, mark the function is_first_zpdesc() inline.

Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202407052102.qbT7nLMK-lkp@intel.com/
Signed-off-by: Alex Shi <alexs@kernel.org>
---
 mm/zsmalloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 7f8e02df4e3e..64e523ae71f8 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -468,7 +468,7 @@ static DEFINE_PER_CPU(struct mapping_area, zs_map_area) = {
 	.lock	= INIT_LOCAL_LOCK(lock),
 };
 
-static int is_first_zpdesc(struct zpdesc *zpdesc)
+static inline bool is_first_zpdesc(struct zpdesc *zpdesc)
 {
 	return PagePrivate(zpdesc_page(zpdesc));
 }
-- 
2.43.0




* [PATCH v4 22/22] mm/zsmalloc: update comments for page->zpdesc changes
  2024-07-29 11:25 [PATCH v4 00/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool alexs
                   ` (20 preceding siblings ...)
  2024-07-29 11:25 ` [PATCH v4 21/22] mm/zsmalloc: fix build warning from lkp testing alexs
@ 2024-07-29 11:25 ` alexs
  2024-07-30  9:37   ` Sergey Senozhatsky
  2024-07-30 12:31 ` [PATCH 23/23] mm/zsmalloc: introduce zpdesc_clear_first() helper alexs
  22 siblings, 1 reply; 50+ messages in thread
From: alexs @ 2024-07-29 11:25 UTC (permalink / raw)
  To: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs
  Cc: Alex Shi

From: Alex Shi <alexs@kernel.org>

Signed-off-by: Alex Shi <alexs@kernel.org>
---
 mm/zsmalloc.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 64e523ae71f8..50ce4a3b8279 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -967,7 +967,7 @@ static void init_zspage(struct size_class *class, struct zspage *zspage)
 	set_freeobj(zspage, 0);
 }
 
-static void create_page_chain(struct size_class *class, struct zspage *zspage,
+static void create_zpdesc_chain(struct size_class *class, struct zspage *zspage,
 				struct zpdesc *zpdescs[])
 {
 	int i;
@@ -976,9 +976,9 @@ static void create_page_chain(struct size_class *class, struct zspage *zspage,
 	int nr_zpdescs = class->pages_per_zspage;
 
 	/*
-	 * Allocate individual pages and link them together as:
-	 * 1. all pages are linked together using zpdesc->next
-	 * 2. each sub-page point to zspage using zpdesc->zspage
+	 * Allocate individual zpdescs and link them together as:
+	 * 1. all zpdescs are linked together using zpdesc->next
+	 * 2. each sub-zpdesc point to zspage using zpdesc->zspage
 	 *
 	 * we set PG_private to identify the first zpdesc (i.e. no other zpdesc
 	 * has this flag set).
@@ -1036,7 +1036,7 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 		zpdescs[i] = zpdesc;
 	}
 
-	create_page_chain(class, zspage, zpdescs);
+	create_zpdesc_chain(class, zspage, zpdescs);
 	init_zspage(class, zspage);
 	zspage->pool = pool;
 	zspage->class = class->index;
@@ -1363,7 +1363,7 @@ static unsigned long obj_malloc(struct zs_pool *pool,
 		/* record handle in the header of allocated chunk */
 		link->handle = handle | OBJ_ALLOCATED_TAG;
 	else
-		/* record handle to page->index */
+		/* record handle to zpdesc->handle */
 		zspage->first_zpdesc->handle = handle | OBJ_ALLOCATED_TAG;
 
 	kunmap_atomic(vaddr);
@@ -1783,7 +1783,7 @@ static void replace_sub_page(struct size_class *class, struct zspage *zspage,
 		idx++;
 	} while ((zpdesc = get_next_zpdesc(zpdesc)) != NULL);
 
-	create_page_chain(class, zspage, zpdescs);
+	create_zpdesc_chain(class, zspage, zpdescs);
 	first_obj_offset = get_first_obj_offset(oldzpdesc);
 	set_first_obj_offset(newzpdesc, first_obj_offset);
 	if (unlikely(ZsHugePage(zspage)))
-- 
2.43.0




* Re: [PATCH v4 18/22] mm/zsmalloc: introduce __zpdesc_clear_movable
  2024-07-29 11:25 ` [PATCH v4 18/22] mm/zsmalloc: introduce __zpdesc_clear_movable alexs
@ 2024-07-30  9:34   ` Sergey Senozhatsky
  2024-07-30 11:38     ` Alex Shi
  0 siblings, 1 reply; 50+ messages in thread
From: Sergey Senozhatsky @ 2024-07-30  9:34 UTC (permalink / raw)
  To: alexs
  Cc: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs

On (24/07/29 19:25), alexs@kernel.org wrote:
[..]
> +static inline void __zpdesc_clear_movable(struct zpdesc *zpdesc)
> +{
> +	__ClearPageMovable(zpdesc_page(zpdesc));
> +}

[..]

> @@ -846,7 +846,7 @@ static void reset_zpdesc(struct zpdesc *zpdesc)
>  {
>  	struct page *page = zpdesc_page(zpdesc);
>  
> -	__ClearPageMovable(page);
> +	__zpdesc_clear_movable(zpdesc);
>  	ClearPagePrivate(page);

Just a quick question, I see that you wrote wrappers for pretty
much everything, including SetPagePrivate(), but not for
ClearPagePrivate()?



* Re: [PATCH v4 22/22] mm/zsmalloc: update comments for page->zpdesc changes
  2024-07-29 11:25 ` [PATCH v4 22/22] mm/zsmalloc: update comments for page->zpdesc changes alexs
@ 2024-07-30  9:37   ` Sergey Senozhatsky
  2024-07-30 11:45     ` Alex Shi
  0 siblings, 1 reply; 50+ messages in thread
From: Sergey Senozhatsky @ 2024-07-30  9:37 UTC (permalink / raw)
  To: alexs
  Cc: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs

On (24/07/29 19:25), alexs@kernel.org wrote:
> 
> From: Alex Shi <alexs@kernel.org>
> 

Usually some simple commit message is still expected.



* Re: [PATCH v4 03/22] mm/zsmalloc: convert __zs_map_object/__zs_unmap_object to use zpdesc
  2024-07-29 11:25 ` [PATCH v4 03/22] mm/zsmalloc: convert __zs_map_object/__zs_unmap_object to use zpdesc alexs
@ 2024-07-30  9:38   ` Sergey Senozhatsky
  0 siblings, 0 replies; 50+ messages in thread
From: Sergey Senozhatsky @ 2024-07-30  9:38 UTC (permalink / raw)
  To: alexs
  Cc: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs

On (24/07/29 19:25), alexs@kernel.org wrote:
> +static inline void *zpdesc_kmap_atomic(struct zpdesc *zpdesc)
> +{
> +	return kmap_atomic(zpdesc_page(zpdesc));
> +}
> +
[..]
>  	/* copy object to per-cpu buffer */
> -	addr = kmap_atomic(pages[0]);
> +	addr = zpdesc_kmap_atomic(zpdescs[0]);
>  	memcpy(buf, addr + off, sizes[0]);
>  	kunmap_atomic(addr);
> -	addr = kmap_atomic(pages[1]);
> +	addr = zpdesc_kmap_atomic(zpdescs[1]);
>  	memcpy(buf + sizes[0], addr, sizes[1]);
>  	kunmap_atomic(addr);

Don't know if kmap_atomic() wrapper buys us anything, but okay.



* Re: [PATCH v4 18/22] mm/zsmalloc: introduce __zpdesc_clear_movable
  2024-07-30  9:34   ` Sergey Senozhatsky
@ 2024-07-30 11:38     ` Alex Shi
  0 siblings, 0 replies; 50+ messages in thread
From: Alex Shi @ 2024-07-30 11:38 UTC (permalink / raw)
  To: Sergey Senozhatsky, alexs
  Cc: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, david, 42.hyeyoo, Yosry Ahmed, nphamcs



On 7/30/24 5:34 PM, Sergey Senozhatsky wrote:
> On (24/07/29 19:25), alexs@kernel.org wrote:
> [..]
>> +static inline void __zpdesc_clear_movable(struct zpdesc *zpdesc)
>> +{
>> +	__ClearPageMovable(zpdesc_page(zpdesc));
>> +}
> 
> [..]
> 
>> @@ -846,7 +846,7 @@ static void reset_zpdesc(struct zpdesc *zpdesc)
>>  {
>>  	struct page *page = zpdesc_page(zpdesc);
>>  
>> -	__ClearPageMovable(page);
>> +	__zpdesc_clear_movable(zpdesc);
>>  	ClearPagePrivate(page);
> 
> Just a quick question, I see that you wrote wrappers for pretty
> much everything, including SetPagePrivate(), but not for
> ClearPagePrivate()?

Hi Sergey,

Thanks for the comment!
Yes, it's better to have one for clear as well; I'll send a patch soon.

Alex



* Re: [PATCH v4 22/22] mm/zsmalloc: update comments for page->zpdesc changes
  2024-07-30  9:37   ` Sergey Senozhatsky
@ 2024-07-30 11:45     ` Alex Shi
  2024-07-31  2:16       ` Sergey Senozhatsky
  0 siblings, 1 reply; 50+ messages in thread
From: Alex Shi @ 2024-07-30 11:45 UTC (permalink / raw)
  To: Sergey Senozhatsky, alexs
  Cc: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, david, 42.hyeyoo, Yosry Ahmed, nphamcs



On 7/30/24 5:37 PM, Sergey Senozhatsky wrote:
> On (24/07/29 19:25), alexs@kernel.org wrote:
>>
>> From: Alex Shi <alexs@kernel.org>
>>
> 
> Usually some simple commit message is still expected.

Uh, my fault. Please forgive that omission; is the following log fine?

    After the page to zpdesc conversion, a few comments and function names
    still refer to page rather than zpdesc. Update the comments and rename
    the function create_page_chain() to create_zpdesc_chain().

Thanks
Alex



* [PATCH 23/23] mm/zsmalloc: introduce zpdesc_clear_first() helper
  2024-07-29 11:25 [PATCH v4 00/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool alexs
                   ` (21 preceding siblings ...)
  2024-07-29 11:25 ` [PATCH v4 22/22] mm/zsmalloc: update comments for page->zpdesc changes alexs
@ 2024-07-30 12:31 ` alexs
  22 siblings, 0 replies; 50+ messages in thread
From: alexs @ 2024-07-30 12:31 UTC (permalink / raw)
  To: alexs
  Cc: 42.hyeyoo, akpm, david, linmiaohe, linux-kernel, linux-mm,
	minchan, nphamcs, senozhatsky, vitaly.wool, willy, yosryahmed

From: Alex Shi <alexs@kernel.org>

Like zpdesc_set_first(), introduce a zpdesc_clear_first() helper for
ClearPagePrivate(), then clean up a 'struct page' usage in
reset_zpdesc().

Signed-off-by: Alex Shi <alexs@kernel.org>
To: linux-kernel@vger.kernel.org
To: linux-mm@kvack.org
To: Andrew Morton <akpm@linux-foundation.org>
To: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 50ce4a3b8279..731055ccef23 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -259,6 +259,11 @@ static inline void zpdesc_set_first(struct zpdesc *zpdesc)
 	SetPagePrivate(zpdesc_page(zpdesc));
 }
 
+static inline void zpdesc_clear_first(struct zpdesc *zpdesc)
+{
+	ClearPagePrivate(zpdesc_page(zpdesc));
+}
+
 static inline void zpdesc_inc_zone_page_state(struct zpdesc *zpdesc)
 {
 	inc_zone_page_state(zpdesc_page(zpdesc), NR_ZSPAGES);
@@ -844,10 +849,8 @@ static inline bool obj_allocated(struct zpdesc *zpdesc, void *obj,
 
 static void reset_zpdesc(struct zpdesc *zpdesc)
 {
-	struct page *page = zpdesc_page(zpdesc);
-
 	__zpdesc_clear_movable(zpdesc);
-	ClearPagePrivate(page);
+	zpdesc_clear_first(zpdesc);
 	zpdesc->zspage = NULL;
 	zpdesc->next = NULL;
 	reset_first_obj_offset(zpdesc);
-- 
2.43.0




* Re: [PATCH v4 22/22] mm/zsmalloc: update comments for page->zpdesc changes
  2024-07-30 11:45     ` Alex Shi
@ 2024-07-31  2:16       ` Sergey Senozhatsky
  2024-07-31  4:14         ` Alex Shi
  0 siblings, 1 reply; 50+ messages in thread
From: Sergey Senozhatsky @ 2024-07-31  2:16 UTC (permalink / raw)
  To: Alex Shi
  Cc: Sergey Senozhatsky, alexs, Vitaly Wool, Miaohe Lin, Andrew Morton,
	linux-kernel, linux-mm, minchan, willy, david, 42.hyeyoo,
	Yosry Ahmed, nphamcs

On (24/07/30 19:45), Alex Shi wrote:
> On 7/30/24 5:37 PM, Sergey Senozhatsky wrote:
> > On (24/07/29 19:25), alexs@kernel.org wrote:
> >>
> >> From: Alex Shi <alexs@kernel.org>
> >>
> > 
> > Usually some simple commit message is still expected.
> 
> Uh, my fault. Just forgive this part, is the following log fine?
> 
>     After the page to zpdesc conversion, there still left few comments or
>     function named with page not zpdesc, let's update the comments and
>     rename function create_page_chain() as create_zpdesc_chain().

A bit of a different thing, still documentation related tho: do
we want to do something about comments that mention page_lock in
zsmalloc.c?



* Re: [PATCH v4 22/22] mm/zsmalloc: update comments for page->zpdesc changes
  2024-07-31  2:16       ` Sergey Senozhatsky
@ 2024-07-31  4:14         ` Alex Shi
  2024-08-01  3:13           ` Sergey Senozhatsky
  0 siblings, 1 reply; 50+ messages in thread
From: Alex Shi @ 2024-07-31  4:14 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: alexs, Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel,
	linux-mm, minchan, willy, david, 42.hyeyoo, Yosry Ahmed, nphamcs



On 7/31/24 10:16 AM, Sergey Senozhatsky wrote:
> On (24/07/30 19:45), Alex Shi wrote:
>> On 7/30/24 5:37 PM, Sergey Senozhatsky wrote:
>>> On (24/07/29 19:25), alexs@kernel.org wrote:
>>>>
>>>> From: Alex Shi <alexs@kernel.org>
>>>>
>>>
>>> Usually some simple commit message is still expected.
>>
>> Uh, my fault. Just forgive this part, is the following log fine?
>>
>>     After the page to zpdesc conversion, there still left few comments or
>>     function named with page not zpdesc, let's update the comments and
>>     rename function create_page_chain() as create_zpdesc_chain().
> 
> A bit of a different thing, still documentation related tho: do
> we want to do something about comments that mention page_lock in
> zsmalloc.c?

Good question!

There are some comments about the page_lock in the file, but it is missing
from the file header, so how about the following addition:

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 731055ccef23..eac110edbff0 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -25,6 +25,8 @@
  *
  * Usage of struct zpdesc(page) flags:
  *     PG_private: identifies the first component page
+ *     PG_lock: lock all component pages when freeing a zspage; serializes
+ *              with migration
  */
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

Thanks a lot!



* Re: [PATCH v4 22/22] mm/zsmalloc: update comments for page->zpdesc changes
  2024-07-31  4:14         ` Alex Shi
@ 2024-08-01  3:13           ` Sergey Senozhatsky
  2024-08-01  3:35             ` Matthew Wilcox
  0 siblings, 1 reply; 50+ messages in thread
From: Sergey Senozhatsky @ 2024-08-01  3:13 UTC (permalink / raw)
  To: Alex Shi
  Cc: Sergey Senozhatsky, alexs, Vitaly Wool, Miaohe Lin, Andrew Morton,
	linux-kernel, linux-mm, minchan, willy, david, 42.hyeyoo,
	Yosry Ahmed, nphamcs

On (24/07/31 12:14), Alex Shi wrote:
> > A bit of a different thing, still documentation related tho: do
> > we want to do something about comments that mention page_lock in
> > zsmalloc.c?
> 
> Good question!
> 
> There are some comments in the file that mention the page_lock, but it is
> missing from the file header, so how about the following addition:

And e.g. things like

`The page locks trylock_zspage got will be released by __free_zspage.`

Should this (and the rest) spell "zpdesc locks" or something? Or do
we still want to refer to it as "page lock"?



* Re: [PATCH v4 22/22] mm/zsmalloc: update comments for page->zpdesc changes
  2024-08-01  3:13           ` Sergey Senozhatsky
@ 2024-08-01  3:35             ` Matthew Wilcox
  2024-08-01  8:06               ` Alex Shi
  0 siblings, 1 reply; 50+ messages in thread
From: Matthew Wilcox @ 2024-08-01  3:35 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Alex Shi, alexs, Vitaly Wool, Miaohe Lin, Andrew Morton,
	linux-kernel, linux-mm, minchan, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs

On Thu, Aug 01, 2024 at 12:13:04PM +0900, Sergey Senozhatsky wrote:
> On (24/07/31 12:14), Alex Shi wrote:
> > > A bit of a different thing, still documentation related tho: do
> > > we want to do something about comments that mention page_lock in
> > > zsmalloc.c?
> > 
> > Good question!
> > 
> > There are some comments in the file that mention the page_lock, but it is
> > missing from the file header, so how about the following addition:
> 
> And e.g. things like
> 
> `The page locks trylock_zspage got will be released by __free_zspage.`
> 
> Should this (and the rest) spell "zpdesc locks" or something? Or do
> we still want to refer to it as "page lock"?

pages do not have locks.  folios have locks.  zpdesc sounds like it has
a lock too.



* Re: [PATCH v4 22/22] mm/zsmalloc: update comments for page->zpdesc changes
  2024-08-01  3:35             ` Matthew Wilcox
@ 2024-08-01  8:06               ` Alex Shi
  0 siblings, 0 replies; 50+ messages in thread
From: Alex Shi @ 2024-08-01  8:06 UTC (permalink / raw)
  To: Matthew Wilcox, Sergey Senozhatsky
  Cc: alexs, Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel,
	linux-mm, minchan, david, 42.hyeyoo, Yosry Ahmed, nphamcs



On 8/1/24 11:35 AM, Matthew Wilcox wrote:
> On Thu, Aug 01, 2024 at 12:13:04PM +0900, Sergey Senozhatsky wrote:
>> On (24/07/31 12:14), Alex Shi wrote:
>>>> A bit of a different thing, still documentation related tho: do
>>>> we want to do something about comments that mention page_lock in
>>>> zsmalloc.c?
>>>
>>> Good question!
>>>
>>> There are some comments in the file that mention the page_lock, but it is
>>> missing from the file header, so how about the following addition:
>>
>> And e.g. things like
>>
>> `The page locks trylock_zspage got will be released by __free_zspage.`
>>
>> Should this (and the rest) spell "zpdesc locks" or something? Or do
>> we still want to refer to it as "page lock"?
> 
> pages do not have locks.  folios have locks.  zpdesc sounds like it has
> a lock too.

Thanks for willy's and Sergey's suggestions! If I understand correctly, we'd
better update all the sub-page references in the file to zpdesc?
Yes, that fits the code a bit better. So, is the following new patch fine?

=========
From 6699da8d62a22e9cba4ee4452b2805fc66920395 Mon Sep 17 00:00:00 2001
From: Alex Shi <alexs@kernel.org>
Date: Mon, 8 Jul 2024 20:26:20 +0800
Subject: [PATCH] mm/zsmalloc: update comments for page->zpdesc changes

Thanks to Sergey and Willy for the suggestions!
After the page to zpdesc conversion, a few comments and one function
are still named with page rather than zpdesc. Update the comments and
rename create_page_chain() to create_zpdesc_chain().

Signed-off-by: Alex Shi <alexs@kernel.org>
---
 mm/zsmalloc.c | 47 ++++++++++++++++++++++++++---------------------
 1 file changed, 26 insertions(+), 21 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 1543a339b7f4..490cecea72f6 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -17,14 +17,16 @@
  *
  * Usage of struct zpdesc fields:
  *	zpdesc->zspage: points to zspage
- *	zpdesc->next: links together all component pages of a zspage
+ *	zpdesc->next: links together all component zpdescs of a zspage
  *		For the huge page, this is always 0, so we use this field
  *		to store handle.
  *	zpdesc->first_obj_offset: PG_zsmalloc, lower 16 bit locate the first
  *		object offset in a subpage of a zspage
  *
  * Usage of struct zpdesc(page) flags:
- *	PG_private: identifies the first component page
+ *	PG_private: identifies the first component zpdesc
+ *	PG_lock: lock all component zpdescs when freeing a zspage; serializes
+ *		 with migration
  */
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
@@ -191,7 +193,10 @@ struct size_class {
 	 */
 	int size;
 	int objs_per_zspage;
-	/* Number of PAGE_SIZE sized pages to combine to form a 'zspage' */
+	/*
+	 * Number of PAGE_SIZE sized zpdescs/pages to combine to
+	 * form a 'zspage'
+	 */
 	int pages_per_zspage;
 
 	unsigned int index;
@@ -913,7 +918,7 @@ static void free_zspage(struct zs_pool *pool, struct size_class *class,
 
 	/*
 	 * Since zs_free couldn't be sleepable, this function cannot call
-	 * lock_page. The page locks trylock_zspage got will be released
+	 * lock_page. The zpdesc locks that trylock_zspage got will be released
 	 * by __free_zspage.
 	 */
 	if (!trylock_zspage(zspage)) {
@@ -970,7 +975,7 @@ static void init_zspage(struct size_class *class, struct zspage *zspage)
 	set_freeobj(zspage, 0);
 }
 
-static void create_page_chain(struct size_class *class, struct zspage *zspage,
+static void create_zpdesc_chain(struct size_class *class, struct zspage *zspage,
 				struct zpdesc *zpdescs[])
 {
 	int i;
@@ -979,9 +984,9 @@ static void create_page_chain(struct size_class *class, struct zspage *zspage,
 	int nr_zpdescs = class->pages_per_zspage;
 
 	/*
-	 * Allocate individual pages and link them together as:
-	 * 1. all pages are linked together using zpdesc->next
-	 * 2. each sub-page point to zspage using zpdesc->zspage
+	 * Allocate individual zpdescs and link them together as:
+	 * 1. all zpdescs are linked together using zpdesc->next
+	 * 2. each sub-zpdesc points to zspage using zpdesc->zspage
 	 *
 	 * we set PG_private to identify the first zpdesc (i.e. no other zpdesc
 	 * has this flag set).
@@ -1039,7 +1044,7 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 		zpdescs[i] = zpdesc;
 	}
 
-	create_page_chain(class, zspage, zpdescs);
+	create_zpdesc_chain(class, zspage, zpdescs);
 	init_zspage(class, zspage);
 	zspage->pool = pool;
 	zspage->class = class->index;
@@ -1366,7 +1371,7 @@ static unsigned long obj_malloc(struct zs_pool *pool,
 		/* record handle in the header of allocated chunk */
 		link->handle = handle | OBJ_ALLOCATED_TAG;
 	else
-		/* record handle to page->index */
+		/* record the handle in zpdesc->handle */
 		zspage->first_zpdesc->handle = handle | OBJ_ALLOCATED_TAG;
 
 	kunmap_atomic(vaddr);
@@ -1699,19 +1704,19 @@ static int putback_zspage(struct size_class *class, struct zspage *zspage)
 #ifdef CONFIG_COMPACTION
 /*
  * To prevent zspage destroy during migration, zspage freeing should
- * hold locks of all pages in the zspage.
+ * hold locks of all component zpdescs in the zspage.
  */
 static void lock_zspage(struct zspage *zspage)
 {
 	struct zpdesc *curr_zpdesc, *zpdesc;
 
 	/*
-	 * Pages we haven't locked yet can be migrated off the list while we're
+	 * Zpdescs we haven't locked yet can be migrated off the list while we're
 	 * trying to lock them, so we need to be careful and only attempt to
-	 * lock each page under migrate_read_lock(). Otherwise, the page we lock
-	 * may no longer belong to the zspage. This means that we may wait for
-	 * the wrong page to unlock, so we must take a reference to the page
-	 * prior to waiting for it to unlock outside migrate_read_lock().
+	 * lock each zpdesc under migrate_read_lock(). Otherwise, the zpdesc we
+	 * lock may no longer belong to the zspage. This means that we may wait
+	 * for the wrong zpdesc to unlock, so we must take a reference to the
+	 * zpdesc prior to waiting for it to unlock outside migrate_read_lock().
 	 */
 	while (1) {
 		migrate_read_lock(zspage);
@@ -1786,7 +1791,7 @@ static void replace_sub_page(struct size_class *class, struct zspage *zspage,
 		idx++;
 	} while ((zpdesc = get_next_zpdesc(zpdesc)) != NULL);
 
-	create_page_chain(class, zspage, zpdescs);
+	create_zpdesc_chain(class, zspage, zpdescs);
 	first_obj_offset = get_first_obj_offset(oldzpdesc);
 	set_first_obj_offset(newzpdesc, first_obj_offset);
 	if (unlikely(ZsHugePage(zspage)))
@@ -1797,8 +1802,8 @@ static void replace_sub_page(struct size_class *class, struct zspage *zspage,
 static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 {
 	/*
-	 * Page is locked so zspage couldn't be destroyed. For detail, look at
-	 * lock_zspage in free_zspage.
+	 * Page/zpdesc is locked so zspage couldn't be destroyed. For detail,
+	 * look at lock_zspage in free_zspage.
 	 */
 	VM_BUG_ON_PAGE(PageIsolated(page), page);
 
@@ -1825,7 +1830,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	/* We're committed, tell the world that this is a Zsmalloc page. */
 	__zpdesc_set_zsmalloc(newzpdesc);
 
-	/* The page is locked, so this pointer must remain valid */
+	/* The zpdesc/page is locked, so this pointer must remain valid */
 	zspage = get_zspage(zpdesc);
 	pool = zspage->pool;
 
@@ -1898,7 +1903,7 @@ static const struct movable_operations zsmalloc_mops = {
 };
 
 /*
- * Caller should hold page_lock of all pages in the zspage
+ * Caller should hold the locks of all zpdescs in the zspage
  * In here, we cannot use zspage meta data.
  */
 static void async_free_zspage(struct work_struct *work)
-- 
2.43.0




* Re: [PATCH v4 01/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool
  2024-07-29 11:25 ` [PATCH v4 01/22] " alexs
@ 2024-08-02 18:52   ` Vishal Moola
  2024-08-05  4:06     ` Alex Shi
  2024-08-02 19:30   ` Matthew Wilcox
  1 sibling, 1 reply; 50+ messages in thread
From: Vishal Moola @ 2024-08-02 18:52 UTC (permalink / raw)
  To: alexs
  Cc: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs

On Mon, Jul 29, 2024 at 07:25:13PM +0800, alexs@kernel.org wrote:
> From: Alex Shi <alexs@kernel.org>

I've been busy with other things, so I haven't been able to review this
until now. Thanks to both you and Hyeonggon for working on this memdesc :)

> The 1st patch introduces the new memory descriptor zpdesc and renames
> zspage.first_page to zspage.first_zpdesc; no functional change.
> 
> We removed PG_owner_priv_1 since it was moved to zspage after
> commit a41ec880aa7b ("zsmalloc: move huge compressed obj from
> page to zspage").
> 
> And keep the memcg_data member, since as Yosry pointed out:
> "When the pages are freed, put_page() -> folio_put() -> __folio_put() will call
> mem_cgroup_uncharge(). The latter will call folio_memcg() (which reads
> folio->memcg_data) to figure out if uncharging needs to be done.
> 
> There are also other similar code paths that will check
> folio->memcg_data. It is currently expected to be present for all
> folios. So until we have custom code paths per-folio type for
> allocation/freeing/etc, we need to keep folio->memcg_data present and
> properly initialized."
> 
> Originally-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Signed-off-by: Alex Shi <alexs@kernel.org>
> ---
>  mm/zpdesc.h   | 66 +++++++++++++++++++++++++++++++++++++++++++++++++++
>  mm/zsmalloc.c | 21 ++++++++--------
>  2 files changed, 76 insertions(+), 11 deletions(-)
>  create mode 100644 mm/zpdesc.h
> 
> diff --git a/mm/zpdesc.h b/mm/zpdesc.h
> new file mode 100644
> index 000000000000..2dbef231f616
> --- /dev/null
> +++ b/mm/zpdesc.h
> @@ -0,0 +1,66 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/* zpdesc.h: zswap.zpool memory descriptor
> + *
> + * Written by Alex Shi <alexs@kernel.org>
> + *	      Hyeonggon Yoo <42.hyeyoo@gmail.com>
> + */
> +#ifndef __MM_ZPDESC_H__
> +#define __MM_ZPDESC_H__
> +
> +/*
> + * struct zpdesc -	Memory descriptor for zpool memory, now is for zsmalloc
> + * @flags:		Page flags, PG_private: identifies the first component page
> + * @lru:		Indirectly used by page migration
> + * @mops:		Used by page migration
> + * @next:		Next zpdesc in a zspage in zsmalloc zpool
> + * @handle:		For huge zspage in zsmalloc zpool
> + * @zspage:		Pointer to zspage in zsmalloc
> + * @memcg_data:		Memory Control Group data.
> + *

I think it's a good idea to include comments for the padding (namely what
aliases with it in struct page) here as well. It doesn't hurt, and will
make them easier to remove in the future.

> + * This struct overlays struct page for now. Do not modify without a good
> + * understanding of the issues.
> + */
> +struct zpdesc {
> +	unsigned long flags;
> +	struct list_head lru;
> +	struct movable_operations *mops;
> +	union {
> +		/* Next zpdescs in a zspage in zsmalloc zpool */
> +		struct zpdesc *next;
> +		/* For huge zspage in zsmalloc zpool */
> +		unsigned long handle;
> +	};
> +	struct zspage *zspage;

I like using pointers here, although I think the comments should be more
precise about what the purpose of the pointer is. Maybe something like
"Points to the zspage this zpdesc is a part of" or something.

> +	unsigned long _zp_pad_1;
> +#ifdef CONFIG_MEMCG
> +	unsigned long memcg_data;
> +#endif
> +};

You should definitely fold your additions to the struct from patch 17
into this patch. It makes it easier to review, and better for anyone
looking at the commit log in the future.

> +#define ZPDESC_MATCH(pg, zp) \
> +	static_assert(offsetof(struct page, pg) == offsetof(struct zpdesc, zp))
> +
> +ZPDESC_MATCH(flags, flags);
> +ZPDESC_MATCH(lru, lru);
> +ZPDESC_MATCH(mapping, mops);
> +ZPDESC_MATCH(index, next);
> +ZPDESC_MATCH(index, handle);
> +ZPDESC_MATCH(private, zspage);
> +#ifdef CONFIG_MEMCG
> +ZPDESC_MATCH(memcg_data, memcg_data);
> +#endif
> +#undef ZPDESC_MATCH
> +static_assert(sizeof(struct zpdesc) <= sizeof(struct page));
> +
> +#define zpdesc_page(zp)			(_Generic((zp),			\
> +	const struct zpdesc *:		(const struct page *)(zp),	\
> +	struct zpdesc *:		(struct page *)(zp)))
> +
> +#define zpdesc_folio(zp)		(_Generic((zp),			\
> +	const struct zpdesc *:		(const struct folio *)(zp),	\
> +	struct zpdesc *:		(struct folio *)(zp)))
> +
> +#define page_zpdesc(p)			(_Generic((p),			\
> +	const struct page *:		(const struct zpdesc *)(p),	\
> +	struct page *:			(struct zpdesc *)(p)))
> +
> +#endif

I don't think we need both page and folio cast functions for zpdescs.
Sticking to pages will probably suffice (and be easiest) since all APIs
zsmalloc cares about are already defined. 

We can stick to 1 "middle-man" descriptor for zpdescs since zsmalloc
uses those pages as space to track zspages and nothing more. We'll likely
end up completely removing it from zsmalloc once we can allocate
memdescs on their own: It seems most (if not all) of the "indirect" members
of zpdesc are used as indicators to the rest of core-mm telling them not to
mess with that memory.



* Re: [PATCH v4 02/22] mm/zsmalloc: use zpdesc in trylock_zspage/lock_zspage
  2024-07-29 11:25 ` [PATCH v4 02/22] mm/zsmalloc: use zpdesc in trylock_zspage/lock_zspage alexs
@ 2024-08-02 19:02   ` Vishal Moola
  2024-08-05  7:55     ` Alex Shi
  0 siblings, 1 reply; 50+ messages in thread
From: Vishal Moola @ 2024-08-02 19:02 UTC (permalink / raw)
  To: alexs
  Cc: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs

On Mon, Jul 29, 2024 at 07:25:14PM +0800, alexs@kernel.org wrote:
> From: Alex Shi <alexs@kernel.org>
> 
> To use zpdesc in the trylock_zspage/lock_zspage funcs, we add a couple of helpers:
> zpdesc_lock/zpdesc_unlock/zpdesc_trylock/zpdesc_wait_locked and
> zpdesc_get/zpdesc_put for this purpose.

You should always include the "()" following function names. It just
makes everything more readable.

> Here we use the folio-series functions in the guts for two reasons: first,
> zswap.zpool only gets single pages, and using folios saves some compound_head
> checking; second, folio_put() bypasses the devmap checking that we don't need.
> 
> Originally-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Signed-off-by: Alex Shi <alexs@kernel.org>
> ---
>  mm/zpdesc.h   | 30 ++++++++++++++++++++++++
>  mm/zsmalloc.c | 64 ++++++++++++++++++++++++++++++++++-----------------
>  2 files changed, 73 insertions(+), 21 deletions(-)
> 
> diff --git a/mm/zpdesc.h b/mm/zpdesc.h
> index 2dbef231f616..3b04197cec9d 100644
> --- a/mm/zpdesc.h
> +++ b/mm/zpdesc.h
> @@ -63,4 +63,34 @@ static_assert(sizeof(struct zpdesc) <= sizeof(struct page));
>  	const struct page *:		(const struct zpdesc *)(p),	\
>  	struct page *:			(struct zpdesc *)(p)))
>  
> +static inline void zpdesc_lock(struct zpdesc *zpdesc)
> +{
> +	folio_lock(zpdesc_folio(zpdesc));
> +}
> +
> +static inline bool zpdesc_trylock(struct zpdesc *zpdesc)
> +{
> +	return folio_trylock(zpdesc_folio(zpdesc));
> +}
> +
> +static inline void zpdesc_unlock(struct zpdesc *zpdesc)
> +{
> +	folio_unlock(zpdesc_folio(zpdesc));
> +}
> +
> +static inline void zpdesc_wait_locked(struct zpdesc *zpdesc)
> +{
> +	folio_wait_locked(zpdesc_folio(zpdesc));
> +}

The more I look at zsmalloc, the more skeptical I get about it "needing"
the folio_lock. At a glance it seems like a zspage already has its own lock,
and the migration doesn't appear to be truly physical? There's probably
something I'm missing... it would make this code a lot simpler to drop
many of the folio locks.

> +
> +static inline void zpdesc_get(struct zpdesc *zpdesc)
> +{
> +	folio_get(zpdesc_folio(zpdesc));
> +}
> +
> +static inline void zpdesc_put(struct zpdesc *zpdesc)
> +{
> +	folio_put(zpdesc_folio(zpdesc));
> +}
> +
>  #endif
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index a532851025f9..243677a9c6d2 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -433,13 +433,17 @@ static __maybe_unused int is_first_page(struct page *page)
>  	return PagePrivate(page);
>  }
>  
> +static int is_first_zpdesc(struct zpdesc *zpdesc)
> +{
> +	return PagePrivate(zpdesc_page(zpdesc));
> +}
> +

I feel like we might not even need to use the PG_private flag for
zspages? It seems to me like it's just used for sanity checking. Can
zspage->first_page ever not point to the first zpdesc?

For the purpose of introducing the memdesc its fine to continue using
it; just some food for thought.



* Re: [PATCH v4 06/22] mm/zsmalloc: convert create_page_chain() and its users to use zpdesc
  2024-07-29 11:25 ` [PATCH v4 06/22] mm/zsmalloc: convert create_page_chain() and its users " alexs
@ 2024-08-02 19:09   ` Vishal Moola
  2024-08-05  8:20     ` Alex Shi
  0 siblings, 1 reply; 50+ messages in thread
From: Vishal Moola @ 2024-08-02 19:09 UTC (permalink / raw)
  To: alexs
  Cc: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs

On Mon, Jul 29, 2024 at 07:25:18PM +0800, alexs@kernel.org wrote:
> From: Alex Shi <alexs@kernel.org>
> 
> Introduce a few helper functions to convert create_page_chain() to use
> zpdesc, then use zpdesc in replace_sub_page() too.

As a general note, I've been having trouble keeping track of your helper
functions throughout your patchset. Things get confusing when helper
functions are "add-ons" to patches and are then replaced/rewritten
in various subsequent patches - might just be me though.

> Originally-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Signed-off-by: Alex Shi <alexs@kernel.org>
> ---
>  mm/zpdesc.h   |   6 +++
>  mm/zsmalloc.c | 115 +++++++++++++++++++++++++++++++++-----------------
>  2 files changed, 82 insertions(+), 39 deletions(-)
> 
> diff --git a/mm/zpdesc.h b/mm/zpdesc.h
> index 79ec40b03956..2293453f5d57 100644
> --- a/mm/zpdesc.h
> +++ b/mm/zpdesc.h
> @@ -102,4 +102,10 @@ static inline struct zpdesc *pfn_zpdesc(unsigned long pfn)
>  {
>  	return page_zpdesc(pfn_to_page(pfn));
>  }
> +
> +static inline void __zpdesc_set_movable(struct zpdesc *zpdesc,
> +					const struct movable_operations *mops)
> +{
> +	__SetPageMovable(zpdesc_page(zpdesc), mops);
> +}
>  #endif
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index bbc165cb587d..a8f390beeab8 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -248,6 +248,41 @@ static inline void *zpdesc_kmap_atomic(struct zpdesc *zpdesc)
>  	return kmap_atomic(zpdesc_page(zpdesc));
>  }
>  
> +static inline void zpdesc_set_zspage(struct zpdesc *zpdesc,
> +				     struct zspage *zspage)
> +{
> +	zpdesc->zspage = zspage;
> +}
> +
> +static inline void zpdesc_set_first(struct zpdesc *zpdesc)
> +{
> +	SetPagePrivate(zpdesc_page(zpdesc));
> +}
> +

I'm not a fan of the names above. IMO, naming should follow some
semblance of consistency regarding their purpose (or have comments
that describe their purpose instead).

At a glance zpdesc_set_zspage() and zpdesc_set_first() sound like they
are doing similar things, but I don't think they serve similar purposes?

> +static inline void zpdesc_inc_zone_page_state(struct zpdesc *zpdesc)
> +{
> +	inc_zone_page_state(zpdesc_page(zpdesc), NR_ZSPAGES);
> +}
> +
> +static inline void zpdesc_dec_zone_page_state(struct zpdesc *zpdesc)
> +{
> +	dec_zone_page_state(zpdesc_page(zpdesc), NR_ZSPAGES);
> +}
> +
> +static inline struct zpdesc *alloc_zpdesc(gfp_t gfp)
> +{
> +	struct page *page = alloc_page(gfp);
> +
> +	return page_zpdesc(page);
> +}
> +
> +static inline void free_zpdesc(struct zpdesc *zpdesc)
> +{
> +	struct page *page = zpdesc_page(zpdesc);
> +
> +	__free_page(page);
> +}
> +
 



* Re: [PATCH v4 20/22] mm/zsmalloc: introduce __zpdesc_set_zsmalloc()
  2024-07-29 11:25 ` [PATCH v4 20/22] mm/zsmalloc: introduce __zpdesc_set_zsmalloc() alexs
@ 2024-08-02 19:11   ` Vishal Moola
  2024-08-05  8:28     ` Alex Shi
  0 siblings, 1 reply; 50+ messages in thread
From: Vishal Moola @ 2024-08-02 19:11 UTC (permalink / raw)
  To: alexs
  Cc: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs

On Mon, Jul 29, 2024 at 07:25:32PM +0800, alexs@kernel.org wrote:
> From: Alex Shi <alexs@kernel.org>
> 
> Add a helper __zpdesc_set_zsmalloc() for __SetPageZsmalloc(), and use
> it in callers to make the code clearer.

Definitely just fold this into the prior patch. It effectively does the
same thing.



* Re: [PATCH v4 21/22] mm/zsmalloc: fix build warning from lkp testing
  2024-07-29 11:25 ` [PATCH v4 21/22] mm/zsmalloc: fix build warning from lkp testing alexs
@ 2024-08-02 19:13   ` Vishal Moola
  2024-08-05  8:38     ` Alex Shi
  0 siblings, 1 reply; 50+ messages in thread
From: Vishal Moola @ 2024-08-02 19:13 UTC (permalink / raw)
  To: alexs
  Cc: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs, kernel test robot

On Mon, Jul 29, 2024 at 07:25:33PM +0800, alexs@kernel.org wrote:
> From: Alex Shi <alexs@kernel.org>
> 
> LKP reported the following warning w/o CONFIG_DEBUG_VM:
> 	mm/zsmalloc.c:471:12: warning: function 'is_first_zpdesc' is not
> 	needed and will not be emitted [-Wunneeded-internal-declaration]
> To remove this warning, it's better to inline the function is_first_zpdesc

In future iterations of the series, just fold this into the patch it's
fixing. It makes reviewing easier.




* Re: [PATCH v4 01/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool
  2024-07-29 11:25 ` [PATCH v4 01/22] " alexs
  2024-08-02 18:52   ` Vishal Moola
@ 2024-08-02 19:30   ` Matthew Wilcox
  2024-08-05  4:36     ` Alex Shi
  1 sibling, 1 reply; 50+ messages in thread
From: Matthew Wilcox @ 2024-08-02 19:30 UTC (permalink / raw)
  To: alexs
  Cc: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, senozhatsky, david, 42.hyeyoo, Yosry Ahmed, nphamcs

On Mon, Jul 29, 2024 at 07:25:13PM +0800, alexs@kernel.org wrote:
> +/*
> + * struct zpdesc -	Memory descriptor for zpool memory, now is for zsmalloc
> + * @flags:		Page flags, PG_private: identifies the first component page
> + * @lru:		Indirectly used by page migration
> + * @mops:		Used by page migration
> + * @next:		Next zpdesc in a zspage in zsmalloc zpool
> + * @handle:		For huge zspage in zsmalloc zpool
> + * @zspage:		Pointer to zspage in zsmalloc
> + * @memcg_data:		Memory Control Group data.
> + *
> + * This struct overlays struct page for now. Do not modify without a good
> + * understanding of the issues.
> + */
> +struct zpdesc {
> +	unsigned long flags;
> +	struct list_head lru;
> +	struct movable_operations *mops;
> +	union {
> +		/* Next zpdescs in a zspage in zsmalloc zpool */
> +		struct zpdesc *next;
> +		/* For huge zspage in zsmalloc zpool */
> +		unsigned long handle;
> +	};
> +	struct zspage *zspage;
> +	unsigned long _zp_pad_1;
> +#ifdef CONFIG_MEMCG
> +	unsigned long memcg_data;
> +#endif
> +};

Before we do a v5, what's the plan for a shrunk struct page?  It feels
like a lot of what's going on here is just "because we can".  But if you
actually had to allocate the memory, would you?

That is, if we get to

struct page {
	unsigned long memdesc;
};

what do you put in the 60 bits of information?  Do you allocate a
per-page struct zpdesc, and have each one pointing to a zspage?  Or do
you extend the current contents of zspage to describe the pages allocated
to it, and make each struct page point to the zspage?

I don't know your code, so I'm not trying to choose for you.  I'm just
trying to make sure we're walking in the right direction.



* Re: [PATCH v4 01/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool
  2024-08-02 18:52   ` Vishal Moola
@ 2024-08-05  4:06     ` Alex Shi
  2024-08-08 18:21       ` Vishal Moola
  0 siblings, 1 reply; 50+ messages in thread
From: Alex Shi @ 2024-08-05  4:06 UTC (permalink / raw)
  To: Vishal Moola, alexs
  Cc: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs



On 8/3/24 2:52 AM, Vishal Moola wrote:
> On Mon, Jul 29, 2024 at 07:25:13PM +0800, alexs@kernel.org wrote:
>> From: Alex Shi <alexs@kernel.org>
> 
> I've been busy with other things, so I haven't been able to review this
> until now. Thanks to both you and Hyeonggon for working on this memdesc :)

Hi Vishal,

Thanks a lot for your comments!

My pleasure! :)

> 
>> The 1st patch introduces the new memory descriptor zpdesc and renames
>> zspage.first_page to zspage.first_zpdesc; no functional change.
>>
>> We removed PG_owner_priv_1 since it was moved to zspage after
>> commit a41ec880aa7b ("zsmalloc: move huge compressed obj from
>> page to zspage").
>>
>> And keep the memcg_data member, since as Yosry pointed out:
>> "When the pages are freed, put_page() -> folio_put() -> __folio_put() will call
>> mem_cgroup_uncharge(). The latter will call folio_memcg() (which reads
>> folio->memcg_data) to figure out if uncharging needs to be done.
>>
>> There are also other similar code paths that will check
>> folio->memcg_data. It is currently expected to be present for all
>> folios. So until we have custom code paths per-folio type for
>> allocation/freeing/etc, we need to keep folio->memcg_data present and
>> properly initialized."
>>
>> Originally-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>> Signed-off-by: Alex Shi <alexs@kernel.org>
>> ---
>>  mm/zpdesc.h   | 66 +++++++++++++++++++++++++++++++++++++++++++++++++++
>>  mm/zsmalloc.c | 21 ++++++++--------
>>  2 files changed, 76 insertions(+), 11 deletions(-)
>>  create mode 100644 mm/zpdesc.h
>>
>> diff --git a/mm/zpdesc.h b/mm/zpdesc.h
>> new file mode 100644
>> index 000000000000..2dbef231f616
>> --- /dev/null
>> +++ b/mm/zpdesc.h
>> @@ -0,0 +1,66 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/* zpdesc.h: zswap.zpool memory descriptor
>> + *
>> + * Written by Alex Shi <alexs@kernel.org>
>> + *	      Hyeonggon Yoo <42.hyeyoo@gmail.com>
>> + */
>> +#ifndef __MM_ZPDESC_H__
>> +#define __MM_ZPDESC_H__
>> +
>> +/*
>> + * struct zpdesc -	Memory descriptor for zpool memory, now is for zsmalloc
>> + * @flags:		Page flags, PG_private: identifies the first component page
>> + * @lru:		Indirectly used by page migration
>> + * @mops:		Used by page migration
>> + * @next:		Next zpdesc in a zspage in zsmalloc zpool
>> + * @handle:		For huge zspage in zsmalloc zpool
>> + * @zspage:		Pointer to zspage in zsmalloc
>> + * @memcg_data:		Memory Control Group data.
>> + *
> 
> I think it's a good idea to include comments for the padding (namely what
> aliases with it in struct page) here as well. It doesn't hurt, and will
> make them easier to remove in the future.
> 
>> + * This struct overlays struct page for now. Do not modify without a good
>> + * understanding of the issues.
>> + */
>> +struct zpdesc {
>> +	unsigned long flags;
>> +	struct list_head lru;
>> +	struct movable_operations *mops;
>> +	union {
>> +		/* Next zpdescs in a zspage in zsmalloc zpool */
>> +		struct zpdesc *next;
>> +		/* For huge zspage in zsmalloc zpool */
>> +		unsigned long handle;
>> +	};
>> +	struct zspage *zspage;
> 
> I like using pointers here, although I think the comments should be more
> precise about what the purpose of the pointer is. Maybe something like
> "Points to the zspage this zpdesc is a part of" or something.

I will change the comments for this member. Thanks!

> 
>> +	unsigned long _zp_pad_1;
>> +#ifdef CONFIG_MEMCG
>> +	unsigned long memcg_data;
>> +#endif
>> +};
> 
> You should definitely fold your additions to the struct from patch 17
> into this patch. It makes it easier to review, and better for anyone
> looking at the commit log in the future.

Thanks! I will move the struct part from patch 17 here.

> 
>> +#define ZPDESC_MATCH(pg, zp) \
>> +	static_assert(offsetof(struct page, pg) == offsetof(struct zpdesc, zp))
>> +
>> +ZPDESC_MATCH(flags, flags);
>> +ZPDESC_MATCH(lru, lru);
>> +ZPDESC_MATCH(mapping, mops);
>> +ZPDESC_MATCH(index, next);
>> +ZPDESC_MATCH(index, handle);
>> +ZPDESC_MATCH(private, zspage);
>> +#ifdef CONFIG_MEMCG
>> +ZPDESC_MATCH(memcg_data, memcg_data);
>> +#endif
>> +#undef ZPDESC_MATCH
>> +static_assert(sizeof(struct zpdesc) <= sizeof(struct page));
>> +
>> +#define zpdesc_page(zp)			(_Generic((zp),			\
>> +	const struct zpdesc *:		(const struct page *)(zp),	\
>> +	struct zpdesc *:		(struct page *)(zp)))
>> +
>> +#define zpdesc_folio(zp)		(_Generic((zp),			\
>> +	const struct zpdesc *:		(const struct folio *)(zp),	\
>> +	struct zpdesc *:		(struct folio *)(zp)))
>> +
>> +#define page_zpdesc(p)			(_Generic((p),			\
>> +	const struct page *:		(const struct zpdesc *)(p),	\
>> +	struct page *:			(struct zpdesc *)(p)))
>> +
>> +#endif
> 
> I'm don't think we need both page and folio cast functions for zpdescs.
> Sticking to pages will probably suffice (and be easiest) since all APIs
> zsmalloc cares about are already defined. 
> 
> We can stick to 1 "middle-man" descriptor for zpdescs since zsmalloc
> uses those pages as space to track zspages and nothing more. We'll likely
> end up completely removing it from zsmalloc once we can allocate
> memdescs on their own: It seems most (if not all) of the "indirect" members
> of zpdesc are used as indicators to the rest of core-mm telling them not to
> mess with that memory.

Yes, my first attempt also skipped the folio part, but I found we get a 6.3%
object-size reduction in zsmalloc.o, from 37.2KB to 34.9KB, if we use the folio
lock family and folio_get()/folio_put(). That saving comes from skipping the
compound_head() check.
So I wrapped them carefully in the zpdesc functions in the zpdesc.h file.
They should be easy to replace once we use memdescs in the future. Could we keep
them for a while?

Thanks
Alex
 


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v4 01/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool
  2024-08-02 19:30   ` Matthew Wilcox
@ 2024-08-05  4:36     ` Alex Shi
  0 siblings, 0 replies; 50+ messages in thread
From: Alex Shi @ 2024-08-05  4:36 UTC (permalink / raw)
  To: Matthew Wilcox, alexs
  Cc: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, senozhatsky, david, 42.hyeyoo, Yosry Ahmed, nphamcs



On 8/3/24 3:30 AM, Matthew Wilcox wrote:
> On Mon, Jul 29, 2024 at 07:25:13PM +0800, alexs@kernel.org wrote:
>> +/*
>> + * struct zpdesc -	Memory descriptor for zpool memory, now is for zsmalloc
>> + * @flags:		Page flags, PG_private: identifies the first component page
>> + * @lru:		Indirectly used by page migration
>> + * @mops:		Used by page migration
>> + * @next:		Next zpdesc in a zspage in zsmalloc zpool
>> + * @handle:		For huge zspage in zsmalloc zpool
>> + * @zspage:		Pointer to zspage in zsmalloc
>> + * @memcg_data:		Memory Control Group data.
>> + *
>> + * This struct overlays struct page for now. Do not modify without a good
>> + * understanding of the issues.
>> + */
>> +struct zpdesc {
>> +	unsigned long flags;
>> +	struct list_head lru;
>> +	struct movable_operations *mops;
>> +	union {
>> +		/* Next zpdescs in a zspage in zsmalloc zpool */
>> +		struct zpdesc *next;
>> +		/* For huge zspage in zsmalloc zpool */
>> +		unsigned long handle;
>> +	};
>> +	struct zspage *zspage;
>> +	unsigned long _zp_pad_1;
>> +#ifdef CONFIG_MEMCG
>> +	unsigned long memcg_data;
>> +#endif
>> +};
> 
> Before we do a v5, what's the plan for a shrunk struct page?  It feels
> like a lot of what's going on here is just "because we can".  But if you
> actually had to allocate the memory, would you?
> 
> That is, if we get to
> 
> struct page {
> 	unsigned long memdesc;
> };
> 

Yes, we still have a huge gap to close before reaching that target. 

> what do you put in the 60  bits of information?  Do you allocate a
> per-page struct zpdesc, and have each one pointing to a zspage?  Or do
> you extend the current contents of zspage to describe the pages allocated
> to it, and make each struct page point to the zspage?

I am not very clear on the way to get there. The easy path for me, I guess,
would be to move the struct zpdesc members out to zspage, like the second option
you suggested.

I believe you have much more insight into the path to memdescs.
Are there references or detailed notes I could learn from?

Many thanks!
Alex
 
> 
> I don't know your code, so I'm not trying to choose for you.  I'm just
> trying to make sure we're walking in the right direction.


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v4 02/22] mm/zsmalloc: use zpdesc in trylock_zspage/lock_zspage
  2024-08-02 19:02   ` Vishal Moola
@ 2024-08-05  7:55     ` Alex Shi
  0 siblings, 0 replies; 50+ messages in thread
From: Alex Shi @ 2024-08-05  7:55 UTC (permalink / raw)
  To: Vishal Moola, alexs
  Cc: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs



On 8/3/24 3:02 AM, Vishal Moola wrote:
> On Mon, Jul 29, 2024 at 07:25:14PM +0800, alexs@kernel.org wrote:
>> From: Alex Shi <alexs@kernel.org>
>>
>> To use zpdesc in trylock_zspage/lock_zspage funcs, we add couple of helpers:
>> zpdesc_lock/zpdesc_unlock/zpdesc_trylock/zpdesc_wait_locked and
>> zpdesc_get/zpdesc_put for this purpose.
> 
> You should always include the "()" following function names. It just
> makes everything more readable.

Thanks for the reminder; I will update the commit log.

> 
>> Here we use the folio series func in guts for 2 reasons, one zswap.zpool
>> only get single page, and use folio could save some compound_head checking;
>> two, folio_put could bypass devmap checking that we don't need.
>>
>> Originally-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>> Signed-off-by: Alex Shi <alexs@kernel.org>
>> ---
>>  mm/zpdesc.h   | 30 ++++++++++++++++++++++++
>>  mm/zsmalloc.c | 64 ++++++++++++++++++++++++++++++++++-----------------
>>  2 files changed, 73 insertions(+), 21 deletions(-)
>>
>> diff --git a/mm/zpdesc.h b/mm/zpdesc.h
>> index 2dbef231f616..3b04197cec9d 100644
>> --- a/mm/zpdesc.h
>> +++ b/mm/zpdesc.h
>> @@ -63,4 +63,34 @@ static_assert(sizeof(struct zpdesc) <= sizeof(struct page));
>>  	const struct page *:		(const struct zpdesc *)(p),	\
>>  	struct page *:			(struct zpdesc *)(p)))
>>  
>> +static inline void zpdesc_lock(struct zpdesc *zpdesc)
>> +{
>> +	folio_lock(zpdesc_folio(zpdesc));
>> +}
>> +
>> +static inline bool zpdesc_trylock(struct zpdesc *zpdesc)
>> +{
>> +	return folio_trylock(zpdesc_folio(zpdesc));
>> +}
>> +
>> +static inline void zpdesc_unlock(struct zpdesc *zpdesc)
>> +{
>> +	folio_unlock(zpdesc_folio(zpdesc));
>> +}
>> +
>> +static inline void zpdesc_wait_locked(struct zpdesc *zpdesc)
>> +{
>> +	folio_wait_locked(zpdesc_folio(zpdesc));
>> +}
> 
> The more I look at zsmalloc, the more skeptical I get about it "needing"
> the folio_lock. At a glance it seems like a zspage already has its own lock,
> and the migration doesn't appear to be truly physical? There's probably
> something I'm missing... it would make this code a lot simpler to drop
> many of the folio locks.

The folio functions save about 6.3% of the object code... Anyway, I don't insist
on them. Just to double-confirm: could we keep the code-size saving? :)

> 
>> +
>> +static inline void zpdesc_get(struct zpdesc *zpdesc)
>> +{
>> +	folio_get(zpdesc_folio(zpdesc));
>> +}
>> +
>> +static inline void zpdesc_put(struct zpdesc *zpdesc)
>> +{
>> +	folio_put(zpdesc_folio(zpdesc));
>> +}
>> +
>>  #endif
>> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
>> index a532851025f9..243677a9c6d2 100644
>> --- a/mm/zsmalloc.c
>> +++ b/mm/zsmalloc.c
>> @@ -433,13 +433,17 @@ static __maybe_unused int is_first_page(struct page *page)
>>  	return PagePrivate(page);
>>  }
>>  
>> +static int is_first_zpdesc(struct zpdesc *zpdesc)
>> +{
>> +	return PagePrivate(zpdesc_page(zpdesc));
>> +}
>> +
> 
> I feel like we might not even need to use the PG_private flag for
> zpages? It seems to me like its just used for sanity checking. Can
> zpage->first_page ever not point to the first zpdesc?

Yes, PG_private is only used for sanity checking now, while zspage.first_zpdesc
is still widely used and must point to the first subpage.
I believe we could safely remove this page flag, maybe in the next patchset?

> 
> For the purpose of introducing the memdesc its fine to continue using
> it; just some food for thought.

Yes.
 
Thanks a lot! :)



^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v4 06/22] mm/zsmalloc: convert create_page_chain() and its users to use zpdesc
  2024-08-02 19:09   ` Vishal Moola
@ 2024-08-05  8:20     ` Alex Shi
  2024-08-08 18:25       ` Vishal Moola
  0 siblings, 1 reply; 50+ messages in thread
From: Alex Shi @ 2024-08-05  8:20 UTC (permalink / raw)
  To: Vishal Moola, alexs
  Cc: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs



On 8/3/24 3:09 AM, Vishal Moola wrote:
> On Mon, Jul 29, 2024 at 07:25:18PM +0800, alexs@kernel.org wrote:
>> From: Alex Shi <alexs@kernel.org>
>>
>> Introduce a few helper functions for conversion to convert create_page_chain()
>> to use zpdesc, then use zpdesc in replace_sub_page() too.
> 
> As a general note, I've been having trouble keeping track of your helper
> functions throughout your patchset. Things get confusing when helper
> functions are "add-ons" to patches and are then replaced/rewritten
> in various subsequent patches - might just be me though.

Right, too many helpers may not actually help.

> 
>> Originally-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>> Signed-off-by: Alex Shi <alexs@kernel.org>
>> ---
>>  mm/zpdesc.h   |   6 +++
>>  mm/zsmalloc.c | 115 +++++++++++++++++++++++++++++++++-----------------
>>  2 files changed, 82 insertions(+), 39 deletions(-)
>>
>> diff --git a/mm/zpdesc.h b/mm/zpdesc.h
>> index 79ec40b03956..2293453f5d57 100644
>> --- a/mm/zpdesc.h
>> +++ b/mm/zpdesc.h
>> @@ -102,4 +102,10 @@ static inline struct zpdesc *pfn_zpdesc(unsigned long pfn)
>>  {
>>  	return page_zpdesc(pfn_to_page(pfn));
>>  }
>> +
>> +static inline void __zpdesc_set_movable(struct zpdesc *zpdesc,
>> +					const struct movable_operations *mops)
>> +{
>> +	__SetPageMovable(zpdesc_page(zpdesc), mops);
>> +}
>>  #endif
>> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
>> index bbc165cb587d..a8f390beeab8 100644
>> --- a/mm/zsmalloc.c
>> +++ b/mm/zsmalloc.c
>> @@ -248,6 +248,41 @@ static inline void *zpdesc_kmap_atomic(struct zpdesc *zpdesc)
>>  	return kmap_atomic(zpdesc_page(zpdesc));
>>  }
>>  
>> +static inline void zpdesc_set_zspage(struct zpdesc *zpdesc,
>> +				     struct zspage *zspage)
>> +{
>> +	zpdesc->zspage = zspage;
>> +}
>> +
>> +static inline void zpdesc_set_first(struct zpdesc *zpdesc)
>> +{
>> +	SetPagePrivate(zpdesc_page(zpdesc));
>> +}
>> +
> 
> I'm not a fan of the names above. IMO, naming should follow some
> semblance of consistency regarding their purpose (or have comments
> that describe their purpose instead).
> 
> At a glance zpdesc_set_zspage() and zpdesc_set_first() sound like they
> are doing similar things, but I don't think they serve similar purposes?

zpdesc_set_zspage() is only used in one place, so a helper may not be needed; let me remove it.
The same goes for alloc_zpdesc() and free_zpdesc(): they could be merged into their call sites.

Thanks
Alex
> 
>> +static inline void zpdesc_inc_zone_page_state(struct zpdesc *zpdesc)
>> +{
>> +	inc_zone_page_state(zpdesc_page(zpdesc), NR_ZSPAGES);
>> +}
>> +
>> +static inline void zpdesc_dec_zone_page_state(struct zpdesc *zpdesc)
>> +{
>> +	dec_zone_page_state(zpdesc_page(zpdesc), NR_ZSPAGES);
>> +}
>> +
>> +static inline struct zpdesc *alloc_zpdesc(gfp_t gfp)
>> +{
>> +	struct page *page = alloc_page(gfp);
>> +
>> +	return page_zpdesc(page);
>> +}
>> +
>> +static inline void free_zpdesc(struct zpdesc *zpdesc)
>> +{
>> +	struct page *page = zpdesc_page(zpdesc);
>> +
>> +	__free_page(page);
>> +}
>> +
>  


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v4 20/22] mm/zsmalloc: introduce __zpdesc_set_zsmalloc()
  2024-08-02 19:11   ` Vishal Moola
@ 2024-08-05  8:28     ` Alex Shi
  0 siblings, 0 replies; 50+ messages in thread
From: Alex Shi @ 2024-08-05  8:28 UTC (permalink / raw)
  To: Vishal Moola, alexs
  Cc: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs



On 8/3/24 3:11 AM, Vishal Moola wrote:
> On Mon, Jul 29, 2024 at 07:25:32PM +0800, alexs@kernel.org wrote:
>> From: Alex Shi <alexs@kernel.org>
>>
>> Add a helper __zpdesc_set_zsmalloc() for __SetPageZsmalloc(), and use
>> it in callers to make code clear.
> 
> Definitely just fold this into the prior patch. It effectively does the
> same thing.

Will merge them into one.

Thanks for comments!


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v4 21/22] mm/zsmalloc: fix build warning from lkp testing
  2024-08-02 19:13   ` Vishal Moola
@ 2024-08-05  8:38     ` Alex Shi
  0 siblings, 0 replies; 50+ messages in thread
From: Alex Shi @ 2024-08-05  8:38 UTC (permalink / raw)
  To: Vishal Moola, alexs
  Cc: Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel, linux-mm,
	minchan, willy, senozhatsky, david, 42.hyeyoo, Yosry Ahmed,
	nphamcs, kernel test robot



On 8/3/24 3:13 AM, Vishal Moola wrote:
> On Mon, Jul 29, 2024 at 07:25:33PM +0800, alexs@kernel.org wrote:
>> From: Alex Shi <alexs@kernel.org>
>>
>> LKP reported the following warning w/o CONFIG_DEBUG_VM:
>> 	mm/zsmalloc.c:471:12: warning: function 'is_first_zpdesc' is not
>> 	needed and will not be emitted [-Wunneeded-internal-declaration]
>> To remove this warning, better to incline the function is_first_zpdesc
> 
> In future iterations of the series, just fold this into the patch its
> fixing. It makes reviewing easier.
> 

Yes, thanks for comments!


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v4 01/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool
  2024-08-05  4:06     ` Alex Shi
@ 2024-08-08 18:21       ` Vishal Moola
  2024-08-09  1:57         ` Alex Shi
  0 siblings, 1 reply; 50+ messages in thread
From: Vishal Moola @ 2024-08-08 18:21 UTC (permalink / raw)
  To: Alex Shi
  Cc: alexs, Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel,
	linux-mm, minchan, willy, senozhatsky, david, 42.hyeyoo,
	Yosry Ahmed, nphamcs

On Mon, Aug 05, 2024 at 12:06:24PM +0800, Alex Shi wrote:
> 
> 
> On 8/3/24 2:52 AM, Vishal Moola wrote:
> > On Mon, Jul 29, 2024 at 07:25:13PM +0800, alexs@kernel.org wrote:
> >> From: Alex Shi <alexs@kernel.org>
> > 
> >> +	static_assert(offsetof(struct page, pg) == offsetof(struct zpdesc, zp))
> >> +
> >> +ZPDESC_MATCH(flags, flags);
> >> +ZPDESC_MATCH(lru, lru);
> >> +ZPDESC_MATCH(mapping, mops);
> >> +ZPDESC_MATCH(index, next);
> >> +ZPDESC_MATCH(index, handle);
> >> +ZPDESC_MATCH(private, zspage);
> >> +#ifdef CONFIG_MEMCG
> >> +ZPDESC_MATCH(memcg_data, memcg_data);
> >> +#endif
> >> +#undef ZPDESC_MATCH
> >> +static_assert(sizeof(struct zpdesc) <= sizeof(struct page));
> >> +
> >> +#define zpdesc_page(zp)			(_Generic((zp),			\
> >> +	const struct zpdesc *:		(const struct page *)(zp),	\
> >> +	struct zpdesc *:		(struct page *)(zp)))
> >> +
> >> +#define zpdesc_folio(zp)		(_Generic((zp),			\
> >> +	const struct zpdesc *:		(const struct folio *)(zp),	\
> >> +	struct zpdesc *:		(struct folio *)(zp)))
> >> +
> >> +#define page_zpdesc(p)			(_Generic((p),			\
> >> +	const struct page *:		(const struct zpdesc *)(p),	\
> >> +	struct page *:			(struct zpdesc *)(p)))
> >> +
> >> +#endif
> > 
> > I'm don't think we need both page and folio cast functions for zpdescs.
> > Sticking to pages will probably suffice (and be easiest) since all APIs
> > zsmalloc cares about are already defined. 
> > 
> > We can stick to 1 "middle-man" descriptor for zpdescs since zsmalloc
> > uses those pages as space to track zspages and nothing more. We'll likely
> > end up completely removing it from zsmalloc once we can allocate
> > memdescs on their own: It seems most (if not all) of the "indirect" members
> > of zpdesc are used as indicators to the rest of core-mm telling them not to
> > mess with that memory.
> 
> Yes, that is also my first attempt to skip folio part, but I found we could got
> 6.3% object size reduced on zsmalloc.o file, from 37.2KB to 34.9KB, if we use
> folio series lock and folio_get/put functions. That saving come from compound_head
> check skipping.
> So I wrapped them carefully in zpdesc series functions in zpdesc.h file.
> They should be easy replaced when we use memdescs in the future. Could we keep them
> a while, or ?

IMO, Its alright to keep both pages and folios due to the size reduction.
However if we do keep both, it should be clearer that we Want zpdescs to
be order-0 pages, and the only reason we have folios is that
compound_head() size reduction (and nothing more). I think a comment by
the zpdesc_folio() macro will suffice.

> Thanks
> Alex
>  


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v4 06/22] mm/zsmalloc: convert create_page_chain() and its users to use zpdesc
  2024-08-05  8:20     ` Alex Shi
@ 2024-08-08 18:25       ` Vishal Moola
  2024-08-09  1:57         ` Alex Shi
  0 siblings, 1 reply; 50+ messages in thread
From: Vishal Moola @ 2024-08-08 18:25 UTC (permalink / raw)
  To: Alex Shi
  Cc: alexs, Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel,
	linux-mm, minchan, willy, senozhatsky, david, 42.hyeyoo,
	Yosry Ahmed, nphamcs

On Mon, Aug 05, 2024 at 04:20:15PM +0800, Alex Shi wrote:
> 
> 
> On 8/3/24 3:09 AM, Vishal Moola wrote:
> > On Mon, Jul 29, 2024 at 07:25:18PM +0800, alexs@kernel.org wrote:
> >> From: Alex Shi <alexs@kernel.org>
> >>
> >> Introduce a few helper functions for conversion to convert create_page_chain()
> >> to use zpdesc, then use zpdesc in replace_sub_page() too.
> > 
> > As a general note, I've been having trouble keeping track of your helper
> > functions throughout your patchset. Things get confusing when helper
> > functions are "add-ons" to patches and are then replaced/rewritten
> > in various subsequent patches - might just be me though.
> 
> Right, maybe too much helper doesn't give necessary help.
> 
> > 
> >> Originally-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> >> Signed-off-by: Alex Shi <alexs@kernel.org>
> >> ---
> >>  mm/zpdesc.h   |   6 +++
> >>  mm/zsmalloc.c | 115 +++++++++++++++++++++++++++++++++-----------------
> >>  2 files changed, 82 insertions(+), 39 deletions(-)
> >>
> >> diff --git a/mm/zpdesc.h b/mm/zpdesc.h
> >> index 79ec40b03956..2293453f5d57 100644
> >> --- a/mm/zpdesc.h
> >> +++ b/mm/zpdesc.h
> >> @@ -102,4 +102,10 @@ static inline struct zpdesc *pfn_zpdesc(unsigned long pfn)
> >>  {
> >>  	return page_zpdesc(pfn_to_page(pfn));
> >>  }
> >> +
> >> +static inline void __zpdesc_set_movable(struct zpdesc *zpdesc,
> >> +					const struct movable_operations *mops)
> >> +{
> >> +	__SetPageMovable(zpdesc_page(zpdesc), mops);
> >> +}
> >>  #endif
> >> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> >> index bbc165cb587d..a8f390beeab8 100644
> >> --- a/mm/zsmalloc.c
> >> +++ b/mm/zsmalloc.c
> >> @@ -248,6 +248,41 @@ static inline void *zpdesc_kmap_atomic(struct zpdesc *zpdesc)
> >>  	return kmap_atomic(zpdesc_page(zpdesc));
> >>  }
> >>  
> >> +static inline void zpdesc_set_zspage(struct zpdesc *zpdesc,
> >> +				     struct zspage *zspage)
> >> +{
> >> +	zpdesc->zspage = zspage;
> >> +}
> >> +
> >> +static inline void zpdesc_set_first(struct zpdesc *zpdesc)
> >> +{
> >> +	SetPagePrivate(zpdesc_page(zpdesc));
> >> +}
> >> +
> > 
> > I'm not a fan of the names above. IMO, naming should follow some
> > semblance of consistency regarding their purpose (or have comments
> > that describe their purpose instead).
> > 
> > At a glance zpdesc_set_zspage() and zpdesc_set_first() sound like they
> > are doing similar things, but I don't think they serve similar purposes?
> 
> zpdesc_set_zspage() only used in one place, a helper maynot needed. Let me remove it.
> Same thing for the alloc_zpdesc() and free_zpdesc(), they could be merge into using place.

alloc_zpdesc() and free_zpdesc() are fine as is. The helper functions
will be useful whenever memdescs can be allocated on their own, so its
better to introduce it now.

> Thanks
> Alex
> > 
> >> +static inline void zpdesc_inc_zone_page_state(struct zpdesc *zpdesc)
> >> +{
> >> +	inc_zone_page_state(zpdesc_page(zpdesc), NR_ZSPAGES);
> >> +}
> >> +
> >> +static inline void zpdesc_dec_zone_page_state(struct zpdesc *zpdesc)
> >> +{
> >> +	dec_zone_page_state(zpdesc_page(zpdesc), NR_ZSPAGES);
> >> +}
> >> +
> >> +static inline struct zpdesc *alloc_zpdesc(gfp_t gfp)
> >> +{
> >> +	struct page *page = alloc_page(gfp);
> >> +
> >> +	return page_zpdesc(page);
> >> +}
> >> +
> >> +static inline void free_zpdesc(struct zpdesc *zpdesc)
> >> +{
> >> +	struct page *page = zpdesc_page(zpdesc);
> >> +
> >> +	__free_page(page);
> >> +}
> >> +
> >  


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v4 06/22] mm/zsmalloc: convert create_page_chain() and its users to use zpdesc
  2024-08-08 18:25       ` Vishal Moola
@ 2024-08-09  1:57         ` Alex Shi
  0 siblings, 0 replies; 50+ messages in thread
From: Alex Shi @ 2024-08-09  1:57 UTC (permalink / raw)
  To: Vishal Moola
  Cc: alexs, Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel,
	linux-mm, minchan, willy, senozhatsky, david, 42.hyeyoo,
	Yosry Ahmed, nphamcs



On 8/9/24 2:25 AM, Vishal Moola wrote:
>>> I'm not a fan of the names above. IMO, naming should follow some
>>> semblance of consistency regarding their purpose (or have comments
>>> that describe their purpose instead).
>>>
>>> At a glance zpdesc_set_zspage() and zpdesc_set_first() sound like they
>>> are doing similar things, but I don't think they serve similar purposes?
>> zpdesc_set_zspage() only used in one place, a helper maynot needed. Let me remove it.
>> Same thing for the alloc_zpdesc() and free_zpdesc(), they could be merge into using place.
> alloc_zpdesc() and free_zpdesc() are fine as is. The helper functions
> will be useful whenever memdescs can be allocated on their own, so its
> better to introduce it now.

Thanks for the suggestion. I will restore them in the next version, maybe tomorrow if there are no more comments.


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v4 01/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool
  2024-08-08 18:21       ` Vishal Moola
@ 2024-08-09  1:57         ` Alex Shi
  0 siblings, 0 replies; 50+ messages in thread
From: Alex Shi @ 2024-08-09  1:57 UTC (permalink / raw)
  To: Vishal Moola
  Cc: alexs, Vitaly Wool, Miaohe Lin, Andrew Morton, linux-kernel,
	linux-mm, minchan, willy, senozhatsky, david, 42.hyeyoo,
	Yosry Ahmed, nphamcs



On 8/9/24 2:21 AM, Vishal Moola wrote:
>> So I wrapped them carefully in zpdesc series functions in zpdesc.h file.
>> They should be easy replaced when we use memdescs in the future. Could we keep them
>> a while, or ?
> IMO, Its alright to keep both pages and folios due to the size reduction.
> However if we do keep both, it should be clearer that we Want zpdescs to
> be order-0 pages, and the only reason we have folios is that
> compound_head() size reduction (and nothing more). I think a comment by
> the zpdesc_folio() macro will suffice.

Right, will add some comments for this.

Thanks!


^ permalink raw reply	[flat|nested] 50+ messages in thread

end of thread, other threads:[~2024-08-09  1:57 UTC | newest]

Thread overview: 50+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2024-07-29 11:25 [PATCH v4 00/22] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool alexs
2024-07-29 11:25 ` [PATCH v4 01/22] " alexs
2024-08-02 18:52   ` Vishal Moola
2024-08-05  4:06     ` Alex Shi
2024-08-08 18:21       ` Vishal Moola
2024-08-09  1:57         ` Alex Shi
2024-08-02 19:30   ` Matthew Wilcox
2024-08-05  4:36     ` Alex Shi
2024-07-29 11:25 ` [PATCH v4 02/22] mm/zsmalloc: use zpdesc in trylock_zspage/lock_zspage alexs
2024-08-02 19:02   ` Vishal Moola
2024-08-05  7:55     ` Alex Shi
2024-07-29 11:25 ` [PATCH v4 03/22] mm/zsmalloc: convert __zs_map_object/__zs_unmap_object to use zpdesc alexs
2024-07-30  9:38   ` Sergey Senozhatsky
2024-07-29 11:25 ` [PATCH v4 04/22] mm/zsmalloc: add and use pfn/zpdesc seeking funcs alexs
2024-07-29 11:25 ` [PATCH v4 05/22] mm/zsmalloc: convert obj_malloc() to use zpdesc alexs
2024-07-29 11:25 ` [PATCH v4 06/22] mm/zsmalloc: convert create_page_chain() and its users " alexs
2024-08-02 19:09   ` Vishal Moola
2024-08-05  8:20     ` Alex Shi
2024-08-08 18:25       ` Vishal Moola
2024-08-09  1:57         ` Alex Shi
2024-07-29 11:25 ` [PATCH v4 07/22] mm/zsmalloc: convert obj_allocated() and related helpers " alexs
2024-07-29 11:25 ` [PATCH v4 08/22] mm/zsmalloc: convert init_zspage() " alexs
2024-07-29 11:25 ` [PATCH v4 09/22] mm/zsmalloc: convert obj_to_page() and zs_free() " alexs
2024-07-29 11:25 ` [PATCH v4 10/22] mm/zsmalloc: add zpdesc_is_isolated/zpdesc_zone helper for zs_page_migrate alexs
2024-07-29 11:25 ` [PATCH v4 11/22] mm/zsmalloc: rename reset_page to reset_zpdesc and use zpdesc in it alexs
2024-07-29 11:25 ` [PATCH v4 12/22] mm/zsmalloc: convert __free_zspage() to use zdsesc alexs
2024-07-29 11:25 ` [PATCH v4 13/22] mm/zsmalloc: convert location_to_obj() to take zpdesc alexs
2024-07-29 11:25 ` [PATCH v4 14/22] mm/zsmalloc: convert migrate_zspage() to use zpdesc alexs
2024-07-29 11:25 ` [PATCH v4 15/22] mm/zsmalloc: convert get_zspage() to take zpdesc alexs
2024-07-29 11:25 ` [PATCH v4 16/22] mm/zsmalloc: convert SetZsPageMovable and remove unused funcs alexs
2024-07-29 11:25 ` [PATCH v4 17/22] mm/zsmalloc: convert get/set_first_obj_offset() to take zpdesc alexs
2024-07-29 11:25 ` [PATCH v4 18/22] mm/zsmalloc: introduce __zpdesc_clear_movable alexs
2024-07-30  9:34   ` Sergey Senozhatsky
2024-07-30 11:38     ` Alex Shi
2024-07-29 11:25 ` [PATCH v4 19/22] mm/zsmalloc: introduce __zpdesc_clear_zsmalloc alexs
2024-07-29 11:25 ` [PATCH v4 20/22] mm/zsmalloc: introduce __zpdesc_set_zsmalloc() alexs
2024-08-02 19:11   ` Vishal Moola
2024-08-05  8:28     ` Alex Shi
2024-07-29 11:25 ` [PATCH v4 21/22] mm/zsmalloc: fix build warning from lkp testing alexs
2024-08-02 19:13   ` Vishal Moola
2024-08-05  8:38     ` Alex Shi
2024-07-29 11:25 ` [PATCH v4 22/22] mm/zsmalloc: update comments for page->zpdesc changes alexs
2024-07-30  9:37   ` Sergey Senozhatsky
2024-07-30 11:45     ` Alex Shi
2024-07-31  2:16       ` Sergey Senozhatsky
2024-07-31  4:14         ` Alex Shi
2024-08-01  3:13           ` Sergey Senozhatsky
2024-08-01  3:35             ` Matthew Wilcox
2024-08-01  8:06               ` Alex Shi
2024-07-30 12:31 ` [PATCH 23/23] mm/zsmalloc: introduce zpdesc_clear_first() helper alexs

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).