public inbox for linux-kernel@vger.kernel.org
* [PATCH mm-unstable 0/5] Switch zswap to object read/write APIs
@ 2025-03-05  6:11 Yosry Ahmed
  2025-03-05  6:11 ` [PATCH mm-unstable 1/5] mm: zpool: Add interfaces for " Yosry Ahmed
                   ` (5 more replies)
  0 siblings, 6 replies; 28+ messages in thread
From: Yosry Ahmed @ 2025-03-05  6:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Johannes Weiner, Nhat Pham, Chengming Zhou, Minchan Kim,
	Sergey Senozhatsky, Herbert Xu, Thomas Gleixner, Peter Zijlstra,
	linux-mm, linux-kernel, Yosry Ahmed

This patch series updates zswap to use the new object read/write APIs
defined by zsmalloc in [1], and removes the old object mapping APIs and
the related code from zpool and zsmalloc.

This depends on the zsmalloc/zram series introducing the APIs [1] and
the series removing zbud and z3fold [2].

[1] https://lore.kernel.org/lkml/20250227043618.88380-1-senozhatsky@chromium.org/
[2] https://lore.kernel.org/lkml/20250129180633.3501650-1-yosry.ahmed@linux.dev/

Yosry Ahmed (5):
  mm: zpool: Add interfaces for object read/write APIs
  mm: zswap: Use object read/write APIs instead of object mapping APIs
  mm: zpool: Remove object mapping APIs
  mm: zsmalloc: Remove object mapping APIs and per-CPU map areas
  mm: zpool: Remove zpool_malloc_support_movable()

 include/linux/cpuhotplug.h |   1 -
 include/linux/zpool.h      |  42 ++----
 include/linux/zsmalloc.h   |  21 ---
 mm/zpool.c                 |  93 +++++--------
 mm/zsmalloc.c              | 263 +++----------------------------------
 mm/zswap.c                 |  37 ++----
 6 files changed, 75 insertions(+), 382 deletions(-)

-- 
2.48.1.711.g2feabab25a-goog


^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH mm-unstable 1/5] mm: zpool: Add interfaces for object read/write APIs
  2025-03-05  6:11 [PATCH mm-unstable 0/5] Switch zswap to object read/write APIs Yosry Ahmed
@ 2025-03-05  6:11 ` Yosry Ahmed
  2025-03-05  8:18   ` Sergey Senozhatsky
                     ` (2 more replies)
  2025-03-05  6:11 ` [PATCH mm-unstable 2/5] mm: zswap: Use object read/write APIs instead of object mapping APIs Yosry Ahmed
                   ` (4 subsequent siblings)
  5 siblings, 3 replies; 28+ messages in thread
From: Yosry Ahmed @ 2025-03-05  6:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Johannes Weiner, Nhat Pham, Chengming Zhou, Minchan Kim,
	Sergey Senozhatsky, Herbert Xu, Thomas Gleixner, Peter Zijlstra,
	linux-mm, linux-kernel, Yosry Ahmed

Zsmalloc introduced new APIs to read/write objects in addition to mapping
them. Add the necessary zpool interfaces.

Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
 include/linux/zpool.h | 17 +++++++++++++++
 mm/zpool.c            | 48 +++++++++++++++++++++++++++++++++++++++++++
 mm/zsmalloc.c         | 21 +++++++++++++++++++
 3 files changed, 86 insertions(+)

diff --git a/include/linux/zpool.h b/include/linux/zpool.h
index 5e6dc46b8cc4c..1784e735ee049 100644
--- a/include/linux/zpool.h
+++ b/include/linux/zpool.h
@@ -52,6 +52,16 @@ void *zpool_map_handle(struct zpool *pool, unsigned long handle,
 
 void zpool_unmap_handle(struct zpool *pool, unsigned long handle);
 
+
+void *zpool_obj_read_begin(struct zpool *zpool, unsigned long handle,
+			   void *local_copy);
+
+void zpool_obj_read_end(struct zpool *zpool, unsigned long handle,
+			void *handle_mem);
+
+void zpool_obj_write(struct zpool *zpool, unsigned long handle,
+		     void *handle_mem, size_t mem_len);
+
 u64 zpool_get_total_pages(struct zpool *pool);
 
 
@@ -90,6 +100,13 @@ struct zpool_driver {
 				enum zpool_mapmode mm);
 	void (*unmap)(void *pool, unsigned long handle);
 
+	void *(*obj_read_begin)(void *pool, unsigned long handle,
+				void *local_copy);
+	void (*obj_read_end)(void *pool, unsigned long handle,
+			     void *handle_mem);
+	void (*obj_write)(void *pool, unsigned long handle,
+			  void *handle_mem, size_t mem_len);
+
 	u64 (*total_pages)(void *pool);
 };
 
diff --git a/mm/zpool.c b/mm/zpool.c
index 4bbd12d4b6599..378c2d1e5638f 100644
--- a/mm/zpool.c
+++ b/mm/zpool.c
@@ -320,6 +320,54 @@ void zpool_unmap_handle(struct zpool *zpool, unsigned long handle)
 	zpool->driver->unmap(zpool->pool, handle);
 }
 
+/**
+ * zpool_obj_read_begin() - Start reading from a previously allocated handle.
+ * @zpool:	The zpool that the handle was allocated from
+ * @handle:	The handle to read from
+ * @local_copy:	A local buffer to use if needed.
+ *
+ * This starts a read operation of a previously allocated handle. The passed
+ * @local_copy buffer may be used if needed, by copying the object's memory
+ * into it. zpool_obj_read_end() MUST be called after the read is completed
+ * to undo any actions taken (e.g. release locks).
+ *
+ * Returns: A pointer to the handle memory to be read. If @local_copy is
+ * used, the returned pointer is @local_copy.
+ */
+void *zpool_obj_read_begin(struct zpool *zpool, unsigned long handle,
+			   void *local_copy)
+{
+	return zpool->driver->obj_read_begin(zpool->pool, handle, local_copy);
+}
+
+/**
+ * zpool_obj_read_end() - Finish reading from a previously allocated handle.
+ * @zpool:	The zpool that the handle was allocated from
+ * @handle:	The handle to read from
+ * @handle_mem:	The pointer returned by zpool_obj_read_begin()
+ *
+ * Finishes a read operation previously started by zpool_obj_read_begin().
+ */
+void zpool_obj_read_end(struct zpool *zpool, unsigned long handle,
+			void *handle_mem)
+{
+	zpool->driver->obj_read_end(zpool->pool, handle, handle_mem);
+}
+
+/**
+ * zpool_obj_write() - Write to a previously allocated handle.
+ * @zpool:	The zpool that the handle was allocated from
+ * @handle:	The handle to write to
+ * @handle_mem:	The memory to copy into the handle.
+ * @mem_len:	The length of memory to be written.
+ *
+ */
+void zpool_obj_write(struct zpool *zpool, unsigned long handle,
+		     void *handle_mem, size_t mem_len)
+{
+	zpool->driver->obj_write(zpool->pool, handle, handle_mem, mem_len);
+}
+
 /**
  * zpool_get_total_pages() - The total size of the pool
  * @zpool:	The zpool to check
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 63c99db71dc1f..d84b300db64e7 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -507,6 +507,24 @@ static void zs_zpool_unmap(void *pool, unsigned long handle)
 	zs_unmap_object(pool, handle);
 }
 
+static void *zs_zpool_obj_read_begin(void *pool, unsigned long handle,
+				     void *local_copy)
+{
+	return zs_obj_read_begin(pool, handle, local_copy);
+}
+
+static void zs_zpool_obj_read_end(void *pool, unsigned long handle,
+				  void *handle_mem)
+{
+	zs_obj_read_end(pool, handle, handle_mem);
+}
+
+static void zs_zpool_obj_write(void *pool, unsigned long handle,
+			       void *handle_mem, size_t mem_len)
+{
+	zs_obj_write(pool, handle, handle_mem, mem_len);
+}
+
 static u64 zs_zpool_total_pages(void *pool)
 {
 	return zs_get_total_pages(pool);
@@ -522,6 +540,9 @@ static struct zpool_driver zs_zpool_driver = {
 	.free =			  zs_zpool_free,
 	.map =			  zs_zpool_map,
 	.unmap =		  zs_zpool_unmap,
+	.obj_read_begin =	  zs_zpool_obj_read_begin,
+	.obj_read_end  =	  zs_zpool_obj_read_end,
+	.obj_write =		  zs_zpool_obj_write,
 	.total_pages =		  zs_zpool_total_pages,
 };
 
-- 
2.48.1.711.g2feabab25a-goog



* [PATCH mm-unstable 2/5] mm: zswap: Use object read/write APIs instead of object mapping APIs
  2025-03-05  6:11 [PATCH mm-unstable 0/5] Switch zswap to object read/write APIs Yosry Ahmed
  2025-03-05  6:11 ` [PATCH mm-unstable 1/5] mm: zpool: Add interfaces for " Yosry Ahmed
@ 2025-03-05  6:11 ` Yosry Ahmed
  2025-03-05 14:48   ` Johannes Weiner
  2025-03-05 17:35   ` Nhat Pham
  2025-03-05  6:11 ` [PATCH mm-unstable 3/5] mm: zpool: Remove " Yosry Ahmed
                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 28+ messages in thread
From: Yosry Ahmed @ 2025-03-05  6:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Johannes Weiner, Nhat Pham, Chengming Zhou, Minchan Kim,
	Sergey Senozhatsky, Herbert Xu, Thomas Gleixner, Peter Zijlstra,
	linux-mm, linux-kernel, Yosry Ahmed

Use the new object read/write APIs instead of mapping APIs.

On the compress side, zpool_obj_write() is more concise and provides exactly
what zswap needs to write the compressed object to the zpool, instead of
map->copy->unmap.

On the decompress side, zpool_obj_read_begin() is sleepable, which
allows avoiding the memcpy() for zsmalloc and slightly simplifying the
code by:
- Avoiding checking if the zpool driver is sleepable, reducing special
  cases and shrinking the huge comment.
- Having a single zpool_obj_read_end() call rather than multiple
  conditional zpool_unmap_handle() calls.

The !virt_addr_valid() case can be removed in the future if the crypto
API supports kmap addresses or by using kmap_to_page(), completely
eliminating the memcpy() path in zswap_decompress(). This is a step toward
that. In that spirit, opportunistically make the comment more specific
about the kmap case instead of generic non-linear addresses. This is the
only case that needs to be handled in practice, and the generic comment
makes it seem like a bigger problem than it actually is.

Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---

Herbert, I think we can completely get rid of the memcpy() in
zswap_decompress() if we can pass a highmem address to sg and crypto. I
believe your new virtual address API may be used here for this?

---
 mm/zswap.c | 33 +++++++++++++--------------------
 1 file changed, 13 insertions(+), 20 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index 10f2a16e75869..4c474b692828d 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -930,7 +930,6 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 	unsigned int dlen = PAGE_SIZE;
 	unsigned long handle;
 	struct zpool *zpool;
-	char *buf;
 	gfp_t gfp;
 	u8 *dst;
 
@@ -972,10 +971,7 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 	if (alloc_ret)
 		goto unlock;
 
-	buf = zpool_map_handle(zpool, handle, ZPOOL_MM_WO);
-	memcpy(buf, dst, dlen);
-	zpool_unmap_handle(zpool, handle);
-
+	zpool_obj_write(zpool, handle, dst, dlen);
 	entry->handle = handle;
 	entry->length = dlen;
 
@@ -996,24 +992,22 @@ static void zswap_decompress(struct zswap_entry *entry, struct folio *folio)
 	struct zpool *zpool = entry->pool->zpool;
 	struct scatterlist input, output;
 	struct crypto_acomp_ctx *acomp_ctx;
-	u8 *src;
+	u8 *src, *obj;
 
 	acomp_ctx = acomp_ctx_get_cpu_lock(entry->pool);
-	src = zpool_map_handle(zpool, entry->handle, ZPOOL_MM_RO);
+	obj = zpool_obj_read_begin(zpool, entry->handle, acomp_ctx->buffer);
+
 	/*
-	 * If zpool_map_handle is atomic, we cannot reliably utilize its mapped buffer
-	 * to do crypto_acomp_decompress() which might sleep. In such cases, we must
-	 * resort to copying the buffer to a temporary one.
-	 * Meanwhile, zpool_map_handle() might return a non-linearly mapped buffer,
-	 * such as a kmap address of high memory or even ever a vmap address.
-	 * However, sg_init_one is only equipped to handle linearly mapped low memory.
-	 * In such cases, we also must copy the buffer to a temporary and lowmem one.
+	 * zpool_obj_read_begin() might return a kmap address of highmem when
+	 * acomp_ctx->buffer is not used.  However, sg_init_one() does not
+	 * handle highmem addresses, so copy the object to acomp_ctx->buffer.
 	 */
-	if ((acomp_ctx->is_sleepable && !zpool_can_sleep_mapped(zpool)) ||
-	    !virt_addr_valid(src)) {
-		memcpy(acomp_ctx->buffer, src, entry->length);
+	if (virt_addr_valid(obj)) {
+		src = obj;
+	} else {
+		WARN_ON_ONCE(obj == acomp_ctx->buffer);
+		memcpy(acomp_ctx->buffer, obj, entry->length);
 		src = acomp_ctx->buffer;
-		zpool_unmap_handle(zpool, entry->handle);
 	}
 
 	sg_init_one(&input, src, entry->length);
@@ -1023,8 +1017,7 @@ static void zswap_decompress(struct zswap_entry *entry, struct folio *folio)
 	BUG_ON(crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req), &acomp_ctx->wait));
 	BUG_ON(acomp_ctx->req->dlen != PAGE_SIZE);
 
-	if (src != acomp_ctx->buffer)
-		zpool_unmap_handle(zpool, entry->handle);
+	zpool_obj_read_end(zpool, entry->handle, obj);
 	acomp_ctx_put_unlock(acomp_ctx);
 }
 
-- 
2.48.1.711.g2feabab25a-goog



* [PATCH mm-unstable 3/5] mm: zpool: Remove object mapping APIs
  2025-03-05  6:11 [PATCH mm-unstable 0/5] Switch zswap to object read/write APIs Yosry Ahmed
  2025-03-05  6:11 ` [PATCH mm-unstable 1/5] mm: zpool: Add interfaces for " Yosry Ahmed
  2025-03-05  6:11 ` [PATCH mm-unstable 2/5] mm: zswap: Use object read/write APIs instead of object mapping APIs Yosry Ahmed
@ 2025-03-05  6:11 ` Yosry Ahmed
  2025-03-05  8:17   ` Sergey Senozhatsky
                     ` (3 more replies)
  2025-03-05  6:11 ` [PATCH mm-unstable 4/5] mm: zsmalloc: Remove object mapping APIs and per-CPU map areas Yosry Ahmed
                   ` (2 subsequent siblings)
  5 siblings, 4 replies; 28+ messages in thread
From: Yosry Ahmed @ 2025-03-05  6:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Johannes Weiner, Nhat Pham, Chengming Zhou, Minchan Kim,
	Sergey Senozhatsky, Herbert Xu, Thomas Gleixner, Peter Zijlstra,
	linux-mm, linux-kernel, Yosry Ahmed

zpool_map_handle(), zpool_unmap_handle(), and zpool_can_sleep_mapped()
are no longer used. Remove them along with the underlying driver callbacks.

Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
 include/linux/zpool.h | 30 ---------------------
 mm/zpool.c            | 61 -------------------------------------------
 mm/zsmalloc.c         | 27 -------------------
 3 files changed, 118 deletions(-)

diff --git a/include/linux/zpool.h b/include/linux/zpool.h
index 1784e735ee049..2c8a9d2654f6f 100644
--- a/include/linux/zpool.h
+++ b/include/linux/zpool.h
@@ -13,25 +13,6 @@
 
 struct zpool;
 
-/*
- * Control how a handle is mapped.  It will be ignored if the
- * implementation does not support it.  Its use is optional.
- * Note that this does not refer to memory protection, it
- * refers to how the memory will be copied in/out if copying
- * is necessary during mapping; read-write is the safest as
- * it copies the existing memory in on map, and copies the
- * changed memory back out on unmap.  Write-only does not copy
- * in the memory and should only be used for initialization.
- * If in doubt, use ZPOOL_MM_DEFAULT which is read-write.
- */
-enum zpool_mapmode {
-	ZPOOL_MM_RW, /* normal read-write mapping */
-	ZPOOL_MM_RO, /* read-only (no copy-out at unmap time) */
-	ZPOOL_MM_WO, /* write-only (no copy-in at map time) */
-
-	ZPOOL_MM_DEFAULT = ZPOOL_MM_RW
-};
-
 bool zpool_has_pool(char *type);
 
 struct zpool *zpool_create_pool(const char *type, const char *name, gfp_t gfp);
@@ -47,12 +28,6 @@ int zpool_malloc(struct zpool *pool, size_t size, gfp_t gfp,
 
 void zpool_free(struct zpool *pool, unsigned long handle);
 
-void *zpool_map_handle(struct zpool *pool, unsigned long handle,
-			enum zpool_mapmode mm);
-
-void zpool_unmap_handle(struct zpool *pool, unsigned long handle);
-
-
 void *zpool_obj_read_begin(struct zpool *zpool, unsigned long handle,
 			   void *local_copy);
 
@@ -95,11 +70,6 @@ struct zpool_driver {
 				unsigned long *handle);
 	void (*free)(void *pool, unsigned long handle);
 
-	bool sleep_mapped;
-	void *(*map)(void *pool, unsigned long handle,
-				enum zpool_mapmode mm);
-	void (*unmap)(void *pool, unsigned long handle);
-
 	void *(*obj_read_begin)(void *pool, unsigned long handle,
 				void *local_copy);
 	void (*obj_read_end)(void *pool, unsigned long handle,
diff --git a/mm/zpool.c b/mm/zpool.c
index 378c2d1e5638f..4fc665b42f5e9 100644
--- a/mm/zpool.c
+++ b/mm/zpool.c
@@ -277,49 +277,6 @@ void zpool_free(struct zpool *zpool, unsigned long handle)
 	zpool->driver->free(zpool->pool, handle);
 }
 
-/**
- * zpool_map_handle() - Map a previously allocated handle into memory
- * @zpool:	The zpool that the handle was allocated from
- * @handle:	The handle to map
- * @mapmode:	How the memory should be mapped
- *
- * This maps a previously allocated handle into memory.  The @mapmode
- * param indicates to the implementation how the memory will be
- * used, i.e. read-only, write-only, read-write.  If the
- * implementation does not support it, the memory will be treated
- * as read-write.
- *
- * This may hold locks, disable interrupts, and/or preemption,
- * and the zpool_unmap_handle() must be called to undo those
- * actions.  The code that uses the mapped handle should complete
- * its operations on the mapped handle memory quickly and unmap
- * as soon as possible.  As the implementation may use per-cpu
- * data, multiple handles should not be mapped concurrently on
- * any cpu.
- *
- * Returns: A pointer to the handle's mapped memory area.
- */
-void *zpool_map_handle(struct zpool *zpool, unsigned long handle,
-			enum zpool_mapmode mapmode)
-{
-	return zpool->driver->map(zpool->pool, handle, mapmode);
-}
-
-/**
- * zpool_unmap_handle() - Unmap a previously mapped handle
- * @zpool:	The zpool that the handle was allocated from
- * @handle:	The handle to unmap
- *
- * This unmaps a previously mapped handle.  Any locks or other
- * actions that the implementation took in zpool_map_handle()
- * will be undone here.  The memory area returned from
- * zpool_map_handle() should no longer be used after this.
- */
-void zpool_unmap_handle(struct zpool *zpool, unsigned long handle)
-{
-	zpool->driver->unmap(zpool->pool, handle);
-}
-
 /**
  * zpool_obj_read_begin() - Start reading from a previously allocated handle.
  * @zpool:	The zpool that the handle was allocated from
@@ -381,23 +338,5 @@ u64 zpool_get_total_pages(struct zpool *zpool)
 	return zpool->driver->total_pages(zpool->pool);
 }
 
-/**
- * zpool_can_sleep_mapped - Test if zpool can sleep when do mapped.
- * @zpool:	The zpool to test
- *
- * Some allocators enter non-preemptible context in ->map() callback (e.g.
- * disable pagefaults) and exit that context in ->unmap(), which limits what
- * we can do with the mapped object. For instance, we cannot wait for
- * asynchronous crypto API to decompress such an object or take mutexes
- * since those will call into the scheduler. This function tells us whether
- * we use such an allocator.
- *
- * Returns: true if zpool can sleep; false otherwise.
- */
-bool zpool_can_sleep_mapped(struct zpool *zpool)
-{
-	return zpool->driver->sleep_mapped;
-}
-
 MODULE_AUTHOR("Dan Streetman <ddstreet@ieee.org>");
 MODULE_DESCRIPTION("Common API for compressed memory storage");
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index d84b300db64e7..56d6ed5c675b2 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -482,31 +482,6 @@ static void zs_zpool_free(void *pool, unsigned long handle)
 	zs_free(pool, handle);
 }
 
-static void *zs_zpool_map(void *pool, unsigned long handle,
-			enum zpool_mapmode mm)
-{
-	enum zs_mapmode zs_mm;
-
-	switch (mm) {
-	case ZPOOL_MM_RO:
-		zs_mm = ZS_MM_RO;
-		break;
-	case ZPOOL_MM_WO:
-		zs_mm = ZS_MM_WO;
-		break;
-	case ZPOOL_MM_RW:
-	default:
-		zs_mm = ZS_MM_RW;
-		break;
-	}
-
-	return zs_map_object(pool, handle, zs_mm);
-}
-static void zs_zpool_unmap(void *pool, unsigned long handle)
-{
-	zs_unmap_object(pool, handle);
-}
-
 static void *zs_zpool_obj_read_begin(void *pool, unsigned long handle,
 				     void *local_copy)
 {
@@ -538,8 +513,6 @@ static struct zpool_driver zs_zpool_driver = {
 	.malloc_support_movable = true,
 	.malloc =		  zs_zpool_malloc,
 	.free =			  zs_zpool_free,
-	.map =			  zs_zpool_map,
-	.unmap =		  zs_zpool_unmap,
 	.obj_read_begin =	  zs_zpool_obj_read_begin,
 	.obj_read_end  =	  zs_zpool_obj_read_end,
 	.obj_write =		  zs_zpool_obj_write,
-- 
2.48.1.711.g2feabab25a-goog



* [PATCH mm-unstable 4/5] mm: zsmalloc: Remove object mapping APIs and per-CPU map areas
  2025-03-05  6:11 [PATCH mm-unstable 0/5] Switch zswap to object read/write APIs Yosry Ahmed
                   ` (2 preceding siblings ...)
  2025-03-05  6:11 ` [PATCH mm-unstable 3/5] mm: zpool: Remove " Yosry Ahmed
@ 2025-03-05  6:11 ` Yosry Ahmed
  2025-03-05  8:16   ` Sergey Senozhatsky
                     ` (3 more replies)
  2025-03-05  6:11 ` [PATCH mm-unstable 5/5] mm: zpool: Remove zpool_malloc_support_movable() Yosry Ahmed
  2025-03-05  8:18 ` [PATCH mm-unstable 0/5] Switch zswap to object read/write APIs Sergey Senozhatsky
  5 siblings, 4 replies; 28+ messages in thread
From: Yosry Ahmed @ 2025-03-05  6:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Johannes Weiner, Nhat Pham, Chengming Zhou, Minchan Kim,
	Sergey Senozhatsky, Herbert Xu, Thomas Gleixner, Peter Zijlstra,
	linux-mm, linux-kernel, Yosry Ahmed

zs_map_object() and zs_unmap_object() are no longer used; remove them.
Since these were the only users of the per-CPU mapping_area structs, remove
those and the associated CPU hotplug callbacks too.

Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
 include/linux/cpuhotplug.h |   1 -
 include/linux/zsmalloc.h   |  21 ----
 mm/zsmalloc.c              | 226 +------------------------------------
 3 files changed, 1 insertion(+), 247 deletions(-)

diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
index 6cc5e484547c1..1987400000b41 100644
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -116,7 +116,6 @@ enum cpuhp_state {
 	CPUHP_NET_IUCV_PREPARE,
 	CPUHP_ARM_BL_PREPARE,
 	CPUHP_TRACE_RB_PREPARE,
-	CPUHP_MM_ZS_PREPARE,
 	CPUHP_MM_ZSWP_POOL_PREPARE,
 	CPUHP_KVM_PPC_BOOK3S_PREPARE,
 	CPUHP_ZCOMP_PREPARE,
diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
index 7d70983cf3980..c26baf9fb331b 100644
--- a/include/linux/zsmalloc.h
+++ b/include/linux/zsmalloc.h
@@ -16,23 +16,6 @@
 
 #include <linux/types.h>
 
-/*
- * zsmalloc mapping modes
- *
- * NOTE: These only make a difference when a mapped object spans pages.
- */
-enum zs_mapmode {
-	ZS_MM_RW, /* normal read-write mapping */
-	ZS_MM_RO, /* read-only (no copy-out at unmap time) */
-	ZS_MM_WO /* write-only (no copy-in at map time) */
-	/*
-	 * NOTE: ZS_MM_WO should only be used for initializing new
-	 * (uninitialized) allocations.  Partial writes to already
-	 * initialized allocations should use ZS_MM_RW to preserve the
-	 * existing data.
-	 */
-};
-
 struct zs_pool_stats {
 	/* How many pages were migrated (freed) */
 	atomic_long_t pages_compacted;
@@ -48,10 +31,6 @@ void zs_free(struct zs_pool *pool, unsigned long obj);
 
 size_t zs_huge_class_size(struct zs_pool *pool);
 
-void *zs_map_object(struct zs_pool *pool, unsigned long handle,
-			enum zs_mapmode mm);
-void zs_unmap_object(struct zs_pool *pool, unsigned long handle);
-
 unsigned long zs_get_total_pages(struct zs_pool *pool);
 unsigned long zs_compact(struct zs_pool *pool);
 
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 56d6ed5c675b2..cd1c2a8ffef05 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -281,13 +281,6 @@ struct zspage {
 	struct zspage_lock zsl;
 };
 
-struct mapping_area {
-	local_lock_t lock;
-	char *vm_buf; /* copy buffer for objects that span pages */
-	char *vm_addr; /* address of kmap_local_page()'ed pages */
-	enum zs_mapmode vm_mm; /* mapping mode */
-};
-
 static void zspage_lock_init(struct zspage *zspage)
 {
 	static struct lock_class_key __key;
@@ -522,11 +515,6 @@ static struct zpool_driver zs_zpool_driver = {
 MODULE_ALIAS("zpool-zsmalloc");
 #endif /* CONFIG_ZPOOL */
 
-/* per-cpu VM mapping areas for zspage accesses that cross page boundaries */
-static DEFINE_PER_CPU(struct mapping_area, zs_map_area) = {
-	.lock	= INIT_LOCAL_LOCK(lock),
-};
-
 static inline bool __maybe_unused is_first_zpdesc(struct zpdesc *zpdesc)
 {
 	return PagePrivate(zpdesc_page(zpdesc));
@@ -1111,93 +1099,6 @@ static struct zspage *find_get_zspage(struct size_class *class)
 	return zspage;
 }
 
-static inline int __zs_cpu_up(struct mapping_area *area)
-{
-	/*
-	 * Make sure we don't leak memory if a cpu UP notification
-	 * and zs_init() race and both call zs_cpu_up() on the same cpu
-	 */
-	if (area->vm_buf)
-		return 0;
-	area->vm_buf = kmalloc(ZS_MAX_ALLOC_SIZE, GFP_KERNEL);
-	if (!area->vm_buf)
-		return -ENOMEM;
-	return 0;
-}
-
-static inline void __zs_cpu_down(struct mapping_area *area)
-{
-	kfree(area->vm_buf);
-	area->vm_buf = NULL;
-}
-
-static void *__zs_map_object(struct mapping_area *area,
-			struct zpdesc *zpdescs[2], int off, int size)
-{
-	size_t sizes[2];
-	char *buf = area->vm_buf;
-
-	/* disable page faults to match kmap_local_page() return conditions */
-	pagefault_disable();
-
-	/* no read fastpath */
-	if (area->vm_mm == ZS_MM_WO)
-		goto out;
-
-	sizes[0] = PAGE_SIZE - off;
-	sizes[1] = size - sizes[0];
-
-	/* copy object to per-cpu buffer */
-	memcpy_from_page(buf, zpdesc_page(zpdescs[0]), off, sizes[0]);
-	memcpy_from_page(buf + sizes[0], zpdesc_page(zpdescs[1]), 0, sizes[1]);
-out:
-	return area->vm_buf;
-}
-
-static void __zs_unmap_object(struct mapping_area *area,
-			struct zpdesc *zpdescs[2], int off, int size)
-{
-	size_t sizes[2];
-	char *buf;
-
-	/* no write fastpath */
-	if (area->vm_mm == ZS_MM_RO)
-		goto out;
-
-	buf = area->vm_buf;
-	buf = buf + ZS_HANDLE_SIZE;
-	size -= ZS_HANDLE_SIZE;
-	off += ZS_HANDLE_SIZE;
-
-	sizes[0] = PAGE_SIZE - off;
-	sizes[1] = size - sizes[0];
-
-	/* copy per-cpu buffer to object */
-	memcpy_to_page(zpdesc_page(zpdescs[0]), off, buf, sizes[0]);
-	memcpy_to_page(zpdesc_page(zpdescs[1]), 0, buf + sizes[0], sizes[1]);
-
-out:
-	/* enable page faults to match kunmap_local() return conditions */
-	pagefault_enable();
-}
-
-static int zs_cpu_prepare(unsigned int cpu)
-{
-	struct mapping_area *area;
-
-	area = &per_cpu(zs_map_area, cpu);
-	return __zs_cpu_up(area);
-}
-
-static int zs_cpu_dead(unsigned int cpu)
-{
-	struct mapping_area *area;
-
-	area = &per_cpu(zs_map_area, cpu);
-	__zs_cpu_down(area);
-	return 0;
-}
-
 static bool can_merge(struct size_class *prev, int pages_per_zspage,
 					int objs_per_zspage)
 {
@@ -1245,117 +1146,6 @@ unsigned long zs_get_total_pages(struct zs_pool *pool)
 }
 EXPORT_SYMBOL_GPL(zs_get_total_pages);
 
-/**
- * zs_map_object - get address of allocated object from handle.
- * @pool: pool from which the object was allocated
- * @handle: handle returned from zs_malloc
- * @mm: mapping mode to use
- *
- * Before using an object allocated from zs_malloc, it must be mapped using
- * this function. When done with the object, it must be unmapped using
- * zs_unmap_object.
- *
- * Only one object can be mapped per cpu at a time. There is no protection
- * against nested mappings.
- *
- * This function returns with preemption and page faults disabled.
- */
-void *zs_map_object(struct zs_pool *pool, unsigned long handle,
-			enum zs_mapmode mm)
-{
-	struct zspage *zspage;
-	struct zpdesc *zpdesc;
-	unsigned long obj, off;
-	unsigned int obj_idx;
-
-	struct size_class *class;
-	struct mapping_area *area;
-	struct zpdesc *zpdescs[2];
-	void *ret;
-
-	/*
-	 * Because we use per-cpu mapping areas shared among the
-	 * pools/users, we can't allow mapping in interrupt context
-	 * because it can corrupt another users mappings.
-	 */
-	BUG_ON(in_interrupt());
-
-	/* It guarantees it can get zspage from handle safely */
-	read_lock(&pool->lock);
-	obj = handle_to_obj(handle);
-	obj_to_location(obj, &zpdesc, &obj_idx);
-	zspage = get_zspage(zpdesc);
-
-	/*
-	 * migration cannot move any zpages in this zspage. Here, class->lock
-	 * is too heavy since callers would take some time until they calls
-	 * zs_unmap_object API so delegate the locking from class to zspage
-	 * which is smaller granularity.
-	 */
-	zspage_read_lock(zspage);
-	read_unlock(&pool->lock);
-
-	class = zspage_class(pool, zspage);
-	off = offset_in_page(class->size * obj_idx);
-
-	local_lock(&zs_map_area.lock);
-	area = this_cpu_ptr(&zs_map_area);
-	area->vm_mm = mm;
-	if (off + class->size <= PAGE_SIZE) {
-		/* this object is contained entirely within a page */
-		area->vm_addr = kmap_local_zpdesc(zpdesc);
-		ret = area->vm_addr + off;
-		goto out;
-	}
-
-	/* this object spans two pages */
-	zpdescs[0] = zpdesc;
-	zpdescs[1] = get_next_zpdesc(zpdesc);
-	BUG_ON(!zpdescs[1]);
-
-	ret = __zs_map_object(area, zpdescs, off, class->size);
-out:
-	if (likely(!ZsHugePage(zspage)))
-		ret += ZS_HANDLE_SIZE;
-
-	return ret;
-}
-EXPORT_SYMBOL_GPL(zs_map_object);
-
-void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
-{
-	struct zspage *zspage;
-	struct zpdesc *zpdesc;
-	unsigned long obj, off;
-	unsigned int obj_idx;
-
-	struct size_class *class;
-	struct mapping_area *area;
-
-	obj = handle_to_obj(handle);
-	obj_to_location(obj, &zpdesc, &obj_idx);
-	zspage = get_zspage(zpdesc);
-	class = zspage_class(pool, zspage);
-	off = offset_in_page(class->size * obj_idx);
-
-	area = this_cpu_ptr(&zs_map_area);
-	if (off + class->size <= PAGE_SIZE)
-		kunmap_local(area->vm_addr);
-	else {
-		struct zpdesc *zpdescs[2];
-
-		zpdescs[0] = zpdesc;
-		zpdescs[1] = get_next_zpdesc(zpdesc);
-		BUG_ON(!zpdescs[1]);
-
-		__zs_unmap_object(area, zpdescs, off, class->size);
-	}
-	local_unlock(&zs_map_area.lock);
-
-	zspage_read_unlock(zspage);
-}
-EXPORT_SYMBOL_GPL(zs_unmap_object);
-
 void *zs_obj_read_begin(struct zs_pool *pool, unsigned long handle,
 			void *local_copy)
 {
@@ -1975,7 +1765,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	 * the class lock protects zpage alloc/free in the zspage.
 	 */
 	spin_lock(&class->lock);
-	/* the zspage write_lock protects zpage access via zs_map_object */
+	/* the zspage write_lock protects zpage access via zs_obj_read/write() */
 	if (!zspage_write_trylock(zspage)) {
 		spin_unlock(&class->lock);
 		write_unlock(&pool->lock);
@@ -2459,23 +2249,11 @@ EXPORT_SYMBOL_GPL(zs_destroy_pool);
 
 static int __init zs_init(void)
 {
-	int ret;
-
-	ret = cpuhp_setup_state(CPUHP_MM_ZS_PREPARE, "mm/zsmalloc:prepare",
-				zs_cpu_prepare, zs_cpu_dead);
-	if (ret)
-		goto out;
-
 #ifdef CONFIG_ZPOOL
 	zpool_register_driver(&zs_zpool_driver);
 #endif
-
 	zs_stat_init();
-
 	return 0;
-
-out:
-	return ret;
 }
 
 static void __exit zs_exit(void)
@@ -2483,8 +2261,6 @@ static void __exit zs_exit(void)
 #ifdef CONFIG_ZPOOL
 	zpool_unregister_driver(&zs_zpool_driver);
 #endif
-	cpuhp_remove_state(CPUHP_MM_ZS_PREPARE);
-
 	zs_stat_exit();
 }
 
-- 
2.48.1.711.g2feabab25a-goog



* [PATCH mm-unstable 5/5] mm: zpool: Remove zpool_malloc_support_movable()
  2025-03-05  6:11 [PATCH mm-unstable 0/5] Switch zswap to object read/write APIs Yosry Ahmed
                   ` (3 preceding siblings ...)
  2025-03-05  6:11 ` [PATCH mm-unstable 4/5] mm: zsmalloc: Remove object mapping APIs and per-CPU map areas Yosry Ahmed
@ 2025-03-05  6:11 ` Yosry Ahmed
  2025-03-05  8:14   ` Sergey Senozhatsky
                     ` (2 more replies)
  2025-03-05  8:18 ` [PATCH mm-unstable 0/5] Switch zswap to object read/write APIs Sergey Senozhatsky
  5 siblings, 3 replies; 28+ messages in thread
From: Yosry Ahmed @ 2025-03-05  6:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Johannes Weiner, Nhat Pham, Chengming Zhou, Minchan Kim,
	Sergey Senozhatsky, Herbert Xu, Thomas Gleixner, Peter Zijlstra,
	linux-mm, linux-kernel, Yosry Ahmed

zpool_malloc_support_movable() always returns true for zsmalloc, the
only remaining zpool driver. Remove it and set the gfp flags in
zswap_compress() accordingly. Opportunistically use GFP_NOWAIT instead
of __GFP_NOWARN | __GFP_KSWAPD_RECLAIM for conciseness as they are
equivalent.

Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
 include/linux/zpool.h |  3 ---
 mm/zpool.c            | 16 ----------------
 mm/zsmalloc.c         |  1 -
 mm/zswap.c            |  4 +---
 4 files changed, 1 insertion(+), 23 deletions(-)

diff --git a/include/linux/zpool.h b/include/linux/zpool.h
index 2c8a9d2654f6f..52f30e526607f 100644
--- a/include/linux/zpool.h
+++ b/include/linux/zpool.h
@@ -21,8 +21,6 @@ const char *zpool_get_type(struct zpool *pool);
 
 void zpool_destroy_pool(struct zpool *pool);
 
-bool zpool_malloc_support_movable(struct zpool *pool);
-
 int zpool_malloc(struct zpool *pool, size_t size, gfp_t gfp,
 			unsigned long *handle);
 
@@ -65,7 +63,6 @@ struct zpool_driver {
 	void *(*create)(const char *name, gfp_t gfp);
 	void (*destroy)(void *pool);
 
-	bool malloc_support_movable;
 	int (*malloc)(void *pool, size_t size, gfp_t gfp,
 				unsigned long *handle);
 	void (*free)(void *pool, unsigned long handle);
diff --git a/mm/zpool.c b/mm/zpool.c
index 4fc665b42f5e9..6d6d889309324 100644
--- a/mm/zpool.c
+++ b/mm/zpool.c
@@ -220,22 +220,6 @@ const char *zpool_get_type(struct zpool *zpool)
 	return zpool->driver->type;
 }
 
-/**
- * zpool_malloc_support_movable() - Check if the zpool supports
- *	allocating movable memory
- * @zpool:	The zpool to check
- *
- * This returns if the zpool supports allocating movable memory.
- *
- * Implementations must guarantee this to be thread-safe.
- *
- * Returns: true if the zpool supports allocating movable memory, false if not
- */
-bool zpool_malloc_support_movable(struct zpool *zpool)
-{
-	return zpool->driver->malloc_support_movable;
-}
-
 /**
  * zpool_malloc() - Allocate memory
  * @zpool:	The zpool to allocate from.
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index cd1c2a8ffef05..961b270f023c2 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -503,7 +503,6 @@ static struct zpool_driver zs_zpool_driver = {
 	.owner =		  THIS_MODULE,
 	.create =		  zs_zpool_create,
 	.destroy =		  zs_zpool_destroy,
-	.malloc_support_movable = true,
 	.malloc =		  zs_zpool_malloc,
 	.free =			  zs_zpool_free,
 	.obj_read_begin =	  zs_zpool_obj_read_begin,
diff --git a/mm/zswap.c b/mm/zswap.c
index 4c474b692828d..138b50ba832b8 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -964,9 +964,7 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 		goto unlock;
 
 	zpool = pool->zpool;
-	gfp = __GFP_NORETRY | __GFP_NOWARN | __GFP_KSWAPD_RECLAIM;
-	if (zpool_malloc_support_movable(zpool))
-		gfp |= __GFP_HIGHMEM | __GFP_MOVABLE;
+	gfp = GFP_NOWAIT | __GFP_NORETRY | __GFP_HIGHMEM | __GFP_MOVABLE;
 	alloc_ret = zpool_malloc(zpool, dlen, gfp, &handle);
 	if (alloc_ret)
 		goto unlock;
-- 
2.48.1.711.g2feabab25a-goog


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [PATCH mm-unstable 5/5] mm: zpool: Remove zpool_malloc_support_movable()
  2025-03-05  6:11 ` [PATCH mm-unstable 5/5] mm: zpool: Remove zpool_malloc_support_movable() Yosry Ahmed
@ 2025-03-05  8:14   ` Sergey Senozhatsky
  2025-03-05 14:53   ` Johannes Weiner
  2025-03-05 17:05   ` Nhat Pham
  2 siblings, 0 replies; 28+ messages in thread
From: Sergey Senozhatsky @ 2025-03-05  8:14 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Andrew Morton, Johannes Weiner, Nhat Pham, Chengming Zhou,
	Minchan Kim, Sergey Senozhatsky, Herbert Xu, Thomas Gleixner,
	Peter Zijlstra, linux-mm, linux-kernel

On (25/03/05 06:11), Yosry Ahmed wrote:
> zpool_malloc_support_movable() always returns true for zsmalloc, the
> only remaining zpool driver. Remove it and set the gfp flags in
> zswap_compress() accordingly. Opportunistically use GFP_NOWAIT instead
> of __GFP_NOWARN | __GFP_KSWAPD_RECLAIM for conciseness as they are
> equivalent.
> 
> Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>

Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH mm-unstable 4/5] mm: zsmalloc: Remove object mapping APIs and per-CPU map areas
  2025-03-05  6:11 ` [PATCH mm-unstable 4/5] mm: zsmalloc: Remove object mapping APIs and per-CPU map areas Yosry Ahmed
@ 2025-03-05  8:16   ` Sergey Senozhatsky
  2025-03-05 14:51   ` Johannes Weiner
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 28+ messages in thread
From: Sergey Senozhatsky @ 2025-03-05  8:16 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Andrew Morton, Johannes Weiner, Nhat Pham, Chengming Zhou,
	Minchan Kim, Sergey Senozhatsky, Herbert Xu, Thomas Gleixner,
	Peter Zijlstra, linux-mm, linux-kernel

On (25/03/05 06:11), Yosry Ahmed wrote:
> zs_map_object() and zs_unmap_object() are no longer used, remove them.
> Since these are the only users of per-CPU mapping_areas, remove them and
> the associated CPU hotplug callbacks too.
> 
> Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>

Acked-by: Sergey Senozhatsky <senozhatsky@chromium.org>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH mm-unstable 3/5] mm: zpool: Remove object mapping APIs
  2025-03-05  6:11 ` [PATCH mm-unstable 3/5] mm: zpool: Remove " Yosry Ahmed
@ 2025-03-05  8:17   ` Sergey Senozhatsky
  2025-03-05 14:49   ` Johannes Weiner
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 28+ messages in thread
From: Sergey Senozhatsky @ 2025-03-05  8:17 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Andrew Morton, Johannes Weiner, Nhat Pham, Chengming Zhou,
	Minchan Kim, Sergey Senozhatsky, Herbert Xu, Thomas Gleixner,
	Peter Zijlstra, linux-mm, linux-kernel

On (25/03/05 06:11), Yosry Ahmed wrote:
> zpool_map_handle(), zpool_unmap_handle(), and zpool_can_sleep_mapped()
> are no longer used. Remove them along with the underlying driver callbacks.
> 
> Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>

Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH mm-unstable 1/5] mm: zpool: Add interfaces for object read/write APIs
  2025-03-05  6:11 ` [PATCH mm-unstable 1/5] mm: zpool: Add interfaces for " Yosry Ahmed
@ 2025-03-05  8:18   ` Sergey Senozhatsky
  2025-03-05 14:43   ` Johannes Weiner
  2025-03-05 17:32   ` Nhat Pham
  2 siblings, 0 replies; 28+ messages in thread
From: Sergey Senozhatsky @ 2025-03-05  8:18 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Andrew Morton, Johannes Weiner, Nhat Pham, Chengming Zhou,
	Minchan Kim, Sergey Senozhatsky, Herbert Xu, Thomas Gleixner,
	Peter Zijlstra, linux-mm, linux-kernel

On (25/03/05 06:11), Yosry Ahmed wrote:
> Zsmalloc introduced new APIs to read/write objects besides mapping them.
> Add the necessary zpool interfaces.
> 
> Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>

Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH mm-unstable 0/5] Switch zswap to object read/write APIs
  2025-03-05  6:11 [PATCH mm-unstable 0/5] Switch zswap to object read/write APIs Yosry Ahmed
                   ` (4 preceding siblings ...)
  2025-03-05  6:11 ` [PATCH mm-unstable 5/5] mm: zpool: Remove zpool_malloc_support_movable() Yosry Ahmed
@ 2025-03-05  8:18 ` Sergey Senozhatsky
  5 siblings, 0 replies; 28+ messages in thread
From: Sergey Senozhatsky @ 2025-03-05  8:18 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Andrew Morton, Johannes Weiner, Nhat Pham, Chengming Zhou,
	Minchan Kim, Sergey Senozhatsky, Herbert Xu, Thomas Gleixner,
	Peter Zijlstra, linux-mm, linux-kernel

On (25/03/05 06:11), Yosry Ahmed wrote:
> This patch series updates zswap to use the new object read/write APIs
> defined by zsmalloc in [1], and removes the old object mapping APIs and
> the related code from zpool and zsmalloc.

Thank you for working on this!

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH mm-unstable 1/5] mm: zpool: Add interfaces for object read/write APIs
  2025-03-05  6:11 ` [PATCH mm-unstable 1/5] mm: zpool: Add interfaces for " Yosry Ahmed
  2025-03-05  8:18   ` Sergey Senozhatsky
@ 2025-03-05 14:43   ` Johannes Weiner
  2025-03-05 17:32   ` Nhat Pham
  2 siblings, 0 replies; 28+ messages in thread
From: Johannes Weiner @ 2025-03-05 14:43 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Andrew Morton, Nhat Pham, Chengming Zhou, Minchan Kim,
	Sergey Senozhatsky, Herbert Xu, Thomas Gleixner, Peter Zijlstra,
	linux-mm, linux-kernel

On Wed, Mar 05, 2025 at 06:11:29AM +0000, Yosry Ahmed wrote:
> Zsmalloc introduced new APIs to read/write objects besides mapping them.
> Add the necessary zpool interfaces.
> 
> Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>

Acked-by: Johannes Weiner <hannes@cmpxchg.org>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH mm-unstable 2/5] mm: zswap: Use object read/write APIs instead of object mapping APIs
  2025-03-05  6:11 ` [PATCH mm-unstable 2/5] mm: zswap: Use object read/write APIs instead of object mapping APIs Yosry Ahmed
@ 2025-03-05 14:48   ` Johannes Weiner
  2025-03-05 17:35   ` Nhat Pham
  1 sibling, 0 replies; 28+ messages in thread
From: Johannes Weiner @ 2025-03-05 14:48 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Andrew Morton, Nhat Pham, Chengming Zhou, Minchan Kim,
	Sergey Senozhatsky, Herbert Xu, Thomas Gleixner, Peter Zijlstra,
	linux-mm, linux-kernel

On Wed, Mar 05, 2025 at 06:11:30AM +0000, Yosry Ahmed wrote:
> Use the new object read/write APIs instead of mapping APIs.
> 
> On the compress side, zpool_obj_write() is more concise and provides exactly
> what zswap needs to write the compressed object to the zpool, instead of
> map->copy->unmap.
> 
> On the decompress side, zpool_obj_read_begin() is sleepable, which
> allows avoiding the memcpy() for zsmalloc and slightly simplifying the
> code by:
> - Avoiding checking if the zpool driver is sleepable, reducing special
>   cases and shrinking the huge comment.
> - Having a single zpool_obj_read_end() call rather than multiple
>   conditional zpool_unmap_handle() calls.
> 
> The !virt_addr_valid() case can be removed in the future if the crypto
> API supports kmap addresses or by using kmap_to_page(), completely
> eliminating the memcpy() path in zswap_decompress(). This is a step
> toward that. In that spirit, opportunistically make the comment more
> specific about the kmap case instead of generic non-linear addresses.
> This is the only case that needs to be handled in practice, and the
> generic comment makes it seem like a bigger problem than it actually is.
> 
> Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>

Acked-by: Johannes Weiner <hannes@cmpxchg.org>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH mm-unstable 3/5] mm: zpool: Remove object mapping APIs
  2025-03-05  6:11 ` [PATCH mm-unstable 3/5] mm: zpool: Remove " Yosry Ahmed
  2025-03-05  8:17   ` Sergey Senozhatsky
@ 2025-03-05 14:49   ` Johannes Weiner
  2025-03-05 17:37   ` Nhat Pham
  2025-03-06  1:48   ` Herbert Xu
  3 siblings, 0 replies; 28+ messages in thread
From: Johannes Weiner @ 2025-03-05 14:49 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Andrew Morton, Nhat Pham, Chengming Zhou, Minchan Kim,
	Sergey Senozhatsky, Herbert Xu, Thomas Gleixner, Peter Zijlstra,
	linux-mm, linux-kernel

On Wed, Mar 05, 2025 at 06:11:31AM +0000, Yosry Ahmed wrote:
> zpool_map_handle(), zpool_unmap_handle(), and zpool_can_sleep_mapped()
> are no longer used. Remove them along with the underlying driver callbacks.
> 
> Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>

Acked-by: Johannes Weiner <hannes@cmpxchg.org>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH mm-unstable 4/5] mm: zsmalloc: Remove object mapping APIs and per-CPU map areas
  2025-03-05  6:11 ` [PATCH mm-unstable 4/5] mm: zsmalloc: Remove object mapping APIs and per-CPU map areas Yosry Ahmed
  2025-03-05  8:16   ` Sergey Senozhatsky
@ 2025-03-05 14:51   ` Johannes Weiner
  2025-03-05 17:39   ` Nhat Pham
  2025-03-05 18:57   ` Yosry Ahmed
  3 siblings, 0 replies; 28+ messages in thread
From: Johannes Weiner @ 2025-03-05 14:51 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Andrew Morton, Nhat Pham, Chengming Zhou, Minchan Kim,
	Sergey Senozhatsky, Herbert Xu, Thomas Gleixner, Peter Zijlstra,
	linux-mm, linux-kernel

On Wed, Mar 05, 2025 at 06:11:32AM +0000, Yosry Ahmed wrote:
> zs_map_object() and zs_unmap_object() are no longer used, remove them.
> Since these are the only users of per-CPU mapping_areas, remove them and
> the associated CPU hotplug callbacks too.
> 
> Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>

Acked-by: Johannes Weiner <hannes@cmpxchg.org>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH mm-unstable 5/5] mm: zpool: Remove zpool_malloc_support_movable()
  2025-03-05  6:11 ` [PATCH mm-unstable 5/5] mm: zpool: Remove zpool_malloc_support_movable() Yosry Ahmed
  2025-03-05  8:14   ` Sergey Senozhatsky
@ 2025-03-05 14:53   ` Johannes Weiner
  2025-03-05 17:05   ` Nhat Pham
  2 siblings, 0 replies; 28+ messages in thread
From: Johannes Weiner @ 2025-03-05 14:53 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Andrew Morton, Nhat Pham, Chengming Zhou, Minchan Kim,
	Sergey Senozhatsky, Herbert Xu, Thomas Gleixner, Peter Zijlstra,
	linux-mm, linux-kernel

On Wed, Mar 05, 2025 at 06:11:33AM +0000, Yosry Ahmed wrote:
> zpool_malloc_support_movable() always returns true for zsmalloc, the
> only remaining zpool driver. Remove it and set the gfp flags in
> zswap_compress() accordingly. Opportunistically use GFP_NOWAIT instead
> of __GFP_NOWARN | __GFP_KSWAPD_RECLAIM for conciseness as they are
> equivalent.
> 
> Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>

Acked-by: Johannes Weiner <hannes@cmpxchg.org>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH mm-unstable 5/5] mm: zpool: Remove zpool_malloc_support_movable()
  2025-03-05  6:11 ` [PATCH mm-unstable 5/5] mm: zpool: Remove zpool_malloc_support_movable() Yosry Ahmed
  2025-03-05  8:14   ` Sergey Senozhatsky
  2025-03-05 14:53   ` Johannes Weiner
@ 2025-03-05 17:05   ` Nhat Pham
  2 siblings, 0 replies; 28+ messages in thread
From: Nhat Pham @ 2025-03-05 17:05 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Andrew Morton, Johannes Weiner, Chengming Zhou, Minchan Kim,
	Sergey Senozhatsky, Herbert Xu, Thomas Gleixner, Peter Zijlstra,
	linux-mm, linux-kernel

On Tue, Mar 4, 2025 at 10:12 PM Yosry Ahmed <yosry.ahmed@linux.dev> wrote:
>
> zpool_malloc_support_movable() always returns true for zsmalloc, the
> only remaining zpool driver. Remove it and set the gfp flags in
> zswap_compress() accordingly. Opportunistically use GFP_NOWAIT instead
> of __GFP_NOWARN | __GFP_KSWAPD_RECLAIM for conciseness as they are
> equivalent.
>
> Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>

Acked-by: Nhat Pham <nphamcs@gmail.com>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH mm-unstable 1/5] mm: zpool: Add interfaces for object read/write APIs
  2025-03-05  6:11 ` [PATCH mm-unstable 1/5] mm: zpool: Add interfaces for " Yosry Ahmed
  2025-03-05  8:18   ` Sergey Senozhatsky
  2025-03-05 14:43   ` Johannes Weiner
@ 2025-03-05 17:32   ` Nhat Pham
  2 siblings, 0 replies; 28+ messages in thread
From: Nhat Pham @ 2025-03-05 17:32 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Andrew Morton, Johannes Weiner, Chengming Zhou, Minchan Kim,
	Sergey Senozhatsky, Herbert Xu, Thomas Gleixner, Peter Zijlstra,
	linux-mm, linux-kernel

On Tue, Mar 4, 2025 at 10:11 PM Yosry Ahmed <yosry.ahmed@linux.dev> wrote:
>
> Zsmalloc introduced new APIs to read/write objects besides mapping them.
> Add the necessary zpool interfaces.
>
> Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>

Acked-by: Nhat Pham <nphamcs@gmail.com>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH mm-unstable 2/5] mm: zswap: Use object read/write APIs instead of object mapping APIs
  2025-03-05  6:11 ` [PATCH mm-unstable 2/5] mm: zswap: Use object read/write APIs instead of object mapping APIs Yosry Ahmed
  2025-03-05 14:48   ` Johannes Weiner
@ 2025-03-05 17:35   ` Nhat Pham
  1 sibling, 0 replies; 28+ messages in thread
From: Nhat Pham @ 2025-03-05 17:35 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Andrew Morton, Johannes Weiner, Chengming Zhou, Minchan Kim,
	Sergey Senozhatsky, Herbert Xu, Thomas Gleixner, Peter Zijlstra,
	linux-mm, linux-kernel

On Tue, Mar 4, 2025 at 10:11 PM Yosry Ahmed <yosry.ahmed@linux.dev> wrote:
>
> Use the new object read/write APIs instead of mapping APIs.
>
> On the compress side, zpool_obj_write() is more concise and provides exactly
> what zswap needs to write the compressed object to the zpool, instead of
> map->copy->unmap.
>
> On the decompress side, zpool_obj_read_begin() is sleepable, which
> allows avoiding the memcpy() for zsmalloc and slightly simplifying the
> code by:
> - Avoiding checking if the zpool driver is sleepable, reducing special
>   cases and shrinking the huge comment.
> - Having a single zpool_obj_read_end() call rather than multiple
>   conditional zpool_unmap_handle() calls.
>
> The !virt_addr_valid() case can be removed in the future if the crypto
> API supports kmap addresses or by using kmap_to_page(), completely
> eliminating the memcpy() path in zswap_decompress(). This is a step
> toward that. In that spirit, opportunistically make the comment more
> specific about the kmap case instead of generic non-linear addresses.
> This is the only case that needs to be handled in practice, and the
> generic comment makes it seem like a bigger problem than it actually is.
>
> Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>

Acked-by: Nhat Pham <nphamcs@gmail.com>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH mm-unstable 3/5] mm: zpool: Remove object mapping APIs
  2025-03-05  6:11 ` [PATCH mm-unstable 3/5] mm: zpool: Remove " Yosry Ahmed
  2025-03-05  8:17   ` Sergey Senozhatsky
  2025-03-05 14:49   ` Johannes Weiner
@ 2025-03-05 17:37   ` Nhat Pham
  2025-03-06  1:48   ` Herbert Xu
  3 siblings, 0 replies; 28+ messages in thread
From: Nhat Pham @ 2025-03-05 17:37 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Andrew Morton, Johannes Weiner, Chengming Zhou, Minchan Kim,
	Sergey Senozhatsky, Herbert Xu, Thomas Gleixner, Peter Zijlstra,
	linux-mm, linux-kernel

On Tue, Mar 4, 2025 at 10:11 PM Yosry Ahmed <yosry.ahmed@linux.dev> wrote:
>
> zpool_map_handle(), zpool_unmap_handle(), and zpool_can_sleep_mapped()
> are no longer used. Remove them along with the underlying driver callbacks.
>
> Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
> ---
>  include/linux/zpool.h | 30 ---------------------
>  mm/zpool.c            | 61 -------------------------------------------
>  mm/zsmalloc.c         | 27 -------------------
>  3 files changed, 118 deletions(-)

Me see deletions, me likey. I've never liked the object mapping API anyway.

Acked-by: Nhat Pham <nphamcs@gmail.com>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH mm-unstable 4/5] mm: zsmalloc: Remove object mapping APIs and per-CPU map areas
  2025-03-05  6:11 ` [PATCH mm-unstable 4/5] mm: zsmalloc: Remove object mapping APIs and per-CPU map areas Yosry Ahmed
  2025-03-05  8:16   ` Sergey Senozhatsky
  2025-03-05 14:51   ` Johannes Weiner
@ 2025-03-05 17:39   ` Nhat Pham
  2025-03-05 18:57   ` Yosry Ahmed
  3 siblings, 0 replies; 28+ messages in thread
From: Nhat Pham @ 2025-03-05 17:39 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Andrew Morton, Johannes Weiner, Chengming Zhou, Minchan Kim,
	Sergey Senozhatsky, Herbert Xu, Thomas Gleixner, Peter Zijlstra,
	linux-mm, linux-kernel

On Tue, Mar 4, 2025 at 10:12 PM Yosry Ahmed <yosry.ahmed@linux.dev> wrote:
>
> zs_map_object() and zs_unmap_object() are no longer used, remove them.
> Since these are the only users of per-CPU mapping_areas, remove them and
> the associated CPU hotplug callbacks too.
>
> Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>

Acked-by: Nhat Pham <nphamcs@gmail.com>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH mm-unstable 4/5] mm: zsmalloc: Remove object mapping APIs and per-CPU map areas
  2025-03-05  6:11 ` [PATCH mm-unstable 4/5] mm: zsmalloc: Remove object mapping APIs and per-CPU map areas Yosry Ahmed
                     ` (2 preceding siblings ...)
  2025-03-05 17:39   ` Nhat Pham
@ 2025-03-05 18:57   ` Yosry Ahmed
  3 siblings, 0 replies; 28+ messages in thread
From: Yosry Ahmed @ 2025-03-05 18:57 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Johannes Weiner, Nhat Pham, Chengming Zhou, Minchan Kim,
	Sergey Senozhatsky, Herbert Xu, Thomas Gleixner, Peter Zijlstra,
	linux-mm, linux-kernel

On Wed, Mar 05, 2025 at 06:11:32AM +0000, Yosry Ahmed wrote:
> zs_map_object() and zs_unmap_object() are no longer used, remove them.
> Since these are the only users of per-CPU mapping_areas, remove them and
> the associated CPU hotplug callbacks too.
> 
> Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
> ---
>  include/linux/cpuhotplug.h |   1 -
>  include/linux/zsmalloc.h   |  21 ----
>  mm/zsmalloc.c              | 226 +------------------------------------
>  3 files changed, 1 insertion(+), 247 deletions(-)

I missed updating the docs. Andrew, could you please squash the
following diff in? I intentionally did not state the name of the new
APIs to avoid needing to update the docs with similar changes in the
future:

diff --git a/Documentation/mm/zsmalloc.rst b/Documentation/mm/zsmalloc.rst
index 76902835e68e9..d2bbecd78e146 100644
--- a/Documentation/mm/zsmalloc.rst
+++ b/Documentation/mm/zsmalloc.rst
@@ -27,9 +27,8 @@ Instead, it returns an opaque handle (unsigned long) which encodes actual
 location of the allocated object. The reason for this indirection is that
 zsmalloc does not keep zspages permanently mapped since that would cause
 issues on 32-bit systems where the VA region for kernel space mappings
-is very small. So, before using the allocating memory, the object has to
-be mapped using zs_map_object() to get a usable pointer and subsequently
-unmapped using zs_unmap_object().
+is very small. So, using the allocated memory should be done through the
+proper handle-based APIs.

 stat
 ====

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [PATCH mm-unstable 3/5] mm: zpool: Remove object mapping APIs
  2025-03-05  6:11 ` [PATCH mm-unstable 3/5] mm: zpool: Remove " Yosry Ahmed
                     ` (2 preceding siblings ...)
  2025-03-05 17:37   ` Nhat Pham
@ 2025-03-06  1:48   ` Herbert Xu
  2025-03-06  4:19     ` Herbert Xu
  2025-03-06 14:15     ` Johannes Weiner
  3 siblings, 2 replies; 28+ messages in thread
From: Herbert Xu @ 2025-03-06  1:48 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Andrew Morton, Johannes Weiner, Nhat Pham, Chengming Zhou,
	Minchan Kim, Sergey Senozhatsky, Thomas Gleixner, Peter Zijlstra,
	linux-mm, linux-kernel

On Wed, Mar 05, 2025 at 06:11:31AM +0000, Yosry Ahmed wrote:
> zpool_map_handle(), zpool_unmap_handle(), and zpool_can_sleep_mapped()
> are no longer used. Remove them along with the underlying driver callbacks.
> 
> Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
> ---
>  include/linux/zpool.h | 30 ---------------------
>  mm/zpool.c            | 61 -------------------------------------------
>  mm/zsmalloc.c         | 27 -------------------
>  3 files changed, 118 deletions(-)

This patch breaks zbud and z3fold because they haven't been converted
to the new interface.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH mm-unstable 3/5] mm: zpool: Remove object mapping APIs
  2025-03-06  1:48   ` Herbert Xu
@ 2025-03-06  4:19     ` Herbert Xu
  2025-03-06 16:55       ` Yosry Ahmed
  2025-03-06 14:15     ` Johannes Weiner
  1 sibling, 1 reply; 28+ messages in thread
From: Herbert Xu @ 2025-03-06  4:19 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Andrew Morton, Johannes Weiner, Nhat Pham, Chengming Zhou,
	Minchan Kim, Sergey Senozhatsky, Thomas Gleixner, Peter Zijlstra,
	linux-mm, linux-kernel

On Thu, Mar 06, 2025 at 09:48:58AM +0800, Herbert Xu wrote:
>
> This patch breaks zbud and z3fold because they haven't been converted
> to the new interface.

I've rebased my zswap SG patch on top of your series.  I've removed
all the mapping code from zpool/zsmalloc and pushed it out to zram
instead.

This patch depends on a new memcpy_sglist function which I've just
posted a patch for:

https://patchwork.kernel.org/project/linux-crypto/patch/Z8kXhLb681E_FLzs@gondor.apana.org.au/

From a77ee529b831e7e606ed2a5b723b74ce234a3915 Mon Sep 17 00:00:00 2001
From: Herbert Xu <herbert@gondor.apana.org.au>
Date: Thu, 6 Mar 2025 12:13:58 +0800
Subject: [PATCH] mm: zswap: Give non-linear objects to Crypto API

Instead of copying non-linear objects into a buffer, use the
scatterlist to give them directly to the Crypto API.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
 drivers/block/zram/zram_drv.c |  58 ++++++++++++----
 include/linux/zpool.h         |  26 +++----
 include/linux/zsmalloc.h      |  11 ++-
 mm/z3fold.c                   |  17 +++--
 mm/zbud.c                     |  45 +++++++------
 mm/zpool.c                    |  54 ++++++---------
 mm/zsmalloc.c                 | 123 +++++++---------------------------
 mm/zswap.c                    |  35 +++-------
 8 files changed, 148 insertions(+), 221 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index fda7d8624889..d72becec534d 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -15,6 +15,7 @@
 #define KMSG_COMPONENT "zram"
 #define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
 
+#include <crypto/scatterwalk.h>
 #include <linux/module.h>
 #include <linux/kernel.h>
 #include <linux/bio.h>
@@ -23,6 +24,7 @@
 #include <linux/buffer_head.h>
 #include <linux/device.h>
 #include <linux/highmem.h>
+#include <linux/scatterlist.h>
 #include <linux/slab.h>
 #include <linux/backing-dev.h>
 #include <linux/string.h>
@@ -1559,21 +1561,23 @@ static int read_same_filled_page(struct zram *zram, struct page *page,
 static int read_incompressible_page(struct zram *zram, struct page *page,
 				    u32 index)
 {
+	struct scatterlist output[1];
+	struct scatterlist input[2];
 	unsigned long handle;
-	void *src, *dst;
 
+	sg_init_table(output, 1);
+	sg_set_page(output, page, PAGE_SIZE, 0);
 	handle = zram_get_handle(zram, index);
-	src = zs_obj_read_begin(zram->mem_pool, handle, NULL);
-	dst = kmap_local_page(page);
-	copy_page(dst, src);
-	kunmap_local(dst);
-	zs_obj_read_end(zram->mem_pool, handle, src);
+	zs_pin_object(zram->mem_pool, handle, input);
+	memcpy_sglist(output, input, PAGE_SIZE);
+	zs_unpin_object(zram->mem_pool, handle);
 
 	return 0;
 }
 
 static int read_compressed_page(struct zram *zram, struct page *page, u32 index)
 {
+	struct scatterlist input[2];
 	struct zcomp_strm *zstrm;
 	unsigned long handle;
 	unsigned int size;
@@ -1585,11 +1589,22 @@ static int read_compressed_page(struct zram *zram, struct page *page, u32 index)
 	prio = zram_get_priority(zram, index);
 
 	zstrm = zcomp_stream_get(zram->comps[prio]);
-	src = zs_obj_read_begin(zram->mem_pool, handle, zstrm->local_copy);
+	zs_pin_object(zram->mem_pool, handle, input);
+	if (sg_is_last(input)) {
+		unsigned int offset = input[0].offset;
+
+		src = kmap_local_page(sg_page(input) + (offset >> PAGE_SHIFT));
+		src += offset_in_page(offset);
+	} else {
+		memcpy_from_sglist(zstrm->local_copy, input, 0, size);
+		src = zstrm->local_copy;
+	}
 	dst = kmap_local_page(page);
 	ret = zcomp_decompress(zram->comps[prio], zstrm, src, size, dst);
 	kunmap_local(dst);
-	zs_obj_read_end(zram->mem_pool, handle, src);
+	if (sg_is_last(input))
+		kunmap_local(src);
+	zs_unpin_object(zram->mem_pool, handle);
 	zcomp_stream_put(zstrm);
 
 	return ret;
@@ -1684,8 +1699,9 @@ static int write_same_filled_page(struct zram *zram, unsigned long fill,
 static int write_incompressible_page(struct zram *zram, struct page *page,
 				     u32 index)
 {
+	struct scatterlist output[2];
+	struct scatterlist input[1];
 	unsigned long handle;
-	void *src;
 
 	/*
 	 * This function is called from preemptible context so we don't need
@@ -1703,9 +1719,11 @@ static int write_incompressible_page(struct zram *zram, struct page *page,
 		return -ENOMEM;
 	}
 
-	src = kmap_local_page(page);
-	zs_obj_write(zram->mem_pool, handle, src, PAGE_SIZE);
-	kunmap_local(src);
+	sg_init_table(input, 1);
+	sg_set_page(input, page, PAGE_SIZE, 0);
+	zs_pin_object(zram->mem_pool, handle, output);
+	memcpy_sglist(output, input, PAGE_SIZE);
+	zs_unpin_object(zram->mem_pool, handle);
 
 	zram_slot_lock(zram, index);
 	zram_set_flag(zram, index, ZRAM_HUGE);
@@ -1723,6 +1741,8 @@ static int write_incompressible_page(struct zram *zram, struct page *page,
 
 static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 {
+	struct scatterlist output[2];
+	struct scatterlist input[1];
 	int ret = 0;
 	unsigned long handle;
 	unsigned int comp_len;
@@ -1773,7 +1793,11 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 		return -ENOMEM;
 	}
 
-	zs_obj_write(zram->mem_pool, handle, zstrm->buffer, comp_len);
+	sg_init_table(input, 1);
+	sg_set_buf(input, zstrm->buffer, comp_len);
+	zs_pin_object(zram->mem_pool, handle, output);
+	memcpy_sglist(output, input, comp_len);
+	zs_unpin_object(zram->mem_pool, handle);
 	zcomp_stream_put(zstrm);
 
 	zram_slot_lock(zram, index);
@@ -1872,6 +1896,8 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page,
 			   u64 *num_recomp_pages, u32 threshold, u32 prio,
 			   u32 prio_max)
 {
+	struct scatterlist output[2];
+	struct scatterlist input[1];
 	struct zcomp_strm *zstrm = NULL;
 	unsigned long handle_old;
 	unsigned long handle_new;
@@ -1990,7 +2016,11 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page,
 		return PTR_ERR((void *)handle_new);
 	}
 
-	zs_obj_write(zram->mem_pool, handle_new, zstrm->buffer, comp_len_new);
+	sg_init_table(input, 1);
+	sg_set_buf(input, zstrm->buffer, comp_len_new);
+	zs_pin_object(zram->mem_pool, handle_new, output);
+	memcpy_sglist(output, input, comp_len_new);
+	zs_unpin_object(zram->mem_pool, handle_new);
 	zcomp_stream_put(zstrm);
 
 	zram_free_page(zram, index);
diff --git a/include/linux/zpool.h b/include/linux/zpool.h
index a976f1962cc7..2beb0b7c82e2 100644
--- a/include/linux/zpool.h
+++ b/include/linux/zpool.h
@@ -12,6 +12,7 @@
 #ifndef _ZPOOL_H_
 #define _ZPOOL_H_
 
+struct scatterlist;
 struct zpool;
 
 bool zpool_has_pool(char *type);
@@ -27,14 +28,10 @@ int zpool_malloc(struct zpool *pool, size_t size, gfp_t gfp,
 
 void zpool_free(struct zpool *pool, unsigned long handle);
 
-void *zpool_obj_read_begin(struct zpool *zpool, unsigned long handle,
-			   void *local_copy);
+void zpool_pin_handle(struct zpool *pool, unsigned long handle,
+		      struct scatterlist *sg);
 
-void zpool_obj_read_end(struct zpool *zpool, unsigned long handle,
-			void *handle_mem);
-
-void zpool_obj_write(struct zpool *zpool, unsigned long handle,
-		     void *handle_mem, size_t mem_len);
+void zpool_unpin_handle(struct zpool *pool, unsigned long handle);
 
 u64 zpool_get_total_pages(struct zpool *pool);
 
@@ -47,9 +44,8 @@ u64 zpool_get_total_pages(struct zpool *pool);
  * @destroy:	destroy a pool.
  * @malloc:	allocate mem from a pool.
  * @free:	free mem from a pool.
- * @sleep_mapped: whether zpool driver can sleep during map.
- * @map:	map a handle.
- * @unmap:	unmap a handle.
+ * @pin:	pin a handle and write it into a two-entry SG list.
+ * @unpin:	unpin a handle.
  * @total_size:	get total size of a pool.
  *
  * This is created by a zpool implementation and registered
@@ -68,12 +64,8 @@ struct zpool_driver {
 				unsigned long *handle);
 	void (*free)(void *pool, unsigned long handle);
 
-	void *(*obj_read_begin)(void *pool, unsigned long handle,
-				void *local_copy);
-	void (*obj_read_end)(void *pool, unsigned long handle,
-			     void *handle_mem);
-	void (*obj_write)(void *pool, unsigned long handle,
-			  void *handle_mem, size_t mem_len);
+	void (*pin)(void *pool, unsigned long handle, struct scatterlist *sg);
+	void (*unpin)(void *pool, unsigned long handle);
 
 	u64 (*total_pages)(void *pool);
 };
@@ -82,6 +74,4 @@ void zpool_register_driver(struct zpool_driver *driver);
 
 int zpool_unregister_driver(struct zpool_driver *driver);
 
-bool zpool_can_sleep_mapped(struct zpool *pool);
-
 #endif
diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
index c26baf9fb331..fba57a792153 100644
--- a/include/linux/zsmalloc.h
+++ b/include/linux/zsmalloc.h
@@ -16,6 +16,8 @@
 
 #include <linux/types.h>
 
+struct scatterlist;
+
 struct zs_pool_stats {
 	/* How many pages were migrated (freed) */
 	atomic_long_t pages_compacted;
@@ -38,11 +40,8 @@ unsigned int zs_lookup_class_index(struct zs_pool *pool, unsigned int size);
 
 void zs_pool_stats(struct zs_pool *pool, struct zs_pool_stats *stats);
 
-void *zs_obj_read_begin(struct zs_pool *pool, unsigned long handle,
-			void *local_copy);
-void zs_obj_read_end(struct zs_pool *pool, unsigned long handle,
-		     void *handle_mem);
-void zs_obj_write(struct zs_pool *pool, unsigned long handle,
-		  void *handle_mem, size_t mem_len);
+void zs_pin_object(struct zs_pool *pool, unsigned long handle,
+		   struct scatterlist *sg);
+void zs_unpin_object(struct zs_pool *pool, unsigned long handle);
 
 #endif
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 379d24b4fef9..eab5c81fa7ad 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -36,6 +36,7 @@
 #include <linux/percpu.h>
 #include <linux/preempt.h>
 #include <linux/workqueue.h>
+#include <linux/scatterlist.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
 #include <linux/zpool.h>
@@ -1392,12 +1393,15 @@ static void z3fold_zpool_free(void *pool, unsigned long handle)
 	z3fold_free(pool, handle);
 }
 
-static void *z3fold_zpool_map(void *pool, unsigned long handle,
-			enum zpool_mapmode mm)
+static void z3fold_zpool_pin(void *pool, unsigned long handle,
+			     struct scatterlist sg[2])
 {
-	return z3fold_map(pool, handle);
+	void *buf = z3fold_map(pool, handle);
+
+	sg_init_one(sg, buf, PAGE_SIZE - offset_in_page(buf));
 }
-static void z3fold_zpool_unmap(void *pool, unsigned long handle)
+
+static void z3fold_zpool_unpin(void *pool, unsigned long handle)
 {
 	z3fold_unmap(pool, handle);
 }
@@ -1409,14 +1413,13 @@ static u64 z3fold_zpool_total_pages(void *pool)
 
 static struct zpool_driver z3fold_zpool_driver = {
 	.type =		"z3fold",
-	.sleep_mapped = true,
 	.owner =	THIS_MODULE,
 	.create =	z3fold_zpool_create,
 	.destroy =	z3fold_zpool_destroy,
 	.malloc =	z3fold_zpool_malloc,
 	.free =		z3fold_zpool_free,
-	.map =		z3fold_zpool_map,
-	.unmap =	z3fold_zpool_unmap,
+	.pin =		z3fold_zpool_pin,
+	.unpin =	z3fold_zpool_unpin,
 	.total_pages =	z3fold_zpool_total_pages,
 };
 
diff --git a/mm/zbud.c b/mm/zbud.c
index e9836fff9438..3132a7c6f926 100644
--- a/mm/zbud.c
+++ b/mm/zbud.c
@@ -36,10 +36,9 @@
  *
  * The zbud API differs from that of conventional allocators in that the
  * allocation function, zbud_alloc(), returns an opaque handle to the user,
- * not a dereferenceable pointer.  The user must map the handle using
- * zbud_map() in order to get a usable pointer by which to access the
- * allocation data and unmap the handle with zbud_unmap() when operations
- * on the allocation data are complete.
+ * not a dereferenceable pointer.  The user must pin the handle using
+ * zbud_pin() in order to access the allocation data and unpin the handle
+ * with zbud_unpin() when operations on the allocation data are complete.
  */
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
@@ -49,6 +48,7 @@
 #include <linux/mm.h>
 #include <linux/module.h>
 #include <linux/preempt.h>
+#include <linux/scatterlist.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
 #include <linux/zpool.h>
@@ -339,28 +339,30 @@ static void zbud_free(struct zbud_pool *pool, unsigned long handle)
 }
 
 /**
- * zbud_map() - maps the allocation associated with the given handle
+ * zbud_pin() - pins the allocation associated with the given handle
  * @pool:	pool in which the allocation resides
- * @handle:	handle associated with the allocation to be mapped
+ * @handle:	handle associated with the allocation to be pinned
+ * @sg:		2-entry scatter list to store the memory pointers
  *
- * While trivial for zbud, the mapping functions for others allocators
+ * While trivial for zbud, the pinning functions for other allocators
  * implementing this allocation API could have more complex information encoded
  * in the handle and could create temporary mappings to make the data
  * accessible to the user.
- *
- * Returns: a pointer to the mapped allocation
  */
-static void *zbud_map(struct zbud_pool *pool, unsigned long handle)
+static void zbud_pin(struct zbud_pool *pool, unsigned long handle,
+		      struct scatterlist sg[2])
 {
-	return (void *)(handle);
+	void *buf = (void *)handle;
+
+	sg_init_one(sg, buf, PAGE_SIZE - offset_in_page(buf));
 }
 
 /**
- * zbud_unmap() - maps the allocation associated with the given handle
+ * zbud_unpin() - unpins the allocation associated with the given handle
  * @pool:	pool in which the allocation resides
- * @handle:	handle associated with the allocation to be unmapped
+ * @handle:	handle associated with the allocation to be unpinned
  */
-static void zbud_unmap(struct zbud_pool *pool, unsigned long handle)
+static void zbud_unpin(struct zbud_pool *pool, unsigned long handle)
 {
 }
 
@@ -400,14 +402,14 @@ static void zbud_zpool_free(void *pool, unsigned long handle)
 	zbud_free(pool, handle);
 }
 
-static void *zbud_zpool_map(void *pool, unsigned long handle,
-			enum zpool_mapmode mm)
+static void zbud_zpool_pin(void *pool, unsigned long handle,
+			   struct scatterlist sg[2])
 {
-	return zbud_map(pool, handle);
+	zbud_pin(pool, handle, sg);
 }
-static void zbud_zpool_unmap(void *pool, unsigned long handle)
+static void zbud_zpool_unpin(void *pool, unsigned long handle)
 {
-	zbud_unmap(pool, handle);
+	zbud_unpin(pool, handle);
 }
 
 static u64 zbud_zpool_total_pages(void *pool)
@@ -417,14 +419,13 @@ static u64 zbud_zpool_total_pages(void *pool)
 
 static struct zpool_driver zbud_zpool_driver = {
 	.type =		"zbud",
-	.sleep_mapped = true,
 	.owner =	THIS_MODULE,
 	.create =	zbud_zpool_create,
 	.destroy =	zbud_zpool_destroy,
 	.malloc =	zbud_zpool_malloc,
 	.free =		zbud_zpool_free,
-	.map =		zbud_zpool_map,
-	.unmap =	zbud_zpool_unmap,
+	.pin =		zbud_zpool_pin,
+	.unpin =	zbud_zpool_unpin,
 	.total_pages =	zbud_zpool_total_pages,
 };
 
diff --git a/mm/zpool.c b/mm/zpool.c
index 0931b8135d72..3355b57ec7ee 100644
--- a/mm/zpool.c
+++ b/mm/zpool.c
@@ -13,6 +13,7 @@
 #include <linux/list.h>
 #include <linux/types.h>
 #include <linux/mm.h>
+#include <linux/scatterlist.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
 #include <linux/module.h>
@@ -262,51 +263,38 @@ void zpool_free(struct zpool *zpool, unsigned long handle)
 }
 
 /**
- * zpool_obj_read_begin() - Start reading from a previously allocated handle.
+ * zpool_pin_handle() - Pin a previously allocated handle into memory
  * @zpool:	The zpool that the handle was allocated from
- * @handle:	The handle to read from
- * @local_copy:	A local buffer to use if needed.
+ * @handle:	The handle to pin
+ * @sg:		2-entry scatterlist to store pointers to the memory
  *
- * This starts a read operation of a previously allocated handle. The passed
- * @local_copy buffer may be used if needed by copying the memory into.
- * zpool_obj_read_end() MUST be called after the read is completed to undo any
- * actions taken (e.g. release locks).
+ * This pins a previously allocated handle into memory.
  *
- * Returns: A pointer to the handle memory to be read, if @local_copy is used,
- * the returned pointer is @local_copy.
+ * This may hold locks, disable interrupts, and/or preemption,
+ * and zpool_unpin_handle() must be called to undo those
+ * actions.  The code that uses the pinned handle should complete
+ * its operations on the pinned handle memory quickly and unpin
+ * as soon as possible.
  */
-void *zpool_obj_read_begin(struct zpool *zpool, unsigned long handle,
-			   void *local_copy)
+void zpool_pin_handle(struct zpool *zpool, unsigned long handle,
+		      struct scatterlist *sg)
 {
-	return zpool->driver->obj_read_begin(zpool->pool, handle, local_copy);
+	zpool->driver->pin(zpool->pool, handle, sg);
 }
 
 /**
- * zpool_obj_read_end() - Finish reading from a previously allocated handle.
+ * zpool_unpin_handle() - Unpin a previously pinned handle
  * @zpool:	The zpool that the handle was allocated from
- * @handle:	The handle to read from
- * @handle_mem:	The pointer returned by zpool_obj_read_begin()
+ * @handle:	The handle to unpin
  *
- * Finishes a read operation previously started by zpool_obj_read_begin().
+ * This unpins a previously pinned handle.  Any locks or other
+ * actions that the implementation took in zpool_pin_handle()
+ * will be undone here.  The memory described by the scatterlist
+ * filled in by zpool_pin_handle() must no longer be accessed after this.
  */
-void zpool_obj_read_end(struct zpool *zpool, unsigned long handle,
-			void *handle_mem)
+void zpool_unpin_handle(struct zpool *zpool, unsigned long handle)
 {
-	zpool->driver->obj_read_end(zpool->pool, handle, handle_mem);
-}
-
-/**
- * zpool_obj_write() - Write to a previously allocated handle.
- * @zpool:	The zpool that the handle was allocated from
- * @handle:	The handle to read from
- * @handle_mem:	The memory to copy from into the handle.
- * @mem_len:	The length of memory to be written.
- *
- */
-void zpool_obj_write(struct zpool *zpool, unsigned long handle,
-		     void *handle_mem, size_t mem_len)
-{
-	zpool->driver->obj_write(zpool->pool, handle, handle_mem, mem_len);
+	zpool->driver->unpin(zpool->pool, handle);
 }
 
 /**
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 961b270f023c..90ce09bc4d0a 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -23,6 +23,7 @@
  *	zspage->lock
  */
 
+#include <crypto/scatterwalk.h>
 #include <linux/module.h>
 #include <linux/kernel.h>
 #include <linux/sched.h>
@@ -49,6 +50,7 @@
 #include <linux/pagemap.h>
 #include <linux/fs.h>
 #include <linux/local_lock.h>
+#include <linux/scatterlist.h>
 #include "zpdesc.h"
 
 #define ZSPAGE_MAGIC	0x58
@@ -445,7 +447,6 @@ static void record_obj(unsigned long handle, unsigned long obj)
 /* zpool driver */
 
 #ifdef CONFIG_ZPOOL
-
 static void *zs_zpool_create(const char *name, gfp_t gfp)
 {
 	/*
@@ -475,22 +476,15 @@ static void zs_zpool_free(void *pool, unsigned long handle)
 	zs_free(pool, handle);
 }
 
-static void *zs_zpool_obj_read_begin(void *pool, unsigned long handle,
-				     void *local_copy)
+static void zs_zpool_pin(void *pool, unsigned long handle,
+			 struct scatterlist sg[2])
 {
-	return zs_obj_read_begin(pool, handle, local_copy);
+	zs_pin_object(pool, handle, sg);
 }
 
-static void zs_zpool_obj_read_end(void *pool, unsigned long handle,
-				  void *handle_mem)
+static void zs_zpool_unpin(void *pool, unsigned long handle)
 {
-	zs_obj_read_end(pool, handle, handle_mem);
-}
-
-static void zs_zpool_obj_write(void *pool, unsigned long handle,
-			       void *handle_mem, size_t mem_len)
-{
-	zs_obj_write(pool, handle, handle_mem, mem_len);
+	zs_unpin_object(pool, handle);
 }
 
 static u64 zs_zpool_total_pages(void *pool)
@@ -505,9 +499,8 @@ static struct zpool_driver zs_zpool_driver = {
 	.destroy =		  zs_zpool_destroy,
 	.malloc =		  zs_zpool_malloc,
 	.free =			  zs_zpool_free,
-	.obj_read_begin =	  zs_zpool_obj_read_begin,
-	.obj_read_end  =	  zs_zpool_obj_read_end,
-	.obj_write =		  zs_zpool_obj_write,
+	.pin =			  zs_zpool_pin,
+	.unpin =		  zs_zpool_unpin,
 	.total_pages =		  zs_zpool_total_pages,
 };
 
@@ -1145,15 +1138,15 @@ unsigned long zs_get_total_pages(struct zs_pool *pool)
 }
 EXPORT_SYMBOL_GPL(zs_get_total_pages);
 
-void *zs_obj_read_begin(struct zs_pool *pool, unsigned long handle,
-			void *local_copy)
+void zs_pin_object(struct zs_pool *pool, unsigned long handle,
+		   struct scatterlist *sg)
 {
+	int handle_size = ZS_HANDLE_SIZE;
 	struct zspage *zspage;
 	struct zpdesc *zpdesc;
 	unsigned long obj, off;
 	unsigned int obj_idx;
 	struct size_class *class;
-	void *addr;
 
 	/* Guarantee we can get zspage from handle safely */
 	read_lock(&pool->lock);
@@ -1168,107 +1161,43 @@ void *zs_obj_read_begin(struct zs_pool *pool, unsigned long handle,
 	class = zspage_class(pool, zspage);
 	off = offset_in_page(class->size * obj_idx);
 
+	if (ZsHugePage(zspage))
+		handle_size = 0;
+
 	if (off + class->size <= PAGE_SIZE) {
 		/* this object is contained entirely within a page */
-		addr = kmap_local_zpdesc(zpdesc);
-		addr += off;
+		sg_init_table(sg, 1);
+		sg_set_page(sg, zpdesc_page(zpdesc),
+			    class->size - handle_size, off + handle_size);
 	} else {
 		size_t sizes[2];
 
 		/* this object spans two pages */
 		sizes[0] = PAGE_SIZE - off;
 		sizes[1] = class->size - sizes[0];
-		addr = local_copy;
 
-		memcpy_from_page(addr, zpdesc_page(zpdesc),
-				 off, sizes[0]);
+		sg_init_table(sg, 2);
+		sg_set_page(sg, zpdesc_page(zpdesc), sizes[0] - handle_size,
+			    off + handle_size);
 		zpdesc = get_next_zpdesc(zpdesc);
-		memcpy_from_page(addr + sizes[0],
-				 zpdesc_page(zpdesc),
-				 0, sizes[1]);
+		sg_set_page(&sg[1], zpdesc_page(zpdesc), sizes[1], 0);
 	}
-
-	if (!ZsHugePage(zspage))
-		addr += ZS_HANDLE_SIZE;
-
-	return addr;
 }
-EXPORT_SYMBOL_GPL(zs_obj_read_begin);
+EXPORT_SYMBOL_GPL(zs_pin_object);
 
-void zs_obj_read_end(struct zs_pool *pool, unsigned long handle,
-		     void *handle_mem)
+void zs_unpin_object(struct zs_pool *pool, unsigned long handle)
 {
 	struct zspage *zspage;
 	struct zpdesc *zpdesc;
-	unsigned long obj, off;
 	unsigned int obj_idx;
-	struct size_class *class;
+	unsigned long obj;
 
 	obj = handle_to_obj(handle);
 	obj_to_location(obj, &zpdesc, &obj_idx);
 	zspage = get_zspage(zpdesc);
-	class = zspage_class(pool, zspage);
-	off = offset_in_page(class->size * obj_idx);
-
-	if (off + class->size <= PAGE_SIZE) {
-		if (!ZsHugePage(zspage))
-			off += ZS_HANDLE_SIZE;
-		handle_mem -= off;
-		kunmap_local(handle_mem);
-	}
-
 	zspage_read_unlock(zspage);
 }
-EXPORT_SYMBOL_GPL(zs_obj_read_end);
-
-void zs_obj_write(struct zs_pool *pool, unsigned long handle,
-		  void *handle_mem, size_t mem_len)
-{
-	struct zspage *zspage;
-	struct zpdesc *zpdesc;
-	unsigned long obj, off;
-	unsigned int obj_idx;
-	struct size_class *class;
-
-	/* Guarantee we can get zspage from handle safely */
-	read_lock(&pool->lock);
-	obj = handle_to_obj(handle);
-	obj_to_location(obj, &zpdesc, &obj_idx);
-	zspage = get_zspage(zpdesc);
-
-	/* Make sure migration doesn't move any pages in this zspage */
-	zspage_read_lock(zspage);
-	read_unlock(&pool->lock);
-
-	class = zspage_class(pool, zspage);
-	off = offset_in_page(class->size * obj_idx);
-
-	if (off + class->size <= PAGE_SIZE) {
-		/* this object is contained entirely within a page */
-		void *dst = kmap_local_zpdesc(zpdesc);
-
-		if (!ZsHugePage(zspage))
-			off += ZS_HANDLE_SIZE;
-		memcpy(dst + off, handle_mem, mem_len);
-		kunmap_local(dst);
-	} else {
-		/* this object spans two pages */
-		size_t sizes[2];
-
-		off += ZS_HANDLE_SIZE;
-		sizes[0] = PAGE_SIZE - off;
-		sizes[1] = mem_len - sizes[0];
-
-		memcpy_to_page(zpdesc_page(zpdesc), off,
-			       handle_mem, sizes[0]);
-		zpdesc = get_next_zpdesc(zpdesc);
-		memcpy_to_page(zpdesc_page(zpdesc), 0,
-			       handle_mem + sizes[0], sizes[1]);
-	}
-
-	zspage_read_unlock(zspage);
-}
-EXPORT_SYMBOL_GPL(zs_obj_write);
+EXPORT_SYMBOL_GPL(zs_unpin_object);
 
 /**
  * zs_huge_class_size() - Returns the size (in bytes) of the first huge
diff --git a/mm/zswap.c b/mm/zswap.c
index 8b086740698a..42b5fbeea477 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -13,6 +13,8 @@
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
+#include <crypto/acompress.h>
+#include <crypto/scatterwalk.h>
 #include <linux/module.h>
 #include <linux/cpu.h>
 #include <linux/highmem.h>
@@ -26,7 +28,6 @@
 #include <linux/mempolicy.h>
 #include <linux/mempool.h>
 #include <linux/zpool.h>
-#include <crypto/acompress.h>
 #include <linux/zswap.h>
 #include <linux/mm_types.h>
 #include <linux/page-flags.h>
@@ -147,7 +148,6 @@ struct crypto_acomp_ctx {
 	struct crypto_wait wait;
 	u8 *buffer;
 	struct mutex mutex;
-	bool is_sleepable;
 };
 
 /*
@@ -865,7 +865,6 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
 
 	acomp_ctx->buffer = buffer;
 	acomp_ctx->acomp = acomp;
-	acomp_ctx->is_sleepable = acomp_is_async(acomp);
 	acomp_ctx->req = req;
 	mutex_unlock(&acomp_ctx->mutex);
 	return 0;
@@ -928,6 +927,7 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 	struct scatterlist input, output;
 	int comp_ret = 0, alloc_ret = 0;
 	unsigned int dlen = PAGE_SIZE;
+	struct scatterlist sg[2];
 	unsigned long handle;
 	struct zpool *zpool;
 	gfp_t gfp;
@@ -969,7 +969,9 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 	if (alloc_ret)
 		goto unlock;
 
-	zpool_obj_write(zpool, handle, dst, dlen);
+	zpool_pin_handle(zpool, handle, sg);
+	memcpy_to_sglist(sg, 0, dst, dlen);
+	zpool_unpin_handle(zpool, handle);
 	entry->handle = handle;
 	entry->length = dlen;
 
@@ -988,34 +990,19 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 static void zswap_decompress(struct zswap_entry *entry, struct folio *folio)
 {
 	struct zpool *zpool = entry->pool->zpool;
-	struct scatterlist input, output;
 	struct crypto_acomp_ctx *acomp_ctx;
-	u8 *src, *obj;
+	struct scatterlist input[2];
+	struct scatterlist output;
 
 	acomp_ctx = acomp_ctx_get_cpu_lock(entry->pool);
-	obj = zpool_obj_read_begin(zpool, entry->handle, acomp_ctx->buffer);
-
-	/*
-	 * zpool_obj_read_begin() might return a kmap address of highmem when
-	 * acomp_ctx->buffer is not used.  However, sg_init_one() does not
-	 * handle highmem addresses, so copy the object to acomp_ctx->buffer.
-	 */
-	if (virt_addr_valid(obj)) {
-		src = obj;
-	} else {
-		WARN_ON_ONCE(obj == acomp_ctx->buffer);
-		memcpy(acomp_ctx->buffer, obj, entry->length);
-		src = acomp_ctx->buffer;
-	}
-
-	sg_init_one(&input, src, entry->length);
+	zpool_pin_handle(zpool, entry->handle, input);
 	sg_init_table(&output, 1);
 	sg_set_folio(&output, folio, PAGE_SIZE, 0);
-	acomp_request_set_params(acomp_ctx->req, &input, &output, entry->length, PAGE_SIZE);
+	acomp_request_set_params(acomp_ctx->req, input, &output, entry->length, PAGE_SIZE);
 	BUG_ON(crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req), &acomp_ctx->wait));
 	BUG_ON(acomp_ctx->req->dlen != PAGE_SIZE);
 
-	zpool_obj_read_end(zpool, entry->handle, obj);
+	zpool_unpin_handle(zpool, entry->handle);
 	acomp_ctx_put_unlock(acomp_ctx);
 }
 
-- 
2.39.5

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [PATCH mm-unstable 3/5] mm: zpool: Remove object mapping APIs
  2025-03-06  1:48   ` Herbert Xu
  2025-03-06  4:19     ` Herbert Xu
@ 2025-03-06 14:15     ` Johannes Weiner
  1 sibling, 0 replies; 28+ messages in thread
From: Johannes Weiner @ 2025-03-06 14:15 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Yosry Ahmed, Andrew Morton, Nhat Pham, Chengming Zhou,
	Minchan Kim, Sergey Senozhatsky, Thomas Gleixner, Peter Zijlstra,
	linux-mm, linux-kernel

On Thu, Mar 06, 2025 at 09:48:58AM +0800, Herbert Xu wrote:
> On Wed, Mar 05, 2025 at 06:11:31AM +0000, Yosry Ahmed wrote:
> > zpool_map_handle(), zpool_unmap_handle(), and zpool_can_sleep_mapped()
> > are no longer used. Remove them with the underlying driver callbacks.
> > 
> > Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
> > ---
> >  include/linux/zpool.h | 30 ---------------------
> >  mm/zpool.c            | 61 -------------------------------------------
> >  mm/zsmalloc.c         | 27 -------------------
> >  3 files changed, 118 deletions(-)
> 
> This patch breaks zbud and z3fold because they haven't been converted
> to the new interface.

They're both scheduled for removal and already gone in the mm tree.


* Re: [PATCH mm-unstable 3/5] mm: zpool: Remove object mapping APIs
  2025-03-06  4:19     ` Herbert Xu
@ 2025-03-06 16:55       ` Yosry Ahmed
  2025-03-07  2:38         ` Herbert Xu
  0 siblings, 1 reply; 28+ messages in thread
From: Yosry Ahmed @ 2025-03-06 16:55 UTC (permalink / raw)
  To: Herbert Xu, Sergey Senozhatsky
  Cc: Andrew Morton, Johannes Weiner, Nhat Pham, Chengming Zhou,
	Minchan Kim, Sergey Senozhatsky, Thomas Gleixner, Peter Zijlstra,
	linux-mm, linux-kernel

On Thu, Mar 06, 2025 at 12:19:19PM +0800, Herbert Xu wrote:
> On Thu, Mar 06, 2025 at 09:48:58AM +0800, Herbert Xu wrote:
> >
> > This patch breaks zbud and z3fold because they haven't been converted
> > to the new interface.
> 
> I've rebased my zswap SG patch on top of your series.  I've removed
> all the mapping code from zpool/zsmalloc and pushed it out to zram
> instead.
> 
> This patch depends on a new memcpy_sglist function which I've just
> posted a patch for:
> 
> https://patchwork.kernel.org/project/linux-crypto/patch/Z8kXhLb681E_FLzs@gondor.apana.org.au/
> 
> From a77ee529b831e7e606ed2a5b723b74ce234a3915 Mon Sep 17 00:00:00 2001
> From: Herbert Xu <herbert@gondor.apana.org.au>
> Date: Thu, 6 Mar 2025 12:13:58 +0800
> Subject: [PATCH] mm: zswap: Give non-linear objects to Crypto API
> 
> Instead of copying non-linear objects into a buffer, use the
> scatterlist to give them directly to the Crypto API.
> 
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

The zswap and zsmalloc changes look good and the code is simpler. I am
fine with this approach if Sergey is fine with it, although I wonder if
we should update Sergey's patches in mm-unstable to do this directly.
Currently we are switching from the mapping APIs to the read/write APIs,
and then quickly to the pinning APIs. The history will be confusing.

Sergey, do you prefer if we keep things as-is, or if you update your
series to incorporate Herbert's changes for zsmalloc/zram, then I can
update my series to incorporate the changes in zswap?

We can also combine the series into a single updated one with
zsmalloc/zram/zswap changes.

Let me know what you prefer.


* Re: [PATCH mm-unstable 3/5] mm: zpool: Remove object mapping APIs
  2025-03-06 16:55       ` Yosry Ahmed
@ 2025-03-07  2:38         ` Herbert Xu
  2025-03-07  5:19           ` Sergey Senozhatsky
  0 siblings, 1 reply; 28+ messages in thread
From: Herbert Xu @ 2025-03-07  2:38 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Sergey Senozhatsky, Andrew Morton, Johannes Weiner, Nhat Pham,
	Chengming Zhou, Minchan Kim, Thomas Gleixner, Peter Zijlstra,
	linux-mm, linux-kernel

On Thu, Mar 06, 2025 at 04:55:07PM +0000, Yosry Ahmed wrote:
>
> The zswap and zsmalloc changes look good and the code is simpler. I am
> fine with this approach if Sergey is fine with it, although I wonder if
> we should update Sergey's patches in mm-unstable to do this directly.
> Currently we are switching from the mapping APIs to the read/write APIs,
> and then quickly to the pinning APIs. The history will be confusing.
> 
> Sergey, do you prefer if we keep things as-is, or if you update your
> series to incorporate Herbert's changes for zsmalloc/zram, then I can
> update my series to incorporate the changes in zswap?
> 
> We can also combine the series into a single updated one with
> zsmalloc/zram/zswap changes.
> 
> Let me know what you prefer.

This patch is only illustrating what zswap would look like once we
move to an SG-based interface.  So I'm not actually submitting it
for inclusion at this time.

Sergey has volunteered to add parameter support to acomp.  Let's
wait for that before making these changes to zsmalloc/zswap.

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [PATCH mm-unstable 3/5] mm: zpool: Remove object mapping APIs
  2025-03-07  2:38         ` Herbert Xu
@ 2025-03-07  5:19           ` Sergey Senozhatsky
  0 siblings, 0 replies; 28+ messages in thread
From: Sergey Senozhatsky @ 2025-03-07  5:19 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Yosry Ahmed, Sergey Senozhatsky, Andrew Morton, Johannes Weiner,
	Nhat Pham, Chengming Zhou, Minchan Kim, Thomas Gleixner,
	Peter Zijlstra, linux-mm, linux-kernel

On (25/03/07 10:38), Herbert Xu wrote:
> On Thu, Mar 06, 2025 at 04:55:07PM +0000, Yosry Ahmed wrote:
> >
> > The zswap and zsmalloc changes look good and the code is simpler. I am
> > fine with this approach if Sergey is fine with it, although I wonder if
> > we should update Sergey's patches in mm-unstable to do this directly.
> > Currently we are switching from the mapping APIs to the read/write APIs,
> > and then quickly to the pinning APIs. The history will be confusing.
> > 
> > Sergey, do you prefer if we keep things as-is, or if you update your
> > series to incorporate Herbert's changes for zsmalloc/zram, then I can
> > update my series to incorporate the changes in zswap?
> > 
> > We can also combine the series into a single updated one with
> > zsmalloc/zram/zswap changes.
> > 
> > Let me know what you prefer.
> 
> This patch is only illustrating what zswap would look like once we
> move to an SG-based interface.  So I'm not actually submitting it
> for inclusion at this time.

Ah, OK, that's good to know, I was also a bit puzzled.
Perhaps next time we can label such patches as "POC" or
something.

> Sergey has volunteered to add parameter support to acomp.  Let's
> wait for that before making these changes to zsmalloc/zswap.

Yup, I'll keep you posted (once I have something.)


end of thread, other threads:[~2025-03-07  5:19 UTC | newest]

Thread overview: 28+ messages
2025-03-05  6:11 [PATCH mm-unstable 0/5] Switch zswap to object read/write APIs Yosry Ahmed
2025-03-05  6:11 ` [PATCH mm-unstable 1/5] mm: zpool: Add interfaces for " Yosry Ahmed
2025-03-05  8:18   ` Sergey Senozhatsky
2025-03-05 14:43   ` Johannes Weiner
2025-03-05 17:32   ` Nhat Pham
2025-03-05  6:11 ` [PATCH mm-unstable 2/5] mm: zswap: Use object read/write APIs instead of object mapping APIs Yosry Ahmed
2025-03-05 14:48   ` Johannes Weiner
2025-03-05 17:35   ` Nhat Pham
2025-03-05  6:11 ` [PATCH mm-unstable 3/5] mm: zpool: Remove " Yosry Ahmed
2025-03-05  8:17   ` Sergey Senozhatsky
2025-03-05 14:49   ` Johannes Weiner
2025-03-05 17:37   ` Nhat Pham
2025-03-06  1:48   ` Herbert Xu
2025-03-06  4:19     ` Herbert Xu
2025-03-06 16:55       ` Yosry Ahmed
2025-03-07  2:38         ` Herbert Xu
2025-03-07  5:19           ` Sergey Senozhatsky
2025-03-06 14:15     ` Johannes Weiner
2025-03-05  6:11 ` [PATCH mm-unstable 4/5] mm: zsmalloc: Remove object mapping APIs and per-CPU map areas Yosry Ahmed
2025-03-05  8:16   ` Sergey Senozhatsky
2025-03-05 14:51   ` Johannes Weiner
2025-03-05 17:39   ` Nhat Pham
2025-03-05 18:57   ` Yosry Ahmed
2025-03-05  6:11 ` [PATCH mm-unstable 5/5] mm: zpool: Remove zpool_malloc_support_movable() Yosry Ahmed
2025-03-05  8:14   ` Sergey Senozhatsky
2025-03-05 14:53   ` Johannes Weiner
2025-03-05 17:05   ` Nhat Pham
2025-03-05  8:18 ` [PATCH mm-unstable 0/5] Switch zswap to object read/write APIs Sergey Senozhatsky
