* [PATCH v2 0/2] zswap pool per-CPU acomp_ctx simplifications
@ 2026-03-17 1:48 Kanchana P. Sridhar
2026-03-17 1:48 ` [PATCH v2 1/2] mm: zswap: Remove redundant checks in zswap_cpu_comp_dead() Kanchana P. Sridhar
` (2 more replies)
0 siblings, 3 replies; 19+ messages in thread
From: Kanchana P. Sridhar @ 2026-03-17 1:48 UTC (permalink / raw)
To: hannes, yosry, nphamcs, chengming.zhou, akpm,
kanchanapsridhar2026, linux-mm, linux-kernel
Cc: herbert, senozhatsky
This patchset first removes redundant checks on the acomp_ctx and its
"req" member in zswap_cpu_comp_dead().
Next, it makes the zswap pool's per-CPU acomp_ctx resources persist
until the pool is destroyed. It then simplifies the per-CPU
acomp_ctx mutex locking in zswap_compress()/zswap_decompress().
Code comments are added after allocating, and before checking whether to
deallocate, the per-CPU acomp_ctx's members, based on the expected
crypto API return values and the zswap changes this patchset makes.
Patch 2 is an independent submission of patch 23 from [1], to
facilitate merging.
[1]: https://patchwork.kernel.org/project/linux-mm/list/?series=1046677
Changes since v1:
=================
1) Made the changes to eliminate redundant checks on
acomp_ctx/acomp_ctx->req in zswap_cpu_comp_dead(), per Yosry.
2) Renamed acomp_ctx_dealloc() to acomp_ctx_free(), per Yosry.
3) Incorporated suggestions from Yosry and Sashiko to reset the
acomp_ctx's members to NULL after freeing them, to prevent UAF and
double free issues.
4) Replaced v1's patch 2 with v2's patch 1.
Kanchana P. Sridhar (2):
mm: zswap: Remove redundant checks in zswap_cpu_comp_dead().
mm: zswap: Tie per-CPU acomp_ctx lifetime to the pool.
mm/zswap.c | 182 +++++++++++++++++++++++++----------------------------
1 file changed, 85 insertions(+), 97 deletions(-)
--
2.39.5
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH v2 1/2] mm: zswap: Remove redundant checks in zswap_cpu_comp_dead().
2026-03-17 1:48 [PATCH v2 0/2] zswap pool per-CPU acomp_ctx simplifications Kanchana P. Sridhar
@ 2026-03-17 1:48 ` Kanchana P. Sridhar
2026-03-17 19:19 ` Yosry Ahmed
2026-03-17 1:48 ` [PATCH v2 2/2] mm: zswap: Tie per-CPU acomp_ctx lifetime to the pool Kanchana P. Sridhar
2026-03-17 19:45 ` [PATCH v2 0/2] zswap pool per-CPU acomp_ctx simplifications Andrew Morton
2 siblings, 1 reply; 19+ messages in thread
From: Kanchana P. Sridhar @ 2026-03-17 1:48 UTC (permalink / raw)
To: hannes, yosry, nphamcs, chengming.zhou, akpm,
kanchanapsridhar2026, linux-mm, linux-kernel
Cc: herbert, senozhatsky
There are presently redundant checks on the per-CPU acomp_ctx and its
"req" member in zswap_cpu_comp_dead(): redundant because they are
inconsistent with zswap_pool_create()'s handling of failure to allocate
the acomp_ctx, and with the expected NULL return value from the
acomp_request_alloc() API when it fails to allocate an acomp_req.
Fix these by converting them to NULL checks.
Add comments in zswap_cpu_comp_prepare() clarifying the expected return
values of the crypto_alloc_acomp_node() and acomp_request_alloc() APIs.
Suggested-by: Yosry Ahmed <yosry@kernel.org>
Signed-off-by: Kanchana P. Sridhar <kanchanapsridhar2026@gmail.com>
---
mm/zswap.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/mm/zswap.c b/mm/zswap.c
index bdd24430f6ff..8ac38f1d0469 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -749,6 +749,10 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
goto fail;
}
+ /*
+ * In case of an error, crypto_alloc_acomp_node() returns an
+ * error pointer, never NULL.
+ */
acomp = crypto_alloc_acomp_node(pool->tfm_name, 0, 0, cpu_to_node(cpu));
if (IS_ERR(acomp)) {
pr_err("could not alloc crypto acomp %s : %pe\n",
@@ -757,6 +761,7 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
goto fail;
}
+ /* acomp_request_alloc() returns NULL in case of an error. */
req = acomp_request_alloc(acomp);
if (!req) {
pr_err("could not alloc crypto acomp_request %s\n",
@@ -802,7 +807,7 @@ static int zswap_cpu_comp_dead(unsigned int cpu, struct hlist_node *node)
struct crypto_acomp *acomp;
u8 *buffer;
- if (IS_ERR_OR_NULL(acomp_ctx))
+ if (!acomp_ctx)
return 0;
mutex_lock(&acomp_ctx->mutex);
@@ -817,8 +822,11 @@ static int zswap_cpu_comp_dead(unsigned int cpu, struct hlist_node *node)
/*
* Do the actual freeing after releasing the mutex to avoid subtle
* locking dependencies causing deadlocks.
+ *
+ * If there was an error in allocating @acomp_ctx->req, it
+ * would be set to NULL.
*/
- if (!IS_ERR_OR_NULL(req))
+ if (req)
acomp_request_free(req);
if (!IS_ERR_OR_NULL(acomp))
crypto_free_acomp(acomp);
--
2.39.5
^ permalink raw reply related [flat|nested] 19+ messages in thread
* [PATCH v2 2/2] mm: zswap: Tie per-CPU acomp_ctx lifetime to the pool.
2026-03-17 1:48 [PATCH v2 0/2] zswap pool per-CPU acomp_ctx simplifications Kanchana P. Sridhar
2026-03-17 1:48 ` [PATCH v2 1/2] mm: zswap: Remove redundant checks in zswap_cpu_comp_dead() Kanchana P. Sridhar
@ 2026-03-17 1:48 ` Kanchana P. Sridhar
2026-03-27 2:23 ` Andrew Morton
2026-03-17 19:45 ` [PATCH v2 0/2] zswap pool per-CPU acomp_ctx simplifications Andrew Morton
2 siblings, 1 reply; 19+ messages in thread
From: Kanchana P. Sridhar @ 2026-03-17 1:48 UTC (permalink / raw)
To: hannes, yosry, nphamcs, chengming.zhou, akpm,
kanchanapsridhar2026, linux-mm, linux-kernel
Cc: herbert, senozhatsky
Currently, per-CPU acomp_ctx are allocated on pool creation and/or CPU
hotplug, and destroyed on pool destruction or CPU hotunplug. This
complicates lifetime management merely to save memory while a CPU is
offline, which is not very common.
Simplify lifetime management by allocating per-CPU acomp_ctx once on
pool creation (or CPU hotplug for CPUs onlined later), and keeping them
allocated until the pool is destroyed.
Refactor the cleanup code from zswap_cpu_comp_dead() into a new
acomp_ctx_free() helper so it can be used elsewhere.
The main benefit of using the CPU hotplug multi state instance startup
callback to allocate the acomp_ctx resources is that it prevents the
cores from being offlined until the multi state instance addition call
returns.
From Documentation/core-api/cpu_hotplug.rst:
"The node list add/remove operations and the callback invocations are
serialized against CPU hotplug operations."
Furthermore, zswap_[de]compress() cannot contend with
zswap_cpu_comp_prepare() because:
- During pool creation/deletion, the pool is not in the zswap_pools
list.
- During CPU hot[un]plug, the CPU is not yet online, as Yosry pointed
out. zswap_cpu_comp_prepare() will be run on a control CPU,
since CPUHP_MM_ZSWP_POOL_PREPARE is in the PREPARE section of "enum
cpuhp_state".
In both these cases, any recursions into zswap reclaim from
zswap_cpu_comp_prepare() will be handled by the old pool.
The above two observations enable the following simplifications:
1) zswap_cpu_comp_prepare():
a) acomp_ctx mutex locking:
If the process gets migrated while zswap_cpu_comp_prepare() is
running, it will complete on the new CPU. In case of failures, we
pass the acomp_ctx pointer obtained at the start of
zswap_cpu_comp_prepare() to acomp_ctx_free(), which, likewise, can
only be affected by migration. There appear to be no contention
scenarios that might cause inconsistent values of the acomp_ctx's
members. Hence, there seems to be no need for
mutex_lock(&acomp_ctx->mutex) in zswap_cpu_comp_prepare().
b) acomp_ctx mutex initialization:
Since the pool is not yet on zswap_pools list, we don't need to
initialize the per-CPU acomp_ctx mutex in
zswap_pool_create(). This has been restored to occur in
zswap_cpu_comp_prepare().
c) Subsequent CPU offline-online transitions:
zswap_cpu_comp_prepare() checks upfront if acomp_ctx->acomp is
valid. If so, it returns success. This should handle any CPU
hotplug online-offline transitions after pool creation is done.
2) CPU offline vis-a-vis zswap ops:
Let's suppose the process is migrated to another CPU before the
current CPU goes offline. If zswap_[de]compress() holds the
acomp_ctx->mutex lock of the offlined CPU, that mutex will be
released once it completes on the new CPU. Since there is no
teardown callback, there is no possibility of UAF.
3) Pool creation/deletion and process migration to another CPU:
During pool creation/deletion, the pool is not in the zswap_pools
list. Hence it cannot contend with zswap ops on that CPU. However,
the process can get migrated.
a) Pool creation --> zswap_cpu_comp_prepare()
--> process migrated:
* Old CPU offline: no-op.
* zswap_cpu_comp_prepare() continues
to run on the new CPU to finish
allocating acomp_ctx resources for
the offlined CPU.
b) Pool deletion --> acomp_ctx_free()
--> process migrated:
* Old CPU offline: no-op.
* acomp_ctx_free() continues
to run on the new CPU to finish
de-allocating acomp_ctx resources
for the offlined CPU.
4) Pool deletion vis-a-vis CPU onlining:
The call to cpuhp_state_remove_instance() cannot race with
zswap_cpu_comp_prepare() because of hotplug synchronization.
The current acomp_ctx_get_cpu_lock()/acomp_ctx_put_unlock() are
deleted. Instead, zswap_[de]compress() directly call
mutex_[un]lock(&acomp_ctx->mutex).
The per-CPU memory cost of not deleting the acomp_ctx resources upon CPU
offlining, and only deleting them when the pool is destroyed, is 8.28 KB
on x86_64. This cost is only paid when a CPU is offlined, until it is
onlined again.
Co-developed-by: Kanchana P. Sridhar <kanchanapsridhar2026@gmail.com>
Signed-off-by: Kanchana P. Sridhar <kanchanapsridhar2026@gmail.com>
Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
---
mm/zswap.c | 180 ++++++++++++++++++++++++-----------------------------
1 file changed, 80 insertions(+), 100 deletions(-)
diff --git a/mm/zswap.c b/mm/zswap.c
index 8ac38f1d0469..6bdd2ed7d697 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -242,6 +242,34 @@ static inline struct xarray *swap_zswap_tree(swp_entry_t swp)
**********************************/
static void __zswap_pool_empty(struct percpu_ref *ref);
+static void acomp_ctx_free(struct crypto_acomp_ctx *acomp_ctx)
+{
+ if (!acomp_ctx)
+ return;
+
+ /*
+ * If there was an error in allocating @acomp_ctx->req, it
+ * would be set to NULL.
+ */
+ if (acomp_ctx->req)
+ acomp_request_free(acomp_ctx->req);
+
+ acomp_ctx->req = NULL;
+
+ /*
+ * We have to handle both cases here: an error pointer return from
+ * crypto_alloc_acomp_node(); and a) NULL initialization by zswap, or
+ * b) NULL assignment done in a previous call to acomp_ctx_free().
+ */
+ if (!IS_ERR_OR_NULL(acomp_ctx->acomp))
+ crypto_free_acomp(acomp_ctx->acomp);
+
+ acomp_ctx->acomp = NULL;
+
+ kfree(acomp_ctx->buffer);
+ acomp_ctx->buffer = NULL;
+}
+
static struct zswap_pool *zswap_pool_create(char *compressor)
{
struct zswap_pool *pool;
@@ -263,19 +291,27 @@ static struct zswap_pool *zswap_pool_create(char *compressor)
strscpy(pool->tfm_name, compressor, sizeof(pool->tfm_name));
- pool->acomp_ctx = alloc_percpu(*pool->acomp_ctx);
+ /* Many things rely on the zero-initialization. */
+ pool->acomp_ctx = alloc_percpu_gfp(*pool->acomp_ctx,
+ GFP_KERNEL | __GFP_ZERO);
if (!pool->acomp_ctx) {
pr_err("percpu alloc failed\n");
goto error;
}
- for_each_possible_cpu(cpu)
- mutex_init(&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex);
-
+ /*
+ * This is serialized against CPU hotplug operations. Hence, cores
+ * cannot be offlined until this finishes.
+ */
ret = cpuhp_state_add_instance(CPUHP_MM_ZSWP_POOL_PREPARE,
&pool->node);
+
+ /*
+ * cpuhp_state_add_instance() will not cleanup on failure since
+ * we don't register a hotunplug callback.
+ */
if (ret)
- goto error;
+ goto cpuhp_add_fail;
/* being the current pool takes 1 ref; this func expects the
* caller to always add the new pool as the current pool
@@ -292,6 +328,10 @@ static struct zswap_pool *zswap_pool_create(char *compressor)
ref_fail:
cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
+
+cpuhp_add_fail:
+ for_each_possible_cpu(cpu)
+ acomp_ctx_free(per_cpu_ptr(pool->acomp_ctx, cpu));
error:
if (pool->acomp_ctx)
free_percpu(pool->acomp_ctx);
@@ -322,9 +362,15 @@ static struct zswap_pool *__zswap_pool_create_fallback(void)
static void zswap_pool_destroy(struct zswap_pool *pool)
{
+ int cpu;
+
zswap_pool_debug("destroying", pool);
cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
+
+ for_each_possible_cpu(cpu)
+ acomp_ctx_free(per_cpu_ptr(pool->acomp_ctx, cpu));
+
free_percpu(pool->acomp_ctx);
zs_destroy_pool(pool->zs_pool);
@@ -738,44 +784,41 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
{
struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node);
struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu);
- struct crypto_acomp *acomp = NULL;
- struct acomp_req *req = NULL;
- u8 *buffer = NULL;
- int ret;
+ int ret = -ENOMEM;
- buffer = kmalloc_node(PAGE_SIZE, GFP_KERNEL, cpu_to_node(cpu));
- if (!buffer) {
- ret = -ENOMEM;
- goto fail;
+ /*
+ * To handle cases where the CPU goes through online-offline-online
+ * transitions, we return if the acomp_ctx has already been initialized.
+ */
+ if (acomp_ctx->acomp) {
+ WARN_ON_ONCE(IS_ERR(acomp_ctx->acomp));
+ return 0;
}
+ acomp_ctx->buffer = kmalloc_node(PAGE_SIZE, GFP_KERNEL, cpu_to_node(cpu));
+ if (!acomp_ctx->buffer)
+ return ret;
+
/*
* In case of an error, crypto_alloc_acomp_node() returns an
* error pointer, never NULL.
*/
- acomp = crypto_alloc_acomp_node(pool->tfm_name, 0, 0, cpu_to_node(cpu));
- if (IS_ERR(acomp)) {
+ acomp_ctx->acomp = crypto_alloc_acomp_node(pool->tfm_name, 0, 0, cpu_to_node(cpu));
+ if (IS_ERR(acomp_ctx->acomp)) {
pr_err("could not alloc crypto acomp %s : %pe\n",
- pool->tfm_name, acomp);
- ret = PTR_ERR(acomp);
+ pool->tfm_name, acomp_ctx->acomp);
+ ret = PTR_ERR(acomp_ctx->acomp);
goto fail;
}
/* acomp_request_alloc() returns NULL in case of an error. */
- req = acomp_request_alloc(acomp);
- if (!req) {
+ acomp_ctx->req = acomp_request_alloc(acomp_ctx->acomp);
+ if (!acomp_ctx->req) {
pr_err("could not alloc crypto acomp_request %s\n",
pool->tfm_name);
- ret = -ENOMEM;
goto fail;
}
- /*
- * Only hold the mutex after completing allocations, otherwise we may
- * recurse into zswap through reclaim and attempt to hold the mutex
- * again resulting in a deadlock.
- */
- mutex_lock(&acomp_ctx->mutex);
crypto_init_wait(&acomp_ctx->wait);
/*
@@ -783,83 +826,17 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
* crypto_wait_req(); if the backend of acomp is scomp, the callback
* won't be called, crypto_wait_req() will return without blocking.
*/
- acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+ acomp_request_set_callback(acomp_ctx->req, CRYPTO_TFM_REQ_MAY_BACKLOG,
crypto_req_done, &acomp_ctx->wait);
- acomp_ctx->buffer = buffer;
- acomp_ctx->acomp = acomp;
- acomp_ctx->req = req;
- mutex_unlock(&acomp_ctx->mutex);
+ mutex_init(&acomp_ctx->mutex);
return 0;
fail:
- if (!IS_ERR_OR_NULL(acomp))
- crypto_free_acomp(acomp);
- kfree(buffer);
+ acomp_ctx_free(acomp_ctx);
return ret;
}
-static int zswap_cpu_comp_dead(unsigned int cpu, struct hlist_node *node)
-{
- struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node);
- struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu);
- struct acomp_req *req;
- struct crypto_acomp *acomp;
- u8 *buffer;
-
- if (!acomp_ctx)
- return 0;
-
- mutex_lock(&acomp_ctx->mutex);
- req = acomp_ctx->req;
- acomp = acomp_ctx->acomp;
- buffer = acomp_ctx->buffer;
- acomp_ctx->req = NULL;
- acomp_ctx->acomp = NULL;
- acomp_ctx->buffer = NULL;
- mutex_unlock(&acomp_ctx->mutex);
-
- /*
- * Do the actual freeing after releasing the mutex to avoid subtle
- * locking dependencies causing deadlocks.
- *
- * If there was an error in allocating @acomp_ctx->req, it
- * would be set to NULL.
- */
- if (req)
- acomp_request_free(req);
- if (!IS_ERR_OR_NULL(acomp))
- crypto_free_acomp(acomp);
- kfree(buffer);
-
- return 0;
-}
-
-static struct crypto_acomp_ctx *acomp_ctx_get_cpu_lock(struct zswap_pool *pool)
-{
- struct crypto_acomp_ctx *acomp_ctx;
-
- for (;;) {
- acomp_ctx = raw_cpu_ptr(pool->acomp_ctx);
- mutex_lock(&acomp_ctx->mutex);
- if (likely(acomp_ctx->req))
- return acomp_ctx;
- /*
- * It is possible that we were migrated to a different CPU after
- * getting the per-CPU ctx but before the mutex was acquired. If
- * the old CPU got offlined, zswap_cpu_comp_dead() could have
- * already freed ctx->req (among other things) and set it to
- * NULL. Just try again on the new CPU that we ended up on.
- */
- mutex_unlock(&acomp_ctx->mutex);
- }
-}
-
-static void acomp_ctx_put_unlock(struct crypto_acomp_ctx *acomp_ctx)
-{
- mutex_unlock(&acomp_ctx->mutex);
-}
-
static bool zswap_compress(struct page *page, struct zswap_entry *entry,
struct zswap_pool *pool)
{
@@ -872,7 +849,9 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
u8 *dst;
bool mapped = false;
- acomp_ctx = acomp_ctx_get_cpu_lock(pool);
+ acomp_ctx = raw_cpu_ptr(pool->acomp_ctx);
+ mutex_lock(&acomp_ctx->mutex);
+
dst = acomp_ctx->buffer;
sg_init_table(&input, 1);
sg_set_page(&input, page, PAGE_SIZE, 0);
@@ -938,7 +917,7 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
else if (alloc_ret)
zswap_reject_alloc_fail++;
- acomp_ctx_put_unlock(acomp_ctx);
+ mutex_unlock(&acomp_ctx->mutex);
return comp_ret == 0 && alloc_ret == 0;
}
@@ -950,7 +929,8 @@ static bool zswap_decompress(struct zswap_entry *entry, struct folio *folio)
struct crypto_acomp_ctx *acomp_ctx;
int ret = 0, dlen;
- acomp_ctx = acomp_ctx_get_cpu_lock(pool);
+ acomp_ctx = raw_cpu_ptr(pool->acomp_ctx);
+ mutex_lock(&acomp_ctx->mutex);
zs_obj_read_sg_begin(pool->zs_pool, entry->handle, input, entry->length);
/* zswap entries of length PAGE_SIZE are not compressed. */
@@ -969,7 +949,7 @@ static bool zswap_decompress(struct zswap_entry *entry, struct folio *folio)
}
zs_obj_read_sg_end(pool->zs_pool, entry->handle);
- acomp_ctx_put_unlock(acomp_ctx);
+ mutex_unlock(&acomp_ctx->mutex);
if (!ret && dlen == PAGE_SIZE)
return true;
@@ -1789,7 +1769,7 @@ static int zswap_setup(void)
ret = cpuhp_setup_state_multi(CPUHP_MM_ZSWP_POOL_PREPARE,
"mm/zswap_pool:prepare",
zswap_cpu_comp_prepare,
- zswap_cpu_comp_dead);
+ NULL);
if (ret)
goto hp_fail;
--
2.39.5
^ permalink raw reply related [flat|nested] 19+ messages in thread
* Re: [PATCH v2 1/2] mm: zswap: Remove redundant checks in zswap_cpu_comp_dead().
2026-03-17 1:48 ` [PATCH v2 1/2] mm: zswap: Remove redundant checks in zswap_cpu_comp_dead() Kanchana P. Sridhar
@ 2026-03-17 19:19 ` Yosry Ahmed
2026-03-17 21:09 ` Kanchana P. Sridhar
0 siblings, 1 reply; 19+ messages in thread
From: Yosry Ahmed @ 2026-03-17 19:19 UTC (permalink / raw)
To: Kanchana P. Sridhar
Cc: hannes, nphamcs, chengming.zhou, akpm, linux-mm, linux-kernel,
herbert, senozhatsky
On Mon, Mar 16, 2026 at 6:48 PM Kanchana P. Sridhar
<kanchanapsridhar2026@gmail.com> wrote:
>
> There are presently redundant checks on the per-CPU acomp_ctx and its
> "req" member in zswap_cpu_comp_dead(): redundant because they are
> inconsistent with zswap_pool_create()'s handling of failure to allocate
> the acomp_ctx, and with the expected NULL return value from the
> acomp_request_alloc() API when it fails to allocate an acomp_req.
>
> Fix these by converting them to NULL checks.
>
> Add comments in zswap_cpu_comp_prepare() clarifying the expected return
> values of the crypto_alloc_acomp_node() and acomp_request_alloc() APIs.
>
> Suggested-by: Yosry Ahmed <yosry@kernel.org>
> Signed-off-by: Kanchana P. Sridhar <kanchanapsridhar2026@gmail.com>
Acked-by: Yosry Ahmed <yosry@kernel.org>
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v2 0/2] zswap pool per-CPU acomp_ctx simplifications
2026-03-17 1:48 [PATCH v2 0/2] zswap pool per-CPU acomp_ctx simplifications Kanchana P. Sridhar
2026-03-17 1:48 ` [PATCH v2 1/2] mm: zswap: Remove redundant checks in zswap_cpu_comp_dead() Kanchana P. Sridhar
2026-03-17 1:48 ` [PATCH v2 2/2] mm: zswap: Tie per-CPU acomp_ctx lifetime to the pool Kanchana P. Sridhar
@ 2026-03-17 19:45 ` Andrew Morton
2026-03-17 19:48 ` Yosry Ahmed
2 siblings, 1 reply; 19+ messages in thread
From: Andrew Morton @ 2026-03-17 19:45 UTC (permalink / raw)
To: Kanchana P. Sridhar
Cc: hannes, yosry, nphamcs, chengming.zhou, linux-mm, linux-kernel,
herbert, senozhatsky
On Mon, 16 Mar 2026 18:48:00 -0700 "Kanchana P. Sridhar" <kanchanapsridhar2026@gmail.com> wrote:
> This patchset first removes redundant checks on the acomp_ctx and its
> "req" member in zswap_cpu_comp_dead().
>
> Next, it makes the zswap pool's per-CPU acomp_ctx resources persist
> until the pool is destroyed. It then simplifies the per-CPU
> acomp_ctx mutex locking in zswap_compress()/zswap_decompress().
>
> Code comments are added after allocating, and before checking whether to
> deallocate, the per-CPU acomp_ctx's members, based on the expected
> crypto API return values and the zswap changes this patchset makes.
>
> Patch 2 is an independent submission of patch 23 from [1], to
> facilitate merging.
Thanks.
What happened with "mm: zswap: Consistently use IS_ERR_OR_NULL() to
check acomp_ctx resources"? Still relevant?
https://lkml.kernel.org/r/20260314051632.17931-3-kanchanapsridhar2026@gmail.com
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v2 0/2] zswap pool per-CPU acomp_ctx simplifications
2026-03-17 19:45 ` [PATCH v2 0/2] zswap pool per-CPU acomp_ctx simplifications Andrew Morton
@ 2026-03-17 19:48 ` Yosry Ahmed
2026-03-17 21:15 ` Kanchana P. Sridhar
2026-03-17 21:21 ` Kanchana P. Sridhar
0 siblings, 2 replies; 19+ messages in thread
From: Yosry Ahmed @ 2026-03-17 19:48 UTC (permalink / raw)
To: Andrew Morton
Cc: Kanchana P. Sridhar, hannes, nphamcs, chengming.zhou, linux-mm,
linux-kernel, herbert, senozhatsky
On Tue, Mar 17, 2026 at 12:45 PM Andrew Morton
<akpm@linux-foundation.org> wrote:
>
> On Mon, 16 Mar 2026 18:48:00 -0700 "Kanchana P. Sridhar" <kanchanapsridhar2026@gmail.com> wrote:
>
> > This patchset first removes redundant checks on the acomp_ctx and its
> > "req" member in zswap_cpu_comp_dead().
> >
> > Next, it makes the zswap pool's per-CPU acomp_ctx resources persist
> > until the pool is destroyed. It then simplifies the per-CPU
> > acomp_ctx mutex locking in zswap_compress()/zswap_decompress().
> >
> > Code comments are added after allocating, and before checking whether to
> > deallocate, the per-CPU acomp_ctx's members, based on the expected
> > crypto API return values and the zswap changes this patchset makes.
> >
> > Patch 2 is an independent submission of patch 23 from [1], to
> > facilitate merging.
>
> Thanks.
>
> What happened with "mm: zswap: Consistently use IS_ERR_OR_NULL() to
> check acomp_ctx resources"? Still relevant?
>
> https://lkml.kernel.org/r/20260314051632.17931-3-kanchanapsridhar2026@gmail.com
We decided to drop it (and patch 1 here kinda sorta takes its place):
https://lore.kernel.org/all/CACpmpoeo0LhxkoA5Wx6q+9=2scn_az0u=3bar-JgBvTA-ZBkZg@mail.gmail.com/
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v2 1/2] mm: zswap: Remove redundant checks in zswap_cpu_comp_dead().
2026-03-17 19:19 ` Yosry Ahmed
@ 2026-03-17 21:09 ` Kanchana P. Sridhar
0 siblings, 0 replies; 19+ messages in thread
From: Kanchana P. Sridhar @ 2026-03-17 21:09 UTC (permalink / raw)
To: Yosry Ahmed
Cc: hannes, nphamcs, chengming.zhou, akpm, linux-mm, linux-kernel,
herbert, senozhatsky, Kanchana P. Sridhar
On Tue, Mar 17, 2026 at 12:19 PM Yosry Ahmed <yosry@kernel.org> wrote:
>
> On Mon, Mar 16, 2026 at 6:48 PM Kanchana P. Sridhar
> <kanchanapsridhar2026@gmail.com> wrote:
> >
> > There are presently redundant checks on the per-CPU acomp_ctx and its
> > "req" member in zswap_cpu_comp_dead(): redundant because they are
> > inconsistent with zswap_pool_create()'s handling of failure to allocate
> > the acomp_ctx, and with the expected NULL return value from the
> > acomp_request_alloc() API when it fails to allocate an acomp_req.
> >
> > Fix these by converting them to NULL checks.
> >
> > Add comments in zswap_cpu_comp_prepare() clarifying the expected return
> > values of the crypto_alloc_acomp_node() and acomp_request_alloc() APIs.
> >
> > Suggested-by: Yosry Ahmed <yosry@kernel.org>
> > Signed-off-by: Kanchana P. Sridhar <kanchanapsridhar2026@gmail.com>
>
> Acked-by: Yosry Ahmed <yosry@kernel.org>
Thanks Yosry!
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v2 0/2] zswap pool per-CPU acomp_ctx simplifications
2026-03-17 19:48 ` Yosry Ahmed
@ 2026-03-17 21:15 ` Kanchana P. Sridhar
2026-03-17 21:21 ` Kanchana P. Sridhar
1 sibling, 0 replies; 19+ messages in thread
From: Kanchana P. Sridhar @ 2026-03-17 21:15 UTC (permalink / raw)
To: Yosry Ahmed
Cc: Andrew Morton, hannes, nphamcs, chengming.zhou, linux-mm,
linux-kernel, herbert, senozhatsky, Kanchana P. Sridhar
[-- Attachment #1: Type: text/plain, Size: 1820 bytes --]
On Tue, Mar 17, 2026 at 12:48 PM Yosry Ahmed <yosry@kernel.org> wrote:
>
> On Tue, Mar 17, 2026 at 12:45 PM Andrew Morton
> <akpm@linux-foundation.org> wrote:
> >
> > On Mon, 16 Mar 2026 18:48:00 -0700 "Kanchana P. Sridhar" <kanchanapsridhar2026@gmail.com> wrote:
> >
> > > This patchset first removes redundant checks on the acomp_ctx and its
> > > "req" member in zswap_cpu_comp_dead().
> > >
> > > Next, it makes the zswap pool's per-CPU acomp_ctx resources persist
> > > until the pool is destroyed. It then simplifies the per-CPU
> > > acomp_ctx mutex locking in zswap_compress()/zswap_decompress().
> > >
> > > Code comments are added after allocating, and before checking whether to
> > > deallocate, the per-CPU acomp_ctx's members, based on the expected
> > > crypto API return values and the zswap changes this patchset makes.
> > >
> > > Patch 2 is an independent submission of patch 23 from [1], to
> > > facilitate merging.
> >
> > Thanks.
> >
> > What happened with "mm: zswap: Consistently use IS_ERR_OR_NULL() to
> > check acomp_ctx resources"? Still relevant?
> >
> > https://lkml.kernel.org/r/20260314051632.17931-3-kanchanapsridhar2026@gmail.com
>
> We decided to drop it (and patch 1 here kinda sorta takes its place):
> https://lore.kernel.org/all/CACpmpoeo0LhxkoA5Wx6q+9=2scn_az0u=3bar-JgBvTA-ZBkZg@mail.gmail.com/
Thanks for the clarification, Yosry!
Thanks Andrew, for adding the two patches in v2 to mm-new! Thanks also for
obsoleting the "mm: zswap: Consistently use IS_ERR_OR_NULL() to
check acomp_ctx resources" patch - we decided to replace this with the [v2,1/2]
mm: zswap: Remove redundant checks in zswap_cpu_comp_dead().
<https://patchwork.kernel.org/project/linux-mm/patch/20260317014802.27591-2-kanchanapsridhar2026@gmail.com/>
Best regards,
Kanchana
[-- Attachment #2: Type: text/html, Size: 2551 bytes --]
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v2 0/2] zswap pool per-CPU acomp_ctx simplifications
2026-03-17 19:48 ` Yosry Ahmed
2026-03-17 21:15 ` Kanchana P. Sridhar
@ 2026-03-17 21:21 ` Kanchana P. Sridhar
1 sibling, 0 replies; 19+ messages in thread
From: Kanchana P. Sridhar @ 2026-03-17 21:21 UTC (permalink / raw)
To: Yosry Ahmed
Cc: Andrew Morton, hannes, nphamcs, chengming.zhou, linux-mm,
linux-kernel, herbert, senozhatsky, Kanchana P. Sridhar
On Tue, Mar 17, 2026 at 12:48 PM Yosry Ahmed <yosry@kernel.org> wrote:
>
> On Tue, Mar 17, 2026 at 12:45 PM Andrew Morton
> <akpm@linux-foundation.org> wrote:
> >
> > On Mon, 16 Mar 2026 18:48:00 -0700 "Kanchana P. Sridhar" <kanchanapsridhar2026@gmail.com> wrote:
> >
> > > This patchset first removes redundant checks on the acomp_ctx and its
> > > "req" member in zswap_cpu_comp_dead().
> > >
> > > Next, it makes the zswap pool's per-CPU acomp_ctx resources persist
> > > until the pool is destroyed. It then simplifies the per-CPU
> > > acomp_ctx mutex locking in zswap_compress()/zswap_decompress().
> > >
> > > Code comments are added after allocating, and before checking whether to
> > > deallocate, the per-CPU acomp_ctx's members, based on the expected
> > > crypto API return values and the zswap changes this patchset makes.
> > >
> > > Patch 2 is an independent submission of patch 23 from [1], to
> > > facilitate merging.
> >
> > Thanks.
> >
> > What happened with "mm: zswap: Consistently use IS_ERR_OR_NULL() to
> > check acomp_ctx resources"? Still relevant?
> >
> > https://lkml.kernel.org/r/20260314051632.17931-3-kanchanapsridhar2026@gmail.com
>
> We decided to drop it (and patch 1 here kinda sorta takes its place):
> https://lore.kernel.org/all/CACpmpoeo0LhxkoA5Wx6q+9=2scn_az0u=3bar-JgBvTA-ZBkZg@mail.gmail.com/
Thanks for the clarification, Yosry!
Thanks Andrew, for adding the two patches in v2 to mm-new! Thanks also
for obsoleting the "mm: zswap: Consistently use IS_ERR_OR_NULL() to
check acomp_ctx resources" patch - we decided to replace this with the
"[v2,1/2] mm: zswap: Remove redundant checks in
zswap_cpu_comp_dead().", which makes the acomp_ctx consistency checks
changes in the original mainline code, before refactoring that code
into the new acomp_ctx_free() in "[v2,2/2] mm: zswap: Tie per-CPU
acomp_ctx lifetime to the pool."
Best regards,
Kanchana
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v2 2/2] mm: zswap: Tie per-CPU acomp_ctx lifetime to the pool.
2026-03-17 1:48 ` [PATCH v2 2/2] mm: zswap: Tie per-CPU acomp_ctx lifetime to the pool Kanchana P. Sridhar
@ 2026-03-27 2:23 ` Andrew Morton
2026-03-27 19:30 ` Kanchana P. Sridhar
0 siblings, 1 reply; 19+ messages in thread
From: Andrew Morton @ 2026-03-27 2:23 UTC (permalink / raw)
To: Kanchana P. Sridhar
Cc: hannes, yosry, nphamcs, chengming.zhou, linux-mm, linux-kernel,
herbert, senozhatsky
On Mon, 16 Mar 2026 18:48:02 -0700 "Kanchana P. Sridhar" <kanchanapsridhar2026@gmail.com> wrote:
> Currently, per-CPU acomp_ctx are allocated on pool creation and/or CPU
> hotplug, and destroyed on pool destruction or CPU hotunplug. This
> complicates lifetime management merely to save memory while a CPU is
> offline, which is not very common.
>
> Simplify lifetime management by allocating per-CPU acomp_ctx once on
> pool creation (or CPU hotplug for CPUs onlined later), and keeping them
> allocated until the pool is destroyed.
>
> ...
>
This is a tricky-looking patch and I haven't yet recorded any reviews,
so could someone please dig in?
Sashiko wasn't able to apply this patch for either the v1 or v2
series, so no help there.
Thanks.
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v2 2/2] mm: zswap: Tie per-CPU acomp_ctx lifetime to the pool.
2026-03-27 2:23 ` Andrew Morton
@ 2026-03-27 19:30 ` Kanchana P. Sridhar
2026-03-30 18:32 ` Yosry Ahmed
0 siblings, 1 reply; 19+ messages in thread
From: Kanchana P. Sridhar @ 2026-03-27 19:30 UTC (permalink / raw)
To: Andrew Morton
Cc: hannes, yosry, nphamcs, chengming.zhou, linux-mm, linux-kernel,
herbert, senozhatsky, Kanchana P. Sridhar
On Thu, Mar 26, 2026 at 7:23 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Mon, 16 Mar 2026 18:48:02 -0700 "Kanchana P. Sridhar" <kanchanapsridhar2026@gmail.com> wrote:
>
> > Currently, per-CPU acomp_ctx are allocated on pool creation and/or CPU
> > hotplug, and destroyed on pool destruction or CPU hotunplug. This
> > complicates lifetime management merely to save memory while a CPU is
> > offline, which is not very common.
> >
> > Simplify lifetime management by allocating per-CPU acomp_ctx once on
> > pool creation (or CPU hotplug for CPUs onlined later), and keeping them
> > allocated until the pool is destroyed.
> >
> > ...
> >
>
> This is a tricky-looking patch and I haven't yet recorded any reviews,
> so could someone please dig in?
Hi Andrew, Yosry,
To provide some background, this patch [1] in my v14 patch-series was
Acked-by Yosry, with a minor change requested [2] that is addressed by
the current patch.
Since Sashiko flagged the same issue as [2], I did not automatically
carry forward the Acked-by.
Yosry, can you please confirm if your Acked-by and comments in [2] are
addressed in this current patch?
[1]: https://patchwork.kernel.org/project/linux-mm/patch/20260125033537.334628-24-kanchana.p.sridhar@intel.com/
[2]: https://patchwork.kernel.org/comment/26773986/
Thanks,
Kanchana
>
> Sashiko wasn't able to apply this patch for either the v1 or v2
> series, so no help there.
>
> Thanks.
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v2 2/2] mm: zswap: Tie per-CPU acomp_ctx lifetime to the pool.
2026-03-27 19:30 ` Kanchana P. Sridhar
@ 2026-03-30 18:32 ` Yosry Ahmed
2026-03-30 18:51 ` Andrew Morton
0 siblings, 1 reply; 19+ messages in thread
From: Yosry Ahmed @ 2026-03-30 18:32 UTC (permalink / raw)
To: Kanchana P. Sridhar
Cc: Andrew Morton, hannes, nphamcs, chengming.zhou, linux-mm,
linux-kernel, herbert, senozhatsky
On Fri, Mar 27, 2026 at 12:30 PM Kanchana P. Sridhar
<kanchanapsridhar2026@gmail.com> wrote:
>
> On Thu, Mar 26, 2026 at 7:23 PM Andrew Morton <akpm@linux-foundation.org> wrote:
> >
> > On Mon, 16 Mar 2026 18:48:02 -0700 "Kanchana P. Sridhar" <kanchanapsridhar2026@gmail.com> wrote:
> >
> > > Currently, per-CPU acomp_ctx are allocated on pool creation and/or CPU
> > > hotplug, and destroyed on pool destruction or CPU hotunplug. This
> > > complicates the lifetime management to save memory while a CPU is
> > > offlined, which is not very common.
> > >
> > > Simplify lifetime management by allocating per-CPU acomp_ctx once on
> > > pool creation (or CPU hotplug for CPUs onlined later), and keeping them
> > > allocated until the pool is destroyed.
> > >
> > > ...
> > >
> >
> > This is a tricky-looking patch and I haven't yet recorded any reviews,
> > so could someone please dig in?
>
> Hi Andrew, Yosry,
>
> To provide some background, this patch [1] in my v14 patch-series was
> Acked-by Yosry, with a minor change requested [2] that is addressed by
> the current patch.
>
> Since Sashiko flagged the same issue as [2], I did not automatically
> carry forward the Acked-by.
> Yosry, can you please confirm if your Acked-by and comments in [2] are
> addressed in this current patch?
The patch looks good to me, feel free to add:
Acked-by: Yosry Ahmed <yosry@kernel.org>
That being said, it might be a good idea to resend the patches (new
version for rebase + Ack?) to get Sashiko to take a look, hopefully it
will be able to apply/review the patches this time.
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v2 2/2] mm: zswap: Tie per-CPU acomp_ctx lifetime to the pool.
2026-03-30 18:32 ` Yosry Ahmed
@ 2026-03-30 18:51 ` Andrew Morton
2026-03-30 18:52 ` Yosry Ahmed
0 siblings, 1 reply; 19+ messages in thread
From: Andrew Morton @ 2026-03-30 18:51 UTC (permalink / raw)
To: Yosry Ahmed
Cc: Kanchana P. Sridhar, hannes, nphamcs, chengming.zhou, linux-mm,
linux-kernel, herbert, senozhatsky
On Mon, 30 Mar 2026 11:32:07 -0700 Yosry Ahmed <yosry@kernel.org> wrote:
> The patch looks good to me, feel free to add:
>
> Acked-by: Yosry Ahmed <yosry@kernel.org>
Thanks.
> That being said, it might be a good idea to resend the patches (new
> version for rebase + Ack?) to get Sashiko to take a look,
Yes please.
> hopefully it will be able to apply/review the patches this time.
I expect so, but I don't understand why Sashiko is having trouble with
this.
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v2 2/2] mm: zswap: Tie per-CPU acomp_ctx lifetime to the pool.
2026-03-30 18:51 ` Andrew Morton
@ 2026-03-30 18:52 ` Yosry Ahmed
2026-03-30 18:59 ` Kanchana P. Sridhar
0 siblings, 1 reply; 19+ messages in thread
From: Yosry Ahmed @ 2026-03-30 18:52 UTC (permalink / raw)
To: Andrew Morton
Cc: Kanchana P. Sridhar, hannes, nphamcs, chengming.zhou, linux-mm,
linux-kernel, herbert, senozhatsky
On Mon, Mar 30, 2026 at 11:51 AM Andrew Morton
<akpm@linux-foundation.org> wrote:
>
> On Mon, 30 Mar 2026 11:32:07 -0700 Yosry Ahmed <yosry@kernel.org> wrote:
>
> > The patch looks good to me, feel free to add:
> >
> > Acked-by: Yosry Ahmed <yosry@kernel.org>
>
> Thanks.
>
> > That being said, it might be a good idea to resend the patches (new
> > version for rebase + Ack?) to get Sashiko to take a look,
>
> Yes please.
>
> > hopefully it will be able to apply/review the patches this time.
>
> I expect so, but I don't understand why Sashiko is having trouble with
> this.
I think it was a transient failure.
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v2 2/2] mm: zswap: Tie per-CPU acomp_ctx lifetime to the pool.
2026-03-30 18:52 ` Yosry Ahmed
@ 2026-03-30 18:59 ` Kanchana P. Sridhar
2026-03-30 19:12 ` Andrew Morton
0 siblings, 1 reply; 19+ messages in thread
From: Kanchana P. Sridhar @ 2026-03-30 18:59 UTC (permalink / raw)
To: Yosry Ahmed
Cc: Andrew Morton, hannes, nphamcs, chengming.zhou, linux-mm,
linux-kernel, herbert, senozhatsky, Kanchana P. Sridhar
On Mon, Mar 30, 2026 at 11:52 AM Yosry Ahmed <yosry@kernel.org> wrote:
>
> On Mon, Mar 30, 2026 at 11:51 AM Andrew Morton
> <akpm@linux-foundation.org> wrote:
> >
> > On Mon, 30 Mar 2026 11:32:07 -0700 Yosry Ahmed <yosry@kernel.org> wrote:
> >
> > > The patch looks good to me, feel free to add:
> > >
> > > Acked-by: Yosry Ahmed <yosry@kernel.org>
> >
> > Thanks.
> >
> > > That being said, it might be a good idea to resend the patches (new
> > > version for rebase + Ack?) to get Sashiko to take a look,
> >
> > Yes please.
> >
> > > hopefully it will be able to apply/review the patches this time.
> >
> > I expect so, but I don't understand why Sashiko is having trouble with
> > this.
>
> I think it was a transient failure.
Thanks Yosry and Andrew! Although, when I get the latest mm-unstable,
it already has the v2 patches :)
Should I still send a v3, with just the Acked-by added to patch 2?
Thanks,
Kanchana
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v2 2/2] mm: zswap: Tie per-CPU acomp_ctx lifetime to the pool.
2026-03-30 18:59 ` Kanchana P. Sridhar
@ 2026-03-30 19:12 ` Andrew Morton
2026-03-30 19:13 ` Yosry Ahmed
2026-03-30 21:47 ` Kanchana P. Sridhar
0 siblings, 2 replies; 19+ messages in thread
From: Andrew Morton @ 2026-03-30 19:12 UTC (permalink / raw)
To: Kanchana P. Sridhar
Cc: Yosry Ahmed, hannes, nphamcs, chengming.zhou, linux-mm,
linux-kernel, herbert, senozhatsky
On Mon, 30 Mar 2026 11:59:04 -0700 "Kanchana P. Sridhar" <kanchanapsridhar2026@gmail.com> wrote:
> On Mon, Mar 30, 2026 at 11:52 AM Yosry Ahmed <yosry@kernel.org> wrote:
> >
> > On Mon, Mar 30, 2026 at 11:51 AM Andrew Morton
> > <akpm@linux-foundation.org> wrote:
> > >
> > > On Mon, 30 Mar 2026 11:32:07 -0700 Yosry Ahmed <yosry@kernel.org> wrote:
> > >
> > > > The patch looks good to me, feel free to add:
> > > >
> > > > Acked-by: Yosry Ahmed <yosry@kernel.org>
> > >
> > > Thanks.
> > >
> > > > That being said, it might be a good idea to resend the patches (new
> > > > version for rebase + Ack?) to get Sashiko to take a look,
> > >
> > > Yes please.
> > >
> > > > hopefully it will be able to apply/review the patches this time.
> > >
> > > I expect so, but I don't understand why Sashiko is having trouble with
> > > this.
> >
> > I think it was a transient failure.
>
> Thanks Yosry and Andrew! Although, when I get the latest mm-unstable,
> it already has the v2 patches :)
> Should I still send a v3, with just the Acked-by added to patch 2?
Yes please. Just to retry the Sashiko scan.
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v2 2/2] mm: zswap: Tie per-CPU acomp_ctx lifetime to the pool.
2026-03-30 19:12 ` Andrew Morton
@ 2026-03-30 19:13 ` Yosry Ahmed
2026-03-30 21:53 ` Kanchana P. Sridhar
2026-03-30 21:47 ` Kanchana P. Sridhar
1 sibling, 1 reply; 19+ messages in thread
From: Yosry Ahmed @ 2026-03-30 19:13 UTC (permalink / raw)
To: Andrew Morton
Cc: Kanchana P. Sridhar, hannes, nphamcs, chengming.zhou, linux-mm,
linux-kernel, herbert, senozhatsky
On Mon, Mar 30, 2026 at 12:12 PM Andrew Morton
<akpm@linux-foundation.org> wrote:
>
> On Mon, 30 Mar 2026 11:59:04 -0700 "Kanchana P. Sridhar" <kanchanapsridhar2026@gmail.com> wrote:
>
> > On Mon, Mar 30, 2026 at 11:52 AM Yosry Ahmed <yosry@kernel.org> wrote:
> > >
> > > On Mon, Mar 30, 2026 at 11:51 AM Andrew Morton
> > > <akpm@linux-foundation.org> wrote:
> > > >
> > > > On Mon, 30 Mar 2026 11:32:07 -0700 Yosry Ahmed <yosry@kernel.org> wrote:
> > > >
> > > > > The patch looks good to me, feel free to add:
> > > > >
> > > > > Acked-by: Yosry Ahmed <yosry@kernel.org>
> > > >
> > > > Thanks.
> > > >
> > > > > That being said, it might be a good idea to resend the patches (new
> > > > > version for rebase + Ack?) to get Sashiko to take a look,
> > > >
> > > > Yes please.
> > > >
> > > > > hopefully it will be able to apply/review the patches this time.
> > > >
> > > > I expect so, but I don't understand why Sashiko is having trouble with
> > > > this.
> > >
> > > I think it was a transient failure.
> >
> > Thanks Yosry and Andrew! Although, when I get the latest mm-unstable,
> > it already has the v2 patches :)
> > Should I still send a v3, with just the Acked-by added to patch 2?
>
> Yes please. Just to retry the Sashiko scan.
Sashiko tries to apply on mm-unstable, mm-new, and linux-next.
So the patches need to apply to one of these branches.
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v2 2/2] mm: zswap: Tie per-CPU acomp_ctx lifetime to the pool.
2026-03-30 19:12 ` Andrew Morton
2026-03-30 19:13 ` Yosry Ahmed
@ 2026-03-30 21:47 ` Kanchana P. Sridhar
1 sibling, 0 replies; 19+ messages in thread
From: Kanchana P. Sridhar @ 2026-03-30 21:47 UTC (permalink / raw)
To: Andrew Morton
Cc: Yosry Ahmed, hannes, nphamcs, chengming.zhou, linux-mm,
linux-kernel, herbert, senozhatsky, Kanchana P. Sridhar
On Mon, Mar 30, 2026 at 12:12 PM Andrew Morton
<akpm@linux-foundation.org> wrote:
>
> On Mon, 30 Mar 2026 11:59:04 -0700 "Kanchana P. Sridhar" <kanchanapsridhar2026@gmail.com> wrote:
>
> > On Mon, Mar 30, 2026 at 11:52 AM Yosry Ahmed <yosry@kernel.org> wrote:
> > >
> > > On Mon, Mar 30, 2026 at 11:51 AM Andrew Morton
> > > <akpm@linux-foundation.org> wrote:
> > > >
> > > > On Mon, 30 Mar 2026 11:32:07 -0700 Yosry Ahmed <yosry@kernel.org> wrote:
> > > >
> > > > > The patch looks good to me, feel free to add:
> > > > >
> > > > > Acked-by: Yosry Ahmed <yosry@kernel.org>
> > > >
> > > > Thanks.
> > > >
> > > > > That being said, it might be a good idea to resend the patches (new
> > > > > version for rebase + Ack?) to get Sashiko to take a look,
> > > >
> > > > Yes please.
> > > >
> > > > > hopefully it will be able to apply/review the patches this time.
> > > >
> > > > I expect so, but I don't understand why Sashiko is having trouble with
> > > > this.
> > >
> > > I think it was a transient failure.
> >
> > Thanks Yosry and Andrew! Although, when I get the latest mm-unstable,
> > it already has the v2 patches :)
> > Should I still send a v3, with just the Acked-by added to patch 2?
>
> Yes please. Just to retry the Sashiko scan.
Sure, I will do so.
Thanks,
Kanchana
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v2 2/2] mm: zswap: Tie per-CPU acomp_ctx lifetime to the pool.
2026-03-30 19:13 ` Yosry Ahmed
@ 2026-03-30 21:53 ` Kanchana P. Sridhar
0 siblings, 0 replies; 19+ messages in thread
From: Kanchana P. Sridhar @ 2026-03-30 21:53 UTC (permalink / raw)
To: Yosry Ahmed
Cc: Andrew Morton, hannes, nphamcs, chengming.zhou, linux-mm,
linux-kernel, herbert, senozhatsky, Kanchana P. Sridhar
On Mon, Mar 30, 2026 at 12:13 PM Yosry Ahmed <yosry@kernel.org> wrote:
>
> On Mon, Mar 30, 2026 at 12:12 PM Andrew Morton
> <akpm@linux-foundation.org> wrote:
> >
> > On Mon, 30 Mar 2026 11:59:04 -0700 "Kanchana P. Sridhar" <kanchanapsridhar2026@gmail.com> wrote:
> >
> > > On Mon, Mar 30, 2026 at 11:52 AM Yosry Ahmed <yosry@kernel.org> wrote:
> > > >
> > > > On Mon, Mar 30, 2026 at 11:51 AM Andrew Morton
> > > > <akpm@linux-foundation.org> wrote:
> > > > >
> > > > > On Mon, 30 Mar 2026 11:32:07 -0700 Yosry Ahmed <yosry@kernel.org> wrote:
> > > > >
> > > > > > The patch looks good to me, feel free to add:
> > > > > >
> > > > > > Acked-by: Yosry Ahmed <yosry@kernel.org>
> > > > >
> > > > > Thanks.
> > > > >
> > > > > > That being said, it might be a good idea to resend the patches (new
> > > > > > version for rebase + Ack?) to get Sashiko to take a look,
> > > > >
> > > > > Yes please.
> > > > >
> > > > > > hopefully it will be able to apply/review the patches this time.
> > > > >
> > > > > I expect so, but I don't understand why Sashiko is having trouble with
> > > > > this.
> > > >
> > > > I think it was a transient failure.
> > >
> > > Thanks Yosry and Andrew! Although, when I get the latest mm-unstable,
> > > it already has the v2 patches :)
> > > Should I still send a v3, with just the Acked-by added to patch 2?
> >
> > Yes please. Just to retry the Sashiko scan.
>
> Sashiko tries to apply on mm-unstable, mm-new, and linux-next.
> So the patches need to apply to one of these branches.
Got it.
Thanks,
Kanchana
^ permalink raw reply [flat|nested] 19+ messages in thread
end of thread, other threads:[~2026-03-30 21:53 UTC | newest]
Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-17 1:48 [PATCH v2 0/2] zswap pool per-CPU acomp_ctx simplifications Kanchana P. Sridhar
2026-03-17 1:48 ` [PATCH v2 1/2] mm: zswap: Remove redundant checks in zswap_cpu_comp_dead() Kanchana P. Sridhar
2026-03-17 19:19 ` Yosry Ahmed
2026-03-17 21:09 ` Kanchana P. Sridhar
2026-03-17 1:48 ` [PATCH v2 2/2] mm: zswap: Tie per-CPU acomp_ctx lifetime to the pool Kanchana P. Sridhar
2026-03-27 2:23 ` Andrew Morton
2026-03-27 19:30 ` Kanchana P. Sridhar
2026-03-30 18:32 ` Yosry Ahmed
2026-03-30 18:51 ` Andrew Morton
2026-03-30 18:52 ` Yosry Ahmed
2026-03-30 18:59 ` Kanchana P. Sridhar
2026-03-30 19:12 ` Andrew Morton
2026-03-30 19:13 ` Yosry Ahmed
2026-03-30 21:53 ` Kanchana P. Sridhar
2026-03-30 21:47 ` Kanchana P. Sridhar
2026-03-17 19:45 ` [PATCH v2 0/2] zswap pool per-CPU acomp_ctx simplifications Andrew Morton
2026-03-17 19:48 ` Yosry Ahmed
2026-03-17 21:15 ` Kanchana P. Sridhar
2026-03-17 21:21 ` Kanchana P. Sridhar
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox