From: Barry Song <21cnbao@gmail.com>
To: "Sridhar, Kanchana P" <kanchana.p.sridhar@intel.com>
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"hannes@cmpxchg.org" <hannes@cmpxchg.org>,
"yosry.ahmed@linux.dev" <yosry.ahmed@linux.dev>,
"nphamcs@gmail.com" <nphamcs@gmail.com>,
"chengming.zhou@linux.dev" <chengming.zhou@linux.dev>,
"usamaarif642@gmail.com" <usamaarif642@gmail.com>,
"ryan.roberts@arm.com" <ryan.roberts@arm.com>,
"ying.huang@linux.alibaba.com" <ying.huang@linux.alibaba.com>,
"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
"senozhatsky@chromium.org" <senozhatsky@chromium.org>,
"linux-crypto@vger.kernel.org" <linux-crypto@vger.kernel.org>,
"herbert@gondor.apana.org.au" <herbert@gondor.apana.org.au>,
"davem@davemloft.net" <davem@davemloft.net>,
"clabbe@baylibre.com" <clabbe@baylibre.com>,
"ardb@kernel.org" <ardb@kernel.org>,
"ebiggers@google.com" <ebiggers@google.com>,
"surenb@google.com" <surenb@google.com>,
"Accardi, Kristen C" <kristen.c.accardi@intel.com>,
"Gomes, Vinicius" <vinicius.gomes@intel.com>,
"Feghali, Wajdi K" <wajdi.k.feghali@intel.com>,
"Gopal, Vinodh" <vinodh.gopal@intel.com>
Subject: Re: [PATCH v11 24/24] mm: zswap: Batched zswap_compress() with compress batching of large folios.
Date: Fri, 29 Aug 2025 11:31:21 +0800 [thread overview]
Message-ID: <CAGsJ_4xH7aU37w03-4MSJs7Bik6pShLfad8RY8TSzj37AcGwDg@mail.gmail.com> (raw)
In-Reply-To: <PH7PR11MB81213032EE672C69B3FC3370C93AA@PH7PR11MB8121.namprd11.prod.outlook.com>
On Fri, Aug 29, 2025 at 11:05 AM Sridhar, Kanchana P
<kanchana.p.sridhar@intel.com> wrote:
>
>
> > -----Original Message-----
> > From: Barry Song <21cnbao@gmail.com>
> > Sent: Thursday, August 28, 2025 4:54 PM
> > To: Sridhar, Kanchana P <kanchana.p.sridhar@intel.com>
> > Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org;
> > hannes@cmpxchg.org; yosry.ahmed@linux.dev; nphamcs@gmail.com;
> > chengming.zhou@linux.dev; usamaarif642@gmail.com;
> > ryan.roberts@arm.com; ying.huang@linux.alibaba.com; akpm@linux-
> > foundation.org; senozhatsky@chromium.org; linux-crypto@vger.kernel.org;
> > herbert@gondor.apana.org.au; davem@davemloft.net;
> > clabbe@baylibre.com; ardb@kernel.org; ebiggers@google.com;
> > surenb@google.com; Accardi, Kristen C <kristen.c.accardi@intel.com>;
> > Gomes, Vinicius <vinicius.gomes@intel.com>; Feghali, Wajdi K
> > <wajdi.k.feghali@intel.com>; Gopal, Vinodh <vinodh.gopal@intel.com>
> > Subject: Re: [PATCH v11 24/24] mm: zswap: Batched zswap_compress() with
> > compress batching of large folios.
> >
> > > +static bool zswap_compress(struct folio *folio, long start, unsigned int nr_pages,
> > > + struct zswap_entry *entries[], struct zswap_pool *pool,
> > > + int node_id)
> > > {
> > > struct crypto_acomp_ctx *acomp_ctx;
> > > struct scatterlist input, output;
> > > - int comp_ret = 0, alloc_ret = 0;
> > > - unsigned int dlen = PAGE_SIZE;
> > > - unsigned long handle;
> > > - struct zpool *zpool;
> > > + struct zpool *zpool = pool->zpool;
> > > +
> > > + unsigned int dlens[ZSWAP_MAX_BATCH_SIZE];
> > > + int errors[ZSWAP_MAX_BATCH_SIZE];
> > > +
> > > + unsigned int nr_comps = min(nr_pages, pool->compr_batch_size);
> > > + unsigned int i, j;
> > > + int err;
> > > gfp_t gfp;
> > > - u8 *dst;
> > > +
> > > + gfp = GFP_NOWAIT | __GFP_NORETRY | __GFP_HIGHMEM | __GFP_MOVABLE;
> > >
> > > acomp_ctx = raw_cpu_ptr(pool->acomp_ctx);
> > >
> > > mutex_lock(&acomp_ctx->mutex);
> > >
> > > - dst = acomp_ctx->buffers[0];
> > > - sg_init_table(&input, 1);
> > > - sg_set_page(&input, page, PAGE_SIZE, 0);
> > > -
> > > /*
> > > - * We need PAGE_SIZE * 2 here since there maybe over-compression case,
> > > - * and hardware-accelerators may won't check the dst buffer size, so
> > > - * giving the dst buffer with enough length to avoid buffer overflow.
> > > + * Note:
> > > + * [i] refers to the incoming batch space and is used to
> > > + * index into the folio pages, @entries and @errors.
> > > */
> > > - sg_init_one(&output, dst, PAGE_SIZE * 2);
> > > - acomp_request_set_params(acomp_ctx->req, &input, &output, PAGE_SIZE, dlen);
> > > + for (i = 0; i < nr_pages; i += nr_comps) {
> > > + if (nr_comps == 1) {
> > > + sg_init_table(&input, 1);
> > > + sg_set_page(&input, folio_page(folio, start + i), PAGE_SIZE, 0);
> > >
> > > - /*
> > > - * it maybe looks a little bit silly that we send an asynchronous request,
> > > - * then wait for its completion synchronously. This makes the process look
> > > - * synchronous in fact.
> > > - * Theoretically, acomp supports users send multiple acomp requests in one
> > > - * acomp instance, then get those requests done simultaneously. but in this
> > > - * case, zswap actually does store and load page by page, there is no
> > > - * existing method to send the second page before the first page is done
> > > - * in one thread doing zwap.
> > > - * but in different threads running on different cpu, we have different
> > > - * acomp instance, so multiple threads can do (de)compression in parallel.
> > > - */
> > > - comp_ret = crypto_wait_req(crypto_acomp_compress(acomp_ctx->req), &acomp_ctx->wait);
> > > - dlen = acomp_ctx->req->dlen;
> > > - if (comp_ret)
> > > - goto unlock;
> > > + /*
> > > + * We need PAGE_SIZE * 2 here since there maybe over-compression case,
> > > + * and hardware-accelerators may won't check the dst buffer size, so
> > > + * giving the dst buffer with enough length to avoid buffer overflow.
> > > + */
> > > + sg_init_one(&output, acomp_ctx->buffers[0], PAGE_SIZE * 2);
> > > + acomp_request_set_params(acomp_ctx->req, &input,
> > > + &output, PAGE_SIZE, PAGE_SIZE);
> > > +
> > > + errors[i] = crypto_wait_req(crypto_acomp_compress(acomp_ctx->req),
> > > + &acomp_ctx->wait);
> > > + if (unlikely(errors[i]))
> > > + goto compress_error;
> > > +
> > > + dlens[i] = acomp_ctx->req->dlen;
> > > + } else {
> > > + struct page *pages[ZSWAP_MAX_BATCH_SIZE];
> > > + unsigned int k;
> > > +
> > > + for (k = 0; k < nr_pages; ++k)
> > > + pages[k] = folio_page(folio, start + k);
> > > +
> > > + struct swap_batch_comp_data batch_comp_data = {
> > > + .pages = pages,
> > > + .dsts = acomp_ctx->buffers,
> > > + .dlens = dlens,
> > > + .errors = errors,
> > > + .nr_comps = nr_pages,
> > > + };
> >
> > Why would this work given that nr_pages might be larger than
> > pool->compr_batch_size?
>
> You mean the batching call? For batching compressors, nr_pages is
> always <= pool->batch_size, and pool->batch_size is the same as
> pool->compr_batch_size.
I’m actually confused: this feels inconsistent with the earlier

	unsigned int nr_comps = min(nr_pages, pool->compr_batch_size);

So why not just use nr_comps instead?
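
To illustrate what I mean (a completely untested sketch that only reuses
the names already in this patch), I'd expect the batched branch to work
on nr_comps pages starting at @i, something like:

	for (k = 0; k < nr_comps; ++k)
		pages[k] = folio_page(folio, start + i + k);

	struct swap_batch_comp_data batch_comp_data = {
		.pages = pages,
		.dsts = acomp_ctx->buffers,
		.dlens = &dlens[i],	/* results for this batch land at offset i */
		.errors = &errors[i],
		.nr_comps = nr_comps,
	};

That way the loop doesn't silently rely on nr_pages always being
<= pool->compr_batch_size whenever pool->compr_batch_size > 1.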
>
> >
> > unsigned int nr_comps = min(nr_pages, pool->compr_batch_size);
> >
> > So this actually doesn’t happen unless pool->compr_batch_size == 1,
> > but the code is confusing, right?
> >
> > > +
> > > + acomp_ctx->req->kernel_data = &batch_comp_data;
> >
> > Can you actually pass a request larger than pool->compr_batch_size
> > to the crypto driver?
>
> Clarification above..
>
> >
> > By the way, swap_batch_comp_data seems like a poor name. Why should
> > crypto drivers know anything about swap_? kernel_data isn’t ideal either;
> > maybe batch_data would be better?
>
> This will be changing in v12 to use an SG list based on Herbert's suggestions.
>
Cool. Thanks!
Thanks
Barry