From: Barry Song <21cnbao@gmail.com>
To: "Sridhar, Kanchana P" <kanchana.p.sridhar@intel.com>
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	 "hannes@cmpxchg.org" <hannes@cmpxchg.org>,
	"yosry.ahmed@linux.dev" <yosry.ahmed@linux.dev>,
	 "nphamcs@gmail.com" <nphamcs@gmail.com>,
	"chengming.zhou@linux.dev" <chengming.zhou@linux.dev>,
	 "usamaarif642@gmail.com" <usamaarif642@gmail.com>,
	"ryan.roberts@arm.com" <ryan.roberts@arm.com>,
	 "ying.huang@linux.alibaba.com" <ying.huang@linux.alibaba.com>,
	 "akpm@linux-foundation.org" <akpm@linux-foundation.org>,
	 "senozhatsky@chromium.org" <senozhatsky@chromium.org>,
	 "linux-crypto@vger.kernel.org" <linux-crypto@vger.kernel.org>,
	 "herbert@gondor.apana.org.au" <herbert@gondor.apana.org.au>,
	"davem@davemloft.net" <davem@davemloft.net>,
	 "clabbe@baylibre.com" <clabbe@baylibre.com>,
	"ardb@kernel.org" <ardb@kernel.org>,
	 "ebiggers@google.com" <ebiggers@google.com>,
	"surenb@google.com" <surenb@google.com>,
	 "Accardi, Kristen C" <kristen.c.accardi@intel.com>,
	"Gomes, Vinicius" <vinicius.gomes@intel.com>,
	 "Feghali, Wajdi K" <wajdi.k.feghali@intel.com>,
	"Gopal, Vinodh" <vinodh.gopal@intel.com>
Subject: Re: [PATCH v11 22/24] mm: zswap: Allocate pool batching resources if the compressor supports batching.
Date: Fri, 29 Aug 2025 11:42:12 +0800
Message-ID: <CAGsJ_4zMHtYG3rS61PyGfJYd8KwGEw=Gy=g5s5wT_vrEL9fhbA@mail.gmail.com>
In-Reply-To: <PH7PR11MB81216DFB4CA6F22E0ED76026C93AA@PH7PR11MB8121.namprd11.prod.outlook.com>

On Fri, Aug 29, 2025 at 10:57 AM Sridhar, Kanchana P
<kanchana.p.sridhar@intel.com> wrote:
>
>
> > -----Original Message-----
> > From: Barry Song <21cnbao@gmail.com>
> > Sent: Thursday, August 28, 2025 4:29 PM
> > To: Sridhar, Kanchana P <kanchana.p.sridhar@intel.com>
> > Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org;
> > hannes@cmpxchg.org; yosry.ahmed@linux.dev; nphamcs@gmail.com;
> > chengming.zhou@linux.dev; usamaarif642@gmail.com;
> > ryan.roberts@arm.com; ying.huang@linux.alibaba.com; akpm@linux-
> > foundation.org; senozhatsky@chromium.org; linux-crypto@vger.kernel.org;
> > herbert@gondor.apana.org.au; davem@davemloft.net;
> > clabbe@baylibre.com; ardb@kernel.org; ebiggers@google.com;
> > surenb@google.com; Accardi, Kristen C <kristen.c.accardi@intel.com>;
> > Gomes, Vinicius <vinicius.gomes@intel.com>; Feghali, Wajdi K
> > <wajdi.k.feghali@intel.com>; Gopal, Vinodh <vinodh.gopal@intel.com>
> > Subject: Re: [PATCH v11 22/24] mm: zswap: Allocate pool batching resources
> > if the compressor supports batching.
> >
> > > >
> > > > If ZSWAP_MAX_BATCH_SIZE is set to 8 and there is no hardware batching,
> > > > compression is done with a step size of 1. If the hardware step size is 4,
> > > > compression occurs in two steps. If the hardware step size is 6, the first
> > > > compression uses a step size of 6, and the second uses a step size of 2.
> > > > Do you think this will work?
> > >
> > > Hi Barry,
> > >
> > > This would be sub-optimal from both code-simplicity and latency perspectives.
> > > One of the benefits of using the hardware accelerator's "batch parallelism"
> > > is cost amortization across the batch. We might lose this benefit if we make
> > > multiple calls to zswap_compress() to ask the hardware accelerator to
> > > batch compress in smaller batches. Compression throughput would also
> > > be sub-optimal.
> >
> > I guess it wouldn’t be an issue if both ZSWAP_MAX_BATCH_SIZE and
> > pool->compr_batch_size are powers of two. As you mentioned, we still
> > gain an improvement from ZSWAP_MAX_BATCH_SIZE batching even when
> > pool->compr_batch_size == 1, by compressing pages one by one while
> > batching other work such as zswap_entries_cache_alloc_batch()?
> >
> > >
> > > In my patch-series, the rule is simple: if an algorithm has specified a
> > > batch-size, carve out the folio by that "batch-size" # of pages to be
> > > compressed as a batch in zswap_compress(). This custom batch-size
> > > is capped at ZSWAP_MAX_BATCH_SIZE.
> > >
> > > If an algorithm has not specified a batch-size, the default batch-size
> > > is 1. In this case, carve out the folio by ZSWAP_MAX_BATCH_SIZE
> > > # of pages to be compressed as a batch in zswap_compress().
> >
> > Yes, I understand your rule. However, having two per-pool variables is still
> > somewhat confusing. It might be clearer to use a single variable with a
> > comment, since one variable fully determines the value of the other.
> >
> > Can we get the batch_size at runtime based on pool->compr_batch_size?
> >
> > /*
> >  * If hardware compression supports batching, we use the hardware step size.
> >  * Otherwise, we use ZSWAP_MAX_BATCH_SIZE for batching, but still compress
> >  * one page at a time.
> >  */
> > batch_size = pool->compr_batch_size > 1 ? pool->compr_batch_size :
> >              ZSWAP_MAX_BATCH_SIZE;
> >
> > We probably don’t need this if both pool->compr_batch_size and
> > ZSWAP_MAX_BATCH_SIZE are powers of two?
>
> I am not sure I understand this rationale, but I do want to reiterate
> that the patch-set implements a simple set of rules/design choices
> to provide a batching framework for software and hardware compressors.
> It has shown good performance improvements with both, while unifying
> the zswap_store()/zswap_compress() code paths for both.

I’m really curious: if ZSWAP_MAX_BATCH_SIZE = 8 and
compr_batch_size = 4, why wouldn’t batch_size = 8 and
compr_batch_size = 4 perform better than batch_size = 4 and
compr_batch_size = 4?

In short, I’d like the case of compr_batch_size == 1 to be treated the same
as compr_batch_size == 2, 4, etc., since you can still see performance
improvements when ZSWAP_MAX_BATCH_SIZE = 8 and compr_batch_size == 1,
as batching occurs even outside compression.

Therefore, I would expect batch_size == 8 and compr_batch_size == 2 to
perform better than when both are 2.

The only thing preventing this from happening is that compr_batch_size
might be 5, 6, or 7, which are not powers of two?
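
To make the distinction concrete, here is a minimal sketch of the store
loop I have in mind (illustrative only, not code from this series;
compress_sub_batch() is a hypothetical stand-in for calling
zswap_compress() on a sub-batch):

	/* the store path always works in ZSWAP_MAX_BATCH_SIZE chunks */
	for (i = 0; i < nr_pages; i += ZSWAP_MAX_BATCH_SIZE) {
		unsigned int batch = min_t(unsigned int,
					   ZSWAP_MAX_BATCH_SIZE, nr_pages - i);

		/* batched bookkeeping, e.g. zswap_entries_cache_alloc_batch() */

		/* the compressor consumes the chunk in its own step size */
		for (j = 0; j < batch; j += pool->compr_batch_size)
			compress_sub_batch(pool, folio, i + j,
					   min_t(unsigned int,
						 pool->compr_batch_size,
						 batch - j));
	}

With compr_batch_size == 1 the inner loop degenerates to one page per
call, but the outer loop still batches the surrounding work; with
compr_batch_size == 4 and ZSWAP_MAX_BATCH_SIZE == 8 it makes two
compression calls per chunk.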

>
> As explained before, keeping the two variables as distinct u8 members
> of struct zswap_pool is a design choice with these benefits:
>
> 1) Saves compute by avoiding recomputing this in the performance-critical
>     zswap_store() code. I have verified that dynamically computing the
>     batch_size based on pool->compr_batch_size impacts latency.

Ok, I’m a bit surprised, since this small computation wouldn’t
cause a branch misprediction at all.

In any case, if you want to keep both variables, that’s fine.
But can we at least rename one of them? For example, use
pool->store_batch_size and pool->compr_batch_size instead of
pool->batch_size and pool->compr_batch_size, since pool->batch_size
generally has a broader semantic scope than compr_batch_size.
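
Something along these lines, purely as an illustration of the naming
(not the actual struct layout from your patch):

	struct zswap_pool {
		...
		u8 compr_batch_size;	/* compressor's native batch size */
		u8 store_batch_size;	/* pages per zswap_store() batch */
		...
	};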

Thanks
Barry
