From: Jihan LIN via B4 Relay <devnull+linjh22s.gmail.com@kernel.org>
To: Minchan Kim <minchan@kernel.org>,
Sergey Senozhatsky <senozhatsky@chromium.org>,
Jens Axboe <axboe@kernel.dk>
Cc: linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
Jihan LIN <linjh22s@gmail.com>
Subject: [PATCH RFC v2 0/5] zram: Allow zcomps to manage their own streams
Date: Mon, 09 Mar 2026 12:23:03 +0000
Message-ID: <20260309-b4_zcomp_stream-v2-0-7148622326eb@gmail.com>
Hi all,
This RFC series introduces a new interface to allow zram compression
backends to manage their own streams, in addition to the existing
per-CPU stream model.
Currently, zram manages compression contexts via preemptible per-CPU
streams, which strictly limits concurrency to the number of online CPUs.
In contrast, hardware accelerators specialized for page compression
generally process PAGE_SIZE payloads (e.g. 4K) using standard
algorithms. These devices run into the limitations of the current model
due to the following characteristics:
- These devices utilize a hardware queue to batch requests. A typical
queue depth (e.g., 256) far exceeds the number of available CPUs.
- These devices are asymmetric: submission is generally fast and
asynchronous, while completion incurs noticeable latency.
- Some devices only support compression requests, leaving decompression
to be handled by software.
The current "one-size-fits-all" design lacks the flexibility to support
these devices, preventing effective offloading of compression work.
This series proposes a hybrid approach. While maintaining full backward
compatibility with existing backends, it introduces a new pair of
operations, ops->{get,put}_stream(), for backends that wish to manage
their own streams. This allows a backend to handle contention internally
and to dynamically select an execution path for the acquired streams. A
new flag is also introduced to advertise this capability at runtime.
zram_write_page() now prefers backend-managed streams when a bio is
considered asynchronous.
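To make the shape of the proposal concrete, here is a purely
illustrative userspace sketch (not the actual patch code; the flag and
field names are invented for this example) of how an optional ops pair
plus a capability flag could gate the path selection in
zram_write_page():

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical capability flag a backend advertises at runtime. */
#define ZCOMP_MANAGED_STREAMS (1U << 0)

struct zcomp_strm { void *buffer; };

/*
 * Backends that set ZCOMP_MANAGED_STREAMS provide these two ops and
 * take full responsibility for their streams; all other backends keep
 * using the default per-CPU streams.
 */
struct zcomp_ops {
    unsigned int flags;
    struct zcomp_strm *(*get_stream)(void *ctx);
    void (*put_stream)(void *ctx, struct zcomp_strm *strm);
};

/*
 * Mirrors the policy described above: prefer a backend-managed stream
 * only when the bio is asynchronous and the backend advertises the
 * capability; otherwise fall back to the per-CPU (synchronous) path.
 */
static bool use_managed_stream(const struct zcomp_ops *ops,
                               bool bio_is_async)
{
    return bio_is_async && (ops->flags & ZCOMP_MANAGED_STREAMS);
}
```

Synchronous bios and legacy backends both fall through to the existing
per-CPU streams, which is what keeps the change backward compatible.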
Some design decisions are as follows:
1. The proposed get_stream() does not take gfp_t flags to keep the
interface minimal. By design, backends are fully responsible for
allocation safety.
2. The default per-CPU streams now also imply a synchronous execution
path for the backends.
3. The recompression path currently relies on the default per-cpu
streams. This is a trade-off, since recompression is primarily for
memory saving, and hardware accelerators typically prioritize
throughput over compression ratio.
4. Backends must implement internal locking if required.
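Decisions 1 and 4 can be illustrated together with a toy backend-side
stream pool (again a standalone sketch; the toy_* names are invented
and do not appear in the patches): get_stream() takes no gfp_t, the
pool is sized by the backend, and all pool access is serialized by the
backend's own lock.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

#define TOY_POOL_SIZE 4  /* backend-chosen, e.g. its hardware queue depth */

struct toy_strm {
    int in_use;
};

static struct toy_strm toy_pool[TOY_POOL_SIZE];
static atomic_flag toy_lock = ATOMIC_FLAG_INIT;

/*
 * No gfp_t is passed in: the backend alone is responsible for
 * allocation safety (decision 1), and it serializes pool access with
 * its own lock (decision 4).
 */
static struct toy_strm *toy_get_stream(void)
{
    struct toy_strm *s = NULL;

    while (atomic_flag_test_and_set(&toy_lock))
        ; /* spin: internal locking is the backend's job */
    for (int i = 0; i < TOY_POOL_SIZE; i++) {
        if (!toy_pool[i].in_use) {
            toy_pool[i].in_use = 1;
            s = &toy_pool[i];
            break;
        }
    }
    atomic_flag_clear(&toy_lock);
    return s; /* NULL when the pool is exhausted */
}

static void toy_put_stream(struct toy_strm *s)
{
    while (atomic_flag_test_and_set(&toy_lock))
        ;
    s->in_use = 0;
    atomic_flag_clear(&toy_lock);
}
```

A real backend would of course use kernel locking primitives and could
block or fail the request when the pool runs dry; the NULL return here
only stands in for whatever contention-handling policy the backend
chooses.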
This RFC series focuses on the stream management interface required for
accelerator backends, laying the groundwork for batched asynchronous
operations in zram. Since I cannot verify this on specific accelerators
at this moment, a PoC patch that simulates this behavior in software is
included to verify new stream operations without requiring specific
accelerators. The next step would be to add a non-blocking interface to
fully utilize their concurrency, and allow backends to be built as
separate modules. Any feedback would be greatly appreciated.
Signed-off-by: Jihan LIN <linjh22s@gmail.com>
---
Changes in v2:
- Decouple locking from per-CPU streams by introducing struct
percpu_zstrm (PATCH 2/5)
- Refactor zcomp-managed streams to use struct managed_zstrm (PATCH 3/5)
- Add PoC zcomp-managed streams for lz4 backend (PATCH 5/5, only for
demonstration)
- Rebase to v7.0-rc2
- Link to v1: https://lore.kernel.org/r/20260204-b4_zcomp_stream-v1-0-35c06ce1d332@gmail.com
---
Jihan LIN (5):
zram: Rename zcomp_strm_{init, free}()
zram: Separate the lock from zcomp_strm
zram: Introduce zcomp-managed streams
zram: Use zcomp-managed streams for async write requests
zram: Add lz4 PoC for zcomp-managed streams
drivers/block/zram/backend_lz4.c | 464 +++++++++++++++++++++++++++++++++++++--
drivers/block/zram/zcomp.c | 85 +++++--
drivers/block/zram/zcomp.h | 35 ++-
drivers/block/zram/zram_drv.c | 29 ++-
4 files changed, 562 insertions(+), 51 deletions(-)
---
base-commit: 11439c4635edd669ae435eec308f4ab8a0804808
change-id: 20260202-b4_zcomp_stream-7e9f7884e128
Best regards,
--
Jihan LIN <linjh22s@gmail.com>
Thread overview: 11+ messages
2026-03-09 12:23 Jihan LIN via B4 Relay [this message]
2026-03-09 12:23 ` [PATCH RFC v2 1/5] zram: Rename zcomp_strm_{init, free}() Jihan LIN via B4 Relay
2026-03-09 12:23 ` [PATCH RFC v2 2/5] zram: Separate the lock from zcomp_strm Jihan LIN via B4 Relay
2026-03-09 12:23 ` [PATCH RFC v2 3/5] zram: Introduce zcomp-managed streams Jihan LIN via B4 Relay
2026-03-10 1:05 ` Sergey Senozhatsky
2026-03-10 13:31 ` Jihan LIN
2026-03-11 8:58 ` Sergey Senozhatsky
2026-03-09 12:23 ` [PATCH RFC v2 4/5] zram: Use zcomp-managed streams for async write requests Jihan LIN via B4 Relay
2026-03-09 12:23 ` [PATCH RFC v2 5/5] zram: Add lz4 PoC for zcomp-managed streams Jihan LIN via B4 Relay
2026-03-11 8:51 ` [PATCH RFC v2 0/5] zram: Allow zcomps to manage their own streams Sergey Senozhatsky
2026-03-13 14:42 ` Jihan LIN