From: Barry Song <21cnbao@gmail.com>
To: 牛海程 <2023302111378@whu.edu.cn>
Cc: minchan@kernel.org, ngupta@vflare.org,
sergey.senozhatsky.work@gmail.com, axboe@kernel.dk,
linux-kernel@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: [PATCH/RFC] Zram: Improved Compression with PID/VMA-aware Grouping
Date: Wed, 20 Aug 2025 16:49:15 +0800 [thread overview]
Message-ID: <CAGsJ_4zWHtdYdmDx2NBJVOLkfXEFPtsMmLQL_S5HG4ZC=hRidg@mail.gmail.com> (raw)
In-Reply-To: <1709a150.2340.198c223db7a.Coremail.2023302111378@whu.edu.cn>
On Wed, Aug 20, 2025 at 4:44 PM 牛海程 <2023302111378@whu.edu.cn> wrote:
>
> Dear Linux Kernel Zram Maintainers,
>
> I am an open-source developer and have been working on some potential improvements to the Zram module, focusing on optimizing compression ratios. My work is based on Linux kernel version 5.10.240.
>
> My recent work introduces the following changes:
>
> 1. PID and Virtual Address Tracking: During page swap-in operations at the swap layer, I've implemented a mechanism to record the Process ID (PID) and the corresponding virtual address of each page.
> 2. PID-aware Grouping and VMA Merging in Zram: Within the Zram layer, pages are now grouped by their recorded PIDs. Within each PID group, pages with similar or contiguous virtual addresses are then merged before compression. The rationale is that pages belonging to the same process and located adjacently in virtual memory are likely to contain related data, which can yield better compression ratios when they are compressed together.
This seems unnecessarily complex. How does it differ from the mTHP-based
compression we’ve been experimenting with?
https://lore.kernel.org/linux-mm/20241121222521.83458-1-21cnbao@gmail.com/
In that case, you don't need PID-aware grouping, VMA merging, etc.
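The rationale in the quoted proposal, that virtually adjacent pages of one process often compress better as a single unit than one page at a time, can be illustrated with a toy user-space sketch. This is not kernel code: Python's zlib stands in for LZ4/ZSTD, and the page contents are fabricated purely for illustration.

```python
import zlib

PAGE_SIZE = 4096

# Four fabricated "pages" whose contents are related, the way adjacent
# virtual pages of one process often are (illustrative data only).
pages = [(b"struct item { id=%d }  " % i * 200)[:PAGE_SIZE]
         for i in range(4)]

# Per-page compression, as zram does today: each page is an
# independent compression unit.
individual = sum(len(zlib.compress(p)) for p in pages)

# Grouped compression: merge the virtually contiguous pages and
# compress them as one unit, letting the compressor exploit
# redundancy across page boundaries.
grouped = len(zlib.compress(b"".join(pages)))

print("individual:", individual, "grouped:", grouped)
```

With related page contents the grouped size comes out smaller, which is the effect both the proposal and larger-granularity (e.g. mTHP-based) compression rely on; actual gains depend entirely on the workload's data.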
>
> Preliminary experiments using this approach have shown promising results:
>
> With the LZ4 compression algorithm, we observed an approximate 3% increase in the overall compression ratio.
> With the ZSTD compression algorithm, the improvement was more significant: an approximate 4% increase in the overall compression ratio.
>
> I am writing to ask whether you consider this approach and its observed benefits significant enough to warrant upstreaming into the mainline kernel. I would greatly appreciate your thoughts on the utility and potential implications of these changes, as well as any guidance on further development or formal submission.
>
> Thank you for your time and consideration.
> Best regards,
> Haicheng Niu
Thanks
Barry
next prev parent reply other threads:[~2025-08-20 8:49 UTC|newest]
Thread overview: 3+ messages / expand[flat|nested] mbox.gz Atom feed top
2025-08-19 11:43 [PATCH/RFC] Zram: Improved Compression with PID/VMA-aware Grouping 牛海程
2025-08-20 8:49 ` Barry Song [this message]
2025-08-21 3:11 ` Sergey Senozhatsky
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to='CAGsJ_4zWHtdYdmDx2NBJVOLkfXEFPtsMmLQL_S5HG4ZC=hRidg@mail.gmail.com' \
--to=21cnbao@gmail.com \
--cc=2023302111378@whu.edu.cn \
--cc=axboe@kernel.dk \
--cc=linux-block@vger.kernel.org \
--cc=linux-kernel@vger.kernel.org \
--cc=minchan@kernel.org \
--cc=ngupta@vflare.org \
--cc=sergey.senozhatsky.work@gmail.com \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.