From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Barry Song <21cnbao@gmail.com>, Minchan Kim <minchan@kernel.org>,
Andrew Morton <akpm@linux-foundation.org>
Cc: axboe@kernel.dk, linux-block@vger.kernel.org,
senozhatsky@chromium.org, linux-kernel@vger.kernel.org,
zhengtangquan@oppo.com, Barry Song <v-songbaohua@oppo.com>
Subject: Re: [PATCH v2] zram: easy the allocation of zcomp_strm's buffers through vmalloc
Date: Wed, 7 Feb 2024 10:44:42 +0900 [thread overview]
Message-ID: <20240207014442.GI69174@google.com> (raw)
In-Reply-To: <20240206202511.4799-1-21cnbao@gmail.com>
On (24/02/07 09:25), Barry Song wrote:
> From: Barry Song <v-songbaohua@oppo.com>
>
> Firstly, there is no need for zcomp_strm's buffers to be physically
> contiguous.
>
> Secondly, the recent mTHP project has made it possible to swap out
> and swap in large folios. Compressing/decompressing large blocks can
> significantly reduce CPU consumption and improve the compression
> ratio. This requires zRAM to support compression and decompression
> of large objects.
> With large-object support in our out-of-tree zRAM code, we have
> observed many allocation failures during CPU hotplug, as large
> objects need larger buffers. So this change also makes zRAM more
> future-proof once we begin to support multiple object sizes.
>
> Signed-off-by: Barry Song <v-songbaohua@oppo.com>
Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Note:
Taking it in NOT because of the out-of-tree code (we don't really
do that), but because this is executed from CPU offline/online
paths, which can happen on devices with fragmented memory (a valid
concern IMHO).
Minchan, if you have any objections, please chime in.
> @@ -37,7 +38,7 @@ static void zcomp_strm_free(struct zcomp_strm *zstrm)
> {
> if (!IS_ERR_OR_NULL(zstrm->tfm))
> crypto_free_comp(zstrm->tfm);
> - free_pages((unsigned long)zstrm->buffer, 1);
> + vfree(zstrm->buffer);
> zstrm->tfm = NULL;
> zstrm->buffer = NULL;
> }
> @@ -53,7 +54,7 @@ static int zcomp_strm_init(struct zcomp_strm *zstrm, struct zcomp *comp)
> * allocate 2 pages. 1 for compressed data, plus 1 extra for the
> * case when compressed size is larger than the original one
> */
> - zstrm->buffer = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 1);
> + zstrm->buffer = vzalloc(2 * PAGE_SIZE);
> if (IS_ERR_OR_NULL(zstrm->tfm) || !zstrm->buffer) {
> zcomp_strm_free(zstrm);
> return -ENOMEM;
> --
> 2.34.1
>
Thread overview: 8+ messages
2024-02-06 20:25 [PATCH v2] zram: easy the allocation of zcomp_strm's buffers through vmalloc Barry Song
2024-02-07 1:44 ` Sergey Senozhatsky [this message]
2024-02-07 2:40 ` Jens Axboe
2024-02-07 3:14 ` Sergey Senozhatsky
2024-02-07 3:17 ` Jens Axboe
2024-02-07 3:21 ` Sergey Senozhatsky
2024-02-07 5:00 ` Barry Song
2024-02-07 4:36 ` Sergey Senozhatsky