From: Zaslonko Mikhail <zaslonko@linux.ibm.com>
To: Qu Wenruo <quwenruo.btrfs@gmx.com>, Qu Wenruo <wqu@suse.com>,
linux-btrfs@vger.kernel.org
Cc: linux-s390@vger.kernel.org, Vasily Gorbik <gor@linux.ibm.com>
Subject: Re: [PATCH] btrfs: zlib: do not do unnecessary page copying for compression
Date: Tue, 28 May 2024 12:44:19 +0200
Message-ID: <2db8bc9a-5ff0-40ec-92ba-29c90b6976c7@linux.ibm.com>
In-Reply-To: <a24ef846-95f9-413d-abfa-54b06281047a@gmx.com>
Hello Qu,
On 28.05.2024 00:09, Qu Wenruo wrote:
>
>
> On 2024/5/28 01:55, Zaslonko Mikhail wrote:
>> Hello Qu,
>>
>> I remember implementing btrfs zlib changes related to s390 dfltcc compression support a while ago:
>> https://lwn.net/Articles/808809/
>>
>> The workspace buffer size was indeed enlarged for performance reasons.
>>
>> Please see my comments below.
>>
>> On 27.05.2024 11:24, Qu Wenruo wrote:
>>> [BUG]
>>> In function zlib_compress_folios(), we handle the input by:
>>>
>>> - If there are multiple pages left
>>> We copy the page content into workspace->buf, and use workspace->buf
>>> as input for compression.
>>>
>>> But on x86_64 (which doesn't support dfltcc), that buffer size is just
>>> one page, so we're wasting our CPU time copying the page for no
>>> benefit.
>>>
>>> - If there is only one page left
>>> We use the mapped page address as input for compression.
>>>
>>> The problem is that this means we copy the whole input range except the
>>> last page (which can be as large as 124K), without much obvious benefit.
>>>
>>> Meanwhile the cost is pretty obvious.
>>
>> Actually, the behavior for kernels without dfltcc support (dfltcc is currently available on s390
>> only) should not be affected.
>> We copy input pages to the workspace->buf only if the buffer size is larger than 1 page.
>> At least it worked this way after my original btrfs zlib patch:
>> https://lwn.net/ml/linux-kernel/20200108105103.29028-1-zaslonko@linux.ibm.com/
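For reference, the input handling looks roughly like this (a simplified sketch of the
zlib_compress_folios() logic, not the exact upstream code; copy_folios_to_buffer() and the local
variable names are illustrative only):

/*
 * Simplified sketch: workspace->buf_size is one page unless dfltcc is
 * available (s390), where it is larger, so the copy path below is only
 * ever taken with dfltcc support.
 */
if (workspace->strm.avail_in == 0) {
	if (workspace->buf_size > PAGE_SIZE && bytes_left > PAGE_SIZE) {
		/* Gather several input pages into the larger buffer. */
		unsigned int copied = copy_folios_to_buffer(workspace->buf,
							    folios, start,
							    workspace->buf_size);
		workspace->strm.next_in = workspace->buf;
		workspace->strm.avail_in = copied;
	} else {
		/* Only one page left: feed the mapped page to zlib directly. */
		data_in = kmap_local_folio(in_folio, 0);
		workspace->strm.next_in = data_in;
		workspace->strm.avail_in = min_t(unsigned long, bytes_left, PAGE_SIZE);
	}
}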
>>
>> Has this behavior somehow changed after your page->folio conversion performed for btrfs?
>> https://lore.kernel.org/all/cover.1706521511.git.wqu@suse.com/
>
> My bad, I forgot that the buf_size for non-S390 systems is fixed to one
> page, and thus the page copy is not utilized on x86_64.
>
> But I'm still wondering: if we do not use a 4-page buffer, how much of a
> performance penalty would there be?
>
> One of the objectives is to prepare for the incoming sector-perfect
> subpage compression support, thus I'm re-checking the existing
> compression code, preparing to change it to be subpage compatible.
>
> If we can simplify the behavior without too large a performance penalty,
> can we consider just using a single page as the buffer?
Based on my earlier estimates, the bigger buffer provided up to a 60% performance gain for inflate and
up to 30% for deflate on s390 with dfltcc support.
I don't think giving that up for the sake of simplification would be a good idea.
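For context, the size difference comes from the workspace allocation. Roughly (a simplified sketch
based on my original dfltcc patch, not necessarily the current upstream code):

/*
 * Rough sketch of the buffer sizing in zlib_alloc_workspace()
 * (simplified; the 4-page size matches my original dfltcc patch).
 */
workspace->buf = NULL;
if (zlib_deflate_dfltcc_enabled()) {
	/* s390 DFLTCC: use a larger buffer so the hardware gets bigger chunks. */
	workspace->buf = kmalloc(4 * PAGE_SIZE,
				 __GFP_NOMEMALLOC | __GFP_NORETRY |
				 __GFP_NOWARN | GFP_NOIO);
	workspace->buf_size = 4 * PAGE_SIZE;
}
if (!workspace->buf) {
	/* Default (and the only case without dfltcc): a single page. */
	workspace->buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
	workspace->buf_size = PAGE_SIZE;
}

So on x86_64 the copy path is never reached, while on s390 with dfltcc it is exactly what buys the
performance numbers above.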
>
> Thanks,
> Qu