public inbox for linux-btrfs@vger.kernel.org
From: Phillip Susi <phill@thesusis.net>
To: Matthew Wilcox <willy@infradead.org>
Cc: linux-fsdevel@vger.kernel.org, Jan Kara <jack@suse.cz>,
	Phillip Lougher <phillip@squashfs.org.uk>,
	linux-erofs@lists.ozlabs.org, linux-btrfs@vger.kernel.org,
	linux-ntfs-dev@lists.sourceforge.net, ntfs3@lists.linux.dev,
	linux-bcache@vger.kernel.org, David Howells <dhowells@redhat.com>,
	Hsin-Yi Wang <hsinyi@chromium.org>
Subject: Re: Readahead for compressed data
Date: Thu, 21 Oct 2021 21:04:45 -0400	[thread overview]
Message-ID: <87tuh9n9w2.fsf@vps.thesusis.net> (raw)
In-Reply-To: <YXHK5HrQpJu9oy8w@casper.infradead.org>


Matthew Wilcox <willy@infradead.org> writes:

> As far as I can tell, the following filesystems support compressed data:
>
> bcachefs, btrfs, erofs, ntfs, squashfs, zisofs
>
> I'd like to make it easier and more efficient for filesystems to
> implement compressed data.  There are a lot of approaches in use today,
> but none of them seem quite right to me.  I'm going to lay out a few
> design considerations next and then propose a solution.  Feel free to
> tell me I've got the constraints wrong, or suggest alternative solutions.
>
> When we call ->readahead from the VFS, the VFS has decided which pages
> are going to be the most useful to bring in, but it doesn't know how
> pages are bundled together into blocks.  As I've learned from talking to
> Gao Xiang, sometimes the filesystem doesn't know either, so this isn't
> something we can teach the VFS.
>
> We (David) added readahead_expand() recently to let the filesystem
> opportunistically add pages to the page cache "around" the area requested
> by the VFS.  That reduces the number of times the filesystem has to
> decompress the same block.  But it can fail (due to memory allocation
> failures or pages already being present in the cache).  So filesystems
> still have to implement some kind of fallback.
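
A toy userspace model (not kernel code; all names and the pages-per-block
figure are illustrative) of the behaviour described above: the VFS picks a
page range, the filesystem tries to widen it to compressed-block boundaries,
and expansion that collides with already-cached pages falls back, so the same
block can end up decompressed more than once:

```python
BLOCK_PAGES = 4  # pages per compressed block (assumed for illustration)

def expand_to_block(start, nr, page_cache):
    """Try to widen [start, start+nr) to block boundaries; keep the
    original range if any extra page is already cached (expansion fails)."""
    lo = (start // BLOCK_PAGES) * BLOCK_PAGES
    hi = -(-(start + nr) // BLOCK_PAGES) * BLOCK_PAGES  # ceiling to boundary
    extra = [p for p in range(lo, hi) if p not in range(start, start + nr)]
    if any(p in page_cache for p in extra):
        return start, nr          # fallback: filesystem handles partial block
    return lo, hi - lo

def readahead(start, nr, page_cache):
    """Return how many block decompressions this read costs."""
    start, nr = expand_to_block(start, nr, page_cache)
    blocks = {p // BLOCK_PAGES for p in range(start, start + nr)
              if p not in page_cache}
    page_cache.update(range(start, start + nr))
    return len(blocks)

cache = set()
print(readahead(1, 2, cache))  # expands to the whole block: 1 decompression
print(readahead(3, 1, cache))  # page already cached: 0 decompressions
```

The second call shows the win from expansion; the fallback path is what forces
real filesystems to cope with decompressing a block for only some of its pages.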

Wouldn't it be better to keep the *compressed* data in the cache and
decompress it multiple times if needed, rather than decompress it once
and cache the decompressed data?  You would spend more CPU time
decompressing repeatedly, but you could cache more data and avoid more
disk IO, which is generally far slower than decompression.
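
A userspace sketch of that trade-off (an illustration using zlib, not kernel
code): the cache holds only the compressed bytes, so every read pays the CPU
cost of decompression, but the cache footprint per block is much smaller:

```python
import zlib

class CompressedCache:
    """Maps block number -> compressed bytes; decompresses on every read."""
    def __init__(self):
        self.blocks = {}

    def store(self, blkno, data):
        self.blocks[blkno] = zlib.compress(data)

    def read(self, blkno):
        # CPU cost paid here on each access; no decompressed copy is kept.
        return zlib.decompress(self.blocks[blkno])

    def cached_bytes(self):
        return sum(len(c) for c in self.blocks.values())

cache = CompressedCache()
block = b"readahead " * 400            # highly compressible sample data
cache.store(0, block)
assert cache.read(0) == block          # every decompression yields the data
print(len(block), cache.cached_bytes())  # compressed copy is far smaller
```

For compressible data the same memory holds several times as many blocks,
at the price of a decompression per read.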




Thread overview: 16+ messages
2021-10-21 20:17 Readahead for compressed data Matthew Wilcox
2021-10-22  0:22 ` Gao Xiang
2021-10-22  1:04 ` Phillip Susi [this message]
2021-10-22  1:28   ` Gao Xiang
2021-10-22  1:39     ` Gao Xiang
2021-10-22  2:09   ` Phillip Lougher
2021-10-22  2:31     ` Gao Xiang
2021-10-22  8:41   ` Jan Kara
2021-10-22  9:11     ` Gao Xiang
2021-10-22  9:22       ` Qu Wenruo
2021-10-22  9:39         ` Gao Xiang
2021-10-22  9:54           ` Gao Xiang
2021-10-22 10:40             ` Qu Wenruo
2021-10-25 18:59     ` Phillip Susi
2021-10-22  4:36 ` Phillip Lougher
2021-10-29  6:15 ` Coly Li
