From: Jeffle Xu <jefflexu@linux.alibaba.com>
To: dhowells@redhat.com, linux-cachefs@redhat.com, xiang@kernel.org,
chao@kernel.org, linux-erofs@lists.ozlabs.org
Cc: torvalds@linux-foundation.org, gregkh@linuxfoundation.org,
willy@infradead.org, linux-fsdevel@vger.kernel.org,
joseph.qi@linux.alibaba.com, bo.liu@linux.alibaba.com,
tao.peng@linux.alibaba.com, gerry@linux.alibaba.com,
eguan@linux.alibaba.com, linux-kernel@vger.kernel.org,
luodaowen.backend@bytedance.com, tianzichen@kuaishou.com,
yinxin.x@bytedance.com, zhangjiachen.jaycee@bytedance.com,
zhujia.zj@bytedance.com
Subject: [PATCH v11 20/22] erofs: implement fscache-based data readahead
Date: Mon, 9 May 2022 15:40:26 +0800
Message-ID: <20220509074028.74954-21-jefflexu@linux.alibaba.com>
In-Reply-To: <20220509074028.74954-1-jefflexu@linux.alibaba.com>

Implement fscache-based data readahead. Also register an individual
bdi for each erofs instance to enable readahead.

Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
Reviewed-by: Gao Xiang <hsiangkao@linux.alibaba.com>
---
 fs/erofs/fscache.c | 90 ++++++++++++++++++++++++++++++++++++++++++++++
 fs/erofs/super.c   |  4 +++
 2 files changed, 94 insertions(+)

diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
index 5b779812a5ee..a402d8f0a063 100644
--- a/fs/erofs/fscache.c
+++ b/fs/erofs/fscache.c
@@ -162,12 +162,102 @@ static int erofs_fscache_readpage(struct file *file, struct page *page)
 	return ret;
 }
 
+static void erofs_fscache_unlock_folios(struct readahead_control *rac,
+					size_t len)
+{
+	while (len) {
+		struct folio *folio = readahead_folio(rac);
+
+		len -= folio_size(folio);
+		folio_mark_uptodate(folio);
+		folio_unlock(folio);
+	}
+}
+
+static void erofs_fscache_readahead(struct readahead_control *rac)
+{
+	struct inode *inode = rac->mapping->host;
+	struct super_block *sb = inode->i_sb;
+	size_t len, count, done = 0;
+	erofs_off_t pos;
+	loff_t start, offset;
+	int ret;
+
+	if (!readahead_count(rac))
+		return;
+
+	start = readahead_pos(rac);
+	len = readahead_length(rac);
+
+	do {
+		struct erofs_map_blocks map;
+		struct erofs_map_dev mdev;
+
+		pos = start + done;
+		map.m_la = pos;
+
+		ret = erofs_map_blocks(inode, &map, EROFS_GET_BLOCKS_RAW);
+		if (ret)
+			return;
+
+		offset = start + done;
+		count = min_t(size_t, map.m_llen - (pos - map.m_la),
+			      len - done);
+
+		if (!(map.m_flags & EROFS_MAP_MAPPED)) {
+			struct iov_iter iter;
+
+			iov_iter_xarray(&iter, READ, &rac->mapping->i_pages,
+					offset, count);
+			iov_iter_zero(count, &iter);
+
+			erofs_fscache_unlock_folios(rac, count);
+			ret = count;
+			continue;
+		}
+
+		if (map.m_flags & EROFS_MAP_META) {
+			struct folio *folio = readahead_folio(rac);
+
+			ret = erofs_fscache_readpage_inline(folio, &map);
+			if (!ret) {
+				folio_mark_uptodate(folio);
+				ret = folio_size(folio);
+			}
+
+			folio_unlock(folio);
+			continue;
+		}
+
+		mdev = (struct erofs_map_dev) {
+			.m_deviceid = map.m_deviceid,
+			.m_pa = map.m_pa,
+		};
+		ret = erofs_map_dev(sb, &mdev);
+		if (ret)
+			return;
+
+		ret = erofs_fscache_read_folios(mdev.m_fscache->cookie,
+				rac->mapping, offset, count,
+				mdev.m_pa + (pos - map.m_la));
+		/*
+		 * For the error cases, the folios will be unlocked when
+		 * .readahead() returns.
+		 */
+		if (!ret) {
+			erofs_fscache_unlock_folios(rac, count);
+			ret = count;
+		}
+	} while (ret > 0 && ((done += ret) < len));
+}
+
 static const struct address_space_operations erofs_fscache_meta_aops = {
 	.readpage = erofs_fscache_meta_readpage,
 };
 
 const struct address_space_operations erofs_fscache_access_aops = {
 	.readpage = erofs_fscache_readpage,
+	.readahead = erofs_fscache_readahead,
 };
 
 int erofs_fscache_register_cookie(struct super_block *sb,
diff --git a/fs/erofs/super.c b/fs/erofs/super.c
index c6755bcae4a6..f68ba929100d 100644
--- a/fs/erofs/super.c
+++ b/fs/erofs/super.c
@@ -619,6 +619,10 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
 						     sbi->opt.fsid, true);
 		if (err)
 			return err;
+
+		err = super_setup_bdi(sb);
+		if (err)
+			return err;
 	} else {
 		if (!sb_set_blocksize(sb, EROFS_BLKSIZ)) {
 			erofs_err(sb, "failed to set erofs blksize");
--
2.27.0