From: Ye Bin <yebin@huaweicloud.com>
To: viro@zeniv.linux.org.uk, brauner@kernel.org, jack@suse.cz,
linux-fsdevel@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, yebin10@huawei.com,
zhangxiaoxu5@huawei.com
Subject: [PATCH 1/3] vfs: introduce shrink_icache_sb() helper
Date: Thu, 10 Oct 2024 19:25:41 +0800
Message-ID: <20241010112543.1609648-2-yebin@huaweicloud.com>
In-Reply-To: <20241010112543.1609648-1-yebin@huaweicloud.com>

From: Ye Bin <yebin10@huawei.com>

This patch prepares for supporting drop_caches for a specific file system.
The shrink_icache_sb() helper walks the superblock's inode LRU for freeable
inodes and attempts to free them.

Signed-off-by: Ye Bin <yebin10@huawei.com>
---
fs/inode.c | 17 +++++++++++++++++
fs/internal.h | 1 +
2 files changed, 18 insertions(+)

diff --git a/fs/inode.c b/fs/inode.c
index 1939f711d2c9..2129b48571b4 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -1045,6 +1045,23 @@ long prune_icache_sb(struct super_block *sb, struct shrink_control *sc)
 	return freed;
 }
 
+/*
+ * Walk the superblock inode LRU for freeable inodes and attempt to free them.
+ * Inodes to be freed are moved to a temporary list and then are freed outside
+ * inode_lock by dispose_list().
+ */
+void shrink_icache_sb(struct super_block *sb)
+{
+	do {
+		LIST_HEAD(dispose);
+
+		list_lru_walk(&sb->s_inode_lru, inode_lru_isolate,
+			      &dispose, 1024);
+		dispose_list(&dispose);
+	} while (list_lru_count(&sb->s_inode_lru) > 0);
+}
+EXPORT_SYMBOL(shrink_icache_sb);
+
 static void __wait_on_freeing_inode(struct inode *inode, bool is_inode_hash_locked);
 /*
  * Called with the inode lock held.
diff --git a/fs/internal.h b/fs/internal.h
index 81c7a085355c..cee79141e308 100644
--- a/fs/internal.h
+++ b/fs/internal.h
@@ -199,6 +199,7 @@ extern int vfs_open(const struct path *, struct file *);
  * inode.c
  */
 extern long prune_icache_sb(struct super_block *sb, struct shrink_control *sc);
+extern void shrink_icache_sb(struct super_block *sb);
 int dentry_needs_remove_privs(struct mnt_idmap *, struct dentry *dentry);
 bool in_group_or_capable(struct mnt_idmap *idmap,
 			 const struct inode *inode, vfsgid_t vfsgid);
--
2.31.1
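
For illustration, a caller in a later patch of this series could drive the new
helper over every superblock of one file system type. The sketch below is
hypothetical: drop_icache_one_sb() and drop_fs_icache() are made-up names, not
code posted in this series; iterate_supers_type() is the existing VFS iterator
it builds on, and shrink_icache_sb() needs fs/internal.h in scope.

	/*
	 * Hypothetical sketch, not part of this patch: drop the inode
	 * cache for every mounted instance of one file system type.
	 */
	static void drop_icache_one_sb(struct super_block *sb, void *unused)
	{
		/* Walk this sb's inode LRU and free what it gives up. */
		shrink_icache_sb(sb);
	}

	static void drop_fs_icache(struct file_system_type *type)
	{
		/* iterate_supers_type() holds s_umount across each call. */
		iterate_supers_type(type, drop_icache_one_sb, NULL);
	}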