linux-fsdevel.vger.kernel.org archive mirror
From: Jan Kara <jack@suse.cz>
To: Ye Bin <yebin@huaweicloud.com>
Cc: viro@zeniv.linux.org.uk, brauner@kernel.org, jack@suse.cz,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	yebin10@huawei.com, zhangxiaoxu5@huawei.com
Subject: Re: [PATCH 1/3] vfs: introduce shrink_icache_sb() helper
Date: Thu, 10 Oct 2024 14:07:49 +0200	[thread overview]
Message-ID: <20241010120749.7x5xdiodu3lwxg7j@quack3> (raw)
In-Reply-To: <20241010112543.1609648-2-yebin@huaweicloud.com>

On Thu 10-10-24 19:25:41, Ye Bin wrote:
> From: Ye Bin <yebin10@huawei.com>
> 
> This patch prepares for supporting drop_caches on a specific file system.
> The shrink_icache_sb() helper walks the superblock inode LRU for freeable
> inodes and attempts to free them.
> 
> Signed-off-by: Ye Bin <yebin10@huawei.com>
> ---
>  fs/inode.c    | 17 +++++++++++++++++
>  fs/internal.h |  1 +
>  2 files changed, 18 insertions(+)
> 
> diff --git a/fs/inode.c b/fs/inode.c
> index 1939f711d2c9..2129b48571b4 100644
> --- a/fs/inode.c
> +++ b/fs/inode.c
> @@ -1045,6 +1045,23 @@ long prune_icache_sb(struct super_block *sb, struct shrink_control *sc)
>  	return freed;
>  }
>  
> +/*
> + * Walk the superblock inode LRU for freeable inodes and attempt to free them.
> + * Inodes to be freed are moved to a temporary list and then are freed outside
> + * inode_lock by dispose_list().
> + */
> +void shrink_icache_sb(struct super_block *sb)
> +{
> +	do {
> +		LIST_HEAD(dispose);
> +
> +		list_lru_walk(&sb->s_inode_lru, inode_lru_isolate,
> +			      &dispose, 1024);
> +		dispose_list(&dispose);
> +	} while (list_lru_count(&sb->s_inode_lru) > 0);
> +}
> +EXPORT_SYMBOL(shrink_icache_sb);

Hum, but this will livelock if we cannot remove all the inodes, won't it?
Now I guess inode_lru_isolate() usually removes busy inodes from the LRU, so
this should not happen in practice, but such behavior is not guaranteed (the
isolate callback can return LRU_SKIP for an inode if its i_lock is contended,
or LRU_RETRY if the inode still has page cache pages). So I think we need
some safety net here...

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


Thread overview: 13+ messages
2024-10-10 11:25 [PATCH 0/3] add support for drop_caches for individual filesystem Ye Bin
2024-10-10 11:25 ` [PATCH 1/3] vfs: introduce shrink_icache_sb() helper Ye Bin
2024-10-10 12:07   ` Jan Kara [this message]
2024-10-10 11:25 ` [PATCH 2/3] sysctl: add support for drop_caches for individual filesystem Ye Bin
2024-10-10 12:16   ` Jan Kara
2024-10-10 12:44     ` yebin (H)
2024-10-10 13:35     ` Benjamin Coddington
2024-10-10 17:04       ` Jan Kara
2024-10-11 11:44         ` Amir Goldstein
2024-10-14 11:24           ` Jan Kara
2024-10-10 13:48   ` Thomas Weißschuh
2024-10-10 17:17   ` Al Viro
2024-10-10 11:25 ` [PATCH 3/3] Documentation: add instructions for using 'drop_fs_caches' sysctl Ye Bin
