From: Hou Tao <houtao@huaweicloud.com>
To: Amir Goldstein <amir73il@gmail.com>
Cc: lsf-pc@lists.linux-foundation.org, Nhat Pham <nphamcs@gmail.com>,
Miklos Szeredi <miklos@szeredi.hu>,
Alexei Starovoitov <ast@kernel.org>,
linux-fsdevel@vger.kernel.org, Yonghong Song <yhs@fb.com>,
bpf <bpf@vger.kernel.org>
Subject: Re: [Lsf-pc] [LSF/MM/BPF TOPIC] bpf iterator for file-system
Date: Mon, 24 Apr 2023 14:45:33 +0800
Message-ID: <a1e5d6e0-4772-f42a-96b8-eccefdb6127e@huaweicloud.com>
In-Reply-To: <CAOQ4uxggt_je51t0MWSfRS0o7UFSYj7GDHSJd026kMfF9TvLiA@mail.gmail.com>
Hi,
On 4/16/2023 3:55 PM, Amir Goldstein wrote:
> On Tue, Feb 28, 2023 at 5:47 AM Hou Tao <houtao@huaweicloud.com> wrote:
>> From time to time, new syscalls have been proposed to gain more observability
>> into file systems:
>>
>> (1) getvalues() [0]. It uses a hierarchical namespace API to gather and return
>> multiple values in a single syscall.
>> (2) cachestat() [1]. It returns the cache status (e.g., the number of dirty
>> pages) of a given file in a scalable way.
>>
>> Both of these proposals require adding a new syscall. Here I would like to
>> propose another solution for file-system observability: a bpf iterator for
>> file-system objects. The initial idea came when I was trying to implement a
>> filefrag-like page-cache tool with support for multi-order folios, so that we
>> can know the number of multi-order folios in the page cache and the orders of
>> those folios. After developing a demo for it, I realized that we could use it
>> to provide more observability for file-system objects, e.g., dumping the
>> per-cpu iostat for a super block [2], iterating over all inodes in a
>> super block to dump info for specific inodes (e.g., unlinked but still pinned
>> inodes), or displaying the flags of a specific mount.
>>
>> The BPF iterator was introduced in v5.8 [3] to support flexible content
>> dumping for kernel objects. It works by creating a bpf iterator file [4], a
>> seq-like read-only file whose content is determined by a previously loaded
>> bpf program, so userspace can read the bpf iterator file to get the
>> information it needs. However, there are some unresolved issues:
>> (1) Privilege.
>> Loading the bpf program requires CAP_SYS_ADMIN or CAP_BPF, which means the
>> observability will only be available to privileged processes. Maybe we can
>> load the bpf program through a privileged process and make the bpf iterator
>> file readable by normal users.
>> (2) Preventing the super-block from being pinned.
>> In the current naive implementation, the bpf iterator simply pins the
>> super-block of the passed fd and prevents the super-block from being
>> destroyed. Perhaps fs-pin is a better choice, so the bpf iterator can be
>> deactivated after the filesystem is unmounted.
>>
>> I hope to send out an RFC soon before LSF/MM/BPF for further discussion.
> Hi Hou,
>
> IIUC, there is not much value in making this a cross track session.
> Seems like an FS track session that has not much to do with BPF
> development.
>
> Am I understanding correctly or are there any cross subsystem
> interactions that need to be discussed?
Yes. Although the patchset for the file-system iterator is still not ready, I
think the BPF mechanisms the file-system iterator depends on are already in
place, so a cross-track session may be unnecessary.
>
> Perhaps we can join you as co-speaker for Miklos' traditional
> "fsinfo" session?
Thanks. I would be glad to be a co-speaker for the fsinfo session.
>
> Thanks,
> Amir.
>
>> [0]: https://lore.kernel.org/linux-fsdevel/YnEeuw6fd1A8usjj@miu.piliscsaba.redhat.com/
>> [1]: https://lore.kernel.org/linux-mm/20230219073318.366189-1-nphamcs@gmail.com/
>> [2]: https://lore.kernel.org/linux-fsdevel/CAJfpegsCKEx41KA1S2QJ9gX9BEBG4_d8igA0DT66GFH2ZanspA@mail.gmail.com/
>> [3]: https://lore.kernel.org/bpf/20200509175859.2474608-1-yhs@fb.com/
>> [4]: https://docs.kernel.org/bpf/bpf_iterators.html
>>
Thread overview: 6+ messages
2023-02-28 3:30 [LSF/MM/BPF TOPIC] bpf iterator for file-system Hou Tao
2023-02-28 19:59 ` Viacheslav Dubeyko
2023-03-08 0:31 ` Andrii Nakryiko
2023-04-16 7:55 ` [Lsf-pc] " Amir Goldstein
2023-04-24 6:45 ` Hou Tao [this message]
2023-04-27 15:54 ` Amir Goldstein