From: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>
To: Alex Markuze <amarkuze@redhat.com>, David Howells <dhowells@redhat.com>
Cc: "linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
"idryomov@gmail.com" <idryomov@gmail.com>,
Xiubo Li <xiubli@redhat.com>,
"jlayton@kernel.org" <jlayton@kernel.org>,
"ceph-devel@vger.kernel.org" <ceph-devel@vger.kernel.org>,
"netfs@lists.linux.dev" <netfs@lists.linux.dev>
Subject: Re: Ceph and Netfslib
Date: Wed, 18 Dec 2024 19:06:56 +0000 [thread overview]
Message-ID: <1729f4bf15110c97e0b0590fc715d0837b9ae131.camel@ibm.com> (raw)
In-Reply-To: <3989572.1734546794@warthog.procyon.org.uk>
Hi David,
On Wed, 2024-12-18 at 18:33 +0000, David Howells wrote:
> Hi Alex, Slava,
>
> I don't know whether you know, but I'm working on netfslib-ising ceph with
> an eye to moving all the VFS/VM normal I/O interfaces to netfslib
> (->read_iter, ->write_iter, ->readahead, ->read_folio, ->page_mkwrite,
> ->writepages), though with wrapping/methods by which each network
> filesystem can add its own distinctive flavour.
>
> Also, that would include doing things like content encryption, since that
> is generally useful in filesystems and I have plans to support it in both
> AFS and CIFS as well.
>
> This means that fs/ceph/ will have practically nothing to do with page
> structs or folio structs. All that will be offloaded to netfslib and
> netfslib will just hand iov_iters to the client filesystems, including
> ceph.
>
> This will also allow me to massively simplify the networking code in
> net/ceph/. My aim is to replace all the page array, page list, bio, etc.
> data types in libceph with a single type that just conveys an iov_iter,
> and I have a ceph_databuf type that holds a list of pages in the form of
> a bio_vec[] and I can extract an iov_iter from that to pass to the
> networking.
>
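For context, the kernel already provides iov_iter_bvec() for wrapping a
bio_vec[] in an iterator, so the ceph_databuf idea above can be sketched
roughly as follows. To be clear, this is my own illustration: the struct
layout and function name here are made up, only the iov_iter_bvec() call
from <linux/uio.h> is a real kernel API; David's actual ceph_databuf is in
his branch.

```c
/* Hypothetical sketch only -- not David's actual ceph_databuf code.
 * It just illustrates the bio_vec[] -> iov_iter step he describes. */
struct ceph_databuf_sketch {
	struct bio_vec	*bvec;		/* pages + offsets + lengths */
	unsigned int	nr_bvec;	/* number of entries in bvec[] */
	size_t		length;		/* total bytes described */
};

static void databuf_to_iter(struct ceph_databuf_sketch *dbuf,
			    struct iov_iter *iter)
{
	/* ITER_SOURCE: the iterator will be read from (data egress). */
	iov_iter_bvec(iter, ITER_SOURCE, dbuf->bvec, dbuf->nr_bvec,
		      dbuf->length);
}
```

The resulting iov_iter can then be handed to the messenger without the
caller ever touching page structs directly.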
Sounds like a really good idea to me. Makes a lot of sense.
> Then, for the transmission side, the iov_iter will be passed to the TCP
> socket with MSG_SPLICE_PAGES rather than iterating over the data type and
> passing a page fragment at a time. We fixed this up for nfsd and Chuck
> Lever reported an improvement in throughput (15% if I remember correctly).
>
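The transmit path being described might look something like the sketch
below. Again, this is a hedged illustration, not code from the branch:
sock_sendmsg(), iov_iter_bvec() and the MSG_SPLICE_PAGES flag are real
kernel interfaces, but the function name and error handling are mine.

```c
/* Hypothetical sketch of the splice-based transmit path: hand the whole
 * bvec-backed iterator to the TCP socket in one call rather than sending
 * one page fragment at a time. */
static int send_databuf(struct socket *sock, struct bio_vec *bvec,
			unsigned int nr, size_t len)
{
	struct msghdr msg = {
		/* Ask the socket layer to splice the pages directly
		 * rather than copying the data into the skb. */
		.msg_flags = MSG_SPLICE_PAGES,
	};

	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, bvec, nr, len);
	return sock_sendmsg(sock, &msg);	/* bytes sent or -errno */
}
```

This is where the nfsd throughput win came from: the pages are spliced
into the socket rather than copied fragment by fragment.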
> The patches I have so far can be found here:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=ceph-iter
>
> Note that I have rbd working with the changes I've made to that
> point.
>
>
> Anyway, ... I need to pick someone's brain about whether the per-page
> tracking of snapshots within fs/ceph/ can be simplified.
>
> Firstly, note that there may be a bug in ceph writeback cleanup as it
> stands. It calls folio_detach_private() without holding the folio lock
> (it holds the writeback lock, but that's not sufficient by MM rules).
> This means you have a race between { setting ->private, setting
> PG_private and inc refcount } on one hand and { clearing ->private,
> clearing PG_private and dec refcount } on the other.
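To make the MM rule concrete, this is the pattern the folio private API
expects: detach under the folio lock so the { ->private, PG_private,
refcount } triple changes atomically with respect to anyone attaching.
The sketch below is illustrative only (the helper name is mine); as the
next paragraph of the quote explains, this straightforward version cannot
actually be used from writeback cleanup because of the lock ordering.

```c
/* Hypothetical sketch of what the MM rules ask for -- detach the snap
 * context under the folio lock.  folio_detach_private() clears
 * PG_private and drops the folio reference that attach took. */
static void detach_snap_context(struct folio *folio)
{
	struct ceph_snap_context *snapc;

	folio_lock(folio);
	snapc = folio_detach_private(folio);
	folio_unlock(folio);

	if (snapc)
		ceph_put_snap_context(snapc);
}
```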
>
I assume you mean the ceph_invalidate_folio() method. Am I correct here?
> Unfortunately, you cannot just take the page lock from writeback cleanup
> without running the risk of deadlocking against ->writepages() wanting to
> take PG_lock and then PG_writeback. And you cannot drop PG_writeback
> first as the moment you do that, the page can be deallocated.
>
>
> Secondly, there's a counter, ci->i_wrbuffer_ref, that might actually be
> redundant if we do it right as I_PINNING_NETFS_WB offers an alternative
> way we might do things. If we set this bit, ->write_inode() will be
> called with wbc->unpinned_netfs_wb set when all currently dirty pages
> have been cleaned up (see netfs_unpin_writeback()). netfslib currently
> uses this to pin the fscache objects but it could perhaps also be used
> to pin the writeback cap for ceph.
>
Yeah, ci->i_wrbuffer_ref doesn't look like a very reliable programming
pattern, and if we can do it another way, that would be a safer solution.
However, this counter is used in multiple places in the ceph code, so we
need to find a safe and straightforward way to get rid of it.
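If I understand the I_PINNING_NETFS_WB idea correctly, the shape of a
cap-pinning ->write_inode() might be something like the sketch below.
This is speculation on my part: wbc->unpinned_netfs_wb, ceph_inode() and
ceph_put_cap_refs() are real interfaces, but whether dropping the cap
reference here is the right replacement for i_wrbuffer_ref is exactly the
open question.

```c
/* Hypothetical sketch: when the bit is set, the VFS calls ->write_inode()
 * with wbc->unpinned_netfs_wb once the last dirty page has been cleaned
 * (see netfs_unpin_writeback()), so the writeback cap pin taken when the
 * first page was dirtied could be released here instead of being tracked
 * page by page via ci->i_wrbuffer_ref. */
static int ceph_write_inode_sketch(struct inode *inode,
				   struct writeback_control *wbc)
{
	if (wbc->unpinned_netfs_wb)
		ceph_put_cap_refs(ceph_inode(inode), CEPH_CAP_FILE_WR);
	return 0;
}
```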
>
> Thirdly, I was under the impression that, for any given page/folio, only
> the head snapshot could be altered - and that any older snapshot must be
> flushed before we could allow that.
>
>
> Fourthly, the ceph_snap_context struct holds a list of snaps. Does it
> really need to, or is just the most recent snap for which the folio
> holds changes sufficient?
>
Let me dive into the implementation details. Maybe Alex can share more
details here.
Thanks,
Slava.
Thread overview: 22+ messages [~2024-12-18 19:07 UTC]
2024-12-18 18:33 Ceph and Netfslib David Howells
2024-12-18 18:47 ` Patrick Donnelly
2024-12-18 19:36 ` David Howells
2024-12-18 19:06 ` Viacheslav Dubeyko [this message]
2024-12-18 19:48 ` David Howells
2024-12-23 23:13 ` Viacheslav Dubeyko
2024-12-24 12:56 ` Matthew Wilcox
2024-12-24 21:52 ` Viacheslav Dubeyko
2025-01-09 0:53 ` Viacheslav Dubeyko
2024-12-18 19:43 ` David Howells
2025-03-05 16:34 ` Is EOLDSNAPC actually generated? -- " David Howells
2025-03-05 19:23 ` Alex Markuze
2025-03-05 20:22 ` David Howells
2025-03-06 13:19 ` Alex Markuze
2025-03-06 13:48 ` David Howells
2025-03-06 13:55 ` Alex Markuze
2025-03-06 13:58 ` Venky Shankar
2025-03-06 14:13 ` David Howells
2025-03-06 14:23 ` Alex Markuze
2025-03-06 16:21 ` Gregory Farnum
2025-03-06 17:18 ` Alex Markuze
2025-03-06 15:55 ` David Howells