From: David Howells <dhowells@redhat.com>
To: Alex Markuze <amarkuze@redhat.com>,
Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>
Cc: dhowells@redhat.com, Ilya Dryomov <idryomov@gmail.com>,
Xiubo Li <xiubli@redhat.com>, Jeff Layton <jlayton@kernel.org>,
ceph-devel@vger.kernel.org, netfs@lists.linux.dev,
linux-fsdevel@vger.kernel.org
Subject: Ceph and Netfslib
Date: Wed, 18 Dec 2024 18:33:14 +0000
Message-ID: <3989572.1734546794@warthog.procyon.org.uk>
Hi Alex, Slava,
I don't know whether you know, but I'm working on netfslib-ising ceph with an
eye to moving all the normal VFS/VM I/O interfaces to netfslib (->read_iter,
->write_iter, ->readahead, ->read_folio, ->page_mkwrite, ->writepages), though
with wrappers and methods by which each network filesystem can add its own
distinctive flavour.
This would also include things like content encryption, since that is
generally useful in filesystems and I have plans to support it in both AFS and
CIFS as well.
This means that fs/ceph/ will have practically nothing to do with page or
folio structs.  All of that will be offloaded to netfslib, which will just
hand iov_iters to the client filesystems, including ceph.
This will also allow me to massively simplify the networking code in
net/ceph/.  My aim is to replace all the page-array, page-list, bio,
etc. data types in libceph with a single type that just conveys an iov_iter.
To that end, I have a ceph_databuf type that holds a list of pages in the form
of a bio_vec[], from which I can extract an iov_iter to pass to the networking
code.
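
To give a rough idea of what I mean (this is only a sketch - the field and
function names here are my shorthand, not necessarily what's in the ceph-iter
branch):

struct ceph_databuf {
	struct bio_vec	*bvec;		/* pages carrying the data */
	unsigned int	nr_bvec;	/* number of populated bio_vecs */
	size_t		length;		/* total number of data bytes */
	struct iov_iter	iter;		/* iterator handed to the lower layers */
};

/* Point the iterator at the page list before handing it to the messenger.
 * ITER_SOURCE for transmission, ITER_DEST for reception.
 */
static void ceph_databuf_init_iter(struct ceph_databuf *dbuf,
				   unsigned int direction)
{
	iov_iter_bvec(&dbuf->iter, direction, dbuf->bvec, dbuf->nr_bvec,
		      dbuf->length);
}
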
Then, on the transmission side, the iov_iter will be passed to the TCP socket
with MSG_SPLICE_PAGES rather than iterating over the data type and passing a
page fragment at a time.  We fixed this up for nfsd and Chuck Lever reported
an improvement in throughput (15%, if I remember correctly).
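
Roughly, the idea on the socket side is something like this (illustrative
only; ceph_tcp_send_databuf() is a made-up name):

static int ceph_tcp_send_databuf(struct socket *sock,
				 struct ceph_databuf *dbuf)
{
	struct msghdr msg = {
		.msg_flags = MSG_SPLICE_PAGES | MSG_DONTWAIT,
	};

	/* Splice all the pages into the socket in one call rather than
	 * queuing one page fragment at a time.
	 */
	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, dbuf->bvec, dbuf->nr_bvec,
		      dbuf->length);
	return sock_sendmsg(sock, &msg);
}
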
The patches I have so far can be found here:
https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=ceph-iter
Note that I have rbd working with the changes I've made to that point.
Anyway, ... I need to pick someone's brain about whether the per-page tracking
of snapshots within fs/ceph/ can be simplified.
Firstly, note that there may be a bug in ceph writeback cleanup as it stands.
It calls folio_detach_private() without holding the folio lock (it holds the
writeback lock, but that's not sufficient by the MM rules).  This means there
is a race between { setting ->private, setting PG_private and incrementing the
refcount } on one hand and { clearing ->private, clearing PG_private and
decrementing the refcount } on the other.
Unfortunately, you cannot just take the page lock from writeback cleanup
without running the risk of deadlocking against ->writepages() wanting to take
PG_locked and then PG_writeback.  And you cannot drop PG_writeback first, as
the moment you do that, the page can be deallocated.
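
To illustrate the race (a simplified sketch of the two paths, not a proposed
fix; the function names are just for illustration):

/* Path that attaches the snap context when a folio is dirtied - this runs
 * under the folio lock.
 */
static void attach_side(struct folio *folio, struct ceph_snap_context *snapc)
{
	folio_lock(folio);
	/* Sets ->private, sets PG_private and takes a folio ref. */
	folio_attach_private(folio, ceph_get_snap_context(snapc));
	folio_unlock(folio);
}

/* Writeback cleanup path - only PG_writeback is held, not the folio lock,
 * so the three updates below can interleave with the three updates above.
 */
static void cleanup_side(struct folio *folio)
{
	struct ceph_snap_context *snapc;

	/* Clears ->private, clears PG_private and drops the folio ref. */
	snapc = folio_detach_private(folio);
	ceph_put_snap_context(snapc);
	folio_end_writeback(folio);
}
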
Secondly, there's a counter, ci->i_wrbuffer_ref, that might actually be
redundant if we do this right, as I_PINNING_NETFS_WB offers an alternative way
to do things.  If we set this bit, ->write_inode() will be called with
wbc->unpinned_netfs_wb set once all currently dirty pages have been cleaned up
(see netfs_unpin_writeback()).  netfslib currently uses this to pin the
fscache objects, but it could perhaps also be used to pin the writeback cap
for ceph.
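
Something along these lines is what I have in mind (a sketch only, assuming
the cap reference taken for buffered writes could be released here;
ceph_netfs_write_inode() is a hypothetical name):

static int ceph_netfs_write_inode(struct inode *inode,
				  struct writeback_control *wbc)
{
	struct ceph_inode_info *ci = ceph_inode(inode);

	/* Set once all the pages that were dirty whilst I_PINNING_NETFS_WB
	 * was set have been cleaned (see netfs_unpin_writeback()).
	 */
	if (wbc->unpinned_netfs_wb) {
		/* Drop the cap ref that was pinning writeback instead of
		 * counting individual pages in ci->i_wrbuffer_ref.
		 */
		ceph_put_cap_refs(ci, CEPH_CAP_FILE_BUFFER);
	}
	return 0;
}
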
Thirdly, I was under the impression that, for any given page/folio, only the
head snapshot could be altered - and that any older snapshot must be flushed
before we could allow that.
Fourthly, the ceph_snap_context struct holds a list of snaps.  Does it really
need to, or would just the most recent snap for which the folio holds changes
be sufficient?
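
For reference, this is the struct as I read it; the question is whether the
snaps[] array needs to be carried around per folio, or whether seq alone would
do:

struct ceph_snap_context {
	refcount_t	nref;
	u64		seq;		/* seq of the most recent snap */
	u32		num_snaps;
	u64		snaps[];	/* descending list of snap ids */
};
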
Thanks,
David