Subject: [PATCH v2 0/2] libceph: add new iov_iter msg_data type and use it for reads
From: Jeff Layton
Date: 2022-06-27 15:54 UTC
To: xiubli, idryomov
Cc: ceph-devel, dhowells, viro, linux-fsdevel

v2:
- make _next handler advance the iterator in preparation for coming
  changes to iov_iter_get_pages

This is an update to the patchset I sent back on June 9th. Since then,
Al informed me that he intends to change iov_iter_get_pages to advance
the iterator automatically. That changes the implementation a bit, in
that we now need to track how far the iov_iter leads the cursor at any
given time.
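
To make that concrete, the _next handler now ends up doing something
like the sketch below. This is illustrative only, with made-up names
(iter_cursor, lastlen), not the code from the actual patch:

#include <linux/mm.h>
#include <linux/uio.h>

struct iter_cursor {
	struct iov_iter	iter;		/* data source/sink for the message */
	size_t		lastlen;	/* bytes the iter leads consumption by */
};

/*
 * _next-style handler: map the next page and advance the iterator now,
 * as iov_iter_get_pages() will soon do on its own. Record the lead so
 * the _advance handler can iov_iter_revert() any excess once it knows
 * how many bytes were actually consumed.
 */
static struct page *iter_cursor_next(struct iter_cursor *cursor,
				     size_t *page_off, size_t *length)
{
	struct page *page;
	ssize_t len;

	len = iov_iter_get_pages(&cursor->iter, &page, PAGE_SIZE, 1,
				 page_off);
	if (len <= 0)
		return NULL;

	/*
	 * Assume the pages are pinned by the upper layers; drop the
	 * reference that iov_iter_get_pages() took.
	 */
	put_page(page);

	iov_iter_advance(&cursor->iter, len);
	cursor->lastlen = len;
	*length = len;
	return page;
}

The matching _advance handler then compares the bytes the messenger
actually consumed against lastlen and reverts the difference.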

I've tested this with xfstests and it seems to behave. Cover letter
from the original posting follows.

------------------------8<-------------------------

This patchset was inspired by some earlier work that David Howells did
to add a similar type.

Currently, we take an iov_iter from the netfs layer, convert it into
an array of pages, and pass that array to the messenger, which
eventually turns it back into an iov_iter before handing it to the
socket.
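
Roughly, the existing buffered read path does something like this (a
simplified sketch with illustrative names, not verbatim kernel code):

#include <linux/mm.h>
#include <linux/uio.h>

static ssize_t issue_read_sketch(struct iov_iter *iter, size_t len)
{
	struct page **pages;
	size_t page_off;
	ssize_t got;

	/* The extra allocation: a throwaway array of page pointers. */
	got = iov_iter_get_pages_alloc(iter, &pages, len, &page_off);
	if (got < 0)
		return got;

	/*
	 * ... attach pages/page_off to the OSD request, which takes
	 * ownership of the array and eventually builds an iov_iter
	 * over it again for the socket ...
	 */
	return got;
}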

This patchset adds a new ceph_msg_data_type that uses an iov_iter
directly instead of requiring an array of pages or bvecs. This allows
us to avoid an extra allocation in the buffered read path, and should
make it easier to plumb in write helpers later.
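
In other words, the messenger's data item presumably grows a member
along these lines (placeholder names here; the real definitions are in
the patches):

#include <linux/uio.h>

enum msg_data_type_sketch {
	MSG_DATA_PAGES,		/* existing: array of pages */
	MSG_DATA_BVECS,		/* existing: bio_vec array */
	MSG_DATA_ITER,		/* new: caller-supplied iov_iter */
};

struct msg_data_sketch {
	enum msg_data_type_sketch type;
	union {
		struct {			/* MSG_DATA_PAGES */
			struct page	**pages;
			size_t		length;
			unsigned int	offset;
		};
		struct bio_vec	*bvecs;		/* MSG_DATA_BVECS */
		struct iov_iter	iter;		/* MSG_DATA_ITER */
	};
};

With that in place, the read path can stash the netfs iterator in the
data item directly and skip the page-array allocation entirely.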

For now, this is still just a slow, stupid implementation that hands
the socket layer a page at a time like the existing messenger does. It
doesn't yet attempt to pass through the iov_iter directly.

I have some patches that pass the cursor's iov_iter directly to the
socket in the receive path, but they require some infrastructure that's
not in mainline yet (iov_iter_scan(), for instance). It should be
possible to do something similar in the send path as well.
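
For illustration, the end goal on the receive side would be something
like the sketch below: build a msghdr around the cursor's iterator and
let the socket copy into it in one call. Hand-wavy and incomplete (no
short-read handling), but it shows the shape:

#include <linux/net.h>
#include <linux/socket.h>
#include <linux/uio.h>

static int recv_iter_direct(struct socket *sock, struct iov_iter *iter)
{
	struct msghdr msg = {
		.msg_flags = MSG_DONTWAIT | MSG_NOSIGNAL,
	};

	/* Point the socket directly at the caller's iterator. */
	msg.msg_iter = *iter;
	return sock_recvmsg(sock, &msg, msg.msg_flags);
}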

Jeff Layton (2):
  libceph: add new iov_iter-based ceph_msg_data_type and
    ceph_osd_data_type
  ceph: use osd_req_op_extent_osd_iter for netfs reads

 fs/ceph/addr.c                  | 18 +------
 include/linux/ceph/messenger.h  |  8 ++++
 include/linux/ceph/osd_client.h |  4 ++
 net/ceph/messenger.c            | 85 +++++++++++++++++++++++++++++++++
 net/ceph/osd_client.c           | 27 +++++++++++
 5 files changed, 125 insertions(+), 17 deletions(-)

-- 
2.36.1

