From: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>
To: "slava@dubeyko.com" <slava@dubeyko.com>,
David Howells <dhowells@redhat.com>
Cc: "dongsheng.yang@easystack.cn" <dongsheng.yang@easystack.cn>,
"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
"ceph-devel@vger.kernel.org" <ceph-devel@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Alex Markuze <amarkuze@redhat.com>,
"jlayton@kernel.org" <jlayton@kernel.org>,
"idryomov@gmail.com" <idryomov@gmail.com>,
"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>
Subject: RE: [RFC PATCH 04/35] ceph: Convert ceph_mds_request::r_pagelist to a databuf
Date: Thu, 20 Mar 2025 20:34:40 +0000
Message-ID: <3fa0bf814ce79765c88211990644a010197b11bf.camel@ibm.com>
In-Reply-To: <2161520.1742212378@warthog.procyon.org.uk>
On Mon, 2025-03-17 at 11:52 +0000, David Howells wrote:
> slava@dubeyko.com wrote:
>
> > > -	err = ceph_pagelist_reserve(pagelist, len + val_size1 + 8);
> > > +	err = ceph_databuf_reserve(dbuf, len + val_size1 + 8,
> > > +				   GFP_KERNEL);
> >
> > I know it's a simple change. But this len + val_size1 + 8 looks
> > confusing anyway. What does this hardcoded 8 mean? :)
>
> You tell me. The '8' is pre-existing.
>
Yeah, I know. I am simply thinking aloud that we need to rework the CephFS code
somehow to make it clearer and easier to understand. But that has no relation
to your change.
> > > -	if (req->r_pagelist) {
> > > -		iinfo.xattr_len = req->r_pagelist->length;
> > > -		iinfo.xattr_data = req->r_pagelist->mapped_tail;
> > > +	if (req->r_dbuf) {
> > > +		iinfo.xattr_len = ceph_databuf_len(req->r_dbuf);
> > > +		iinfo.xattr_data = kmap_ceph_databuf_page(req->r_dbuf, 0);
> >
> > Possibly, it's in another patch. Have we removed req->r_pagelist from
> > the structure?
>
> See patch 20 "libceph: Remove ceph_pagelist".
>
> It cannot be removed here as the kernel must still compile and work at this
> point.
>
> > Do we always have memory pages in ceph_databuf? How will
> > kmap_ceph_databuf_page() behave if it's not a memory page?
>
> Are there other sorts of pages?
>
My point is simple. I assumed that if ceph_databuf can handle multiple types of
memory representation, then it might hold something other than regular memory
pages. Potentially, CXL memory could require some special management in the
future (or maybe not). :) But if we always keep regular memory pages under the
ceph_databuf abstraction, then I don't see any problem here.
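For what it's worth, the map/use/unmap pattern I had in mind is something like
the sketch below, assuming kmap_ceph_databuf_page() just wraps
kmap_local_page() on the bvec's page (I haven't checked the helper beyond what
is quoted in the patch):

	/* Map the first fragment, use it, then drop the local mapping. */
	p = kmap_ceph_databuf_page(req->r_dbuf, 0);
	/* ... access the xattr blob through p ... */
	kunmap_local(p);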
> > Maybe we need to hide kunmap_local() behind something like
> > kunmap_ceph_databuf_page()?
>
> Actually, probably better to rename kmap_ceph_databuf_page() to
> kmap_local_ceph_databuf().
>
> > Maybe it makes sense to call something like ceph_databuf_length()
> > instead of low-level access to dbuf->nr_bvec?
>
> Sounds reasonable. Better to hide the internal workings.
>
> > > +	if (as_ctx->dbuf) {
> > > +		req->r_dbuf = as_ctx->dbuf;
> > > +		as_ctx->dbuf = NULL;
> >
> > Maybe we need something like a swap() method? :)
>
> I could point out that you were complaining about ceph_databuf_get() returning
> a pointer rather than a void ;-).
>
> > > + dbuf = ceph_databuf_req_alloc(2, 0, GFP_KERNEL);
> >
> > So, do we allocate 2 items of zero length here?
>
> You don't. One is the bvec[] count (2) and the other is the amount of memory
> to preallocate (0) and attach to that bvec[].
>
Aaah. I see now. Thanks.
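Just to make sure I read it right, the pattern is then something like this
sketch of mine, using the helper names from this patch:

	/* Two empty bvec[] slots; no payload memory preallocated yet. */
	dbuf = ceph_databuf_req_alloc(2, 0, GFP_KERNEL);
	if (!dbuf)
		goto out;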
> Now, it may make sense to split the API calls to handle a number of different
> scenarios, e.g.: request with just protocol, no pages; request with just
> pages; request with both protocol bits and page list.
>
> > > +	if (ceph_databuf_insert_frag(dbuf, 0, sizeof(*header), GFP_KERNEL) < 0)
> > > +		goto out;
> > > +	if (ceph_databuf_insert_frag(dbuf, 1, PAGE_SIZE, GFP_KERNEL) < 0)
> > > 		goto out;
> > >
> > > +	iov_iter_bvec(&iter, ITER_DEST, &dbuf->bvec[1], 1, len);
> >
> > Is &dbuf->bvec[1] correct? Why do we work with item #1? I think it
> > looks confusing.
>
> Because you have a protocol element (in dbuf->bvec[0]) and a buffer (in
> dbuf->bvec[1]).
It sounds to me like we need two declarations here (something like this):

#define PROTOCOL_ELEMENT_INDEX	0
#define BUFFER_INDEX		1
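and then, purely as an illustration (the names are just placeholders of mine),
the accesses below would read:

	iov_iter_bvec(&iter, ITER_DEST, &dbuf->bvec[BUFFER_INDEX], 1, len);
	...
	header = kmap_ceph_databuf_page(dbuf, PROTOCOL_ELEMENT_INDEX);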
>
> An iterator is attached to the buffer and the iterator then conveys it to
> __ceph_sync_read() as the destination.
>
> If you look a few lines further on in the patch, you can see the first
> fragment being accessed:
>
> > + header = kmap_ceph_databuf_page(dbuf, 0);
> > +
>
> Note that, because the read buffer is very likely a whole page, I split them
> into separate sections rather than trying to allocate an order-1 page as that
> would be more likely to fail.
>
> > > - header.data_len = cpu_to_le32(8 + 8 + 4);
> > > - header.file_offset = 0;
> > > + header->data_len = cpu_to_le32(8 + 8 + 4);
> >
> > I have the same problem understanding this. What does this hardcoded
> > 8 + 8 + 4 value mean? :)
>
> You need to ask a ceph expert. This is nothing specifically to do with my
> changes. However, I suspect it's the size of the message element.
>
Yeah, I see. :)
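Even just spelling the constant out with sizeof() would help readability, if my
guess that the element is two 64-bit fields plus one 32-bit field is right
(that is only my reading of "8 + 8 + 4"; I haven't checked it against the
protocol):

	header->data_len = cpu_to_le32(sizeof(__le64) + sizeof(__le64) +
				       sizeof(__le32));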
> > > - memset(iov.iov_base + boff, 0, PAGE_SIZE - boff);
> > > + p = kmap_ceph_databuf_page(dbuf, 1);
> >
> > Maybe we need to introduce some constants to address pages #0 and #1?
> > Because #0 is the header and I assume #1 is some content.
>
> Whilst that might be useful, I don't know that the 0 and 1... being header and
> content respectively always hold. I haven't checked, but there could even be
> a protocol trailer in some cases as well.
>
> > > -	err = ceph_pagelist_reserve(pagelist,
> > > -				    4 * 2 + name_len + as_ctx->lsmctx.len);
> > > +	err = ceph_databuf_reserve(dbuf, 4 * 2 + name_len + as_ctx->lsmctx.len,
> > > +				   GFP_KERNEL);
> >
> > The 4 * 2 + name_len + as_ctx->lsmctx.len looks unclear to me. It will
> > be good to have some well-defined constants here.
>
> Again, nothing specifically to do with my changes.
>
I completely agree.
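If anybody picks this up as a follow-up cleanup, I imagine something like the
sketch below; the assumption that "4 * 2" stands for two 32-bit length fields
is mine and needs checking:

	/* Hypothetical breakdown of the "4 * 2" above. */
	err = ceph_databuf_reserve(dbuf,
				   2 * sizeof(__le32) + name_len +
				   as_ctx->lsmctx.len,
				   GFP_KERNEL);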
Thanks,
Slava.