* small sunrpc cleanups v2
@ 2025-05-15 11:48 Christoph Hellwig
From: Christoph Hellwig @ 2025-05-15 11:48 UTC (permalink / raw)
To: Chuck Lever, Jeff Layton, Trond Myklebust, Anna Schumaker
Cc: NeilBrown, Olga Kornievskaia, Dai Ngo, Tom Talpey, linux-nfs
Hi all,
just a few small cleanups done while looking at the code.
Changes since v1:
- invert polarity of a flag
Diffstat:
fs/nfsd/nfs3proc.c | 2
fs/nfsd/nfsproc.c | 2
include/linux/sunrpc/xdr.h | 3
net/sunrpc/socklib.c | 164 ++++++++++++++++-----------------------------
net/sunrpc/xdr.c | 11 +--
5 files changed, 66 insertions(+), 116 deletions(-)
^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH 1/3] sunrpc: simplify xdr_init_encode_pages
@ 2025-05-15 11:48 ` Christoph Hellwig
From: Christoph Hellwig @ 2025-05-15 11:48 UTC (permalink / raw)
To: Chuck Lever, Jeff Layton, Trond Myklebust, Anna Schumaker
Cc: NeilBrown, Olga Kornievskaia, Dai Ngo, Tom Talpey, linux-nfs

The rqst argument to xdr_init_encode_pages is set to NULL by all
callers, and pages is always set to buf->pages.  Remove the two
arguments and hardcode the assignments.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
---
 fs/nfsd/nfs3proc.c         |  2 +-
 fs/nfsd/nfsproc.c          |  2 +-
 include/linux/sunrpc/xdr.h |  3 +--
 net/sunrpc/xdr.c           | 11 ++++-------
 4 files changed, 7 insertions(+), 11 deletions(-)

diff --git a/fs/nfsd/nfs3proc.c b/fs/nfsd/nfs3proc.c
index 8902fae8c62d..6c94042b03fa 100644
--- a/fs/nfsd/nfs3proc.c
+++ b/fs/nfsd/nfs3proc.c
@@ -562,7 +562,7 @@ static void nfsd3_init_dirlist_pages(struct svc_rqst *rqstp,
 	buf->pages = rqstp->rq_next_page;
 	rqstp->rq_next_page += (buf->buflen + PAGE_SIZE - 1) >> PAGE_SHIFT;
 
-	xdr_init_encode_pages(xdr, buf, buf->pages, NULL);
+	xdr_init_encode_pages(xdr, buf);
 }
 
 /*
diff --git a/fs/nfsd/nfsproc.c b/fs/nfsd/nfsproc.c
index 7c573d792252..f1c2c096804b 100644
--- a/fs/nfsd/nfsproc.c
+++ b/fs/nfsd/nfsproc.c
@@ -577,7 +577,7 @@ static void nfsd_init_dirlist_pages(struct svc_rqst *rqstp,
 	buf->pages = rqstp->rq_next_page;
 	rqstp->rq_next_page++;
 
-	xdr_init_encode_pages(xdr, buf, buf->pages, NULL);
+	xdr_init_encode_pages(xdr, buf);
 }
 
 /*
diff --git a/include/linux/sunrpc/xdr.h b/include/linux/sunrpc/xdr.h
index a2ab813a9800..29d3a7659727 100644
--- a/include/linux/sunrpc/xdr.h
+++ b/include/linux/sunrpc/xdr.h
@@ -242,8 +242,7 @@ typedef int	(*kxdrdproc_t)(struct rpc_rqst *rqstp, struct xdr_stream *xdr,
 
 extern void xdr_init_encode(struct xdr_stream *xdr, struct xdr_buf *buf,
 		__be32 *p, struct rpc_rqst *rqst);
-extern void xdr_init_encode_pages(struct xdr_stream *xdr, struct xdr_buf *buf,
-		struct page **pages, struct rpc_rqst *rqst);
+void xdr_init_encode_pages(struct xdr_stream *xdr, struct xdr_buf *buf);
 extern __be32 *xdr_reserve_space(struct xdr_stream *xdr, size_t nbytes);
 extern int xdr_reserve_space_vec(struct xdr_stream *xdr, size_t nbytes);
 extern void __xdr_commit_encode(struct xdr_stream *xdr);
diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
index 4e003cb516fe..1ab973d3e324 100644
--- a/net/sunrpc/xdr.c
+++ b/net/sunrpc/xdr.c
@@ -992,21 +992,18 @@ EXPORT_SYMBOL_GPL(xdr_init_encode);
  * xdr_init_encode_pages - Initialize an xdr_stream for encoding into pages
  * @xdr: pointer to xdr_stream struct
  * @buf: pointer to XDR buffer into which to encode data
- * @pages: list of pages to decode into
- * @rqst: pointer to controlling rpc_rqst, for debugging
  *
  */
-void xdr_init_encode_pages(struct xdr_stream *xdr, struct xdr_buf *buf,
-		struct page **pages, struct rpc_rqst *rqst)
+void xdr_init_encode_pages(struct xdr_stream *xdr, struct xdr_buf *buf)
 {
 	xdr_reset_scratch_buffer(xdr);
 
 	xdr->buf = buf;
-	xdr->page_ptr = pages;
+	xdr->page_ptr = buf->pages;
 	xdr->iov = NULL;
-	xdr->p = page_address(*pages);
+	xdr->p = page_address(*xdr->page_ptr);
 	xdr->end = (void *)xdr->p + min_t(u32, buf->buflen, PAGE_SIZE);
-	xdr->rqst = rqst;
+	xdr->rqst = NULL;
 }
 EXPORT_SYMBOL_GPL(xdr_init_encode_pages);
 
-- 
2.47.2
* [PATCH 2/3] sunrpc: simplify xdr_partial_copy_from_skb
@ 2025-05-15 11:48 ` Christoph Hellwig
From: Christoph Hellwig @ 2025-05-15 11:48 UTC (permalink / raw)
To: Chuck Lever, Jeff Layton, Trond Myklebust, Anna Schumaker
Cc: NeilBrown, Olga Kornievskaia, Dai Ngo, Tom Talpey, linux-nfs

csum_partial_copy_to_xdr can handle a checksumming and non-checksumming
case and implements this using a callback, which leads to a lot of
boilerplate code and indirect calls in the fast path.

Switch to storing a need_checksum flag in struct xdr_skb_reader instead
to remove the indirect call and simplify the code.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 net/sunrpc/socklib.c | 163 ++++++++++++++++---------------------------
 1 file changed, 59 insertions(+), 104 deletions(-)

diff --git a/net/sunrpc/socklib.c b/net/sunrpc/socklib.c
index 1b2b84feeec6..75e31e3a3ced 100644
--- a/net/sunrpc/socklib.c
+++ b/net/sunrpc/socklib.c
@@ -27,97 +27,60 @@
 struct xdr_skb_reader {
 	struct sk_buff *skb;
 	unsigned int offset;
+	bool need_checksum;
 	size_t count;
 	__wsum csum;
 };
 
-typedef size_t (*xdr_skb_read_actor)(struct xdr_skb_reader *desc, void *to,
-		size_t len);
-
 /**
  * xdr_skb_read_bits - copy some data bits from skb to internal buffer
  * @desc: sk_buff copy helper
  * @to: copy destination
  * @len: number of bytes to copy
  *
- * Possibly called several times to iterate over an sk_buff and copy
- * data out of it.
+ * Possibly called several times to iterate over an sk_buff and copy data out of
+ * it.
 */
 static size_t
 xdr_skb_read_bits(struct xdr_skb_reader *desc, void *to, size_t len)
 {
-	if (len > desc->count)
-		len = desc->count;
-	if (unlikely(skb_copy_bits(desc->skb, desc->offset, to, len)))
-		return 0;
-	desc->count -= len;
-	desc->offset += len;
-	return len;
-}
+	len = min(len, desc->count);
+
+	if (desc->need_checksum) {
+		__wsum csum;
+
+		csum = skb_copy_and_csum_bits(desc->skb, desc->offset, to, len);
+		desc->csum = csum_block_add(desc->csum, csum, desc->offset);
+	} else {
+		if (unlikely(skb_copy_bits(desc->skb, desc->offset, to, len)))
+			return 0;
+	}
 
-/**
- * xdr_skb_read_and_csum_bits - copy and checksum from skb to buffer
- * @desc: sk_buff copy helper
- * @to: copy destination
- * @len: number of bytes to copy
- *
- * Same as skb_read_bits, but calculate a checksum at the same time.
- */
-static size_t xdr_skb_read_and_csum_bits(struct xdr_skb_reader *desc, void *to, size_t len)
-{
-	unsigned int pos;
-	__wsum csum2;
-
-	if (len > desc->count)
-		len = desc->count;
-	pos = desc->offset;
-	csum2 = skb_copy_and_csum_bits(desc->skb, pos, to, len);
-	desc->csum = csum_block_add(desc->csum, csum2, pos);
 	desc->count -= len;
 	desc->offset += len;
 	return len;
 }
 
-/**
- * xdr_partial_copy_from_skb - copy data out of an skb
- * @xdr: target XDR buffer
- * @base: starting offset
- * @desc: sk_buff copy helper
- * @copy_actor: virtual method for copying data
- *
- */
 static ssize_t
-xdr_partial_copy_from_skb(struct xdr_buf *xdr, unsigned int base, struct xdr_skb_reader *desc, xdr_skb_read_actor copy_actor)
+xdr_partial_copy_from_skb(struct xdr_buf *xdr, struct xdr_skb_reader *desc)
 {
-	struct page **ppage = xdr->pages;
-	unsigned int len, pglen = xdr->page_len;
-	ssize_t copied = 0;
-	size_t ret;
-
-	len = xdr->head[0].iov_len;
-	if (base < len) {
-		len -= base;
-		ret = copy_actor(desc, (char *)xdr->head[0].iov_base + base, len);
-		copied += ret;
-		if (ret != len || !desc->count)
-			goto out;
-		base = 0;
-	} else
-		base -= len;
-
-	if (unlikely(pglen == 0))
-		goto copy_tail;
-	if (unlikely(base >= pglen)) {
-		base -= pglen;
-		goto copy_tail;
-	}
-	if (base || xdr->page_base) {
-		pglen -= base;
-		base += xdr->page_base;
-		ppage += base >> PAGE_SHIFT;
-		base &= ~PAGE_MASK;
-	}
-	do {
+	struct page **ppage = xdr->pages + (xdr->page_base >> PAGE_SHIFT);
+	unsigned int poff = xdr->page_base & ~PAGE_MASK;
+	unsigned int pglen = xdr->page_len;
+	ssize_t copied = 0;
+	size_t ret;
+
+	if (xdr->head[0].iov_len == 0)
+		return 0;
+
+	ret = xdr_skb_read_bits(desc, xdr->head[0].iov_base,
+			xdr->head[0].iov_len);
+	if (ret != xdr->head[0].iov_len || !desc->count)
+		return ret;
+	copied += ret;
+
+	while (pglen) {
+		unsigned int len = min(PAGE_SIZE - poff, pglen);
 		char *kaddr;
 
 		/* ACL likes to be lazy in allocating pages - ACLs
@@ -126,36 +89,29 @@ xdr_partial_copy_from_skb(struct xdr_buf *xdr, unsigned int base, struct xdr_skb
 			*ppage = alloc_page(GFP_NOWAIT | __GFP_NOWARN);
 			if (unlikely(*ppage == NULL)) {
 				if (copied == 0)
-					copied = -ENOMEM;
-				goto out;
+					return -ENOMEM;
+				return copied;
 			}
 		}
 
-		len = PAGE_SIZE;
 		kaddr = kmap_atomic(*ppage);
-		if (base) {
-			len -= base;
-			if (pglen < len)
-				len = pglen;
-			ret = copy_actor(desc, kaddr + base, len);
-			base = 0;
-		} else {
-			if (pglen < len)
-				len = pglen;
-			ret = copy_actor(desc, kaddr, len);
-		}
+		ret = xdr_skb_read_bits(desc, kaddr + poff, len);
 		flush_dcache_page(*ppage);
 		kunmap_atomic(kaddr);
+
 		copied += ret;
 		if (ret != len || !desc->count)
-			goto out;
+			return copied;
 		ppage++;
-	} while ((pglen -= len) != 0);
-copy_tail:
-	len = xdr->tail[0].iov_len;
-	if (base < len)
-		copied += copy_actor(desc, (char *)xdr->tail[0].iov_base + base, len - base);
-out:
+		pglen -= len;
+		poff = 0;
+	}
+
+	if (xdr->tail[0].iov_len) {
+		copied += xdr_skb_read_bits(desc, xdr->tail[0].iov_base,
+				xdr->tail[0].iov_len);
+	}
+
 	return copied;
 }
 
@@ -169,17 +125,22 @@ xdr_partial_copy_from_skb(struct xdr_buf *xdr, unsigned int base, struct xdr_skb
  */
 int csum_partial_copy_to_xdr(struct xdr_buf *xdr, struct sk_buff *skb)
 {
-	struct xdr_skb_reader desc;
-
-	desc.skb = skb;
-	desc.offset = 0;
-	desc.count = skb->len - desc.offset;
+	struct xdr_skb_reader desc = {
+		.skb = skb,
+		.count = skb->len - desc.offset,
+	};
 
-	if (skb_csum_unnecessary(skb))
-		goto no_checksum;
+	if (skb_csum_unnecessary(skb)) {
+		if (xdr_partial_copy_from_skb(xdr, &desc) < 0)
+			return -1;
+		if (desc.count)
+			return -1;
+		return 0;
+	}
 
+	desc.need_checksum = true;
 	desc.csum = csum_partial(skb->data, desc.offset, skb->csum);
-	if (xdr_partial_copy_from_skb(xdr, 0, &desc, xdr_skb_read_and_csum_bits) < 0)
+	if (xdr_partial_copy_from_skb(xdr, &desc) < 0)
 		return -1;
 	if (desc.offset != skb->len) {
 		__wsum csum2;
@@ -194,12 +155,6 @@ int csum_partial_copy_to_xdr(struct xdr_buf *xdr, struct sk_buff *skb)
 	    !skb->csum_complete_sw)
 		netdev_rx_csum_fault(skb->dev, skb);
 	return 0;
-no_checksum:
-	if (xdr_partial_copy_from_skb(xdr, 0, &desc, xdr_skb_read_bits) < 0)
-		return -1;
-	if (desc.count)
-		return -1;
-	return 0;
 }
 EXPORT_SYMBOL_GPL(csum_partial_copy_to_xdr);
 
-- 
2.47.2
* Re: [PATCH 2/3] sunrpc: simplify xdr_partial_copy_from_skb
@ 2025-05-15 15:45 ` Jeff Layton
From: Jeff Layton @ 2025-05-15 15:45 UTC (permalink / raw)
To: Christoph Hellwig, Chuck Lever, Trond Myklebust, Anna Schumaker
Cc: NeilBrown, Olga Kornievskaia, Dai Ngo, Tom Talpey, linux-nfs

On Thu, 2025-05-15 at 13:48 +0200, Christoph Hellwig wrote:
> csum_partial_copy_to_xdr can handle a checksumming and non-checksumming
> case and implements this using a callback, which leads to a lot of
> boilerplate code and indirect calls in the fast path.
> 
> Switch to storing a need_checksum flag in struct xdr_skb_reader instead
> to remove the indirect call and simplify the code.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> [...]

Reviewed-by: Jeff Layton <jlayton@kernel.org>

^ permalink raw reply	[flat|nested] 9+ messages in thread
* [PATCH 3/3] sunrpc: unexport csum_partial_copy_to_xdr
@ 2025-05-15 11:48 ` Christoph Hellwig
From: Christoph Hellwig @ 2025-05-15 11:48 UTC (permalink / raw)
To: Chuck Lever, Jeff Layton, Trond Myklebust, Anna Schumaker
Cc: NeilBrown, Olga Kornievskaia, Dai Ngo, Tom Talpey, linux-nfs

csum_partial_copy_to_xdr is only used inside the sunrpc module, so
remove the export.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
---
 net/sunrpc/socklib.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/net/sunrpc/socklib.c b/net/sunrpc/socklib.c
index 75e31e3a3ced..3c7e8e677e91 100644
--- a/net/sunrpc/socklib.c
+++ b/net/sunrpc/socklib.c
@@ -156,7 +156,6 @@ int csum_partial_copy_to_xdr(struct xdr_buf *xdr, struct sk_buff *skb)
 		netdev_rx_csum_fault(skb->dev, skb);
 	return 0;
 }
-EXPORT_SYMBOL_GPL(csum_partial_copy_to_xdr);
 
 static inline int xprt_sendmsg(struct socket *sock, struct msghdr *msg,
 		size_t seek)
-- 
2.47.2

^ permalink raw reply related	[flat|nested] 9+ messages in thread
* Re: small sunrpc cleanups v2
@ 2025-05-15 19:03 ` Chuck Lever
From: Chuck Lever @ 2025-05-15 19:03 UTC (permalink / raw)
To: Christoph Hellwig, Jeff Layton, Trond Myklebust, Anna Schumaker
Cc: NeilBrown, Olga Kornievskaia, Dai Ngo, Tom Talpey, linux-nfs

On Thu, 15 May 2025 13:48:44 +0200, Christoph Hellwig wrote:
> just a few small cleanups done while looking at the code.
> 
> Changes since v1:
>   - invert polarity of a flag
> 
> [...]

Applied to nfsd-testing, thanks!

I can take these through nfsd-next (for v6.17).

Note: I generally prefer breaking up patches like 2/3 into smaller
steps.

[1/3] sunrpc: simplify xdr_init_encode_pages
      commit: cee96da0c08381059771458ad46c00ed7fabfcd8
[2/3] sunrpc: simplify xdr_partial_copy_from_skb
      commit: 9a5441cbb8a1259d62bdf11deed77c172f766c23
[3/3] sunrpc: unexport csum_partial_copy_to_xdr
      commit: afd5de3042dfa3aed96774892aeee7f5592a1046

-- 
Chuck Lever

^ permalink raw reply	[flat|nested] 9+ messages in thread
* small sunrpc cleanups
@ 2025-05-13  8:57 Christoph Hellwig
From: Christoph Hellwig @ 2025-05-13 8:57 UTC (permalink / raw)
To: Chuck Lever, Jeff Layton, Trond Myklebust, Anna Schumaker
Cc: NeilBrown, Olga Kornievskaia, Dai Ngo, Tom Talpey, linux-nfs

Hi all,

just a few small cleanups done while looking at the code.

Diffstat:
 fs/nfsd/nfs3proc.c         |    2 
 fs/nfsd/nfsproc.c          |    2 
 include/linux/sunrpc/xdr.h |    3 
 net/sunrpc/socklib.c       |  144 +++++++++++++++------------------------
 net/sunrpc/xdr.c           |   11 +--
 5 files changed, 57 insertions(+), 105 deletions(-)

^ permalink raw reply	[flat|nested] 9+ messages in thread
* [PATCH 2/3] sunrpc: simplify xdr_partial_copy_from_skb
@ 2025-05-13  8:57 ` Christoph Hellwig
From: Christoph Hellwig @ 2025-05-13 8:57 UTC (permalink / raw)
To: Chuck Lever, Jeff Layton, Trond Myklebust, Anna Schumaker
Cc: NeilBrown, Olga Kornievskaia, Dai Ngo, Tom Talpey, linux-nfs

csum_partial_copy_to_xdr can handle a checksumming and non-checksumming
case and implements this using a callback, which leads to a lot of
boilerplate code and indirect calls in the fast path.

Switch to storing a no_checksum flag in struct xdr_skb_reader instead
to remove the indirect call and simplify the code.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 net/sunrpc/socklib.c | 143 +++++++++++++++----------------------------
 1 file changed, 50 insertions(+), 93 deletions(-)

diff --git a/net/sunrpc/socklib.c b/net/sunrpc/socklib.c
index 1b2b84feeec6..7196e7042e0f 100644
--- a/net/sunrpc/socklib.c
+++ b/net/sunrpc/socklib.c
@@ -27,97 +27,60 @@
 struct xdr_skb_reader {
 	struct sk_buff *skb;
 	unsigned int offset;
+	bool no_checksum;
 	size_t count;
 	__wsum csum;
 };
 
-typedef size_t (*xdr_skb_read_actor)(struct xdr_skb_reader *desc, void *to,
-		size_t len);
-
 /**
  * xdr_skb_read_bits - copy some data bits from skb to internal buffer
  * @desc: sk_buff copy helper
  * @to: copy destination
  * @len: number of bytes to copy
  *
- * Possibly called several times to iterate over an sk_buff and copy
- * data out of it.
+ * Possibly called several times to iterate over an sk_buff and copy data out of
+ * it.
 */
 static size_t
 xdr_skb_read_bits(struct xdr_skb_reader *desc, void *to, size_t len)
 {
-	if (len > desc->count)
-		len = desc->count;
-	if (unlikely(skb_copy_bits(desc->skb, desc->offset, to, len)))
-		return 0;
-	desc->count -= len;
-	desc->offset += len;
-	return len;
-}
+	len = min(len, desc->count);
+
+	if (desc->no_checksum) {
+		if (unlikely(skb_copy_bits(desc->skb, desc->offset, to, len)))
+			return 0;
+	} else {
+		__wsum csum;
+
+		csum = skb_copy_and_csum_bits(desc->skb, desc->offset, to, len);
+		desc->csum = csum_block_add(desc->csum, csum, desc->offset);
+	}
 
-/**
- * xdr_skb_read_and_csum_bits - copy and checksum from skb to buffer
- * @desc: sk_buff copy helper
- * @to: copy destination
- * @len: number of bytes to copy
- *
- * Same as skb_read_bits, but calculate a checksum at the same time.
- */
-static size_t xdr_skb_read_and_csum_bits(struct xdr_skb_reader *desc, void *to, size_t len)
-{
-	unsigned int pos;
-	__wsum csum2;
-
-	if (len > desc->count)
-		len = desc->count;
-	pos = desc->offset;
-	csum2 = skb_copy_and_csum_bits(desc->skb, pos, to, len);
-	desc->csum = csum_block_add(desc->csum, csum2, pos);
 	desc->count -= len;
 	desc->offset += len;
 	return len;
 }
 
-/**
- * xdr_partial_copy_from_skb - copy data out of an skb
- * @xdr: target XDR buffer
- * @base: starting offset
- * @desc: sk_buff copy helper
- * @copy_actor: virtual method for copying data
- *
- */
 static ssize_t
-xdr_partial_copy_from_skb(struct xdr_buf *xdr, unsigned int base, struct xdr_skb_reader *desc, xdr_skb_read_actor copy_actor)
+xdr_partial_copy_from_skb(struct xdr_buf *xdr, struct xdr_skb_reader *desc)
 {
-	struct page **ppage = xdr->pages;
-	unsigned int len, pglen = xdr->page_len;
-	ssize_t copied = 0;
-	size_t ret;
-
-	len = xdr->head[0].iov_len;
-	if (base < len) {
-		len -= base;
-		ret = copy_actor(desc, (char *)xdr->head[0].iov_base + base, len);
-		copied += ret;
-		if (ret != len || !desc->count)
-			goto out;
-		base = 0;
-	} else
-		base -= len;
-
-	if (unlikely(pglen == 0))
-		goto copy_tail;
-	if (unlikely(base >= pglen)) {
-		base -= pglen;
-		goto copy_tail;
-	}
-	if (base || xdr->page_base) {
-		pglen -= base;
-		base += xdr->page_base;
-		ppage += base >> PAGE_SHIFT;
-		base &= ~PAGE_MASK;
-	}
-	do {
+	struct page **ppage = xdr->pages + (xdr->page_base >> PAGE_SHIFT);
+	unsigned int poff = xdr->page_base & ~PAGE_MASK;
+	unsigned int pglen = xdr->page_len;
+	ssize_t copied = 0;
+	size_t ret;
+
+	if (xdr->head[0].iov_len == 0)
+		return 0;
+
+	ret = xdr_skb_read_bits(desc, xdr->head[0].iov_base,
+			xdr->head[0].iov_len);
+	if (ret != xdr->head[0].iov_len || !desc->count)
+		return ret;
+	copied += ret;
+
+	while (pglen) {
+		unsigned int len = min(PAGE_SIZE - poff, pglen);
 		char *kaddr;
 
 		/* ACL likes to be lazy in allocating pages - ACLs
@@ -126,36 +89,29 @@ xdr_partial_copy_from_skb(struct xdr_buf *xdr, unsigned int base, struct xdr_skb
 			*ppage = alloc_page(GFP_NOWAIT | __GFP_NOWARN);
 			if (unlikely(*ppage == NULL)) {
 				if (copied == 0)
-					copied = -ENOMEM;
-				goto out;
+					return -ENOMEM;
+				return copied;
 			}
 		}
 
-		len = PAGE_SIZE;
 		kaddr = kmap_atomic(*ppage);
-		if (base) {
-			len -= base;
-			if (pglen < len)
-				len = pglen;
-			ret = copy_actor(desc, kaddr + base, len);
-			base = 0;
-		} else {
-			if (pglen < len)
-				len = pglen;
-			ret = copy_actor(desc, kaddr, len);
-		}
+		ret = xdr_skb_read_bits(desc, kaddr + poff, len);
 		flush_dcache_page(*ppage);
 		kunmap_atomic(kaddr);
+
 		copied += ret;
 		if (ret != len || !desc->count)
-			goto out;
+			return copied;
 		ppage++;
-	} while ((pglen -= len) != 0);
-copy_tail:
-	len = xdr->tail[0].iov_len;
-	if (base < len)
-		copied += copy_actor(desc, (char *)xdr->tail[0].iov_base + base, len - base);
-out:
+		pglen -= len;
+		poff = 0;
+	}
+
+	if (xdr->tail[0].iov_len) {
+		copied += xdr_skb_read_bits(desc, xdr->tail[0].iov_base,
+				xdr->tail[0].iov_len);
+	}
+
 	return copied;
 }
 
@@ -174,12 +130,13 @@ int csum_partial_copy_to_xdr(struct xdr_buf *xdr, struct sk_buff *skb)
 	desc.skb = skb;
 	desc.offset = 0;
 	desc.count = skb->len - desc.offset;
+	desc.no_checksum = skb_csum_unnecessary(skb);
 
-	if (skb_csum_unnecessary(skb))
+	if (desc.no_checksum)
 		goto no_checksum;
 
 	desc.csum = csum_partial(skb->data, desc.offset, skb->csum);
-	if (xdr_partial_copy_from_skb(xdr, 0, &desc, xdr_skb_read_and_csum_bits) < 0)
+	if (xdr_partial_copy_from_skb(xdr, &desc) < 0)
 		return -1;
 	if (desc.offset != skb->len) {
 		__wsum csum2;
@@ -195,7 +152,7 @@ int csum_partial_copy_to_xdr(struct xdr_buf *xdr, struct sk_buff *skb)
 		netdev_rx_csum_fault(skb->dev, skb);
 	return 0;
 no_checksum:
-	if (xdr_partial_copy_from_skb(xdr, 0, &desc, xdr_skb_read_bits) < 0)
+	if (xdr_partial_copy_from_skb(xdr, &desc) < 0)
 		return -1;
 	if (desc.count)
 		return -1;
-- 
2.47.2

^ permalink raw reply related	[flat|nested] 9+ messages in thread
* Re: [PATCH 2/3] sunrpc: simplify xdr_partial_copy_from_skb
@ 2025-05-13 12:15 ` Jeff Layton
From: Jeff Layton @ 2025-05-13 12:15 UTC (permalink / raw)
To: Christoph Hellwig, Chuck Lever, Trond Myklebust, Anna Schumaker
Cc: NeilBrown, Olga Kornievskaia, Dai Ngo, Tom Talpey, linux-nfs

On Tue, 2025-05-13 at 10:57 +0200, Christoph Hellwig wrote:
> csum_partial_copy_to_xdr can handle a checksumming and non-checksumming
> case and implements this using a callback, which leads to a lot of
> boilerplate code and indirect calls in the fast path.
> 
> Switch to storing a no_checksum flag in struct xdr_skb_reader instead
> to remove the indirect call and simplify the code.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> [...]
> @@ -27,97 +27,60 @@
>  struct xdr_skb_reader {
>  	struct sk_buff *skb;
>  	unsigned int offset;
> +	bool no_checksum;

The change is reasonable overall, but I'm not a fan of having a
negative boolean like this (i.e. one that starts with no_*). Can we
reverse the sense of this and call it "must_checksum" or something?

> [...]
if (xdr->tail[0].iov_len) { > + copied += xdr_skb_read_bits(desc, xdr->tail[0].iov_base, > + xdr->tail[0].iov_len); > + } > + > return copied; > } > > @@ -174,12 +130,13 @@ int csum_partial_copy_to_xdr(struct xdr_buf *xdr, struct sk_buff *skb) > desc.skb = skb; > desc.offset = 0; > desc.count = skb->len - desc.offset; > + desc.no_checksum = skb_csum_unnecessary(skb); > > - if (skb_csum_unnecessary(skb)) > + if (desc.no_checksum) > goto no_checksum; > > desc.csum = csum_partial(skb->data, desc.offset, skb->csum); > - if (xdr_partial_copy_from_skb(xdr, 0, &desc, xdr_skb_read_and_csum_bits) < 0) > + if (xdr_partial_copy_from_skb(xdr, &desc) < 0) > return -1; > if (desc.offset != skb->len) { > __wsum csum2; > @@ -195,7 +152,7 @@ int csum_partial_copy_to_xdr(struct xdr_buf *xdr, struct sk_buff *skb) > netdev_rx_csum_fault(skb->dev, skb); > return 0; > no_checksum: > - if (xdr_partial_copy_from_skb(xdr, 0, &desc, xdr_skb_read_bits) < 0) > + if (xdr_partial_copy_from_skb(xdr, &desc) < 0) > return -1; > if (desc.count) > return -1; -- Jeff Layton <jlayton@kernel.org> ^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH 2/3] sunrpc: simplify xdr_partial_copy_from_skb
2025-05-13 12:15 ` Jeff Layton
@ 2025-05-14  5:05 ` Christoph Hellwig
0 siblings, 0 replies; 9+ messages in thread
From: Christoph Hellwig @ 2025-05-14 5:05 UTC (permalink / raw)
To: Jeff Layton
Cc: Christoph Hellwig, Chuck Lever, Trond Myklebust, Anna Schumaker, NeilBrown, Olga Kornievskaia, Dai Ngo, Tom Talpey, linux-nfs
On Tue, May 13, 2025 at 07:15:37AM -0500, Jeff Layton wrote:
> > + bool no_checksum;
>
> The change is reasonable overall, but I'm not a fan of having a
> negative boolean like this (i.e. one that starts with no_*). Can we
> reverse the sense of this and call it "must_checksum" or something?

I can invert it. It just seems like the normal case should be the
no-flag one, but either version will work just fine.
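[Editor's note: per the v2 cover letter's "invert polarity of a flag", the flag presumably becomes a positive-sense "must_checksum" in v2. A rough userspace mock of how the reader branch would then look — struct and function names here are stand-ins, and the trivial additive checksum stub replaces the kernel's skb_copy_bits()/skb_copy_and_csum_bits(), so this is an illustration of the control flow only, not the real implementation:]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Userspace stand-in for struct xdr_skb_reader with the inverted flag. */
struct reader {
	const uint8_t *data;	/* stands in for the sk_buff payload */
	size_t offset;
	size_t count;
	bool must_checksum;	/* positive sense, as suggested in review */
	uint32_t csum;
};

/* Trivial additive "checksum" stub; the kernel uses ones'-complement sums. */
static uint32_t copy_and_csum(const uint8_t *src, uint8_t *dst, size_t len)
{
	uint32_t csum = 0;
	size_t i;

	for (i = 0; i < len; i++) {
		dst[i] = src[i];
		csum += src[i];
	}
	return csum;
}

static size_t read_bits(struct reader *desc, void *to, size_t len)
{
	len = len < desc->count ? len : desc->count;

	if (desc->must_checksum)
		desc->csum += copy_and_csum(desc->data + desc->offset, to, len);
	else
		memcpy(to, desc->data + desc->offset, len);

	desc->count -= len;
	desc->offset += len;
	return len;
}
```

The caller would then set `must_checksum = !skb_csum_unnecessary(skb)`, keeping the common fast path the flag-clear one while reading positively at the branch.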
Thread overview: 9+ messages
2025-05-15 11:48 small sunrpc cleanups v2 Christoph Hellwig
2025-05-15 11:48 ` [PATCH 1/3] sunrpc: simplify xdr_init_encode_pages Christoph Hellwig
2025-05-15 11:48 ` [PATCH 2/3] sunrpc: simplify xdr_partial_copy_from_skb Christoph Hellwig
2025-05-15 15:45   ` Jeff Layton
2025-05-15 11:48 ` [PATCH 3/3] sunrpc: unexport csum_partial_copy_to_xdr Christoph Hellwig
2025-05-15 19:03 ` small sunrpc cleanups v2 Chuck Lever
2025-05-13  8:57 small sunrpc cleanups Christoph Hellwig
2025-05-13  8:57 ` [PATCH 2/3] sunrpc: simplify xdr_partial_copy_from_skb Christoph Hellwig
2025-05-13 12:15   ` Jeff Layton
2025-05-14  5:05     ` Christoph Hellwig