From: Chuck Lever <cel@kernel.org>
To: Trond Myklebust <trondmy@kernel.org>,
Anna Schumaker <anna@kernel.org>,
Chuck Lever <chuck.lever@oracle.com>,
Jeff Layton <jlayton@kernel.org>, NeilBrown <neil@brown.name>,
Olga Kornievskaia <okorniev@redhat.com>,
Dai Ngo <Dai.Ngo@oracle.com>, Tom Talpey <tom@talpey.com>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>,
Paolo Abeni <pabeni@redhat.com>, Simon Horman <horms@kernel.org>
Cc: linux-nfs@vger.kernel.org, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org,
Herbert Xu <herbert@gondor.apana.org.au>,
David Howells <dhowells@redhat.com>,
Simo Sorce <simo@redhat.com>
Subject: [PATCH 03/18] SUNRPC: Add helpers to convert xdr_buf byte ranges to scatterlists
Date: Mon, 27 Apr 2026 09:50:47 -0400
Message-ID: <20260427-crypto-krb5-api-v1-3-1fc1253b64c0@oracle.com>
In-Reply-To: <20260427-crypto-krb5-api-v1-0-1fc1253b64c0@oracle.com>
From: Chuck Lever <chuck.lever@oracle.com>
The crypto/krb5 library accepts data in scatterlist form, but
the GSS-API layer presents RPC payloads as struct xdr_buf.
Bridge that gap with a pair of helper functions:
  xdr_buf_to_sg()       - populate a caller-supplied scatterlist
                          array from a byte range

  xdr_buf_to_sg_alloc() - populate a caller-supplied inline
                          scatterlist, chaining to a heap-allocated
                          overflow for large payloads

The inline array (typically eight entries on the caller's stack)
covers the common case of small RPCs without any heap allocation
on the encrypt/decrypt path. Only buffers spanning many pages
incur a kmalloc for the chained extension.
The segment-walking logic follows the same head, page array,
tail traversal as xdr_process_buf(), but populates a
scatterlist directly rather than invoking a per-segment
callback. sg_next() traversal makes the walker safe for
chained scatterlists. Once subsequent patches reroute all
per-message crypto operations through crypto/krb5,
xdr_process_buf() loses its last callers and can be removed.
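
As a further sketch (again not part of this patch, with the same
hypothetical variables), a caller that knows the mapped range is
small can use the non-allocating variant with a fixed-size array:

    struct scatterlist sg[4];
    int nents;

    sg_init_table(sg, ARRAY_SIZE(sg));
    nents = xdr_buf_to_sg(buf, offset, len, sg, ARRAY_SIZE(sg));
    if (nents < 0)
            return nents;   /* -ENOSPC if the range needs more entries */
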
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
include/linux/sunrpc/xdr.h | 15 ++++
net/sunrpc/xdr.c | 199 +++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 214 insertions(+)
diff --git a/include/linux/sunrpc/xdr.h b/include/linux/sunrpc/xdr.h
index b639a6fafcbc..f82446993fde 100644
--- a/include/linux/sunrpc/xdr.h
+++ b/include/linux/sunrpc/xdr.h
@@ -140,6 +140,21 @@ int xdr_alloc_bvec(struct xdr_buf *buf, gfp_t gfp);
void xdr_free_bvec(struct xdr_buf *buf);
unsigned int xdr_buf_to_bvec(struct bio_vec *bvec, unsigned int bvec_size,
const struct xdr_buf *xdr);
+int xdr_buf_to_sg(const struct xdr_buf *buf, unsigned int offset,
+                  unsigned int len, struct scatterlist *sg, unsigned int nsg);
+int xdr_buf_to_sg_alloc(const struct xdr_buf *buf, unsigned int offset,
+                        unsigned int len, struct scatterlist *sg_head,
+                        unsigned int sg_head_nents,
+                        struct scatterlist **sg_overflow, gfp_t gfp);
+
+/*
+ * Inline scatterlist entries for xdr_buf_to_sg_alloc(). Sized to cover the
+ * head kvec, tail kvec, and a few page fragments without any heap allocation.
+ */
+enum {
+        XDR_BUF_TO_SG_NENTS = 8,
+};
+
static inline __be32 *xdr_encode_array(__be32 *p, const void *s, unsigned int len)
{
diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
index e83d5d0be78b..516833b4c114 100644
--- a/net/sunrpc/xdr.c
+++ b/net/sunrpc/xdr.c
@@ -187,6 +187,205 @@ unsigned int xdr_buf_to_bvec(struct bio_vec *bvec, unsigned int bvec_size,
}
EXPORT_SYMBOL_GPL(xdr_buf_to_bvec);
+/**
+ * xdr_buf_to_sg - Populate a scatterlist from an xdr_buf range
+ * @buf: xdr_buf to map
+ * @offset: starting byte offset within @buf
+ * @len: number of bytes to cover
+ * @sg: scatterlist array initialized with sg_init_table()
+ * @nsg: number of entries available in @sg
+ *
+ * @sg is traversed with sg_next(), so callers may pass a list
+ * assembled with sg_chain().
+ *
+ * Return: on success, the number of scatterlist entries used; the
+ * last used entry is marked with sg_mark_end(). On failure, a
+ * negative errno.
+ */
+int xdr_buf_to_sg(const struct xdr_buf *buf, unsigned int offset,
+                  unsigned int len, struct scatterlist *sg, unsigned int nsg)
+{
+        unsigned int page_len, thislen, page_offset;
+        struct scatterlist *cur = sg, *prev = NULL;
+        int nents = 0;
+        int i;
+
+        if (len == 0)
+                return 0;
+
+        if (offset >= buf->head[0].iov_len) {
+                offset -= buf->head[0].iov_len;
+        } else {
+                thislen = min_t(unsigned int,
+                                buf->head[0].iov_len - offset, len);
+                if (nents >= nsg)
+                        return -ENOSPC;
+                sg_set_buf(cur, buf->head[0].iov_base + offset,
+                           thislen);
+                prev = cur;
+                cur = sg_next(cur);
+                nents++;
+                len -= thislen;
+                offset = 0;
+        }
+        if (len == 0)
+                goto done;
+
+        if (offset >= buf->page_len) {
+                offset -= buf->page_len;
+        } else {
+                page_len = min(buf->page_len - offset, len);
+                len -= page_len;
+                page_offset = (offset + buf->page_base) & (PAGE_SIZE - 1);
+                i = (offset + buf->page_base) >> PAGE_SHIFT;
+                thislen = PAGE_SIZE - page_offset;
+                do {
+                        if (thislen > page_len)
+                                thislen = page_len;
+                        if (nents >= nsg)
+                                return -ENOSPC;
+                        sg_set_page(cur, buf->pages[i],
+                                    thislen, page_offset);
+                        prev = cur;
+                        cur = sg_next(cur);
+                        nents++;
+                        page_len -= thislen;
+                        i++;
+                        page_offset = 0;
+                        thislen = PAGE_SIZE;
+                } while (page_len != 0);
+                offset = 0;
+        }
+        if (len == 0)
+                goto done;
+
+        if (offset < buf->tail[0].iov_len) {
+                thislen = min_t(unsigned int,
+                                buf->tail[0].iov_len - offset, len);
+                if (nents >= nsg)
+                        return -ENOSPC;
+                sg_set_buf(cur, buf->tail[0].iov_base + offset,
+                           thislen);
+                prev = cur;
+                nents++;
+                len -= thislen;
+        }
+        if (len != 0)
+                return -EINVAL;
+
+done:
+        if (prev)
+                sg_mark_end(prev);
+        return nents;
+}
+EXPORT_SYMBOL_GPL(xdr_buf_to_sg);
+
+/*
+ * Count the scatterlist entries needed to cover [offset, offset + len)
+ * within @buf. Mirrors the walk in xdr_buf_to_sg() so the caller can
+ * size an allocation that matches the requested sub-range rather than
+ * the full xdr_buf.
+ */
+static unsigned int xdr_buf_sg_nents(const struct xdr_buf *buf,
+                                     unsigned int offset, unsigned int len)
+{
+        unsigned int nsg = 0, thislen, page_offset;
+
+        if (len == 0)
+                return 0;
+
+        if (offset < buf->head[0].iov_len) {
+                thislen = min_t(unsigned int,
+                                buf->head[0].iov_len - offset, len);
+                nsg++;
+                len -= thislen;
+                offset = 0;
+        } else {
+                offset -= buf->head[0].iov_len;
+        }
+        if (len == 0)
+                return nsg;
+
+        if (offset < buf->page_len) {
+                thislen = min(buf->page_len - offset, len);
+                page_offset = (offset + buf->page_base) & (PAGE_SIZE - 1);
+                nsg += DIV_ROUND_UP(page_offset + thislen, PAGE_SIZE);
+                len -= thislen;
+                offset = 0;
+        } else {
+                offset -= buf->page_len;
+        }
+        if (len == 0)
+                return nsg;
+
+        if (offset < buf->tail[0].iov_len)
+                nsg++;
+        return nsg;
+}
+
+/**
+ * xdr_buf_to_sg_alloc - Populate a scatterlist for an xdr_buf range
+ * @buf: xdr_buf to map
+ * @offset: starting byte offset within @buf
+ * @len: number of bytes to cover
+ * @sg_head: caller-provided scatterlist array (typically stack-allocated)
+ * @sg_head_nents: number of entries in @sg_head
+ * @sg_overflow: OUT: chained extension, or NULL when @sg_head sufficed
+ * @gfp: memory allocation flags for overflow
+ *
+ * Populates @sg_head directly when the requested range fits. When more
+ * entries are needed, an overflow scatterlist is allocated and
+ * chained from @sg_head so that the result is traversable with
+ * sg_next().
+ *
+ * Return: on success, the number of populated scatterlist entries
+ * (counting only data entries, not chain entries). @sg_head is
+ * the head of the resulting list. Caller must kfree @sg_overflow
+ * when done. On failure, a negative errno.
+ */
+int xdr_buf_to_sg_alloc(const struct xdr_buf *buf, unsigned int offset,
+                        unsigned int len, struct scatterlist *sg_head,
+                        unsigned int sg_head_nents,
+                        struct scatterlist **sg_overflow, gfp_t gfp)
+{
+        unsigned int nsg;
+        int ret;
+
+        *sg_overflow = NULL;
+        if (len == 0)
+                return 0;
+
+        nsg = xdr_buf_sg_nents(buf, offset, len);
+        if (nsg == 0)
+                return -EINVAL;
+
+        if (nsg <= sg_head_nents) {
+                sg_init_table(sg_head, nsg);
+        } else {
+                /* +1 replaces the slot sg_chain() consumes as the link. */
+                unsigned int overflow_nents = nsg - sg_head_nents + 1;
+                struct scatterlist *overflow;
+
+                overflow = kmalloc_array(overflow_nents, sizeof(*overflow),
+                                         gfp);
+                if (!overflow)
+                        return -ENOMEM;
+
+                sg_init_table(sg_head, sg_head_nents);
+                sg_init_table(overflow, overflow_nents);
+                sg_chain(sg_head, sg_head_nents, overflow);
+                *sg_overflow = overflow;
+        }
+
+        ret = xdr_buf_to_sg(buf, offset, len, sg_head, nsg);
+        if (ret < 0) {
+                kfree(*sg_overflow);
+                *sg_overflow = NULL;
+        }
+        return ret;
+}
+EXPORT_SYMBOL_GPL(xdr_buf_to_sg_alloc);
+
/**
* xdr_inline_pages - Prepare receive buffer for a large reply
* @xdr: xdr_buf into which reply will be placed
--
2.53.0
Thread overview: 21+ messages
2026-04-27 13:50 [PATCH 00/18] Migrate rpcsec_gss_krb5 to the crypto/krb5 library Chuck Lever
2026-04-27 13:50 ` [PATCH 01/18] SUNRPC: Add Kconfig dependency on CRYPTO_KRB5 Chuck Lever
2026-04-27 13:50 ` [PATCH 02/18] SUNRPC: Add crypto/krb5 enctype lookup to krb5_ctx Chuck Lever
2026-04-27 13:50 ` Chuck Lever [this message]
2026-04-27 13:50 ` [PATCH 04/18] SUNRPC: Add errno-to-GSS status conversion helper Chuck Lever
2026-04-27 13:50 ` [PATCH 05/18] SUNRPC: Prepare crypto/krb5 encryption and checksum handles Chuck Lever
2026-04-27 13:50 ` [PATCH 06/18] SUNRPC: Switch wrap token encryption to crypto/krb5 Chuck Lever
2026-04-27 13:50 ` [PATCH 07/18] SUNRPC: Switch wrap token decryption " Chuck Lever
2026-04-27 13:50 ` [PATCH 08/18] SUNRPC: Switch Camellia decrypt " Chuck Lever
2026-04-27 13:50 ` [PATCH 09/18] SUNRPC: Switch MIC token generation " Chuck Lever
2026-04-27 13:50 ` [PATCH 10/18] SUNRPC: Switch MIC token verification " Chuck Lever
2026-04-27 13:50 ` [PATCH 11/18] SUNRPC: Remove get_mic/verify_mic function pointers from enctype table Chuck Lever
2026-04-27 13:50 ` [PATCH 12/18] SUNRPC: Remove wrap/unwrap " Chuck Lever
2026-04-27 13:50 ` [PATCH 13/18] SUNRPC: Remove encrypt/decrypt " Chuck Lever
2026-04-27 13:50 ` [PATCH 14/18] SUNRPC: Remove legacy skcipher/ahash handles from krb5_ctx Chuck Lever
2026-04-27 13:50 ` [PATCH 15/18] SUNRPC: Remove dead code from rpcsec_gss_krb5 Chuck Lever
2026-04-27 13:51 ` [PATCH 16/18] SUNRPC: Remove per-enctype Kconfig options Chuck Lever
2026-04-27 13:51 ` [PATCH 17/18] SUNRPC: Remove redundant crypto Kconfig dependencies Chuck Lever
2026-04-27 13:51 ` [PATCH 18/18] SUNRPC: Remove dead rpcsec_gss_krb5 definitions Chuck Lever
2026-04-29 6:39 ` [PATCH 00/18] Migrate rpcsec_gss_krb5 to the crypto/krb5 library Jeff Layton
2026-04-29 15:17 ` Chuck Lever