public inbox for netdev@vger.kernel.org
From: Paolo Abeni <pabeni@redhat.com>
To: ouster@cs.stanford.edu
Cc: horms@kernel.org, netdev@vger.kernel.org, kuba@kernel.org,
	pabeni@redhat.com, edumazet@google.com
Subject: Re: [net-next,v17,08/14] net: homa: create homa_rpc.h and homa_rpc.c
Date: Wed, 18 Mar 2026 08:21:31 +0100	[thread overview]
Message-ID: <20260318072131.268946-1-pabeni@redhat.com> (raw)
In-Reply-To: <20260316223228.2611-9-ouster@cs.stanford.edu>

This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.

For local reproduction steps see https://netdev-ai.bots.linux.dev/ai-local.html
---
net: homa: create homa_rpc.h and homa_rpc.c

These files provide basic functions for managing remote procedure calls,
which are the fundamental entities managed by Homa. Each RPC consists
of a request message from a client to a server, followed by a response
message returned from the server to the client.

> diff --git a/net/homa/homa_impl.h b/net/homa/homa_impl.h
> index f55a7b80431d..adedc8757e5b 100644
> --- a/net/homa/homa_impl.h
> +++ b/net/homa/homa_impl.h

[ ... ]

> diff --git a/net/homa/homa_rpc.c b/net/homa/homa_rpc.c
> new file mode 100644
> index 000000000000..9d4b05149805
> --- /dev/null
> +++ b/net/homa/homa_rpc.c
> @@ -0,0 +1,698 @@

[ ... ]

> +/**
> + * homa_rpc_alloc_server() - Allocate and initialize a server RPC (one that is
> + * used to manage an incoming request). If appropriate, the RPC will also
> + * be handed off (we do it here, while we have the socket locked, to avoid
> + * acquiring the socket lock a second time later for the handoff).
> + * @hsk:      Socket that owns this RPC.
> + * @source:   IP address (network byte order) of the RPC's client.
> + * @h:        Header for the first data packet received for this RPC; used
> + *            to initialize the RPC.
> + * @created:  Will be set to 1 if a new RPC was created and 0 if an
> + *            existing RPC was found.
> + *
> + * Return:  A pointer to a new RPC, which is locked, or a negative errno
> + *          if an error occurred. If there is already an RPC corresponding
> + *          to h, then it is returned instead of creating a new RPC.
> + */
> +struct homa_rpc *homa_rpc_alloc_server(struct homa_sock *hsk,
> +				       const struct in6_addr *source,
> +				       struct homa_data_hdr *h, int *created)
> +	__cond_acquires(nonnull, srpc->bucket->lock)
> +{
> +	u64 id = homa_local_id(h->common.sender_id);
> +	struct homa_rpc_bucket *bucket;
> +	struct homa_rpc *srpc = NULL;
> +	int err;
> +
> +	if (!hsk->buffer_pool)
> +		return ERR_PTR(-ENOMEM);
> +
> +	/* Lock the bucket, and make sure no-one else has already created
> +	 * the desired RPC.
> +	 */
> +	bucket = homa_server_rpc_bucket(hsk, id);
> +	homa_bucket_lock(bucket, id);
> +	hlist_for_each_entry(srpc, &bucket->rpcs, hash_links) {
> +		if (srpc->id == id &&
> +		    srpc->dport == ntohs(h->common.sport) &&
> +		    ipv6_addr_equal(&srpc->peer->addr, source)) {
> +			/* RPC already exists; just return it instead
> +			 * of creating a new RPC.
> +			 */
> +			*created = 0;
> +			return srpc;
> +		}
> +	}
> +
> +	/* Initialize fields that don't require the socket lock. */
> +	srpc = kzalloc(sizeof(*srpc), GFP_ATOMIC);

In v6, Paolo Abeni suggested avoiding GFP_ATOMIC by doing the allocation
outside the bucket lock. The current code allocates while holding the
bucket lock (acquired above), which forces GFP_ATOMIC.

Paolo's suggestion was to do a lockless RCU lookup first, and only if that
fails, allocate outside the lock with GFP_KERNEL. Could this allocation be
moved outside the lock to use GFP_KERNEL instead?

Reference: https://lore.kernel.org/netdev/1c82f56c-4353-407b-8897-b8a485606a5f@redhat.com/
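
To make the suggested shape concrete, here is a minimal userspace model of the "allocate outside the lock, then recheck under the lock" pattern (this is not Homa code; the names, the pthread mutex standing in for the bucket lock, and calloc standing in for a sleepable GFP_KERNEL allocation are all illustrative):

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

struct rpc {
	unsigned long long id;
	struct rpc *next;
};

struct bucket {
	pthread_mutex_t lock;
	struct rpc *head;	/* hash chain */
};

/* Walk the chain; caller must hold bucket->lock. */
static struct rpc *bucket_find(struct bucket *b, unsigned long long id)
{
	struct rpc *r;

	for (r = b->head; r; r = r->next)
		if (r->id == id)
			return r;
	return NULL;
}

/* Allocate before taking the lock (the GFP_KERNEL analogue), then
 * recheck under the lock; on a lost race, discard the spare copy.
 */
struct rpc *rpc_find_or_create(struct bucket *b, unsigned long long id,
			       int *created)
{
	struct rpc *fresh = calloc(1, sizeof(*fresh));	/* may sleep */
	struct rpc *r;

	if (!fresh)
		return NULL;
	fresh->id = id;

	pthread_mutex_lock(&b->lock);
	r = bucket_find(b, id);
	if (r) {
		*created = 0;
		free(fresh);	/* someone else created it first */
	} else {
		*created = 1;
		fresh->next = b->head;
		b->head = fresh;
		r = fresh;
	}
	pthread_mutex_unlock(&b->lock);
	return r;
}
```

The cost of the occasional wasted allocation on a race is usually far smaller than the cost of making every allocation atomic.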

Additionally, in a later review on January 28, Paolo and Eric Dumazet
suggested using SLAB_TYPESAFE_BY_RCU with a dedicated kmem_cache for RPC
allocations to improve performance and handle high RPC rates.

This approach would allow lockless RCU lookups and faster reclamation. Paolo
referenced the TCP established-socket lookup as an example:

  "Have a look at tcp established socket lookup and the SLAB_TYPESAFE_BY_RCU
   flag usage for slab-based allocations. A combo of such flag for RPC
   allocation (using a dedicated kmem_cache) and RCU lookup should improve
   consistently the performances, with a consolidate code layout and no
   unmanageable problems with large number of objects waiting for the grace
   period."

The current code still uses plain kzalloc/kfree without a dedicated cache.
Would it make sense to implement this optimization?

Reference: https://lore.kernel.org/netdev/9209dfbb-ca3a-4fb7-a2fb-0567394f8cda@redhat.com/
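
One subtlety worth noting if this optimization is adopted: with SLAB_TYPESAFE_BY_RCU, a freed object's memory can be reused for a new object of the same type before the grace period ends, so a lockless lookup must re-validate the object's identity after pinning it. A minimal userspace model of that recheck (not Homa code; the single recyclable slot stands in for a type-stable slab, and all names are hypothetical):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

struct rpc {
	atomic_ullong id;	/* rewritten when the slot is recycled */
	atomic_int refs;
};

/* One-slot "cache": freed objects are reused in place and never
 * unmapped, mirroring how SLAB_TYPESAFE_BY_RCU keeps memory
 * type-stable across free/alloc cycles.
 */
static struct rpc slot;

struct rpc *rpc_lookup(unsigned long long id)
{
	struct rpc *r = &slot;	/* a hash-table walk would go here */

	if (atomic_load(&r->id) != id)
		return NULL;
	atomic_fetch_add(&r->refs, 1);	/* pin the object */
	/* The slot may have been recycled between the check and the
	 * pin, so the identity must be re-validated afterwards.
	 */
	if (atomic_load(&r->id) != id) {
		atomic_fetch_sub(&r->refs, 1);
		return NULL;
	}
	return r;
}
```

This is the same discipline the TCP established lookup follows: match, take a reference, then confirm the key still matches.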

> +	if (!srpc) {
> +		err = -ENOMEM;
> +		goto error;
> +	}

[ ... ]

> +error:
> +	homa_bucket_unlock(bucket, id);
> +	if (srpc && srpc->peer)
> +		homa_peer_release(srpc->peer);
> +	kfree(srpc);
> +	return ERR_PTR(err);
> +}

[ ... ]

In the v6 review thread, Paolo and Eric raised concerns about SKB freeing
costs and suggested optimizations. Eric noted that proper software GRO
significantly reduces the SKB count when napi_get_frags() and
napi_reuse_skb() are used.

Paolo also suggested using deferred SKB freeing (skb_attempt_defer_free) to
reduce the cost. While you mentioned that GRO is implemented in the full
Homa but was left out of the initial patch series, would it be beneficial to
include at least some of these optimizations in this initial submission?

Reference: https://lore.kernel.org/netdev/9209dfbb-ca3a-4fb7-a2fb-0567394f8cda@redhat.com/

[ ... ]

> diff --git a/net/homa/homa_rpc.h b/net/homa/homa_rpc.h
> new file mode 100644
> index 000000000000..98bfd389f5b1
> --- /dev/null
> +++ b/net/homa/homa_rpc.h

[ ... ]

> +/**
> + * homa_protect_rpcs() - Ensures that no RPCs will be reaped for a given
> + * socket until homa_sock_unprotect is called. Typically used by functions

This isn't a bug, but the comment references 'homa_sock_unprotect' which
doesn't exist. Should this be 'homa_unprotect_rpcs'?

> + * that want to scan the active RPCs for a socket without holding the socket
> + * lock.  Multiple calls to this function may be in effect at once. See
> + * "Homa Locking Strategy" in homa_impl.h for more info on why this function
> + * is needed.
> + * @hsk:    Socket whose RPCs should be protected. Must not be locked
> + *          by the caller; will be locked here.
> + *
> + * Return:  1 for success, 0 if the socket has been shutdown, in which
> + *          case its RPCs cannot be protected.
> + */
> +static inline int homa_protect_rpcs(struct homa_sock *hsk)
> +{
> +	int result;
> +
> +	homa_sock_lock(hsk);
> +	result = !hsk->shutdown;
> +	if (result)
> +		atomic_inc(&hsk->protect_count);
> +	homa_sock_unlock(hsk);
> +	return result;
> +}
> +
> +/**
> + * homa_unprotect_rpcs() - Cancel the effect of a previous call to
> + * homa_sock_protect(), so that RPCs can once again be reaped.

This isn't a bug, but the comment references 'homa_sock_protect()' which
doesn't exist. Should this be 'homa_protect_rpcs()'?

> + * @hsk:    Socket whose RPCs should be unprotected.
> + */
> +static inline void homa_unprotect_rpcs(struct homa_sock *hsk)
> +{
> +	atomic_dec(&hsk->protect_count);
> +}
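
The protect/unprotect contract above is easy to get wrong on the reaper side, so here is a minimal userspace model of the intended interaction (not Homa code; all names are hypothetical, and the real homa_protect_rpcs() performs its shutdown check under the socket lock, which this sketch omits):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

struct homa_sock_model {
	atomic_int protect_count;
	bool shutdown;
};

/* Mirrors homa_protect_rpcs(): refuse once the socket is shut down,
 * otherwise bump the count so the reaper leaves this socket alone.
 */
static int protect(struct homa_sock_model *hsk)
{
	if (hsk->shutdown)
		return 0;
	atomic_fetch_add(&hsk->protect_count, 1);
	return 1;
}

/* Mirrors homa_unprotect_rpcs(): drop the count taken by protect(). */
static void unprotect(struct homa_sock_model *hsk)
{
	atomic_fetch_sub(&hsk->protect_count, 1);
}

/* The reaper must skip any socket with a nonzero protect_count. */
static bool can_reap(struct homa_sock_model *hsk)
{
	return atomic_load(&hsk->protect_count) == 0;
}
```

A scanner would call protect(), walk the socket's RPCs without the socket lock, then call unprotect(); multiple scanners may overlap since the count nests.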
> +
> +/**
> + * homa_rpc_hold() - Increment the reference count on an RPC, which will
> + * prevent it from being freed until homa_rpc_put() is called. References

[ ... ]

> +/**
> + * homa_rpc_put() - Release a reference on an RPC (cancels the effect of
> + * a previous call to homa_rpc_put).
> + * @rpc:      RPC to release.
> + */
> +static inline void homa_rpc_put(struct homa_rpc *rpc)
> +{
> +	refcount_dec(&rpc->refs);
> +}

This isn't a bug, but the kdoc comment says "cancels the effect of a
previous call to homa_rpc_put" which is self-referential. Should this say
"cancels the effect of a previous call to homa_rpc_hold" instead?

The function decrements rpc->refs via refcount_dec(), which is the inverse
operation of homa_rpc_hold() that calls refcount_inc(). This misleading
comment could confuse maintainers about the refcount lifecycle.

[ ... ]



Thread overview: 34+ messages
2026-03-16 22:32 [PATCH net-next v17 00/14] Begin upstreaming Homa transport protocol John Ousterhout
2026-03-16 22:32 ` [PATCH net-next v17 01/14] net: homa: define user-visible API for Homa John Ousterhout
2026-03-17 10:10   ` kernel test robot
2026-03-17 18:40   ` kernel test robot
2026-03-16 22:32 ` [PATCH net-next v17 02/14] net: homa: create homa_wire.h John Ousterhout
2026-03-16 22:32 ` [PATCH net-next v17 03/14] net: homa: create shared Homa header files John Ousterhout
2026-03-18  7:20   ` [net-next,v17,03/14] " Paolo Abeni
2026-03-19 20:37     ` John Ousterhout
2026-03-16 22:32 ` [PATCH net-next v17 04/14] net: homa: create homa_pool.h and homa_pool.c John Ousterhout
2026-03-16 22:32 ` [PATCH net-next v17 05/14] net: homa: create homa_peer.h and homa_peer.c John Ousterhout
2026-03-18  7:21   ` [net-next,v17,05/14] " Paolo Abeni
2026-03-20 17:13     ` John Ousterhout
2026-03-20 17:20       ` Paolo Abeni
2026-03-16 22:32 ` [PATCH net-next v17 06/14] net: homa: create homa_sock.h and homa_sock.c John Ousterhout
2026-03-16 22:32 ` [PATCH net-next v17 07/14] net: homa: create homa_interest.h and homa_interest.c John Ousterhout
2026-03-18  7:21   ` [net-next,v17,07/14] " Paolo Abeni
2026-03-16 22:32 ` [PATCH net-next v17 08/14] net: homa: create homa_rpc.h and homa_rpc.c John Ousterhout
2026-03-18  7:21   ` Paolo Abeni [this message]
2026-03-23 22:43     ` [net-next,v17,08/14] " John Ousterhout
2026-03-24  8:55       ` Paolo Abeni
2026-03-16 22:32 ` [PATCH net-next v17 09/14] net: homa: create homa_outgoing.c John Ousterhout
2026-03-18  7:21   ` [net-next,v17,09/14] " Paolo Abeni
2026-03-20 18:21     ` John Ousterhout
2026-03-16 22:32 ` [PATCH net-next v17 10/14] net: homa: create homa_utils.c John Ousterhout
2026-03-16 22:32 ` [PATCH net-next v17 11/14] net: homa: create homa_incoming.c John Ousterhout
2026-03-18  7:21   ` [net-next,v17,11/14] " Paolo Abeni
2026-03-20 20:51     ` John Ousterhout
2026-03-16 22:32 ` [PATCH net-next v17 12/14] net: homa: create homa_timer.c John Ousterhout
2026-03-16 22:32 ` [PATCH net-next v17 13/14] net: homa: create homa_plumbing.c John Ousterhout
2026-03-18  7:21   ` [net-next,v17,13/14] " Paolo Abeni
2026-03-20 21:49     ` John Ousterhout
2026-03-16 22:32 ` [PATCH net-next v17 14/14] net: homa: create Makefile and Kconfig John Ousterhout
2026-03-17 18:51   ` kernel test robot
2026-03-17 19:26   ` kernel test robot
