From: Paolo Abeni <pabeni@redhat.com>
To: ouster@cs.stanford.edu
Cc: horms@kernel.org, netdev@vger.kernel.org, kuba@kernel.org,
pabeni@redhat.com, edumazet@google.com
Subject: Re: [net-next,v17,13/14] net: homa: create homa_plumbing.c
Date: Wed, 18 Mar 2026 08:21:49 +0100 [thread overview]
Message-ID: <20260318072149.269101-1-pabeni@redhat.com> (raw)
In-Reply-To: <20260316223228.2611-14-ouster@cs.stanford.edu>
This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
For local reproduction steps see https://netdev-ai.bots.linux.dev/ai-local.html
---
net: homa: create homa_plumbing.c
homa_plumbing.c contains functions that connect Homa to the rest of
the Linux kernel, such as dispatch tables used by Linux and the
top-level functions that Linux invokes from those dispatch tables.
> diff --git a/net/homa/homa_plumbing.c b/net/homa/homa_plumbing.c
> --- /dev/null
> +++ b/net/homa/homa_plumbing.c
[ ... ]
> +/**
> + * homa_ioc_info() - The top-level function that implements the
> + * HOMAIOCINFO ioctl for Homa sockets.
> + * @sock: Socket for this request
> + * @arg: The address in user space of the argument to ioctl, which
> + * is a homa_info struct.
> + *
> + * Return: 0 on success, otherwise a negative errno. Sets hsk->error_msg
> + * on errors.
> + */
> +int homa_ioc_info(struct socket *sock, unsigned long arg)
> +{
> + struct homa_sock *hsk = homa_sk(sock->sk);
> + struct homa_rpc_info rinfo;
> + struct homa_info hinfo;
> + struct homa_rpc *rpc;
> + int bytes_avl;
> + char *dst;
> +
> + if (unlikely(copy_from_user(&hinfo, (void __user *)arg,
> + sizeof(hinfo)))) {
> + hsk->error_msg = "invalid address for homa_info";
> + return -EFAULT;
> + }
> +
> + if (!homa_protect_rpcs(hsk)) {
> + hsk->error_msg = "socket has been shut down";
> + return -ESHUTDOWN;
> + }
> + hinfo.bpool_avail_bytes = homa_pool_avail_bytes(hsk->buffer_pool);
> + hinfo.port = hsk->port;
> + dst = (char *)hinfo.rpc_info;
> + bytes_avl = hinfo.rpc_info_length;
> + hinfo.num_rpcs = 0;
> + list_for_each_entry_rcu(rpc, &hsk->active_rpcs, active_links) {
^^^^
Is rcu_read_lock() needed around this list_for_each_entry_rcu() loop?
All three other functions that use the same homa_protect_rpcs() +
list_for_each_entry_rcu pattern (homa_abort_rpcs in homa_rpc.c,
homa_timer in homa_timer.c, and homa_abort_sock_rpcs in homa_rpc.c)
bracket the traversal with rcu_read_lock()/rcu_read_unlock().
The homa_protect_rpcs() mechanism prevents RPC reaping via an atomic
counter, but doesn't provide RCU read-side protection.
Without rcu_read_lock(), list_for_each_entry_rcu() triggers a lockdep
RCU warning, and on PREEMPT_RCU configurations the reader can be
preempted during traversal without the RCU subsystem knowing a reader
is active, breaking the RCU contract.
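The bracketing used by those other callers would look something like
the sketch below. Note, though, that copy_to_user() in this loop can
fault and sleep, which is not allowed inside an RCU read-side critical
section, so adding rcu_read_lock() here is not sufficient on its own:
the user-space copies would have to be staged in a kernel buffer, or
the loop restructured.

	rcu_read_lock();
	list_for_each_entry_rcu(rpc, &hsk->active_rpcs, active_links) {
		...
	}
	rcu_read_unlock();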
> + homa_rpc_lock(rpc);
> + if (rpc->state == RPC_DEAD) {
> + homa_rpc_unlock(rpc);
> + continue;
> + }
> + homa_rpc_get_info(rpc, &rinfo);
> + homa_rpc_unlock(rpc);
> + if (dst && bytes_avl >= sizeof(rinfo)) {
> + if (copy_to_user((void __user *)dst, &rinfo,
> + sizeof(rinfo))) {
> + homa_unprotect_rpcs(hsk);
> + hsk->error_msg = "couldn't copy homa_rpc_info to user space: invalid or read-only address?";
> + return -EFAULT;
> + }
> + dst += sizeof(rinfo);
> + bytes_avl -= sizeof(rinfo);
> + }
> + hinfo.num_rpcs++;
> + }
> + homa_unprotect_rpcs(hsk);
[ ... ]
> +/**
> + * homa_socket() - Implements the socket(2) system call for sockets.
> + * @sk: Socket on which the system call was invoked. The non-Homa
> + * parts have already been initialized.
> + *
> + * Return: always 0 (success).
> + */
> +int homa_socket(struct sock *sk)
> +{
> + struct homa_sock *hsk = homa_sk(sk);
> + int result;
> +
> + result = homa_sock_init(hsk);
> + if (result != 0) {
> + homa_sock_shutdown(hsk);
^^^^^^^^^^^^^^^^^^^
Can this trigger a spinlock bug?
When homa_sock_init() fails because homa_pool_alloc() returns
ERR_PTR(-ENOMEM), it returns before spin_lock_init(&hsk->lock) is
called in homa_sock_init().
The error path then calls homa_sock_shutdown(hsk), whose first action
is homa_sock_lock(hsk) = spin_lock_bh(&hsk->lock) on the
never-initialized spinlock.
With CONFIG_DEBUG_SPINLOCK, the lock's magic field (zero from slab
allocation) doesn't match SPINLOCK_MAGIC (0xdead4ead), causing
spin_bug() to call BUG().
While homa_sock_shutdown() would immediately return after checking
!hsk->homa (NULL), the BUG() fires inside spin_lock_bh() before that
check is reached.
This is reachable under memory pressure.
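One possible fix (a sketch only, assuming reordering inside
homa_sock_init() is acceptable) is to initialize the spinlock before
any allocation that can fail, so that homa_sock_shutdown() is always
safe to call on a partially initialized socket:

	/* Early in homa_sock_init(), before homa_pool_alloc() can fail: */
	spin_lock_init(&hsk->lock);

Alternatively, homa_sock_init() could clean up after itself on failure,
so this caller never needs to invoke homa_sock_shutdown() at all on the
error path.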
> + homa_sock_destroy(&hsk->sock);
> + }
> + return result;
> +}
[ ... ]
> +/**
> + * homa_getsockopt() - Implements the getsockopt system call for Homa sockets.
> + * @sk: Socket on which the system call was invoked.
> + * @level: Selects level in the network stack to handle the request;
> + * must be IPPROTO_HOMA.
> + * @optname: Identifies a particular setsockopt operation.
> + * @optval: Address in user space where the option's value should be stored.
> + * @optlen: Number of bytes available at optval; will be overwritten with
> + * actual number of bytes stored.
> + * Return: 0 on success, otherwise a negative errno. Sets hsk->error_msg
> + * on errors.
> + */
> +int homa_getsockopt(struct sock *sk, int level, int optname,
> + char __user *optval, int __user *optlen)
> +{
> + struct homa_sock *hsk = homa_sk(sk);
> + struct homa_rcvbuf_args rcvbuf_args;
> + int is_server;
> + void *result;
> + int len;
> +
> + if (copy_from_sockptr(&len, USER_SOCKPTR(optlen), sizeof(int))) {
> + hsk->error_msg = "invalid address for optlen argument to getsockopt";
> + return -EFAULT;
> + }
> +
> + if (level != IPPROTO_HOMA) {
> + hsk->error_msg = "homa_setsockopt invoked with level not IPPROTO_HOMA";
^^^^^^^^^^^^^^
This isn't a bug, but the error message says 'homa_setsockopt' instead
of 'homa_getsockopt'. This is a copy-paste error from homa_setsockopt().
The error_msg field is returned to applications via the HOMAIOCINFO
ioctl, so users debugging socket option failures would see a misleading
message pointing to the wrong function.
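i.e.:

	hsk->error_msg = "homa_getsockopt invoked with level not IPPROTO_HOMA";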
> + return -ENOPROTOOPT;
> + }
> + if (optname == SO_HOMA_RCVBUF) {
[ ... ]
> +/**
> + * homa_recvmsg() - Receive a message from a Homa socket.
> + * @sk: Socket on which the system call was invoked.
> + * @msg: Controlling information for the receive.
> + * @len: Total bytes of space available in msg->msg_iov; not used.
> + * @flags: Flags from system call; only MSG_DONTWAIT is used.
> + * @addr_len: Store the length of the sender address here
> + * Return: The length of the message on success, otherwise a negative
> + * errno. Sets hsk->error_msg on errors.
> + */
> +int homa_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int flags,
> + int *addr_len)
> +{
[ ... ]
> + if (rpc->error) {
> + hsk->error_msg = "RPC failed";
> + result = rpc->error;
> + } else {
> + result = rpc->msgin.length;
> + }
> + result = rpc->error ? rpc->error : rpc->msgin.length;
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This isn't a bug, but there's redundant dead-store code here.
The if/else block assigns either 'rpc->error' or 'rpc->msgin.length'
to 'result', but the next line immediately overwrites 'result' with
the identical ternary expression, so both assignments inside the
if/else are dead stores. The block's only surviving side effect is
setting 'hsk->error_msg = "RPC failed"' when rpc->error is nonzero.
This appears to be an editing mistake where either the if/else should
be the authoritative result assignment (and the ternary deleted), or
the ternary should remain and the if/else should be simplified to only
set error_msg (removing its result assignments).
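If the ternary is meant to be authoritative, the whole passage could
collapse to something like:

	if (rpc->error)
		hsk->error_msg = "RPC failed";
	result = rpc->error ? rpc->error : rpc->msgin.length;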
> +
> + /* Collect result information. */
> + control.id = rpc->id;
[ ... ]
> +/**
> + * homa_hrtimer() - This function is invoked by the hrtimer mechanism to
> + * wake up the timer thread. Runs at IRQ level.
> + * @timer: The timer that triggered; not used.
> + *
> + * Return: Always HRTIMER_RESTART.
> + */
> +enum hrtimer_restart homa_hrtimer(struct hrtimer *timer)
> +{
> + wake_up_process(timer_kthread);
> + return HRTIMER_NORESTART;
^^^^^^^^^^^^^^^^^^^^^^
The kernel-doc comment states 'Return: Always HRTIMER_RESTART' but the
function returns HRTIMER_NORESTART.
The runtime behavior is correct (homa_timer_main re-arms the hrtimer
explicitly via hrtimer_start on each loop iteration, so HRTIMER_NORESTART
is the intended return value), but the comment contradicts the code.
This misleads readers into believing the timer auto-restarts, when it
does not.
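i.e. the kernel-doc should presumably read something like:

	 * Return: Always HRTIMER_NORESTART (homa_timer_main() re-arms the
	 * timer explicitly via hrtimer_start()).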
> +}