public inbox for netdev@vger.kernel.org
From: arjan@linux.intel.com
To: netdev@vger.kernel.org
Cc: anderson@allelesecurity.com, dhowells@redhat.com,
	marc.dionne@auristor.com, kuba@kernel.org, pabeni@redhat.com,
	linux-kernel@vger.kernel.org, jaltman@auristor.com,
	horms@kernel.org, Arjan van de Ven <arjan@linux.intel.com>
Subject: Re: [BUG] rxrpc: Client connection leak and BUG() call during kernel IO thread exit
Date: Thu, 23 Apr 2026 07:48:00 -0700	[thread overview]
Message-ID: <20260423144801.292566-1-arjan@linux.intel.com> (raw)
In-Reply-To: <CAPhRvkyZGKHRTBhV3P2PCCRxmRKGEvJQ0W5a9SMW3qwS2hp2Qw@mail.gmail.com>

From: Arjan van de Ven <arjan@linux.intel.com>

This email was generated automatically to help kernel developers handle
the large volume of AI-generated bug reports by decoding oopses into
more actionable information.


Decoded Backtrace

--- rxrpc_destroy_client_conn_ids (inlined into rxrpc_purge_client_connections)
    Source: net/rxrpc/conn_client.c

 54 static void rxrpc_destroy_client_conn_ids(struct rxrpc_local *local)
 55 {
 56     struct rxrpc_connection *conn;
 57     int id;
 58
 59     if (!idr_is_empty(&local->conn_ids)) {
 60         idr_for_each_entry(&local->conn_ids, conn, id) {
 61             pr_err("AF_RXRPC: Leaked client conn %p {%d}\n",
 62                    conn, refcount_read(&conn->ref));
 63         }
 64         BUG();   // <- crash here
 65     }
 66
 67     idr_destroy(&local->conn_ids);
 68 }

--- rxrpc_destroy_local
    Source: net/rxrpc/local_object.c

420 void rxrpc_destroy_local(struct rxrpc_local *local)
421 {
422     struct socket *socket = local->socket;
423     struct rxrpc_net *rxnet = local->rxnet;
    ...
427     local->dead = true;
    ...
433     rxrpc_clean_up_local_conns(local);
434     rxrpc_service_connection_reaper(&rxnet->service_conn_reaper);
435     ASSERT(!local->service);
    ...
450     rxrpc_purge_queue(&local->rx_queue);
451     rxrpc_purge_client_connections(local);   // <- call here
452     page_frag_cache_drain(&local->tx_alloc);
453 }

--- rxrpc_io_thread
    Source: net/rxrpc/io_thread.c

554     if (!list_empty(&local->new_client_calls))
555         rxrpc_connect_client_calls(local);
    ...
569     if (should_stop)
570         break;
    ...
596     __set_current_state(TASK_RUNNING);
598     rxrpc_destroy_local(local);   // <- call here
601     return 0;


Tentative Analysis

The crash fires the unconditional BUG() at net/rxrpc/conn_client.c:64
because local->conn_ids is non-empty when rxrpc_destroy_local() is
called by the krxrpcio I/O thread during socket teardown.

When a client sendmsg() queues a call, the I/O thread picks it up via
rxrpc_connect_client_calls(). That function allocates a client
connection (rxrpc_alloc_client_connection()), registers it in the
local->conn_ids IDR with refcount=1, stores it in bundle->conns[], and
moves the call from new_client_calls to bundle->waiting_calls.

Once new_client_calls is empty and kthread_should_stop() is true, the
I/O thread exits its loop and calls rxrpc_destroy_local(). Inside that
function, rxrpc_clean_up_local_conns() iterates only the
local->idle_client_conns list. A connection that is in bundle->conns[]
but has never been activated on a channel (and thus never went idle) is
completely missed. rxrpc_purge_client_connections() then finds the
connection still registered in conn_ids and fires BUG().

The coverage gap was introduced by commit 9d35d880e0e4 ("rxrpc: Move
client call connection to the I/O thread"), which created a new
"allocated in bundle, not yet idle" state for connections that the
existing idle-list cleanup does not handle.

Note: fc9de52de38f ("rxrpc: Fix missing locking causing hanging calls"),
already present in 6.18.13, fixes a related missing-lock bug in the
same code area but does not address this idle-list coverage gap.


Potential Solution

rxrpc_clean_up_local_conns() should be extended to also release
connections stored in bundle->conns[] that have not yet appeared on
idle_client_conns. After the existing idle-list loop, the function
should iterate over all entries in local->client_bundles (the RB-tree
of active bundles), call rxrpc_unbundle_conn() on each occupied
bundle->conns[] slot, and put the connection. This ensures
rxrpc_destroy_client_conn_ids() always finds an empty IDR.


More information

Oops-Analysis: http://oops.fenrus.org/reports/lkml/CAPhRvkyZGKHRTBhV3P2PCCRxmRKGEvJQ0W5a9SMW3qwS2hp2Qw/
Assisted-by: GitHub-Copilot:claude-sonnet-4.6 linux-kernel-oops-x86.


Thread overview: 6+ messages
2026-04-01  4:19 [BUG] rxrpc: Client connection leak and BUG() call during kernel IO thread exit Anderson Nascimento
2026-04-22 16:08 ` David Howells
2026-04-22 16:18   ` Anderson Nascimento
2026-04-22 16:25   ` Anderson Nascimento
2026-04-22 16:37     ` Anderson Nascimento
2026-04-23 14:48       ` arjan [this message]
