From: Manoj Vishwanathan <manojvishy@google.com>
To: Tony Nguyen <anthony.l.nguyen@intel.com>,
Przemek Kitszel <przemyslaw.kitszel@intel.com>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
google-lan-reviews@googlegroups.com,
Manoj Vishwanathan <manojvishy@google.com>
Subject: [PATCH v1 1/5] idpf: address an rtnl lock splat in tx timeout recovery path
Date: Tue, 13 Aug 2024 18:27:43 +0000
Message-ID: <20240813182747.1770032-2-manojvishy@google.com>
In-Reply-To: <20240813182747.1770032-1-manojvishy@google.com>

Adopt the same pattern used elsewhere in the driver and take the rtnl
lock around netdev access during hard resets.

Tested by injecting a tx timeout in idpf and observing that the driver
recovers and the interface becomes reachable again.

Without this patch, the recovery path triggers a splat:

[  270.145214] WARNING: CPU: PID: at net/sched/sch_generic.c:534 dev_watchdog
Signed-off-by: Manoj Vishwanathan <manojvishy@google.com>
---
drivers/net/ethernet/intel/idpf/idpf_txrx.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index af2879f03b8d..3c01be90fa75 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -4328,14 +4328,26 @@ int idpf_vport_intr_init(struct idpf_vport *vport)
{
char *int_name;
int err;
+ bool hr_reset_in_prog;
err = idpf_vport_intr_init_vec_idx(vport);
if (err)
return err;
idpf_vport_intr_map_vector_to_qs(vport);
+	/* If we're in the normal up path, the stack already takes the
+	 * rtnl_lock for us; however, if we're doing up as part of a
+	 * hard reset, we'll need to take the lock ourselves before
+	 * touching the netdev.
+	 */
+ hr_reset_in_prog = test_bit(IDPF_HR_RESET_IN_PROG,
+ vport->adapter->flags);
+ if (hr_reset_in_prog)
+ rtnl_lock();
idpf_vport_intr_napi_add_all(vport);
-
+ if (hr_reset_in_prog)
+ rtnl_unlock();
err = vport->adapter->dev_ops.reg_ops.intr_reg_init(vport);
if (err)
goto unroll_vectors_alloc;
--
2.46.0.76.ge559c4bf1a-goog
Thread overview: 9+ messages
2024-08-13 18:27 [PATCH v1 0/5] IDPF Virtchnl fixes Manoj Vishwanathan
2024-08-13 18:27 ` Manoj Vishwanathan [this message]
2024-08-13 18:27 ` [PATCH v1 2/5] idpf: Acquire the lock before accessing the xn->salt Manoj Vishwanathan
2024-08-13 18:27 ` [PATCH v1 3/5] idpf: convert workqueues to unbound Manoj Vishwanathan
2024-08-13 18:27 ` [PATCH v1 4/5] idpf: more info during virtchnl transaction time out Manoj Vishwanathan
2024-08-13 19:28 ` [Intel-wired-lan] " Paul Menzel
2024-08-13 18:27 ` [PATCH v1 5/5] idpf: warn on possible ctlq overflow Manoj Vishwanathan
2024-08-13 19:19 ` Willem de Bruijn
2024-08-15 22:55 ` [PATCH v1 0/5] IDPF Virtchnl fixes Tony Nguyen