From: Maxim Mikityanskiy <maximmi@nvidia.com>
To: Jay Vosburgh <j.vosburgh@gmail.com>,
Veaceslav Falico <vfalico@gmail.com>,
Andy Gospodarek <andy@greyhouse.net>,
Mahesh Bandewar <maheshb@google.com>,
Nikolay Aleksandrov <nikolay@redhat.com>
Cc: "David S. Miller" <davem@davemloft.net>,
Jakub Kicinski <kuba@kernel.org>,
Tariq Toukan <tariqt@nvidia.com>, <netdev@vger.kernel.org>,
"Maxim Mikityanskiy" <maximmi@nvidia.com>
Subject: [RFC PATCH net] bonding: Work around lockdep_is_held false positives
Date: Mon, 22 Mar 2021 14:38:46 +0200
Message-ID: <20210322123846.3024549-1-maximmi@nvidia.com>

After lockdep triggers for the first time, it disables itself, and
lockdep_enabled() starts returning false. This affects lockdep_is_held(),
which then returns true unconditionally. Normally that only neutralizes
checks that expect a lock to be held. The bonding code, however, checks
that a lock is NOT held, so it now gets a false positive WARN_ON.
This commit addresses the issue by replacing lockdep_is_held with
spin_is_locked, which should have the same effect but does not depend on
lockdep staying enabled.
Fixes: ee6377147409 ("bonding: Simplify the xmit function for modes that use xmit_hash")
Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
---
While this patch works around the issue, I would like to discuss better
options. Another straightforward approach is to extend the lockdep API
with lockdep_is_not_held(), which would basically be !lockdep_is_held()
when lockdep is enabled, but would return true when !lockdep_enabled().
However, there is no reliable way to check that a lock is not held
without taking it ourselves (the lock may be taken by another thread
right after the check). Could someone explain why this code performs the
check in the first place? Maybe we can find a better way to achieve the
original goal.
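Something along these lines, next to lockdep_is_held() in
include/linux/lockdep.h (untested sketch; the name, the use of
debug_locks as a rough stand-in for lockdep_enabled(), and the
!CONFIG_LOCKDEP fallback are all just illustration):

  /* Hypothetical helper, not part of the existing lockdep API.
   * Returns true when lockdep is disabled, and !lockdep_is_held()
   * otherwise, so a WARN_ON(!lockdep_is_not_held(...)) check stays
   * quiet after lockdep has turned itself off.
   */
  #ifdef CONFIG_LOCKDEP
  #define lockdep_is_not_held(l) \
          (!debug_locks || !lock_is_held(&(l)->dep_map))
  #else
  #define lockdep_is_not_held(l)  (1)
  #endif

The bonding check could then become
WARN_ON(!lockdep_is_not_held(&bond->mode_lock)) without the #ifdef, but
the question above about the usefulness of the check still stands.
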
drivers/net/bonding/bond_main.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 74cbbb22470b..b2fe4e93cb8e 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -4391,9 +4391,7 @@ int bond_update_slave_arr(struct bonding *bond, struct slave *skipslave)
 	int agg_id = 0;
 	int ret = 0;
 
-#ifdef CONFIG_LOCKDEP
-	WARN_ON(lockdep_is_held(&bond->mode_lock));
-#endif
+	WARN_ON(spin_is_locked(&bond->mode_lock));
 
 	usable_slaves = kzalloc(struct_size(usable_slaves, arr,
 					    bond->slave_cnt), GFP_KERNEL);
--
2.25.1