From: Veaceslav Falico <vfalico@redhat.com>
To: dingtianhong <dingtianhong@huawei.com>
Cc: Jay Vosburgh <fubar@us.ibm.com>,
Andy Gospodarek <andy@greyhouse.net>,
"David S. Miller" <davem@davemloft.net>,
Netdev <netdev@vger.kernel.org>
Subject: Re: [1/4] bonding: don't call slave_xxx_netpoll under spinlocks
Date: Sat, 20 Jul 2013 12:38:01 +0200 [thread overview]
Message-ID: <20130720103800.GB9149@redhat.com> (raw)
In-Reply-To: <51EA3B03.7010302@huawei.com>
On Sat, Jul 20, 2013 at 03:23:47PM +0800, dingtianhong wrote:
>the slave_xxx_netpoll may sleep, so it shouldn't be called under spinlocks.
I don't really see how it may sleep, it was specifically changed to not
sleep actually. However, see below...
>
>the slave list of the bonding will not be changed outside rtnl lock,
>so rtnl lock is enough here.
Yep, as far as I can see there's really no need to take the lock: both the
slave list and the netpoll part are always protected by rtnl, unless I'm
missing something, and indeed .ndo_netpoll_setup() is always called
under rtnl.
BTW, bond_netpoll_cleanup() has the same problem - maybe you could check if
we can remove the bond->lock from there also and update the patch?
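
For reference, the rtnl-only cleanup would end up looking roughly like this
(a sketch reconstructed from the diff below, not compile-tested; the
ASSERT_RTNL() is my addition to document the locking assumption, the patch
itself doesn't add it):

```c
static void bond_netpoll_cleanup(struct net_device *bond_dev)
{
	struct bonding *bond = netdev_priv(bond_dev);
	struct slave *slave;
	int i;

	/* Caller must hold rtnl, so the slave list can't change under us
	 * and no bond->lock is needed. */
	ASSERT_RTNL();

	bond_for_each_slave(bond, slave, i)
		if (IS_UP(slave->dev))
			slave_disable_netpoll(slave);
}
```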
>
>Signed-off-by: Ding Tianhong <dingtianhong@huawei.com>
>
>---
>drivers/net/bonding/bond_main.c | 15 +++------------
> 1 file changed, 3 insertions(+), 12 deletions(-)
>
>diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
>index 07f257d4..5eb75ef 100644
>--- a/drivers/net/bonding/bond_main.c
>+++ b/drivers/net/bonding/bond_main.c
>@@ -1249,8 +1249,9 @@ static void bond_poll_controller(struct net_device *bond_dev)
> {
> }
>
>-static void __bond_netpoll_cleanup(struct bonding *bond)
>+static void bond_netpoll_cleanup(struct net_device *bond_dev)
> {
>+ struct bonding *bond = netdev_priv(bond_dev);
> struct slave *slave;
> int i;
>
>@@ -1258,14 +1259,6 @@ static void __bond_netpoll_cleanup(struct bonding *bond)
> if (IS_UP(slave->dev))
> slave_disable_netpoll(slave);
> }
>-static void bond_netpoll_cleanup(struct net_device *bond_dev)
>-{
>- struct bonding *bond = netdev_priv(bond_dev);
>-
>- read_lock(&bond->lock);
>- __bond_netpoll_cleanup(bond);
>- read_unlock(&bond->lock);
>-}
>
> static int bond_netpoll_setup(struct net_device *dev, struct netpoll_info *ni, gfp_t gfp)
> {
>@@ -1273,15 +1266,13 @@ static int bond_netpoll_setup(struct net_device *dev, struct netpoll_info *ni, g
> struct slave *slave;
> int i, err = 0;
>
>- read_lock(&bond->lock);
> bond_for_each_slave(bond, slave, i) {
> err = slave_enable_netpoll(slave);
> if (err) {
>- __bond_netpoll_cleanup(bond);
>+ bond_netpoll_cleanup(dev);
> break;
> }
> }
>- read_unlock(&bond->lock);
> return err;
> }
>
Thread overview: 4+ messages
2013-07-20 7:23 [PATCH 1/4] bonding: don't call slave_xxx_netpoll under spinlocks Ding Tianhong
2013-07-20 10:38 ` Veaceslav Falico [this message]
2013-07-22 0:40 ` [1/4] " Ding Tianhong
2013-07-22 22:48 ` [PATCH 1/4] " David Miller