From: Javier Cardona <javier@cozybit.com>
To: Johannes Berg <johannes@sipsolutions.net>
Cc: "John W. Linville" <linville@tuxdriver.com>,
linux-wireless <linux-wireless@vger.kernel.org>
Subject: Re: [PATCH] mac80211: fix and simplify mesh locking
Date: Tue, 17 May 2011 16:05:29 -0700
Message-ID: <BANLkTikyymbnXHANmB3N0ecJWdkY9xCG=Q@mail.gmail.com>
In-Reply-To: <1305364803.3437.10.camel@jlt3.sipsolutions.net>
On Sat, May 14, 2011 at 2:20 AM, Johannes Berg
<johannes@sipsolutions.net> wrote:
> On Sat, 2011-05-14 at 11:00 +0200, Johannes Berg wrote:
>> From: Johannes Berg <johannes.berg@intel.com>
>>
>> The locking in mesh_{mpath,mpp}_table_grow not only
>> has an rcu_read_unlock() missing, it's also racy
>> (though really only technically since it's invoked
>> from a single function only) since it obtains the
>> new size of the table without any locking, so two
>> invocations of the function could attempt the same
>> resize.
>
> Actually, it _can_ happen, if you have multiple mesh interfaces.
I'm seeing the trace below after trying your patch, probably because
the allocations in mesh_table_alloc() can block. In the past I had
tried to allocate the table before entering the critical section. If
that is not possible because of the race condition you mention, then I
guess we'll have to make those allocations GFP_ATOMIC?
[ 363.767523] BUG: scheduling while atomic: kworker/u:2/621/0x10000200
[ 363.768239] 3 locks held by kworker/u:2/621:
[ 363.768516] #0: (name){.+.+.+}, at: [<c10429e7>]
process_one_work+0x17c/0x31d
[ 363.769605] #1: ((&sdata->work)){+.+.+.}, at: [<c10429e7>]
process_one_work+0x17c/0x31d
[ 363.770149] #2: (pathtbl_resize_lock){++.-+.}, at: [<c8fb01ce>]
mesh_mpath_table_grow+0xf/0x5c [mac80211]
[ 363.771122] Modules linked in: mac80211_hwsim mac80211
[ 363.771824] Pid: 621, comm: kworker/u:2 Not tainted 2.6.39-rc7-wl+ #399
[ 363.772304] Call Trace:
[ 363.772611] [<c8faf69f>] ? mesh_table_alloc+0x25/0xc2 [mac80211]
[ 363.772969] [<c102c049>] __schedule_bug+0x5e/0x65
[ 363.773350] [<c14672af>] schedule+0x68/0x699
[ 363.773608] [<c1058328>] ? __lock_acquire+0xab3/0xb59
[ 363.773900] [<c105694e>] ? valid_state+0x1a/0x13d
[ 363.774244] [<c1056c10>] ? mark_lock+0x19f/0x1d9
[ 363.774527] [<c105720b>] ? check_usage_forwards+0x68/0x68
[ 363.774873] [<c1056c8d>] ? mark_held_locks+0x43/0x5b
[ 363.775303] [<c8faf69f>] ? mesh_table_alloc+0x25/0xc2 [mac80211]
[ 363.775654] [<c102e544>] __cond_resched+0x16/0x26
[ 363.775918] [<c1467a92>] _cond_resched+0x1d/0x28
Javier
--
Javier Cardona
cozybit Inc.
http://www.cozybit.com
Thread overview: 7+ messages
2011-05-14 9:00 [PATCH] mac80211: fix and simplify mesh locking Johannes Berg
2011-05-14 9:20 ` Johannes Berg
2011-05-17 23:05 ` Javier Cardona [this message]
2011-05-17 23:13 ` [PATCH] mac80211: Don't sleep when growing the mesh path Javier Cardona
2011-05-18 22:45 ` [PATCH] mac80211: fix and simplify mesh locking Johannes Berg
2011-05-19 1:40 ` Javier Cardona
2011-05-19 4:14 ` Johannes Berg