* [PATCH RESEND 1/3] net: Fix possible race in peernet2id_alloc()
@ 2017-12-29 7:29 Kirill Tkhai
2017-12-29 7:29 ` [PATCH RESEND 2/3] net: Add BUG_ON() to get_net() Kirill Tkhai
` (2 more replies)
0 siblings, 3 replies; 5+ messages in thread
From: Kirill Tkhai @ 2017-12-29 7:29 UTC (permalink / raw)
To: netdev, davem; +Cc: eric.dumazet, ktkhai, ebiederm
peernet2id_alloc() is racy without rtnl_lock(), as atomic_read(&peer->count)
under net->nsid_lock does not guarantee that peer is alive:
rcu_read_lock()
peernet2id_alloc() ..
spin_lock_bh(&net->nsid_lock) ..
atomic_read(&peer->count) == 1 ..
.. put_net()
.. cleanup_net()
.. for_each_net(tmp)
.. spin_lock_bh(&tmp->nsid_lock)
.. __peernet2id(tmp, net) == -1
.. ..
.. ..
__peernet2id_alloc(alloc == true) ..
.. ..
rcu_read_unlock() ..
.. synchronize_rcu()
.. kmem_cache_free(net)
After the above situation, net::netns_ids contains an id pointing to freed memory,
and any further dereference by that id will operate on the freed memory.
Currently, peernet2id_alloc() is used under rtnl_lock() everywhere except
ovs_vport_cmd_fill_info(), so this race can't occur. But peernet2id_alloc()
is a generic interface, and it is better to fix it before someone really starts
to use it in the wrong context.
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
net/core/net_namespace.c | 23 +++++++++++++++++++----
1 file changed, 19 insertions(+), 4 deletions(-)
diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
index 60a71be75aea..6a4eab438221 100644
--- a/net/core/net_namespace.c
+++ b/net/core/net_namespace.c
@@ -221,17 +221,32 @@ static void rtnl_net_notifyid(struct net *net, int cmd, int id);
*/
int peernet2id_alloc(struct net *net, struct net *peer)
{
- bool alloc;
+ bool alloc = false, alive = false;
int id;
- if (atomic_read(&net->count) == 0)
- return NETNSA_NSID_NOT_ASSIGNED;
spin_lock_bh(&net->nsid_lock);
- alloc = atomic_read(&peer->count) == 0 ? false : true;
+ /* Spinlock guarantees we never hash a peer to net->netns_ids
+ * after idr_destroy(&net->netns_ids) occurs in cleanup_net().
+ */
+ if (atomic_read(&net->count) == 0) {
+ id = NETNSA_NSID_NOT_ASSIGNED;
+ goto unlock;
+ }
+ /*
+ * When peer is obtained from RCU lists, we may race with
+ * its cleanup. Check whether it's alive, and this guarantees
+ * we never hash a peer back to net->netns_ids, after it has
+ * just been idr_remove()'d from there in cleanup_net().
+ */
+ if (maybe_get_net(peer))
+ alive = alloc = true;
id = __peernet2id_alloc(net, peer, &alloc);
+unlock:
spin_unlock_bh(&net->nsid_lock);
if (alloc && id >= 0)
rtnl_net_notifyid(net, RTM_NEWNSID, id);
+ if (alive)
+ put_net(peer);
return id;
}
EXPORT_SYMBOL_GPL(peernet2id_alloc);
* [PATCH RESEND 2/3] net: Add BUG_ON() to get_net()
2017-12-29 7:29 [PATCH RESEND 1/3] net: Fix possible race in peernet2id_alloc() Kirill Tkhai
@ 2017-12-29 7:29 ` Kirill Tkhai
2017-12-29 7:30 ` [PATCH RESEND 3/3] net: Remove spinlock from get_net_ns_by_id() Kirill Tkhai
2017-12-29 18:18 ` [PATCH RESEND 1/3] net: Fix possible race in peernet2id_alloc() Eric W. Biederman
2 siblings, 0 replies; 5+ messages in thread
From: Kirill Tkhai @ 2017-12-29 7:29 UTC (permalink / raw)
To: netdev, davem; +Cc: eric.dumazet, ktkhai, ebiederm
Since people may mistakenly obtain a net that is being destroyed
from net_namespace_list or from net::netns_ids
without checking its net::count, let's protect
against such situations and insert a BUG_ON() to stop
execution at that point.
A panic is better than memory corruption and undefined
behavior.
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
include/net/net_namespace.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/net/net_namespace.h b/include/net/net_namespace.h
index 10f99dafd5ac..ff0e47471d5b 100644
--- a/include/net/net_namespace.h
+++ b/include/net/net_namespace.h
@@ -195,7 +195,7 @@ void __put_net(struct net *net);
static inline struct net *get_net(struct net *net)
{
- atomic_inc(&net->count);
+ BUG_ON(atomic_inc_return(&net->count) <= 1);
return net;
}
* [PATCH RESEND 3/3] net: Remove spinlock from get_net_ns_by_id()
2017-12-29 7:29 [PATCH RESEND 1/3] net: Fix possible race in peernet2id_alloc() Kirill Tkhai
2017-12-29 7:29 ` [PATCH RESEND 2/3] net: Add BUG_ON() to get_net() Kirill Tkhai
@ 2017-12-29 7:30 ` Kirill Tkhai
2017-12-29 18:18 ` [PATCH RESEND 1/3] net: Fix possible race in peernet2id_alloc() Eric W. Biederman
2 siblings, 0 replies; 5+ messages in thread
From: Kirill Tkhai @ 2017-12-29 7:30 UTC (permalink / raw)
To: netdev, davem; +Cc: eric.dumazet, ktkhai, ebiederm
idr_find() is safe under rcu_read_lock(), and
maybe_get_net() guarantees that net is alive.
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
net/core/net_namespace.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
index 6a4eab438221..a675f35a18ff 100644
--- a/net/core/net_namespace.c
+++ b/net/core/net_namespace.c
@@ -279,11 +279,9 @@ struct net *get_net_ns_by_id(struct net *net, int id)
return NULL;
rcu_read_lock();
- spin_lock_bh(&net->nsid_lock);
peer = idr_find(&net->netns_ids, id);
if (peer)
peer = maybe_get_net(peer);
- spin_unlock_bh(&net->nsid_lock);
rcu_read_unlock();
return peer;
* Re: [PATCH RESEND 1/3] net: Fix possible race in peernet2id_alloc()
2017-12-29 7:29 [PATCH RESEND 1/3] net: Fix possible race in peernet2id_alloc() Kirill Tkhai
2017-12-29 7:29 ` [PATCH RESEND 2/3] net: Add BUG_ON() to get_net() Kirill Tkhai
2017-12-29 7:30 ` [PATCH RESEND 3/3] net: Remove spinlock from get_net_ns_by_id() Kirill Tkhai
@ 2017-12-29 18:18 ` Eric W. Biederman
2017-12-30 18:04 ` Kirill Tkhai
2 siblings, 1 reply; 5+ messages in thread
From: Eric W. Biederman @ 2017-12-29 18:18 UTC (permalink / raw)
To: Kirill Tkhai; +Cc: netdev, davem, eric.dumazet
Kirill Tkhai <ktkhai@virtuozzo.com> writes:
> peernet2id_alloc() is racy without rtnl_lock(), as atomic_read(&peer->count)
> under net->nsid_lock does not guarantee that peer is alive:
>
> rcu_read_lock()
> peernet2id_alloc() ..
> spin_lock_bh(&net->nsid_lock) ..
> atomic_read(&peer->count) == 1 ..
> .. put_net()
> .. cleanup_net()
> .. for_each_net(tmp)
> .. spin_lock_bh(&tmp->nsid_lock)
> .. __peernet2id(tmp, net) == -1
> .. ..
> .. ..
> __peernet2id_alloc(alloc == true) ..
> .. ..
> rcu_read_unlock() ..
> .. synchronize_rcu()
> .. kmem_cache_free(net)
>
> After the above situation, net::netns_ids contains an id pointing to freed memory,
> and any further dereference by that id will operate on the freed memory.
>
> Currently, peernet2id_alloc() is used under rtnl_lock() everywhere except
> ovs_vport_cmd_fill_info(), so this race can't occur. But peernet2id_alloc()
> is a generic interface, and it is better to fix it before someone really starts
> to use it in the wrong context.
Nacked-by: "Eric W. Biederman" <ebiederm@xmission.com>
I have already made a clear objection to the first unnecessary and
confusing hunk. Simply resending the muddle headed code doesn't make it
better.
Eric
> Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
> ---
> net/core/net_namespace.c | 23 +++++++++++++++++++----
> 1 file changed, 19 insertions(+), 4 deletions(-)
>
> diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
> index 60a71be75aea..6a4eab438221 100644
> --- a/net/core/net_namespace.c
> +++ b/net/core/net_namespace.c
> @@ -221,17 +221,32 @@ static void rtnl_net_notifyid(struct net *net, int cmd, int id);
> */
> int peernet2id_alloc(struct net *net, struct net *peer)
> {
> - bool alloc;
> + bool alloc = false, alive = false;
> int id;
>
> - if (atomic_read(&net->count) == 0)
> - return NETNSA_NSID_NOT_ASSIGNED;
> spin_lock_bh(&net->nsid_lock);
> - alloc = atomic_read(&peer->count) == 0 ? false : true;
> + /* Spinlock guarantees we never hash a peer to net->netns_ids
> + * after idr_destroy(&net->netns_ids) occurs in cleanup_net().
> + */
> + if (atomic_read(&net->count) == 0) {
> + id = NETNSA_NSID_NOT_ASSIGNED;
> + goto unlock;
> + }
> + /*
> + * When peer is obtained from RCU lists, we may race with
> + * its cleanup. Check whether it's alive, and this guarantees
> + * we never hash a peer back to net->netns_ids, after it has
> + * just been idr_remove()'d from there in cleanup_net().
> + */
> + if (maybe_get_net(peer))
> + alive = alloc = true;
> id = __peernet2id_alloc(net, peer, &alloc);
> +unlock:
> spin_unlock_bh(&net->nsid_lock);
> if (alloc && id >= 0)
> rtnl_net_notifyid(net, RTM_NEWNSID, id);
> + if (alive)
> + put_net(peer);
> return id;
> }
> EXPORT_SYMBOL_GPL(peernet2id_alloc);
* Re: [PATCH RESEND 1/3] net: Fix possible race in peernet2id_alloc()
2017-12-29 18:18 ` [PATCH RESEND 1/3] net: Fix possible race in peernet2id_alloc() Eric W. Biederman
@ 2017-12-30 18:04 ` Kirill Tkhai
0 siblings, 0 replies; 5+ messages in thread
From: Kirill Tkhai @ 2017-12-30 18:04 UTC (permalink / raw)
To: Eric W. Biederman; +Cc: netdev, davem, eric.dumazet
On 29.12.2017 21:18, Eric W. Biederman wrote:
> Kirill Tkhai <ktkhai@virtuozzo.com> writes:
>
>> peernet2id_alloc() is racy without rtnl_lock(), as atomic_read(&peer->count)
>> under net->nsid_lock does not guarantee that peer is alive:
>>
>> rcu_read_lock()
>> peernet2id_alloc() ..
>> spin_lock_bh(&net->nsid_lock) ..
>> atomic_read(&peer->count) == 1 ..
>> .. put_net()
>> .. cleanup_net()
>> .. for_each_net(tmp)
>> .. spin_lock_bh(&tmp->nsid_lock)
>> .. __peernet2id(tmp, net) == -1
>> .. ..
>> .. ..
>> __peernet2id_alloc(alloc == true) ..
>> .. ..
>> rcu_read_unlock() ..
>> .. synchronize_rcu()
>> .. kmem_cache_free(net)
>>
>> After the above situation, net::netns_ids contains an id pointing to freed memory,
>> and any further dereference by that id will operate on the freed memory.
>>
>> Currently, peernet2id_alloc() is used under rtnl_lock() everywhere except
>> ovs_vport_cmd_fill_info(), so this race can't occur. But peernet2id_alloc()
>> is a generic interface, and it is better to fix it before someone really starts
>> to use it in the wrong context.
>
> Nacked-by: "Eric W. Biederman" <ebiederm@xmission.com>
>
> I have already made a clear objection to the first unnecessary and
> confusing hunk. Simply resending the muddle headed code doesn't make it
> better.
You provided comments on my changes and asked a couple of questions. I replied
to your questions and explained why the first hunk seemed important to me. Since
there were questions from you, I interpreted the conversation as an ongoing
discussion. Later there was no answer from you, the patchset status became
unclear to me, and I wrote about that. I had no aim to disappoint you or to
ignore your position.
Thank you for the reply. Now your position is clear to me. I'll remove the first
hunk and resend the changed patchset as you suggested.
Kirill
>> Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
>> ---
>> net/core/net_namespace.c | 23 +++++++++++++++++++----
>> 1 file changed, 19 insertions(+), 4 deletions(-)
>>
>> diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
>> index 60a71be75aea..6a4eab438221 100644
>> --- a/net/core/net_namespace.c
>> +++ b/net/core/net_namespace.c
>> @@ -221,17 +221,32 @@ static void rtnl_net_notifyid(struct net *net, int cmd, int id);
>> */
>> int peernet2id_alloc(struct net *net, struct net *peer)
>> {
>> - bool alloc;
>> + bool alloc = false, alive = false;
>> int id;
>>
>> - if (atomic_read(&net->count) == 0)
>> - return NETNSA_NSID_NOT_ASSIGNED;
>> spin_lock_bh(&net->nsid_lock);
>> - alloc = atomic_read(&peer->count) == 0 ? false : true;
>> + /* Spinlock guarantees we never hash a peer to net->netns_ids
>> + * after idr_destroy(&net->netns_ids) occurs in cleanup_net().
>> + */
>> + if (atomic_read(&net->count) == 0) {
>> + id = NETNSA_NSID_NOT_ASSIGNED;
>> + goto unlock;
>> + }
>> + /*
>> + * When peer is obtained from RCU lists, we may race with
>> + * its cleanup. Check whether it's alive, and this guarantees
>> + * we never hash a peer back to net->netns_ids, after it has
>> + * just been idr_remove()'d from there in cleanup_net().
>> + */
>> + if (maybe_get_net(peer))
>> + alive = alloc = true;
>> id = __peernet2id_alloc(net, peer, &alloc);
>> +unlock:
>> spin_unlock_bh(&net->nsid_lock);
>> if (alloc && id >= 0)
>> rtnl_net_notifyid(net, RTM_NEWNSID, id);
>> + if (alive)
>> + put_net(peer);
>> return id;
>> }
>> EXPORT_SYMBOL_GPL(peernet2id_alloc);