* [PATCH] Reduce uidhash lock hold time when lookup succeeds
@ 2011-02-17 23:52 Matt Helsley
From: Matt Helsley @ 2011-02-17 23:52 UTC (permalink / raw)
To: linux-kernel
Cc: Matt Helsley, David Howells, Pavel Emelyanov, Alexey Dobriyan,
Serge E. Hallyn, containers
When the lookup succeeds we don't need the "new" user struct, which hasn't
yet been linked into the uidhash. So we can immediately drop the lock and
then free "new", rather than freeing it with the lock held.
Signed-off-by: Matt Helsley <matthltc@us.ibm.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Pavel Emelyanov <xemul@parallels.com>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: "Serge E. Hallyn" <serge@hallyn.com>
Cc: containers@lists.linux-foundation.org
---
kernel/user.c | 12 +++++++-----
1 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/kernel/user.c b/kernel/user.c
index 5c598ca..4ea8e58 100644
--- a/kernel/user.c
+++ b/kernel/user.c
@@ -157,16 +157,18 @@ struct user_struct *alloc_uid(struct user_namespace *ns, uid_t uid)
*/
spin_lock_irq(&uidhash_lock);
up = uid_hash_find(uid, hashent);
- if (up) {
+ if (!up) {
+ uid_hash_insert(new, hashent);
+ up = new;
+ }
+ spin_unlock_irq(&uidhash_lock);
+
+ if (up != new) {
put_user_ns(ns);
key_put(new->uid_keyring);
key_put(new->session_keyring);
kmem_cache_free(uid_cachep, new);
- } else {
- uid_hash_insert(new, hashent);
- up = new;
}
- spin_unlock_irq(&uidhash_lock);
}
return up;
--
1.6.3.3
* Re: [PATCH] Reduce uidhash lock hold time when lookup succeeds
From: Serge E. Hallyn @ 2011-02-18 18:25 UTC (permalink / raw)
To: Matt Helsley
Cc: linux-kernel, Pavel Emelyanov, containers, David Howells,
Alexey Dobriyan
Quoting Matt Helsley (matthltc@us.ibm.com):
> When the lookup succeeds we don't need the "new" user struct, which hasn't
> yet been linked into the uidhash. So we can immediately drop the lock and
> then free "new", rather than freeing it with the lock held.
>
> Signed-off-by: Matt Helsley <matthltc@us.ibm.com>
> Cc: David Howells <dhowells@redhat.com>
> Cc: Pavel Emelyanov <xemul@parallels.com>
> Cc: Alexey Dobriyan <adobriyan@gmail.com>
> Cc: "Serge E. Hallyn" <serge@hallyn.com>
Acked-by: Serge E. Hallyn <serge@hallyn.com>
And might I say that the label 'out_unlock' in that function is
sadly named :)
> Cc: containers@lists.linux-foundation.org
> ---
> kernel/user.c | 12 +++++++-----
> 1 files changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/user.c b/kernel/user.c
> index 5c598ca..4ea8e58 100644
> --- a/kernel/user.c
> +++ b/kernel/user.c
> @@ -157,16 +157,18 @@ struct user_struct *alloc_uid(struct user_namespace *ns, uid_t uid)
> */
> spin_lock_irq(&uidhash_lock);
> up = uid_hash_find(uid, hashent);
> - if (up) {
> + if (!up) {
> + uid_hash_insert(new, hashent);
> + up = new;
> + }
> + spin_unlock_irq(&uidhash_lock);
> +
> + if (up != new) {
> put_user_ns(ns);
> key_put(new->uid_keyring);
> key_put(new->session_keyring);
> kmem_cache_free(uid_cachep, new);
> - } else {
> - uid_hash_insert(new, hashent);
> - up = new;
> }
> - spin_unlock_irq(&uidhash_lock);
> }
>
> return up;
> --
> 1.6.3.3
>