[PATCH] Reduce cpuset.c write_lock_irq() to read_lock()
From: Paul Menage @ 2007-05-24 1:23 UTC (permalink / raw)
To: pj, akpm; +Cc: linux-kernel, menage
cpuset.c:update_nodemask() uses a write_lock_irq() on tasklist_lock to
block concurrent forks; a read_lock() suffices and is less intrusive.
Signed-off-by: Paul Menage <menage@google.com>
---
kernel/cpuset.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
Index: scratch-2.6.22-rc1-mm1/kernel/cpuset.c
===================================================================
--- scratch-2.6.22-rc1-mm1.orig/kernel/cpuset.c
+++ scratch-2.6.22-rc1-mm1/kernel/cpuset.c
@@ -923,10 +923,10 @@ static int update_nodemask(struct cpuset
mmarray = kmalloc(ntasks * sizeof(*mmarray), GFP_KERNEL);
if (!mmarray)
goto done;
- write_lock_irq(&tasklist_lock); /* block fork */
+ read_lock(&tasklist_lock); /* block fork */
if (atomic_read(&cs->count) <= ntasks)
break; /* got enough */
- write_unlock_irq(&tasklist_lock); /* try again */
+ read_unlock(&tasklist_lock); /* try again */
kfree(mmarray);
}
@@ -948,7 +948,7 @@ static int update_nodemask(struct cpuset
continue;
mmarray[n++] = mm;
} while_each_thread(g, p);
- write_unlock_irq(&tasklist_lock);
+ read_unlock(&tasklist_lock);
/*
* Now that we've dropped the tasklist spinlock, we can
Re: [PATCH] Reduce cpuset.c write_lock_irq() to read_lock()
From: Paul Jackson @ 2007-05-24 1:35 UTC (permalink / raw)
To: Paul Menage; +Cc: akpm, linux-kernel, menage
Paul M wrote:
> cpuset.c:update_nodemask() uses a write_lock_irq() on tasklist_lock to
> block concurrent forks; a read_lock() suffices and is less intrusive.
Seems reasonable to me - thanks.
> - write_lock_irq(&tasklist_lock); /* block fork */
> + read_lock(&tasklist_lock); /* block fork */
> if (atomic_read(&cs->count) <= ntasks)
> break; /* got enough */
> - write_unlock_irq(&tasklist_lock); /* try again */
> + read_unlock(&tasklist_lock); /* try again */
Too bad you didn't keep the nicely aligned comments aligned ;).
--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj@sgi.com> 1.925.600.0401