public inbox for linux-kernel@vger.kernel.org
* [PATCH] semaphore fairness patch against test11-pre6
@ 2000-11-18  1:01 David Mansfield
  2000-11-18  9:45 ` Christoph Rohland
  0 siblings, 1 reply; 7+ messages in thread
From: David Mansfield @ 2000-11-18  1:01 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: lkml

[-- Attachment #1: Type: text/plain, Size: 1780 bytes --]

Hi Linus et al,

I've applied your semaphore fairness patch (slightly fixed) below.  It
fixes my original bug report of vmstat, ps, etc. stalling while waiting
for the mmap_sem.  I can now run my memory 'hog' processes and actually
see vmstat update every second even under heavy memory pressure.  More
importantly, ps works, so I can find the pid to kill.  I'm no expert at
checking for races, but I went over all (I think) the two-process cases
as carefully as I could, and they look ok to me, but what do I know.  I
know someone else reported the patch didn't fix the problem, but
perhaps that's some other issue.

I ran many 'ps' instances (20?) in the background to simulate
many-process contention, and everything still worked fine.  I've run
some other stress tests too (dbench, my own I/O throughput test, etc.)
and so far all's well (famous last words).

If you can find the time to check this out more completely, I recommend
it, because being able to see accurate vmstat numbers under system load
is a great improvement.  I hope the other side effects are beneficial
as well :-)

The one change to your patch: you had 'if (sleepers > 1)' when
obviously you meant 'if (sem->sleepers > 1)'...

Here's your patch again (also attached in case of mangling):

--- linux/arch/i386/kernel/semaphore.c	2000/11/16 19:58:26	1.3
+++ linux/arch/i386/kernel/semaphore.c	2000/11/17 23:12:48
@@ -64,6 +64,14 @@
 
 	spin_lock_irq(&semaphore_lock);
 	sem->sleepers++;
+
+	/*
+	 * Are there other people waiting for this?
+	 * They get to go first.
+	 */
+	if (sem->sleepers > 1)
+		goto inside;
+
 	for (;;) {
 		int sleepers = sem->sleepers;
 
@@ -76,6 +84,7 @@
 			break;
 		}
 		sem->sleepers = 1;	/* us - see -1 above */
+inside:
 		spin_unlock_irq(&semaphore_lock);
 
 		schedule();

[-- Attachment #2: sem-patch.test11-pre6 --]
[-- Type: text/plain, Size: 747 bytes --]

Index: linux/arch/i386/kernel/semaphore.c
===================================================================
RCS file: /home/kernel/cvs_master/linux/arch/i386/kernel/semaphore.c,v
retrieving revision 1.3
diff -u -r1.3 semaphore.c
--- linux/arch/i386/kernel/semaphore.c	2000/11/16 19:58:26	1.3
+++ linux/arch/i386/kernel/semaphore.c	2000/11/17 23:12:48
@@ -64,6 +64,14 @@
 
 	spin_lock_irq(&semaphore_lock);
 	sem->sleepers++;
+
+	/*
+	 * Are there other people waiting for this?
+	 * They get to go first.
+	 */
+	if (sem->sleepers > 1)
+		goto inside;
+
 	for (;;) {
 		int sleepers = sem->sleepers;
 
@@ -76,6 +84,7 @@
 			break;
 		}
 		sem->sleepers = 1;	/* us - see -1 above */
+inside:
 		spin_unlock_irq(&semaphore_lock);
 
 		schedule();


end of thread, other threads:[~2000-11-20 14:10 UTC | newest]

Thread overview: 7+ messages
-- links below jump to the message on this page --
2000-11-18  1:01 [PATCH] semaphore fairness patch against test11-pre6 David Mansfield
2000-11-18  9:45 ` Christoph Rohland
2000-11-19  1:12   ` Andrew Morton
2000-11-19  1:47     ` Linus Torvalds
2000-11-19 12:51       ` Andrew Morton
2000-11-19 18:46         ` Linus Torvalds
2000-11-20 13:39           ` Andrew Morton
