public inbox for linux-kernel@vger.kernel.org
* [PATCH 0/3] ipc/sem.c: Optimization for reducing spinlock contention
@ 2010-04-28 19:06 Manfred Spraul
  2010-04-28 19:06 ` [PATCH 1/2] ipc/sem.c: Optimize update_queue() for bulk wakeup calls Manfred Spraul
  0 siblings, 1 reply; 7+ messages in thread
From: Manfred Spraul @ 2010-04-28 19:06 UTC (permalink / raw)
  To: LKML, Andrew Morton
  Cc: Chris Mason, Zach Brown, Jens Axboe, Nick Piggin, Manfred Spraul

Hi,

I've cleaned up the patches I sent a week ago:
- they now pass checkpatch.pl
- some comments updated/added
- a bug with semctl(,,SETALL,) is fixed
- tested against LTP

Andrew: Could you add them to the next -mm?

The following series of patches tries to fix the spinlock contention
reported by Chris Mason: his benchmark exposes problems in the current
code:

- In the worst case, the algorithm used by update_queue() is O(N^2).
  Bulk wake-up calls can enter this worst case.
  The patch series fixes that.
  Note that the benchmark app doesn't actually hit this problem; it
  should be fixed anyway, since real-world apps might do the wake-ups
  in an order other than perfect FIFO.

- The part of the code that runs within the semaphore array spinlock
  is significantly larger than necessary.
  The patch series fixes that. This change is responsible for the
  main improvement.

- The cacheline with the spinlock is also used for a variable that
  is read in the hot path (sem_base) and for a variable that is
  unnecessarily written to multiple times (sem_otime).
  The last step of the series cacheline-aligns the spinlock.

--
	Manfred

^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH 1/2] ipc/sem.c: Optimize update_queue() for bulk wakeup calls
  2010-04-28 19:06 [PATCH 0/3] ipc/sem.c: Optimization for reducing spinlock contention Manfred Spraul
@ 2010-04-28 19:06 ` Manfred Spraul
  2010-04-28 19:06   ` [PATCH 2/2] ipc/sem.c: move wake_up_process out of the spinlock section Manfred Spraul
  0 siblings, 1 reply; 7+ messages in thread
From: Manfred Spraul @ 2010-04-28 19:06 UTC (permalink / raw)
  To: LKML, Andrew Morton
  Cc: Chris Mason, Zach Brown, Jens Axboe, Nick Piggin, Manfred Spraul

The SysV semaphore code allows multiple operations on the semaphores
in an array to be performed as one atomic operation.
After a modification, update_queue() checks which of the waiting tasks
can complete.
The algorithm that is used to identify the tasks is O(N^2) in the
worst case. In some cases, the O(N^2) is simple to avoid.

The patch adds detection logic for some of those cases, especially the
case of an array where all sleeping tasks are single-sembuf operations
and a multi-sembuf operation is used to wake up multiple tasks.

A big database application uses that approach.

The patch also fixes wake-ups due to semctl(,,SETALL,) - the initial
version of the patch broke that.

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
---
 ipc/sem.c |  110 +++++++++++++++++++++++++++++++++++++++++++++++++++++-------
 1 files changed, 97 insertions(+), 13 deletions(-)

diff --git a/ipc/sem.c b/ipc/sem.c
index dbef95b..3f6d781 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -434,6 +434,69 @@ static void unlink_queue(struct sem_array *sma, struct sem_queue *q)
 		sma->complex_count--;
 }
 
+/** check_restart(sma, q)
+ * @sma: semaphore array
+ * @q: the operation that just completed
+ *
+ * update_queue is O(N^2) when it restarts scanning the whole queue of
+ * waiting operations. Therefore this function checks if the restart is
+ * really necessary. It is called after a previously waiting operation
+ * was completed.
+ */
+static int check_restart(struct sem_array *sma, struct sem_queue *q)
+{
+	struct sem *curr;
+	struct sem_queue *h;
+
+	/* if the operation didn't modify the array, then no restart */
+	if (q->alter == 0)
+		return 0;
+
+	/* pending complex operations are too difficult to analyse */
+	if (sma->complex_count)
+		return 1;
+
+	/* we were a sleeping complex operation. Too difficult */
+	if (q->nsops > 1)
+		return 1;
+
+	curr = sma->sem_base + q->sops[0].sem_num;
+
+	/* No-one waits on this queue */
+	if (list_empty(&curr->sem_pending))
+		return 0;
+
+	/* the new semaphore value */
+	if (curr->semval) {
+		/* It is impossible that someone waits for the new value:
+		 * - q is a previously sleeping simple operation that
+		 *   altered the array. It must be a decrement, because
+		 *   simple increments never sleep.
+		 * - The value is not 0, thus wait-for-zero won't proceed.
+		 * - If there are older (higher priority) decrements
+		 *   in the queue, then they have observed the original
+		 *   semval value and couldn't proceed. The operation
+ *   decremented the value - thus they won't proceed either.
+		 */
+		BUG_ON(q->sops[0].sem_op >= 0);
+		return 0;
+	}
+	/*
+	 * semval is 0. Check if there are wait-for-zero semops.
+	 * They must be the first entries in the per-semaphore simple queue
+	 */
+	h = list_first_entry(&curr->sem_pending, struct sem_queue, simple_list);
+	BUG_ON(h->nsops != 1);
+	BUG_ON(h->sops[0].sem_num != q->sops[0].sem_num);
+
+	/* Yes, there is a wait-for-zero semop. Restart */
+	if (h->sops[0].sem_op == 0)
+		return 1;
+
+	/* Again - no-one is waiting for the new value. */
+	return 0;
+}
+
 
 /**
  * update_queue(sma, semnum): Look for tasks that can be completed.
@@ -469,7 +532,7 @@ static void update_queue(struct sem_array *sma, int semnum)
 again:
 	walk = pending_list->next;
 	while (walk != pending_list) {
-		int error, alter;
+		int error, restart;
 
 		q = (struct sem_queue *)((char *)walk - offset);
 		walk = walk->next;
@@ -494,22 +557,43 @@ again:
 
 		unlink_queue(sma, q);
 
-		/*
-		 * The next operation that must be checked depends on the type
-		 * of the completed operation:
-		 * - if the operation modified the array, then restart from the
-		 *   head of the queue and check for threads that might be
-		 *   waiting for the new semaphore values.
-		 * - if the operation didn't modify the array, then just
-		 *   continue.
-		 */
-		alter = q->alter;
+		if (error)
+			restart = 0;
+		else
+			restart = check_restart(sma, q);
+
 		wake_up_sem_queue(q, error);
-		if (alter && !error)
+		if (restart)
 			goto again;
 	}
 }
 
+/** do_smart_update(sma, sops, nsops): Optimized update_queue
+ * @sma: semaphore array
+ * @sops: operations that were performed
+ * @nsops: number of operations
+ *
+ * do_smart_update() does the required calls to update_queue, based on the
+ * actual changes that were performed on the semaphore array.
+ */
+void do_smart_update(struct sem_array *sma, struct sembuf *sops, int nsops)
+{
+	int i;
+
+	if (sma->complex_count || sops == NULL) {
+		update_queue(sma, -1);
+		return;
+	}
+
+	for (i = 0; i < nsops; i++) {
+		if (sops[i].sem_op > 0 ||
+			(sops[i].sem_op < 0 &&
+				sma->sem_base[sops[i].sem_num].semval == 0))
+			update_queue(sma, sops[i].sem_num);
+	}
+}
+
+
 /* The following counts are associated to each semaphore:
  *   semncnt        number of tasks waiting on semval being nonzero
  *   semzcnt        number of tasks waiting on semval being zero
@@ -1225,7 +1309,7 @@ SYSCALL_DEFINE4(semtimedop, int, semid, struct sembuf __user *, tsops,
 	error = try_atomic_semop (sma, sops, nsops, un, task_tgid_vnr(current));
 	if (error <= 0) {
 		if (alter && error == 0)
-			update_queue(sma, (nsops == 1) ? sops[0].sem_num : -1);
+			do_smart_update(sma, sops, nsops);
 
 		goto out_unlock_free;
 	}
-- 
1.6.6.1



* [PATCH 2/2] ipc/sem.c: move wake_up_process out of the spinlock section
  2010-04-28 19:06 ` [PATCH 1/2] ipc/sem.c: Optimize update_queue() for bulk wakeup calls Manfred Spraul
@ 2010-04-28 19:06   ` Manfred Spraul
  2010-04-28 19:06     ` [PATCH 3/3] [PATCH] ipc/sem.c: cacheline align the ipc spinlock for semaphores Manfred Spraul
  2010-05-11 21:21     ` [PATCH 2/2] ipc/sem.c: move wake_up_process out of the spinlock section Andrew Morton
  0 siblings, 2 replies; 7+ messages in thread
From: Manfred Spraul @ 2010-04-28 19:06 UTC (permalink / raw)
  To: LKML, Andrew Morton
  Cc: Chris Mason, Zach Brown, Jens Axboe, Nick Piggin, Manfred Spraul

The wake-up part of semtimedop() consists of two steps:
- the right tasks must be identified.
- they must be woken up.

Right now, both steps run while the array spinlock is held.
This patch reorders the code and moves the actual wake_up_process()
behind the point where the spinlock is dropped.

The code also moves setting sem->sem_otime to one place: It does not
make sense to set the last modify time multiple times.

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
---
 ipc/sem.c |  117 +++++++++++++++++++++++++++++++++++++++++++++----------------
 1 files changed, 87 insertions(+), 30 deletions(-)

diff --git a/ipc/sem.c b/ipc/sem.c
index 3f6d781..c263309 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -381,7 +381,6 @@ static int try_atomic_semop (struct sem_array * sma, struct sembuf * sops,
 		sop--;
 	}
 	
-	sma->sem_otime = get_seconds();
 	return 0;
 
 out_of_range:
@@ -404,25 +403,50 @@ undo:
 	return result;
 }
 
-/*
- * Wake up a process waiting on the sem queue with a given error.
- * The queue is invalid (may not be accessed) after the function returns.
+/** wake_up_sem_queue_prepare(q, error): Prepare wake-up
+ * @q: queue entry that must be signaled
+ * @error: Error value for the signal
+ *
+ * Prepare the wake-up of the queue entry q.
  */
-static void wake_up_sem_queue(struct sem_queue *q, int error)
+static void wake_up_sem_queue_prepare(struct list_head *pt,
+				struct sem_queue *q, int error)
 {
-	/*
-	 * Hold preempt off so that we don't get preempted and have the
-	 * wakee busy-wait until we're scheduled back on. We're holding
-	 * locks here so it may not strictly be needed, however if the
-	 * locks become preemptible then this prevents such a problem.
-	 */
-	preempt_disable();
+	if (list_empty(pt)) {
+		/*
+		 * Hold preempt off so that we don't get preempted and have the
+		 * wakee busy-wait until we're scheduled back on.
+		 */
+		preempt_disable();
+	}
 	q->status = IN_WAKEUP;
-	wake_up_process(q->sleeper);
-	/* hands-off: q can disappear immediately after writing q->status. */
-	smp_wmb();
-	q->status = error;
-	preempt_enable();
+	q->pid = error;
+
+	list_add_tail(&q->simple_list, pt);
+}
+
+/** wake_up_sem_queue_do(pt): Do the actual wake-up
+ * @pt: list of tasks to be woken up
+ *
+ * Do the actual wake-up.
+ * The function is called without any locks held, thus the semaphore array
+ * could be destroyed already and the tasks can disappear as soon as the
+ * status is set to the actual return code.
+ */
+static void wake_up_sem_queue_do(struct list_head *pt)
+{
+	struct sem_queue *q, *t;
+	int did_something;
+
+	did_something = !list_empty(pt);
+	list_for_each_entry_safe(q, t, pt, simple_list) {
+		wake_up_process(q->sleeper);
+		/* q can disappear immediately after writing q->status. */
+		smp_wmb();
+		q->status = q->pid;
+	}
+	if (did_something)
+		preempt_enable();
 }
 
 static void unlink_queue(struct sem_array *sma, struct sem_queue *q)
@@ -502,17 +526,22 @@ static int check_restart(struct sem_array *sma, struct sem_queue *q)
  * update_queue(sma, semnum): Look for tasks that can be completed.
  * @sma: semaphore array.
  * @semnum: semaphore that was modified.
+ * @pt: list head for the tasks that must be woken up.
  *
  * update_queue must be called after a semaphore in a semaphore array
 * was modified. If multiple semaphores were modified, then @semnum
  * must be set to -1.
+ * The tasks that must be woken up are added to @pt. The return code
+ * is stored in q->pid.
+ * The function returns 1 if at least one array variable was modified.
  */
-static void update_queue(struct sem_array *sma, int semnum)
+static int update_queue(struct sem_array *sma, int semnum, struct list_head *pt)
 {
 	struct sem_queue *q;
 	struct list_head *walk;
 	struct list_head *pending_list;
 	int offset;
+	int retval;
 
 	/* if there are complex operations around, then knowing the semaphore
 	 * that was modified doesn't help us. Assume that multiple semaphores
@@ -556,41 +585,55 @@ again:
 			continue;
 
 		unlink_queue(sma, q);
+		if (q->alter)
+			retval = 1;
 
 		if (error)
 			restart = 0;
 		else
 			restart = check_restart(sma, q);
 
-		wake_up_sem_queue(q, error);
+		wake_up_sem_queue_prepare(pt, q, error);
 		if (restart)
 			goto again;
 	}
+	return retval;
 }
 
-/** do_smart_update(sma, sops, nsops): Optimized update_queue
+/** do_smart_update(sma, sops, nsops, otime, pt): Optimized update_queue
  * @sma: semaphore array
  * @sops: operations that were performed
  * @nsops: number of operations
+ * @otime: force setting otime
+ * @pt: list head of the tasks that must be woken up.
  *
 * do_smart_update() does the required calls to update_queue, based on the
  * actual changes that were performed on the semaphore array.
+ * Note that the function does not do the actual wake-up: the caller is
+ * responsible for calling wake_up_sem_queue_do(@pt).
+ * It is safe to perform this call after dropping all locks.
  */
-void do_smart_update(struct sem_array *sma, struct sembuf *sops, int nsops)
+void do_smart_update(struct sem_array *sma, struct sembuf *sops, int nsops,
+			int otime, struct list_head *pt)
 {
 	int i;
 
 	if (sma->complex_count || sops == NULL) {
-		update_queue(sma, -1);
-		return;
+		if (update_queue(sma, -1, pt))
+			otime = 1;
+		goto done;
 	}
 
 	for (i = 0; i < nsops; i++) {
 		if (sops[i].sem_op > 0 ||
 			(sops[i].sem_op < 0 &&
 				sma->sem_base[sops[i].sem_num].semval == 0))
-			update_queue(sma, sops[i].sem_num);
+			if (update_queue(sma, sops[i].sem_num, pt))
+				otime = 1;
 	}
+done:
+	if (otime)
+		sma->sem_otime = get_seconds();
 }
 
 
@@ -656,6 +699,7 @@ static void freeary(struct ipc_namespace *ns, struct kern_ipc_perm *ipcp)
 	struct sem_undo *un, *tu;
 	struct sem_queue *q, *tq;
 	struct sem_array *sma = container_of(ipcp, struct sem_array, sem_perm);
+	struct list_head tasks;
 
 	/* Free the existing undo structures for this semaphore set.  */
 	assert_spin_locked(&sma->sem_perm.lock);
@@ -669,15 +713,17 @@ static void freeary(struct ipc_namespace *ns, struct kern_ipc_perm *ipcp)
 	}
 
 	/* Wake up all pending processes and let them fail with EIDRM. */
+	INIT_LIST_HEAD(&tasks);
 	list_for_each_entry_safe(q, tq, &sma->sem_pending, list) {
 		unlink_queue(sma, q);
-		wake_up_sem_queue(q, -EIDRM);
+		wake_up_sem_queue_prepare(&tasks, q, -EIDRM);
 	}
 
 	/* Remove the semaphore set from the IDR */
 	sem_rmid(ns, sma);
 	sem_unlock(sma);
 
+	wake_up_sem_queue_do(&tasks);
 	ns->used_sems -= sma->sem_nsems;
 	security_sem_free(sma);
 	ipc_rcu_putref(sma);
@@ -799,11 +845,13 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
 	ushort fast_sem_io[SEMMSL_FAST];
 	ushort* sem_io = fast_sem_io;
 	int nsems;
+	struct list_head tasks;
 
 	sma = sem_lock_check(ns, semid);
 	if (IS_ERR(sma))
 		return PTR_ERR(sma);
 
+	INIT_LIST_HEAD(&tasks);
 	nsems = sma->sem_nsems;
 
 	err = -EACCES;
@@ -891,7 +939,7 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
 		}
 		sma->sem_ctime = get_seconds();
 		/* maybe some queued-up processes were waiting for this */
-		update_queue(sma, -1);
+		do_smart_update(sma, NULL, 0, 0, &tasks);
 		err = 0;
 		goto out_unlock;
 	}
@@ -933,13 +981,15 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
 		curr->sempid = task_tgid_vnr(current);
 		sma->sem_ctime = get_seconds();
 		/* maybe some queued-up processes were waiting for this */
-		update_queue(sma, semnum);
+		do_smart_update(sma, NULL, 0, 0, &tasks);
 		err = 0;
 		goto out_unlock;
 	}
 	}
 out_unlock:
 	sem_unlock(sma);
+	wake_up_sem_queue_do(&tasks);
+
 out_free:
 	if(sem_io != fast_sem_io)
 		ipc_free(sem_io, sizeof(ushort)*nsems);
@@ -1213,6 +1263,7 @@ SYSCALL_DEFINE4(semtimedop, int, semid, struct sembuf __user *, tsops,
 	struct sem_queue queue;
 	unsigned long jiffies_left = 0;
 	struct ipc_namespace *ns;
+	struct list_head tasks;
 
 	ns = current->nsproxy->ipc_ns;
 
@@ -1261,6 +1312,8 @@ SYSCALL_DEFINE4(semtimedop, int, semid, struct sembuf __user *, tsops,
 	} else
 		un = NULL;
 
+	INIT_LIST_HEAD(&tasks);
+
 	sma = sem_lock_check(ns, semid);
 	if (IS_ERR(sma)) {
 		if (un)
@@ -1309,7 +1362,7 @@ SYSCALL_DEFINE4(semtimedop, int, semid, struct sembuf __user *, tsops,
 	error = try_atomic_semop (sma, sops, nsops, un, task_tgid_vnr(current));
 	if (error <= 0) {
 		if (alter && error == 0)
-			do_smart_update(sma, sops, nsops);
+			do_smart_update(sma, sops, nsops, 1, &tasks);
 
 		goto out_unlock_free;
 	}
@@ -1386,6 +1439,8 @@ SYSCALL_DEFINE4(semtimedop, int, semid, struct sembuf __user *, tsops,
 
 out_unlock_free:
 	sem_unlock(sma);
+
+	wake_up_sem_queue_do(&tasks);
 out_free:
 	if(sops != fast_sops)
 		kfree(sops);
@@ -1446,6 +1501,7 @@ void exit_sem(struct task_struct *tsk)
 	for (;;) {
 		struct sem_array *sma;
 		struct sem_undo *un;
+		struct list_head tasks;
 		int semid;
 		int i;
 
@@ -1509,10 +1565,11 @@ void exit_sem(struct task_struct *tsk)
 				semaphore->sempid = task_tgid_vnr(current);
 			}
 		}
-		sma->sem_otime = get_seconds();
 		/* maybe some queued-up processes were waiting for this */
-		update_queue(sma, -1);
+		INIT_LIST_HEAD(&tasks);
+		do_smart_update(sma, NULL, 0, 1, &tasks);
 		sem_unlock(sma);
+		wake_up_sem_queue_do(&tasks);
 
 		call_rcu(&un->rcu, free_un);
 	}
-- 
1.6.6.1



* [PATCH 3/3] [PATCH] ipc/sem.c: cacheline align the ipc spinlock for semaphores
  2010-04-28 19:06   ` [PATCH 2/2] ipc/sem.c: move wake_up_process out of the spinlock section Manfred Spraul
@ 2010-04-28 19:06     ` Manfred Spraul
  2010-05-11 21:21     ` [PATCH 2/2] ipc/sem.c: move wake_up_process out of the spinlock section Andrew Morton
  1 sibling, 0 replies; 7+ messages in thread
From: Manfred Spraul @ 2010-04-28 19:06 UTC (permalink / raw)
  To: LKML, Andrew Morton
  Cc: Chris Mason, Zach Brown, Jens Axboe, Nick Piggin, Manfred Spraul

This patch cacheline-aligns the spinlock for SysV semaphores:
without the patch, the spinlock, sem_otime [written by every
semop that modifies the array] and sem_base [read in the hot path
of try_atomic_semop()] can share a cacheline.

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
---
 include/linux/sem.h |    4 +++-
 1 files changed, 3 insertions(+), 1 deletions(-)

diff --git a/include/linux/sem.h b/include/linux/sem.h
index 8a4adbe..f2961af 100644
--- a/include/linux/sem.h
+++ b/include/linux/sem.h
@@ -79,6 +79,7 @@ struct  seminfo {
 #ifdef __KERNEL__
 #include <asm/atomic.h>
 #include <linux/rcupdate.h>
+#include <linux/cache.h>
 
 struct task_struct;
 
@@ -91,7 +92,8 @@ struct sem {
 
 /* One sem_array data structure for each set of semaphores in the system. */
 struct sem_array {
-	struct kern_ipc_perm	sem_perm;	/* permissions .. see ipc.h */
+	struct kern_ipc_perm	____cacheline_aligned_in_smp
+				sem_perm;	/* permissions .. see ipc.h */
 	time_t			sem_otime;	/* last semop time */
 	time_t			sem_ctime;	/* last change time */
 	struct sem		*sem_base;	/* ptr to first semaphore in array */
-- 
1.6.6.1



* Re: [PATCH 2/2] ipc/sem.c: move wake_up_process out of the spinlock section
  2010-04-28 19:06   ` [PATCH 2/2] ipc/sem.c: move wake_up_process out of the spinlock section Manfred Spraul
  2010-04-28 19:06     ` [PATCH 3/3] [PATCH] ipc/sem.c: cacheline align the ipc spinlock for semaphores Manfred Spraul
@ 2010-05-11 21:21     ` Andrew Morton
  2010-05-12 17:34       ` Manfred Spraul
  1 sibling, 1 reply; 7+ messages in thread
From: Andrew Morton @ 2010-05-11 21:21 UTC (permalink / raw)
  To: Manfred Spraul; +Cc: LKML, Chris Mason, Zach Brown, Jens Axboe, Nick Piggin

On Wed, 28 Apr 2010 21:06:27 +0200
Manfred Spraul <manfred@colorfullife.com> wrote:

> The wake-up part of semtimedop() consists of two steps:
> - the right tasks must be identified.
> - they must be woken up.
> 
> Right now, both steps run while the array spinlock is held.
> This patch reorders the code and moves the actual wake_up_process()
> behind the point where the spinlock is dropped.
> 
> The code also moves setting sem->sem_otime to one place: It does not
> make sense to set the last modify time multiple times.

ipc/sem.c: In function 'update_queue':
ipc/sem.c:545: warning: 'retval' may be used uninitialized in this function

which indeed was a bug.

--- a/ipc/sem.c~ipc-semc-move-wake_up_process-out-of-the-spinlock-section-fix-2
+++ a/ipc/sem.c
@@ -542,7 +542,7 @@ static int update_queue(struct sem_array
 	struct list_head *walk;
 	struct list_head *pending_list;
 	int offset;
-	int retval;
+	int retval = 0;
 
 	/* if there are complex operations around, then knowing the semaphore
 	 * that was modified doesn't help us. Assume that multiple semaphores
_

But I worry that the patch which you sent might not have been the one
which you tested.



* Re: [PATCH 2/2] ipc/sem.c: move wake_up_process out of the spinlock section
  2010-05-11 21:21     ` [PATCH 2/2] ipc/sem.c: move wake_up_process out of the spinlock section Andrew Morton
@ 2010-05-12 17:34       ` Manfred Spraul
  2010-05-12 18:18         ` Manfred Spraul
  0 siblings, 1 reply; 7+ messages in thread
From: Manfred Spraul @ 2010-05-12 17:34 UTC (permalink / raw)
  To: Andrew Morton; +Cc: LKML, Chris Mason, Zach Brown, Jens Axboe, Nick Piggin

On 05/11/2010 11:21 PM, Andrew Morton wrote:
> On Wed, 28 Apr 2010 21:06:27 +0200
> Manfred Spraul<manfred@colorfullife.com>  wrote:
>
>    
>> The wake-up part of semtimedop() consists of two steps:
>> - the right tasks must be identified.
>> - they must be woken up.
>>
>> Right now, both steps run while the array spinlock is held.
>> This patch reorders the code and moves the actual wake_up_process()
>> behind the point where the spinlock is dropped.
>>
>> The code also moves setting sem->sem_otime to one place: It does not
>> make sense to set the last modify time multiple times.
>>      
> ipc/sem.c: In function 'update_queue':
> ipc/sem.c:545: warning: 'retval' may be used uninitialized in this function
>
> which indeed was a bug.
>
>    
Duh - hiding in shame.

The effect would be that e.g. a semctl(SETALL) operation might change
sem_otime. semctl(SETALL) must only change sem_ctime (and sem_otime
only if it causes a wakeup and the woken-up thread modifies the array).


> +++ a/ipc/sem.c
> @@ -542,7 +542,7 @@ static int update_queue(struct sem_array
>   	struct list_head *walk;
>   	struct list_head *pending_list;
>   	int offset;
> -	int retval;
> +	int retval = 0;
>
>   	/* if there are complex operations around, then knowing the semaphore
>   	 * that was modified doesn't help us. Assume that multiple semaphores
> _
>
> But I worry that the patch which you sent might not have been the one
> which you tested.
>    

I think I tested the patch that I sent out.

Thus I'll retest everything (including sem_ctime/sem_otime and SETALL)

The odd thing: My gcc somehow doesn't report the missing initialization :-(

 >>>
[manfred@cores linux-2.6]$ grep 'int update_queue' -A 10 ipc/sem.c
static int update_queue(struct sem_array *sma, int semnum, struct list_head *pt)
{
        struct sem_queue *q;
        struct list_head *walk;
        struct list_head *pending_list;
        int offset;
        int retval;

        /* if there are complex operations around, then knowing the semaphore
         * that was modified doesn't help us. Assume that multiple semaphores
         * were modified.
[manfred@cores linux-2.6]$ touch ipc/sem.o
[manfred@cores linux-2.6]$ make ipc
   CHK     include/linux/version.h
   CHK     include/generated/utsrelease.h
   CALL    scripts/checksyscalls.sh
   LD      ipc/built-in.o
[manfred@cores linux-2.6]$ gcc --version
gcc (GCC) 4.4.3 20100127 (Red Hat 4.4.3-4)
Copyright (C) 2010 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

[manfred@cores linux-2.6]$ cat ipc/.sem.o.cmd
cmd_ipc/sem.o := gcc -Wp,-MD,ipc/.sem.o.d  -nostdinc -isystem 
/usr/lib/gcc/x86_64-redhat-linux/4.4.3/include 
-I/home/manfred/git/linux-2.6/arch/x86/include -Iinclude  -include 
include/generated/autoconf.h -D__KERNEL__ -Wall -Wundef 
-Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common 
-Werror-implicit-function-declaration -Wno-format-security 
-fno-delete-null-pointer-checks -Os -m64 -mtune=generic -mno-red-zone 
-mcmodel=kernel -funit-at-a-time -maccumulate-outgoing-args 
-DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -pipe -Wno-sign-compare 
-fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow 
-Wframe-larger-than=1024 -fno-stack-protector -fno-omit-frame-pointer 
-fno-optimize-sibling-calls -g -Wdeclaration-after-statement 
-Wno-pointer-sign -fno-strict-overflow -fno-dwarf2-cfi-asm 
-fconserve-stack   -D"KBUILD_STR(s)=\#s" 
-D"KBUILD_BASENAME=KBUILD_STR(sem)"  -D"KBUILD_MODNAME=KBUILD_STR(sem)"  
-c -o ipc/sem.o ipc/sem.c

<<<

--
     Manfred


* Re: [PATCH 2/2] ipc/sem.c: move wake_up_process out of the spinlock section
  2010-05-12 17:34       ` Manfred Spraul
@ 2010-05-12 18:18         ` Manfred Spraul
  0 siblings, 0 replies; 7+ messages in thread
From: Manfred Spraul @ 2010-05-12 18:18 UTC (permalink / raw)
  To: Andrew Morton; +Cc: LKML, Chris Mason, Zach Brown, Jens Axboe, Nick Piggin

On 05/12/2010 07:34 PM, Manfred Spraul wrote:
>
> The effect would be that e.g. a semctl(SETALL) operation might change 
> sem_otime.
> semctl(SETALL) must only change sem_ctime (and sem_otime only if it 
> causes a wakeup
> and the woken up thread modifies the array)
>
Just for your information: I'm not sure why semctl(SETALL) changes
sem_ctime at all. According to the Open Group specification, it
shouldn't do that.

On FreeBSD, it doesn't.

Any proposals for how to fix it?
Should I remove the update to sem_ctime?


 > [manfred@cores linux-2.6]$ touch ipc/sem.o
It should have been "touch ipc/sem.c":

 > [manfred@cores linux-2.6]$ touch ipc/sem.c
 > [manfred@cores linux-2.6]$ make ipc
 >   CHK     include/linux/version.h
 >   CHK     include/generated/utsrelease.h
 >  CALL    scripts/checksyscalls.sh
 >  CC      ipc/sem.o
 >  LD      ipc/built-in.o

--
     Manfred


end of thread, other threads:[~2010-05-12 18:17 UTC | newest]

Thread overview: 7+ messages
2010-04-28 19:06 [PATCH 0/3] ipc/sem.c: Optimization for reducing spinlock contention Manfred Spraul
2010-04-28 19:06 ` [PATCH 1/2] ipc/sem.c: Optimize update_queue() for bulk wakeup calls Manfred Spraul
2010-04-28 19:06   ` [PATCH 2/2] ipc/sem.c: move wake_up_process out of the spinlock section Manfred Spraul
2010-04-28 19:06     ` [PATCH 3/3] [PATCH] ipc/sem.c: cacheline align the ipc spinlock for semaphores Manfred Spraul
2010-05-11 21:21     ` [PATCH 2/2] ipc/sem.c: move wake_up_process out of the spinlock section Andrew Morton
2010-05-12 17:34       ` Manfred Spraul
2010-05-12 18:18         ` Manfred Spraul
