public inbox for linux-kernel@vger.kernel.org
* [ckpatch][8/29] track_mutexes-1.patch
@ 2006-06-18  7:31 Con Kolivas
  2006-06-18 10:58 ` Nikita Danilov
  2006-06-18 11:28 ` [ck] " Juho Saarikko
  0 siblings, 2 replies; 4+ messages in thread
From: Con Kolivas @ 2006-06-18  7:31 UTC (permalink / raw)
  To: linux list; +Cc: ck list

Keep a record of how many mutexes are held by any task. This allows cpu
scheduler code to use this information in decision making for tasks that
hold contended resources.

Signed-off-by: Con Kolivas <kernel@kolivas.org>

---
 include/linux/init_task.h |    1 +
 include/linux/sched.h     |    1 +
 kernel/fork.c             |    1 +
 kernel/mutex.c            |   25 +++++++++++++++++++++++--
 4 files changed, 26 insertions(+), 2 deletions(-)

Index: linux-ck-dev/include/linux/init_task.h
===================================================================
--- linux-ck-dev.orig/include/linux/init_task.h	2006-06-18 15:20:14.000000000 +1000
+++ linux-ck-dev/include/linux/init_task.h	2006-06-18 15:23:44.000000000 +1000
@@ -120,6 +120,7 @@ extern struct group_info init_groups;
 	.blocked	= {{0}},					\
 	.alloc_lock	= SPIN_LOCK_UNLOCKED,				\
 	.proc_lock	= SPIN_LOCK_UNLOCKED,				\
+	.mutexes_held	= 0,						\
 	.journal_info	= NULL,						\
 	.cpu_timers	= INIT_CPU_TIMERS(tsk.cpu_timers),		\
 	.fs_excl	= ATOMIC_INIT(0),				\
Index: linux-ck-dev/include/linux/sched.h
===================================================================
--- linux-ck-dev.orig/include/linux/sched.h	2006-06-18 15:23:38.000000000 +1000
+++ linux-ck-dev/include/linux/sched.h	2006-06-18 15:23:44.000000000 +1000
@@ -845,6 +845,7 @@ struct task_struct {
 	/* mutex deadlock detection */
 	struct mutex_waiter *blocked_on;
 #endif
+	unsigned long mutexes_held;
 
 /* journalling filesystem info */
 	void *journal_info;
Index: linux-ck-dev/kernel/fork.c
===================================================================
--- linux-ck-dev.orig/kernel/fork.c	2006-06-18 15:20:14.000000000 +1000
+++ linux-ck-dev/kernel/fork.c	2006-06-18 15:23:44.000000000 +1000
@@ -1022,6 +1022,7 @@ static task_t *copy_process(unsigned lon
 	p->io_context = NULL;
 	p->io_wait = NULL;
 	p->audit_context = NULL;
+	p->mutexes_held = 0;
 	cpuset_fork(p);
 #ifdef CONFIG_NUMA
  	p->mempolicy = mpol_copy(p->mempolicy);
Index: linux-ck-dev/kernel/mutex.c
===================================================================
--- linux-ck-dev.orig/kernel/mutex.c	2006-06-18 15:20:14.000000000 +1000
+++ linux-ck-dev/kernel/mutex.c	2006-06-18 15:23:44.000000000 +1000
@@ -58,6 +58,16 @@ EXPORT_SYMBOL(__mutex_init);
 static void fastcall noinline __sched
 __mutex_lock_slowpath(atomic_t *lock_count __IP_DECL__);
 
+static inline void inc_mutex_count(void)
+{
+	current->mutexes_held++;
+}
+
+static inline void dec_mutex_count(void)
+{
+	current->mutexes_held--;
+}
+
 /***
  * mutex_lock - acquire the mutex
  * @lock: the mutex to be acquired
@@ -87,6 +97,7 @@ void fastcall __sched mutex_lock(struct 
 	 * 'unlocked' into 'locked' state.
 	 */
 	__mutex_fastpath_lock(&lock->count, __mutex_lock_slowpath);
+	inc_mutex_count();
 }
 
 EXPORT_SYMBOL(mutex_lock);
@@ -112,6 +123,7 @@ void fastcall __sched mutex_unlock(struc
 	 * into 'unlocked' state:
 	 */
 	__mutex_fastpath_unlock(&lock->count, __mutex_unlock_slowpath);
+	dec_mutex_count();
 }
 
 EXPORT_SYMBOL(mutex_unlock);
@@ -254,9 +266,14 @@ __mutex_lock_interruptible_slowpath(atom
  */
 int fastcall __sched mutex_lock_interruptible(struct mutex *lock)
 {
+	int ret;
+
 	might_sleep();
-	return __mutex_fastpath_lock_retval
+	ret = __mutex_fastpath_lock_retval
 			(&lock->count, __mutex_lock_interruptible_slowpath);
+	if (likely(!ret))
+		inc_mutex_count();
+	return ret;
 }
 
 EXPORT_SYMBOL(mutex_lock_interruptible);
@@ -308,8 +325,12 @@ static inline int __mutex_trylock_slowpa
  */
 int fastcall mutex_trylock(struct mutex *lock)
 {
-	return __mutex_fastpath_trylock(&lock->count,
+	int ret = __mutex_fastpath_trylock(&lock->count,
 					__mutex_trylock_slowpath);
+
+	if (likely(ret))
+		inc_mutex_count();
+	return ret;
 }
 
 EXPORT_SYMBOL(mutex_trylock);

-- 
-ck
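The bookkeeping this patch introduces can be sketched as a minimal user-space model (hypothetical struct and function names; the real code operates on current->mutexes_held):

```c
#include <assert.h>

/* User-space model of the patch's per-task bookkeeping: a counter
 * bumped on every successful lock and dropped on every unlock. */
struct task_model {
	unsigned long mutexes_held;
};

static void inc_mutex_count(struct task_model *t)
{
	t->mutexes_held++;
}

static void dec_mutex_count(struct task_model *t)
{
	t->mutexes_held--;
}

/* A trylock that fails must NOT bump the counter, mirroring the
 * likely(ret) check the patch adds to mutex_trylock(). */
static int model_trylock(struct task_model *t, int lock_available)
{
	if (lock_available)
		inc_mutex_count(t);
	return lock_available;
}
```

The same asymmetry appears in mutex_lock_interruptible() above: the counter is only incremented when the fastpath/slowpath actually returns success.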

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [ckpatch][8/29] track_mutexes-1.patch
  2006-06-18  7:31 [ckpatch][8/29] track_mutexes-1.patch Con Kolivas
@ 2006-06-18 10:58 ` Nikita Danilov
  2006-06-18 11:28 ` [ck] " Juho Saarikko
  1 sibling, 0 replies; 4+ messages in thread
From: Nikita Danilov @ 2006-06-18 10:58 UTC (permalink / raw)
  To: Con Kolivas; +Cc: ck list, Linux Kernel Mailing List

Con Kolivas writes:
 > Keep a record of how many mutexes are held by any task. This allows cpu
 > scheduler code to use this information in decision making for tasks that
 > hold contended resources.

A natural extension of this idea is to specify a priority bump for
every mutex, with the default mutex initializer setting this value to 1.

In fact, this resembles the old UNIX practice of passing a priority
value as an argument to the sleeping function (sleep, tsleep,
etc.).

 > 
 > Signed-off-by: Con Kolivas <kernel@kolivas.org>
 > 

[...]

 >  	void *journal_info;
 > Index: linux-ck-dev/kernel/fork.c
 > ===================================================================
 > --- linux-ck-dev.orig/kernel/fork.c	2006-06-18 15:20:14.000000000 +1000
 > +++ linux-ck-dev/kernel/fork.c	2006-06-18 15:23:44.000000000 +1000
 > @@ -1022,6 +1022,7 @@ static task_t *copy_process(unsigned lon
 >  	p->io_context = NULL;
 >  	p->io_wait = NULL;
 >  	p->audit_context = NULL;
 > +	p->mutexes_held = 0;

You may also add 

        BUG_ON(p->mutexes_held != 0);

check to do_exit().

Nikita.


* Re: [ck] [ckpatch][8/29] track_mutexes-1.patch
  2006-06-18  7:31 [ckpatch][8/29] track_mutexes-1.patch Con Kolivas
  2006-06-18 10:58 ` Nikita Danilov
@ 2006-06-18 11:28 ` Juho Saarikko
  2006-06-18 11:34   ` Con Kolivas
  1 sibling, 1 reply; 4+ messages in thread
From: Juho Saarikko @ 2006-06-18 11:28 UTC (permalink / raw)
  To: Con Kolivas; +Cc: linux list, ck list

On Sun, 2006-06-18 at 10:31, Con Kolivas wrote:
> Keep a record of how many mutexes are held by any task. This allows cpu
> scheduler code to use this information in decision making for tasks that
> hold contended resources.

So, if I'm a userspace application trying to overcome nice or
scheduling class limitations, I can simply create a lot of mutexes, lock
them all, and get better scheduling ?-)

A better way would be to track what task holds what mutex, and when some
task tries to lock an already locked one, temporarily elevate the task
holding the mutex to the priority of the highest priority task blocking
on it (if higher than what the holding task already has, of course).
Then return the task to normal when it unlocks the mutex.

This might be more trouble and cost more overhead than it's worth, but
in theory, it would be a supreme system.
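The scheme described above is essentially priority inheritance. A minimal user-space sketch of the boost-and-revert logic (hypothetical types and fields, not kernel code; lower value means higher priority):

```c
#include <assert.h>

/* Toy model of priority inheritance: when a higher-priority waiter
 * blocks on a held lock, the owner temporarily inherits the waiter's
 * priority, and reverts to its base priority on unlock. */
struct pi_task {
	int base_prio;	/* static priority */
	int eff_prio;	/* effective (possibly boosted) priority */
};

/* Called when 'waiter' blocks on a mutex owned by 'owner'. */
static void pi_block_on(struct pi_task *owner, const struct pi_task *waiter)
{
	if (waiter->eff_prio < owner->eff_prio)
		owner->eff_prio = waiter->eff_prio;	/* boost */
}

/* Called when 'owner' releases the mutex. */
static void pi_unlock(struct pi_task *owner)
{
	owner->eff_prio = owner->base_prio;		/* revert */
}
```

A full implementation also has to handle chains of blocked tasks and multiple held locks, which is exactly the complexity the reply below alludes to.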



* Re: [ck] [ckpatch][8/29] track_mutexes-1.patch
  2006-06-18 11:28 ` [ck] " Juho Saarikko
@ 2006-06-18 11:34   ` Con Kolivas
  0 siblings, 0 replies; 4+ messages in thread
From: Con Kolivas @ 2006-06-18 11:34 UTC (permalink / raw)
  To: Juho Saarikko; +Cc: linux list, ck list

On Sunday 18 June 2006 21:28, Juho Saarikko wrote:
> On Sun, 2006-06-18 at 10:31, Con Kolivas wrote:
> > Keep a record of how many mutexes are held by any task. This allows cpu
> > scheduler code to use this information in decision making for tasks that
> > hold contended resources.
>
> So, if I'm a userspace application trying to overcome nice or
> scheduling class limitations, I can simply create a lot of mutexes, lock
> them all, and get better scheduling ?-)
>
> A better way would be to track what task holds what mutex, and when some
> task tries to lock an already locked one, temporarily elevate the task
> holding the mutex to the priority of the highest priority task blocking
> on it (if higher than what the holding task already has, of course).
> Then return the task to normal when it unlocks the mutex.
>
> This might be more trouble and cost more overhead than it's worth, but
> in theory, it would be a supreme system.

No, you misunderstand why I use it here. I am not doing priority inheritance at 
all; that comes with all sorts of risks and complexities. This is done purely 
to prevent SCHED_IDLEPRIO tasks from grabbing a mutex and then never being 
scheduled, due to IDLEPRIO semantics, while another task is effectively starved 
waiting on that mutex. To achieve this, -ck simply treats SCHED_IDLEPRIO tasks 
as nice 19 SCHED_NORMAL tasks while they're holding mutexes.
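That conversion rule can be sketched in a few lines (hypothetical names, not the actual -ck scheduler code): an IDLEPRIO task holding any mutex is scheduled as if it were a nice-19 SCHED_NORMAL task, so it cannot be starved while others wait on the mutex.

```c
#include <assert.h>

/* Toy policy names; suffixed to avoid clashing with the real macros. */
enum policy { SCHED_NORMAL_M, SCHED_IDLEPRIO_M };

struct sched_task {
	enum policy policy;
	unsigned long mutexes_held;	/* the counter this patch adds */
};

/* The policy the scheduler actually applies: IDLEPRIO tasks holding
 * a mutex are temporarily treated as (nice 19) SCHED_NORMAL. */
static enum policy effective_policy(const struct sched_task *t)
{
	if (t->policy == SCHED_IDLEPRIO_M && t->mutexes_held > 0)
		return SCHED_NORMAL_M;
	return t->policy;
}
```

Once mutexes_held drops back to zero, the task reverts to plain IDLEPRIO semantics, so no permanent priority escalation is possible by hoarding locks.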

-- 
-ck
