public inbox for linux-kernel@vger.kernel.org
* Re: [patch] queued spinlocks (i386)
@ 2007-03-25 15:54 Oleg Nesterov
  2007-03-27 15:22 ` Andi Kleen
  2007-03-28  7:04 ` Nick Piggin
  0 siblings, 2 replies; 9+ messages in thread
From: Oleg Nesterov @ 2007-03-25 15:54 UTC (permalink / raw)
  To: Nick Piggin, Ravikiran G Thirumalai, Ingo Molnar, Nikita Danilov,
	Andrew Morton
  Cc: linux-kernel

I am sorry for being completely off-topic, but I've been wondering for a
long time...

What if we replace raw_spinlock_t.slock with "struct task_struct *owner" ?

	void _spin_lock(spinlock_t *lock)
	{
		struct task_struct *owner;

		for (;;) {
			preempt_disable();
			if (likely(_raw_spin_trylock(lock)))
				break;
			preempt_enable();

			/* spin with preemption enabled; if the owner has lower
			 * priority than us, ask it to reschedule so it drops
			 * the lock sooner */
			while (!spin_can_lock(lock)) {
				rcu_read_lock();
				owner = lock->owner;
				if (owner && current->prio < owner->prio &&
				    !test_tsk_thread_flag(owner, TIF_NEED_RESCHED))
					set_tsk_thread_flag(owner, TIF_NEED_RESCHED);
				rcu_read_unlock();
				cpu_relax();
			}
		}

		lock->owner = current;
	}

	void _spin_unlock(spinlock_t *lock)
	{
		lock->owner = NULL;
		_raw_spin_unlock(lock);
		preempt_enable();
	}

Now we don't need need_lockbreak(lock): need_resched() is enough, and we take
->prio into consideration.

Makes sense? Or stupid?

Oleg.



Thread overview: 9+ messages
2007-03-25 15:54 [patch] queued spinlocks (i386) Oleg Nesterov
2007-03-27 15:22 ` Andi Kleen
2007-03-28  7:04 ` Nick Piggin
2007-03-29 18:42   ` Oleg Nesterov
2007-03-29 22:16     ` Davide Libenzi
2007-03-30  2:06       ` Lee Revell
2007-03-30  2:17         ` Nick Piggin
2007-03-30  4:44           ` Lee Revell
2007-03-30  1:53     ` Nick Piggin
