Hello, please CC: replies to me as I am not subscribed to the list.

> The real problem with this patch is that if a real time task yields, the
> patch will cause the scheduler to pick a lower priority task or a
> SCHED_OTHER task. This one is not so easy to solve. You want to scan
> the run_list in the proper order so that the real time task will be the
> last pick at its priority. Problem is, the pre load with the prev task
> is out of order. You might try: http://rtsched.sourceforge.net/

No, that is not a problem at all: RR tasks are simply moved to the end of
the queue and no SCHED_YIELD flag is set for them, so no lower-priority
task can be scheduled.

However, I found a bug in my own patch :-) When a process yields and no
process has any timeslice left, recalc is called, and we then lose the
YIELD flag once again. The simple solution (hopefully the right one this
time :-) was to not clear the YIELD flag at all before exiting schedule()
and to move the test for this flag from goodness_prev() into goodness(),
getting rid of goodness_prev() altogether.

One of my tests still shows strange behavior, though, so maybe you will
get a 3rd version of the patch :-)

Anyway, with sched_yield() working right I get a good 30% performance
boost for the high-contention case in user-space spinlocks.

Another function that would be very interesting is the possibility to
give up our timeslice to a specific other process. That way I could
transfer control directly to the process/thread that owns the lock, so
that process/thread can finish working with the lock. This could speed
things up further. With 4 processes contending for a lock I get
performance 1x, but with 20 processes contending, performance drops to
only 0.7x. I suppose this is due to excessive context switches. I will
try to implement something like "sched_switchto" to switch to a specific
pid (from user space) and see if that helps. Or is there such a function
already?

Ivan Schreter
is@zapwerk.com
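To illustrate the goodness_prev()-to-goodness() change described above, here is a toy user-space model (not the actual 2.4 scheduler code; the names, scores, and struct fields are illustrative only): the persistent yield flag makes a SCHED_OTHER task score below every other runnable task, while RR tasks stay in their own band and a yielded task still runs if it is alone on the runqueue.

```c
#include <stdio.h>

#define POLICY_OTHER 0
#define POLICY_RR    1

struct task {
    const char *name;
    int policy;
    int counter;    /* remaining timeslice */
    int rt_prio;    /* > 0 for real-time tasks */
    int yielded;    /* persistent SCHED_YIELD-style flag */
};

/* goodness(): higher score wins the pick.  A yielded SCHED_OTHER task
 * gets the lowest runnable score, so any other runnable task is
 * preferred over it -- but it can still be picked when alone. */
static int goodness(const struct task *t)
{
    if (t->policy == POLICY_RR)
        return 1000 + t->rt_prio;   /* RT band always above SCHED_OTHER */
    if (t->yielded)
        return 1;                   /* runnable, but last choice */
    return 2 + t->counter;
}

static const struct task *pick(const struct task *rq, int n)
{
    const struct task *best = NULL;
    int best_g = 0, i;

    for (i = 0; i < n; i++) {
        int g = goodness(&rq[i]);
        if (g > best_g) {
            best_g = g;
            best = &rq[i];
        }
    }
    return best;
}

int main(void)
{
    struct task rq[2] = {
        { "yielder", POLICY_OTHER, 10, 0, 1 },
        { "other",   POLICY_OTHER,  1, 0, 0 },
    };

    /* Despite its larger timeslice, the yielded task loses the pick. */
    printf("picked: %s\n", pick(rq, 2)->name);
    /* Alone on the runqueue, the yielded task still runs. */
    printf("picked: %s\n", pick(rq, 1)->name);
    return 0;
}
```

Because the flag is tested inside goodness() itself, it survives the recalc path unchanged, which is the point of the fix.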
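For reference, the user-space spinlock pattern whose high-contention case benefits from a working sched_yield() can be sketched as follows (a minimal C11 sketch, not the benchmark code itself; SPIN_LIMIT and the function names are illustrative): spin a bounded number of times, then yield so the lock holder can run and release the lock.

```c
#include <sched.h>
#include <stdatomic.h>

#define SPIN_LIMIT 100   /* illustrative busy-wait bound */

static atomic_flag lock = ATOMIC_FLAG_INIT;

void spin_lock_yield(void)
{
    int spins = 0;

    while (atomic_flag_test_and_set_explicit(&lock,
                                             memory_order_acquire)) {
        if (++spins >= SPIN_LIMIT) {
            /* Give the lock holder a chance to finish its critical
             * section; this only helps if sched_yield() actually
             * deprioritizes us until others have run. */
            sched_yield();
            spins = 0;
        }
    }
}

void spin_unlock_yield(void)
{
    atomic_flag_clear_explicit(&lock, memory_order_release);
}
```

If the yielding task is picked again immediately (the bug being fixed), every waiter burns its full timeslice spinning, which is where the quoted 30% loss under contention comes from.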