From: Ivan Schreter <is@zapwerk.com>
To: george anzinger <george@mvista.com>
Cc: linux-kernel@vger.kernel.org
Subject: Re: [patch] sched_yield in 2.2.x - version 2
Date: Wed, 30 May 2001 12:54:52 +0200
Message-ID: <01053013065000.01375@linux>
References: <01053002030500.01197@linux> <3B146125.77845217@mvista.com>
In-Reply-To: <3B146125.77845217@mvista.com>
Hello,
Please CC: replies to me, as I am not subscribed to the list.
> The real problem with this patch is that if a real time task yields, the
> patch will cause the scheduler to pick a lower priority task or a
> SCHED_OTHER task. This one is not so easy to solve. You want to scan
> the run_list in the proper order so that the real time task will be the
> last pick at its priority. Problem is, the pre load with the prev task
> is out of order. You might try: http://rtsched.sourceforge.net/
No, it's not a problem at all: RR tasks are simply moved to the end of the run
queue, and the SCHED_YIELD flag is never set for them, so no lower-priority
task can be scheduled instead.
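
For reference, 2.2's sys_sched_yield() behaves roughly like the sketch below
(paraphrased from memory, not quoted verbatim): the SCHED_YIELD flag is set
only for SCHED_OTHER tasks, while RR/FIFO tasks are merely moved to the tail
of the run queue.

/* rough sketch of 2.2 sys_sched_yield(), from memory */
asmlinkage int sys_sched_yield(void)
{
	spin_lock_irq(&runqueue_lock);
	if (current->policy == SCHED_OTHER)
		current->policy |= SCHED_YIELD;	/* SCHED_OTHER only */
	current->need_resched = 1;
	move_last_runqueue(current);	/* RR tasks just go to the tail */
	spin_unlock_irq(&runqueue_lock);
	return 0;
}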
However, I found a bug in my own patch :-)
The problem is that when a process yields and no runnable process has any
timeslice left, the counter recalculation is triggered, and there we lose the
YIELD flag once again. So the simple solution (and hopefully the right one
this time :-) was to NOT clear the YIELD flag at all before leaving
schedule(), and to move the test for this flag from prev_goodness() into
goodness() (getting rid of prev_goodness() altogether).
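
To make the failure mode concrete: the recalculation path in 2.2's schedule()
looks roughly like this (reconstructed from memory), and since it hands the
yielded process a fresh counter, goodness() itself must still see SCHED_YIELD
afterwards, or the yielder can win the very next pick:

recalculate:
	{
		struct task_struct *p;
		spin_unlock_irq(&runqueue_lock);
		read_lock(&tasklist_lock);
		for_each_task(p)
			p->counter = (p->counter >> 1) + p->priority;
		read_unlock(&tasklist_lock);
		spin_lock_irq(&runqueue_lock);
	}
	goto repeat_schedule;	/* re-runs the goodness() selection loop */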
However, one of my tests still shows strange behavior, so maybe you will get
a 3rd version of the patch :-) Anyway, I got a good 30% performance boost for
the high-contention case with user-space spinlocks once sched_yield() was
working right.
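
For context, the test uses the usual yielding test-and-set spinlock pattern;
a minimal sketch (the names and the atomic builtin are illustrative, not the
actual test code):

#include <sched.h>

static volatile int lock_word;			/* 0 = free, 1 = held */

static void contended_lock(void)
{
	/* atomic test-and-set; modern GCC builtin used here for brevity */
	while (__sync_lock_test_and_set(&lock_word, 1))
		sched_yield();			/* let the lock holder run */
}

static void contended_unlock(void)
{
	__sync_lock_release(&lock_word);	/* atomic release, lock_word = 0 */
}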
Another function that would be very interesting is the possibility to give up
our timeslice to a specific other process. That way I could transfer control
directly to the process/thread that owns the lock, so it can finish its work
under the lock. This could speed things up further: with 4 processes
contending for a lock I get 1x performance, but with 20 contending processes
performance drops to 0.7x, which I suppose is due to excessive context
switches. I will try to implement something like "sched_switchto" to switch
to a specific pid (from user space) and see if that helps. Or does such a
function already exist?
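
To illustrate the idea (sched_switchto() is only proposed here, it does not
exist; lock_owner and the helper below are hypothetical as well), the lock
would record the holder's pid so a contender can donate its timeslice
directly:

#include <sys/types.h>
#include <unistd.h>

static volatile int lock_word;		/* 0 = free, 1 = held */
static volatile pid_t lock_owner;	/* recorded by the current holder */

static void directed_lock(void)
{
	while (__sync_lock_test_and_set(&lock_word, 1))
		/* donate the rest of our timeslice to the lock holder
		 * instead of yielding to an arbitrary runnable process */
		sched_switchto(lock_owner);	/* hypothetical syscall */
	lock_owner = getpid();
}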
Ivan Schreter
is@zapwerk.com
[-- Attachment #2: sched_patch.diff --]
[-- Type: text/x-c; name="sched_patch.diff", Size: 1636 bytes --]
--- kernel/sched.c.orig	Wed May 30 01:17:24 2001
+++ kernel/sched.c	Wed May 30 12:30:03 2001
@@ -145,6 +145,11 @@
 {
 	int weight;
 
+	if (p->policy & SCHED_YIELD) {
+		/* do not schedule yielded process now */
+		return -1;
+	}
+
 	/*
 	 * Realtime process, select the first one on the
 	 * runqueue (taking priorities within processes
@@ -183,25 +188,6 @@
 }
 
 /*
- * subtle. We want to discard a yielded process only if it's being
- * considered for a reschedule. Wakeup-time 'queries' of the scheduling
- * state do not count. Another optimization we do: sched_yield()-ed
- * processes are runnable (and thus will be considered for scheduling)
- * right when they are calling schedule(). So the only place we need
- * to care about SCHED_YIELD is when we calculate the previous process'
- * goodness ...
- */
-static inline int prev_goodness (struct task_struct * prev,
-				 struct task_struct * p, int this_cpu)
-{
-	if (p->policy & SCHED_YIELD) {
-		p->policy &= ~SCHED_YIELD;
-		return 0;
-	}
-	return goodness(prev, p, this_cpu);
-}
-
-/*
  * the 'goodness value' of replacing a process on a given CPU.
  * positive value means 'replace', zero or negative means 'dont'.
  */
@@ -740,6 +726,10 @@
 	/* Do we need to re-calculate counters? */
 	if (!c)
 		goto recalculate;
+
+	/* clean up potential SCHED_YIELD bit */
+	prev->policy &= ~SCHED_YIELD;
+
 	/*
 	 * from this point on nothing can prevent us from
 	 * switching to the next task, save this fact in
@@ -809,7 +799,7 @@
 	}
 
 still_running:
-	c = prev_goodness(prev, prev, this_cpu);
+	c = goodness(prev, prev, this_cpu);
 	next = prev;
 	goto still_running_back;