* sched_yield() version 2.4.24
@ 2004-03-30 16:47 Richard B. Johnson
2004-03-30 16:58 ` Chris Friesen
0 siblings, 1 reply; 10+ messages in thread
From: Richard B. Johnson @ 2004-03-30 16:47 UTC (permalink / raw)
To: Linux kernel
Anybody know why a task that does:
for(;;)
sched_yield();
Shows 100% CPU utilization when there are other tasks that
are actually getting the CPU? It seems that a caller to
sched_yield() does not show that it is sleeping for any
portion of the time it gives up the CPU. On the other hand,
if usleep(0) is substituted, the task is shown to be sleeping.
This shows that the accounting for sched_yield() is mucked
up. It works fine; it gives up the CPU to other tasks. However,
`top` shows it as a CPU hog, which it isn't.
Simple code to check it out:
#include <sched.h>   /* sched_yield() */
#include <unistd.h>  /* usleep() */

int main(void)
{
#if BAD
    for (;;)
        sched_yield();   /* shown as 100% CPU in top */
#else
    for (;;)
        usleep(0);       /* shown as sleeping */
#endif
    return 0;
}
Cheers,
Dick Johnson
Penguin : Linux version 2.4.24 on an i686 machine (797.90 BogoMips).
Note 96.31% of all statistics are fiction.
^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: sched_yield() version 2.4.24
  2004-03-30 16:47 sched_yield() version 2.4.24 Richard B. Johnson
@ 2004-03-30 16:58 ` Chris Friesen
  2004-03-30 17:09   ` Richard B. Johnson
  0 siblings, 1 reply; 10+ messages in thread
From: Chris Friesen @ 2004-03-30 16:58 UTC (permalink / raw)
To: root; +Cc: Linux kernel

Richard B. Johnson wrote:

> Anybody know why a task that does:
>
> for(;;)
>         sched_yield();
>
> Shows 100% CPU utilization when there are other tasks that
> are actually getting the CPU?

What do the other tasks show for cpu in top?

Maybe it's an artifact of the timer-based process sampling for cpu
utilization, and it just happens to be running when the timer interrupt
fires, so it keeps getting billed?

Chris

^ permalink raw reply	[flat|nested] 10+ messages in thread
* Re: sched_yield() version 2.4.24
  2004-03-30 16:58 ` Chris Friesen
@ 2004-03-30 17:09   ` Richard B. Johnson
  2004-03-30 17:30     ` Chris Friesen
  0 siblings, 1 reply; 10+ messages in thread
From: Richard B. Johnson @ 2004-03-30 17:09 UTC (permalink / raw)
To: Chris Friesen; +Cc: Linux kernel

On Tue, 30 Mar 2004, Chris Friesen wrote:

> Richard B. Johnson wrote:
> > Anybody know why a task that does:
> >
> > for(;;)
> >         sched_yield();
> >
> > Shows 100% CPU utilization when there are other tasks that
> > are actually getting the CPU?
>
> What do the other tasks show for cpu in top?

Well in excess of 100% on a single-CPU system.

 12:02pm  up 1 day, 53 min,  4 users,  load average: 2.54, 1.25, 0.90
34 processes: 31 sleeping, 3 running, 0 zombie, 0 stopped
CPU states: 65.8% user, 134.6% system,  0.0% nice,  0.0% idle
Mem:  322352K av, 101772K used, 220580K free,     0K shrd,  9836K buff
Swap: 1044208K av, 1044208K used,      0K free             20240K cached

  PID USER     PRI  NI  SIZE  RSS SHARE STAT  LIB %CPU %MEM   TIME COMMAND
 7144 root      19   0  5564 5564  1444 R       0 82.5  1.7   2:27 client
 7143 root      15   0   980  976   428 S       0 59.9  0.3   1:57 server
 7142 root      18   0  1464 1464  1444 R       0 56.0  0.4   1:39 client
 7163 root      11   0   568  564   432 R       0  1.9  0.1   0:00 top
[SNIPPED... sleeping tasks]

Here, one of the 'client' tasks is yielding its CPU time when it is
waiting on a semaphore from the first one.

> Maybe it's an artifact of the timer-based process sampling for cpu
> utilization, and it just happens to be running when the timer interrupt
> fires, so it keeps getting billed?
>
> Chris

I think somebody forgot to put something into the 'current' structure
when sys_sched_yield() gets called.

Cheers,
Dick Johnson
Penguin : Linux version 2.4.24 on an i686 machine (797.90 BogoMips).
            Note 96.31% of all statistics are fiction.

^ permalink raw reply	[flat|nested] 10+ messages in thread
* Re: sched_yield() version 2.4.24
  2004-03-30 17:09 ` Richard B. Johnson
@ 2004-03-30 17:30   ` Chris Friesen
  2004-03-30 17:52     ` Ben Greear
  0 siblings, 1 reply; 10+ messages in thread
From: Chris Friesen @ 2004-03-30 17:30 UTC (permalink / raw)
To: root; +Cc: Linux kernel

Richard B. Johnson wrote:

> Well in excess of 100% on a single-CPU system.

Very odd.

> 12:02pm up 1 day, 53 min, 4 users, load average: 2.54, 1.25, 0.90
> 34 processes: 31 sleeping, 3 running, 0 zombie, 0 stopped
> CPU states: 65.8% user, 134.6% system, 0.0% nice, 0.0% idle
> Mem: 322352K av, 101772K used, 220580K free, 0K shrd, 9836K buff
> Swap: 1044208K av, 1044208K used, 0K free 20240K cached
>
>   PID USER     PRI  NI  SIZE  RSS SHARE STAT  LIB %CPU %MEM   TIME COMMAND
>  7144 root      19   0  5564 5564  1444 R       0 82.5  1.7   2:27 client
>  7143 root      15   0   980  976   428 S       0 59.9  0.3   1:57 server
>  7142 root      18   0  1464 1464  1444 R       0 56.0  0.4   1:39 client
>  7163 root      11   0   568  564   432 R       0  1.9  0.1   0:00 top
> [SNIPPED... sleeping tasks]

The cpu util accounting code in kernel/timer.c hasn't changed in 2.4
since 2002.  Must be somewhere else.

Anyone else have any ideas?

Chris

^ permalink raw reply	[flat|nested] 10+ messages in thread
* Re: sched_yield() version 2.4.24
  2004-03-30 17:30 ` Chris Friesen
@ 2004-03-30 17:52   ` Ben Greear
  2004-03-30 19:40     ` Denis Vlasenko
  0 siblings, 1 reply; 10+ messages in thread
From: Ben Greear @ 2004-03-30 17:52 UTC (permalink / raw)
To: Chris Friesen; +Cc: root, Linux kernel

Chris Friesen wrote:

> The cpu util accounting code in kernel/timer.c hasn't changed in 2.4
> since 2002.  Must be somewhere else.
>
> Anyone else have any ideas?

As another sample point, I have fired up about 100 processes with
each process having 10+ threads.  On my dual-xeon, I see maybe 15
processes shown as 99% CPU in 'top'.  System load was near 25
when I was looking, but the machine was still quite responsive.

I'm guessing this is just an artifact of having lots of processes
running very often, and top is just not able to calculate with fine
enough granularity?

This is on a 2.4.25 kernel.

Ben

-- 
Ben Greear <greearb@candelatech.com>
Candela Technologies Inc  http://www.candelatech.com

^ permalink raw reply	[flat|nested] 10+ messages in thread
* Re: sched_yield() version 2.4.24
  2004-03-30 17:52 ` Ben Greear
@ 2004-03-30 19:40   ` Denis Vlasenko
  2004-03-30 20:29     ` Richard B. Johnson
  0 siblings, 1 reply; 10+ messages in thread
From: Denis Vlasenko @ 2004-03-30 19:40 UTC (permalink / raw)
To: Ben Greear, Chris Friesen; +Cc: root, Linux kernel

On Tuesday 30 March 2004 19:52, Ben Greear wrote:
> Chris Friesen wrote:
> > The cpu util accounting code in kernel/timer.c hasn't changed in 2.4
> > since 2002.  Must be somewhere else.
> >
> > Anyone else have any ideas?
>
> As another sample point, I have fired up about 100 processes with
> each process having 10+ threads.  On my dual-xeon, I see maybe 15
> processes shown as 99% CPU in 'top'.  System load was near 25
> when I was looking, but the machine was still quite responsive.

There was a top bug with exactly this symptom. Fixed.
I use procps-2.0.18.

> I'm guessing this is just an artifact of having lots of processes running
> very often and top is just not able to calculate with fine enough
> granularity?
>
> This is on 2.4.25 kernel.
>
> Ben
-- 
vda

^ permalink raw reply	[flat|nested] 10+ messages in thread
* Re: sched_yield() version 2.4.24
  2004-03-30 19:40 ` Denis Vlasenko
@ 2004-03-30 20:29   ` Richard B. Johnson
  2004-03-30 23:10     ` Diego Calleja García
  2004-03-31 13:53     ` Richard B. Johnson
  0 siblings, 2 replies; 10+ messages in thread
From: Richard B. Johnson @ 2004-03-30 20:29 UTC (permalink / raw)
To: Denis Vlasenko; +Cc: Ben Greear, Chris Friesen, Linux kernel

On Tue, 30 Mar 2004, Denis Vlasenko wrote:

> On Tuesday 30 March 2004 19:52, Ben Greear wrote:
> > Chris Friesen wrote:
> > > The cpu util accounting code in kernel/timer.c hasn't changed in 2.4
> > > since 2002.  Must be somewhere else.
> > >
> > > Anyone else have any ideas?
> >
> > As another sample point, I have fired up about 100 processes with
> > each process having 10+ threads.  On my dual-xeon, I see maybe 15
> > processes shown as 99% CPU in 'top'.  System load was near 25
> > when I was looking, but the machine was still quite responsive.
>
> There was a top bug with exactly this symptom. Fixed.
> I use procps-2.0.18.
>

Wonderful! Now, where do I find the sources now that RedHat has
gone "commercial" and is keeping everything secret?

I followed the http://sources.redhat.com/procps/ instructions
__exactly__ and get this:

Script started on Tue Mar 30 15:27:02 2004
quark:/home/johnson/foo[1] cvs -d :pserver:anoncvs@sources.redhat.com:/procps login anoncvs
Logging in to :pserver:anoncvs@sources.redhat.com:2401/procps
CVS password:
/procps: no such repository
quark:/home/johnson/foo[2] exit
Script done on Tue Mar 30 15:28:32 2004

Cheers,
Dick Johnson
Penguin : Linux version 2.4.24 on an i686 machine (797.90 BogoMips).
            Note 96.31% of all statistics are fiction.

^ permalink raw reply	[flat|nested] 10+ messages in thread
* Re: sched_yield() version 2.4.24
  2004-03-30 20:29 ` Richard B. Johnson
@ 2004-03-30 23:10   ` Diego Calleja García
  0 siblings, 0 replies; 10+ messages in thread
From: Diego Calleja García @ 2004-03-30 23:10 UTC (permalink / raw)
To: root; +Cc: vda, greearb, cfriesen, linux-kernel

On Tue, 30 Mar 2004 15:29:29 -0500 (EST),
"Richard B. Johnson" <root@chaos.analogic.com> wrote:

> Wonderful! Now, where do I find the sources now that RedHat has
> gone "commercial" and is keeping everything secret?

Exactly *why* are you trying to spread FUD? Use another distro if you
don't like it, instead of using stupid arguments.

> I followed the http://sources.redhat.com/procps/ instructions
> __exactly__ and get this:

Me too.

diego@estel:/tmp$ cvs -d :pserver:anoncvs@sources.redhat.com:/cvs/procps login anoncvs
Logging in to :pserver:anoncvs@sources.redhat.com:2401/cvs/procps
CVS password:
diego@estel:/tmp$ cvs -d :pserver:anoncvs@sources.redhat.com:/cvs/procps co procps
cvs server: Updating procps
U procps/.cvsignore
U procps/BUGS
U procps/COPYING
U procps/COPYING.LIB
U procps/INSTALL
U procps/Makefile
U procps/NEWS
U procps/TODO
U procps/free.1
U procps/free.c
U procps/pgrep.1
U procps/pgrep.c
U procps/pkill.1
U procps/pmap.1
U procps/pmap.c
U procps/procps.spec
U procps/skill.1
U procps/skill.c
U procps/slabtop.1
U procps/slabtop.c
U procps/snice.1
U procps/sysctl.8
U procps/sysctl.c
U procps/sysctl.conf.5
U procps/tload.1
U procps/tload.c
U procps/top.1
U procps/top.c
U procps/uptime.1
U procps/uptime.c
U procps/vmstat.8
U procps/vmstat.c
U procps/w.1
U procps/w.c
U procps/watch.1
U procps/watch.c
cvs server: Updating procps/proc
U procps/proc/.cvsignore
U procps/proc/Makefile
U procps/proc/compare.c
U procps/proc/devname.c
U procps/proc/ksym.c
U procps/proc/procps.h
U procps/proc/pwcache.c
U procps/proc/readproc.c
U procps/proc/readproc.h
U procps/proc/signals.c
U procps/proc/slab.c
U procps/proc/slab.h
U procps/proc/status.c
U procps/proc/sysinfo.c
U procps/proc/sysinfo.h
U procps/proc/version.c
U procps/proc/version.h
U procps/proc/vmstat.c
U procps/proc/vmstat.h
U procps/proc/whattime.c
cvs server: Updating procps/ps
U procps/ps/.cvsignore
U procps/ps/HACKING
U procps/ps/Makefile
U procps/ps/common.h
U procps/ps/display.c
U procps/ps/escape.c
U procps/ps/global.c
U procps/ps/help.c
U procps/ps/output.c
U procps/ps/parser.c
U procps/ps/ps.1
U procps/ps/regression
U procps/ps/select.c
U procps/ps/sortformat.c
U procps/ps/stacktrace.c
cvs server: Updating procps/xproc
diego@estel:/tmp$

^ permalink raw reply	[flat|nested] 10+ messages in thread
* Re: sched_yield() version 2.4.24
  2004-03-30 20:29 ` Richard B. Johnson
  2004-03-30 23:10 ` Diego Calleja García
@ 2004-03-31 13:53 ` Richard B. Johnson
  2004-04-01  0:05   ` Eric W. Biederman
  1 sibling, 1 reply; 10+ messages in thread
From: Richard B. Johnson @ 2004-03-31 13:53 UTC (permalink / raw)
To: Denis Vlasenko; +Cc: Ben Greear, Chris Friesen, Linux kernel

On Tue, 30 Mar 2004, Richard B. Johnson wrote:

> On Tue, 30 Mar 2004, Denis Vlasenko wrote:
>
> > On Tuesday 30 March 2004 19:52, Ben Greear wrote:
> > > Chris Friesen wrote:
> > > > The cpu util accounting code in kernel/timer.c hasn't changed in 2.4
> > > > since 2002.  Must be somewhere else.
> > > >
> > > > Anyone else have any ideas?
> > >
> > > As another sample point, I have fired up about 100 processes with
> > > each process having 10+ threads.  On my dual-xeon, I see maybe 15
> > > processes shown as 99% CPU in 'top'.  System load was near 25
> > > when I was looking, but the machine was still quite responsive.
> >
> > There was a top bug with exactly this symptom. Fixed.
> > I use procps-2.0.18.
> >
> Wonderful! Now, where do I find the sources now that RedHat has
> gone "commercial" and is keeping everything secret?
>
> I followed the http://sources.redhat.com/procps/ instructions
> __exactly__ and get this:
>
> Script started on Tue Mar 30 15:27:02 2004
> quark:/home/johnson/foo[1] cvs -d :pserver:anoncvs@sources.redhat.com:/procps login anoncvs
> Logging in to :pserver:anoncvs@sources.redhat.com:2401/procps
> CVS password:
> /procps: no such repository
> quark:/home/johnson/foo[2] exit
> Script done on Tue Mar 30 15:28:32 2004
>

The RedHat server was apparently broken yesterday; many people tried
to get the source. Eventually Burton Windle sent me a copy of the
source, which he had acquired previously, after he too failed to
access the repository.

I compiled the source and the problem persists. Any task that
executes sched_yield() will get "charged" for the time that it
has given away. This is not correct. Maybe it is not correctable,
but it is still not correct. In addition to being "unfair", it
messes up the totals, because the tasks that use the given-up CPU
time also get charged.

Cheers,
Dick Johnson
Penguin : Linux version 2.4.24 on an i686 machine (797.90 BogoMips).
            Note 96.31% of all statistics are fiction.

^ permalink raw reply	[flat|nested] 10+ messages in thread
* Re: sched_yield() version 2.4.24
  2004-03-31 13:53 ` Richard B. Johnson
@ 2004-04-01  0:05   ` Eric W. Biederman
  0 siblings, 0 replies; 10+ messages in thread
From: Eric W. Biederman @ 2004-04-01 0:05 UTC (permalink / raw)
To: root; +Cc: Denis Vlasenko, Ben Greear, Chris Friesen, Linux kernel

"Richard B. Johnson" <root@chaos.analogic.com> writes:

> On Tue, 30 Mar 2004, Richard B. Johnson wrote:
>
> > On Tue, 30 Mar 2004, Denis Vlasenko wrote:
> >
> > > On Tuesday 30 March 2004 19:52, Ben Greear wrote:
> > > > Chris Friesen wrote:
> > > > > The cpu util accounting code in kernel/timer.c hasn't changed in 2.4
> > > > > since 2002.  Must be somewhere else.
> > > > >
> > > > > Anyone else have any ideas?
> > > >
> > > > As another sample point, I have fired up about 100 processes with
> > > > each process having 10+ threads.  On my dual-xeon, I see maybe 15
> > > > processes shown as 99% CPU in 'top'.  System load was near 25
> > > > when I was looking, but the machine was still quite responsive.
> > >
> > > There was a top bug with exactly this symptom. Fixed.
> > > I use procps-2.0.18.
> > >
> > Wonderful! Now, where do I find the sources now that RedHat has
> > gone "commercial" and is keeping everything secret?
> >
> > I followed the http://sources.redhat.com/procps/ instructions
> > __exactly__ and get this:
> >
> > Script started on Tue Mar 30 15:27:02 2004
> > quark:/home/johnson/foo[1] cvs -d :pserver:anoncvs@sources.redhat.com:/procps login anoncvs
> > Logging in to :pserver:anoncvs@sources.redhat.com:2401/procps
> > CVS password:
> > /procps: no such repository
> > quark:/home/johnson/foo[2] exit
> > Script done on Tue Mar 30 15:28:32 2004
> >
> The RedHat server was apparently broken yesterday. There were many
> persons who tried to get the source. Eventually Burton Windle
> sent me a copy of the source, that he had previously acquired,
> after he tried to access it also.
>
> I compiled the source and the problem persists. Any task that
> executes sched_yield() will get "charged" for the time that it
> has given away. This is not correct. Maybe it is not correctable,
> but it is still not correct. In addition to it being "unfair",
> it messes up the totals because tasks that are using the CPU time
> given up, also get charged.

Could it be that there are no other processes with equal or greater
priority, so that the process calling sched_yield gets scheduled
again immediately?

Eric

^ permalink raw reply	[flat|nested] 10+ messages in thread
end of thread, other threads:[~2004-04-02 19:43 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed
-- links below jump to the message on this page --
2004-03-30 16:47 sched_yield() version 2.4.24 Richard B. Johnson
2004-03-30 16:58 ` Chris Friesen
2004-03-30 17:09   ` Richard B. Johnson
2004-03-30 17:30     ` Chris Friesen
2004-03-30 17:52       ` Ben Greear
2004-03-30 19:40         ` Denis Vlasenko
2004-03-30 20:29           ` Richard B. Johnson
2004-03-30 23:10             ` Diego Calleja García
2004-03-31 13:53             ` Richard B. Johnson
2004-04-01  0:05               ` Eric W. Biederman