public inbox for linux-kernel@vger.kernel.org
From: Chuck Ebbert <cebbert@redhat.com>
To: Ingo Molnar <mingo@elte.hu>
Cc: Antoine Martin <antoine@nagafix.co.uk>,
	Satyam Sharma <satyam.sharma@gmail.com>,
	Linux Kernel Development <linux-kernel@vger.kernel.org>,
	Peter Zijlstra <a.p.zijlstra@chello.nl>
Subject: Re: CFS: some bad numbers with Java/database threading [FIXED]
Date: Tue, 18 Sep 2007 19:02:06 -0400	[thread overview]
Message-ID: <46F058EE.1080408@redhat.com> (raw)
In-Reply-To: <20070918224656.GA26719@elte.hu>

On 09/18/2007 06:46 PM, Ingo Molnar wrote:
>>> We need a (tested) 
>>> solution for 2.6.23 and the CFS-devel patches are not for 2.6.23. I've 
>>> attached below the latest version of the -rc6 yield patch - the switch 
>>> is not dependent on SCHED_DEBUG anymore but always available.
>>>
>> Is this going to be merged? And will you be making the default == 1 or 
>> just leaving it at 0, which forces people who want the older behavior 
>> to modify the default?
> 
> not at the moment - Antoine suggested that the workload is probably fine 
> and the patch against -rc6 would have no clear effect anyway so we have 
> nothing to merge right now. (Note that there's no "older behavior" 
> possible, unless we want to emulate all of the O(1) scheduler's 
> behavior.) But ... we could still merge something like that patch, but a 
> clearer testcase is needed. The JVMs I have access to work fine.
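For reference, the yield switch discussed above eventually surfaced as a runtime sysctl. Assuming it lands under the name kernel.sched_compat_yield (as in the -rc patches; verify the knob exists on your kernel before relying on it), toggling it would look like:

```shell
# Assumed knob name from the -rc yield patch; check it exists first:
ls /proc/sys/kernel/sched_compat_yield

# Enable the more aggressive, O(1)-scheduler-like yield behavior
# (the default of 0 keeps the new CFS yield semantics):
echo 1 > /proc/sys/kernel/sched_compat_yield

# Equivalently, via sysctl(8):
sysctl -w kernel.sched_compat_yield=1
```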

I just got a bug report today:

https://bugzilla.redhat.com/show_bug.cgi?id=295071

==================================================

Description of problem:

The CFS scheduler does not seem to implement sched_yield correctly. If one
program loops on sched_yield while another program prints out timing
information in a loop, and both are bound with taskset to the same core, the
timing stats will be twice as long as when they run on different cores.
This problem was not present in 2.6.21-1.3194 but showed up in 2.6.22.4-65 and
continues in the newest released kernel, 2.6.22.5-76.

Version-Release number of selected component (if applicable):

2.6.22.4-65 through 2.6.22.5-76

How reproducible:

Very

Steps to Reproduce:
compile task1
#include <sched.h>   /* needed for sched_yield() */

int main() {
        while (1) {
            sched_yield();
        }
        return 0;
}

and compile task2

#include <stdio.h>
#include <sys/time.h>

int main() {
    while (1) {
        volatile int i;  /* volatile keeps the busy loop from being optimized away */
        struct timeval t0, t1;
        double usec;

        gettimeofday(&t0, 0);
        for (i = 0; i < 100000000; ++i)
            ;
        gettimeofday(&t1, 0);

        usec = (t1.tv_sec * 1e6 + t1.tv_usec) - (t0.tv_sec * 1e6 + t0.tv_usec);
        printf("%8.0f\n", usec);
    }
    return 0;
}

Then run:
"taskset -c 0 ./task1"
"taskset -c 0 ./task2"

You will see that both tasks use 50% of the CPU.
Then kill task2 and run:
"taskset -c 1 ./task2"

Now task2 will run twice as fast, verifying that this is not some anomaly in
the way top calculates CPU usage with sched_yield.
  
Actual results:
Tasks calling sched_yield do not yield the CPU like they are supposed to.

Expected results:
The sched_yield task's CPU usage should go to near 0% when another task is on
the same CPU.


Thread overview: 32+ messages
2007-09-12 23:10 CFS: some bad numbers with Java/database threading Antoine Martin
2007-09-13  7:18 ` David Schwartz
2007-09-12 23:33   ` Nick Piggin
2007-09-13 19:02     ` Antoine Martin
2007-09-13 21:47       ` David Schwartz
2007-09-13 11:24 ` CFS: " Ingo Molnar
2007-09-14  8:32   ` Ingo Molnar
2007-09-14 10:06     ` Satyam Sharma
2007-09-14 15:25       ` CFS: some bad numbers with Java/database threading [FIXED] Antoine Martin
2007-09-14 15:32         ` Ingo Molnar
2007-09-18 17:00           ` Chuck Ebbert
2007-09-18 22:46             ` Ingo Molnar
2007-09-18 23:02               ` Chuck Ebbert [this message]
2007-09-19 18:45                 ` David Schwartz
2007-09-19 19:48                   ` Chris Friesen
2007-09-19 22:56                     ` David Schwartz
2007-09-19 23:05                       ` David Schwartz
2007-09-19 23:52                         ` David Schwartz
2007-09-19 19:18                 ` Ingo Molnar
2007-09-19 19:39                   ` Linus Torvalds
2007-09-19 19:56                     ` Ingo Molnar
2007-09-19 20:26                       ` Ingo Molnar
2007-09-19 20:28                       ` Linus Torvalds
2007-09-19 21:41                         ` Ingo Molnar
2007-09-19 21:49                           ` Ingo Molnar
2007-09-19 21:58                           ` Peter Zijlstra
2007-09-26  1:46                           ` CFS: new java yield graphs Antoine Martin
2007-09-27  8:35                             ` Ingo Molnar
2007-09-19 20:00                   ` CFS: some bad numbers with Java/database threading [FIXED] Chris Friesen
2007-09-14 16:01       ` CFS: some bad numbers with Java/database threading Satyam Sharma
2007-09-14 16:08         ` Satyam Sharma
2007-09-17 12:17         ` Antoine Martin
