public inbox for linux-kernel@vger.kernel.org
From: Gregory Haskins <gregory.haskins@gmail.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>,
	linux-kernel@vger.kernel.org, Ingo Molnar <mingo@elte.hu>,
	Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Steven Rostedt <srostedt@redhat.com>
Subject: Re: [PATCH 1/2] sched: check for pushing rt tasks after all scheduling
Date: Wed, 29 Jul 2009 09:14:41 -0400	[thread overview]
Message-ID: <4A704B41.9000505@gmail.com> (raw)
In-Reply-To: <1248856884.6987.3043.camel@twins>


Peter Zijlstra wrote:
> On Wed, 2009-07-29 at 00:21 -0400, Steven Rostedt wrote:
>> plain text document attachment
>> (0001-sched-check-for-pushing-rt-tasks-after-all-schedulin.patch)
>> From: Steven Rostedt <srostedt@redhat.com>
>>
>> The current method for pushing RT tasks after scheduling only
>> happens after a context switch. But we found cases where a task
>> is set up on a run queue to be pushed, yet the push never happens
>> because schedule() chooses the same task.
>>
>> This bug was found with the help of Gregory Haskins and the use of
>> ftrace (trace_printk). It took the two of us several days of
>> analyzing the code and the trace output to find this.
> 
> 
>> +               if (current->sched_class->needs_post_schedule)
>> +                       post_schedule = current->sched_class->needs_post_schedule(rq);
> 
> 
>> +       if (post_schedule)
>> +               current->sched_class->post_schedule(rq);
> 
> 
> Why can't we omit that first call, and do the second unconditionally,
> using storage in the class rq to save state?

Yeah, that is a good idea.  Plus I see another bug that Steven and I
overlooked.  Steve is out on holiday today, so I will put together a v2.
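
Roughly, I am thinking of something along these lines for the v2 (a
rough, untested sketch only; the flag name and exact placement are
illustrative):

	/* in struct rq, next to the other SMP balancing fields: */
	unsigned char post_schedule;

	/* sched_rt.c: record at pick time whether a push may be needed */
	static struct task_struct *pick_next_task_rt(struct rq *rq)
	{
		struct task_struct *p = _pick_next_task_rt(rq);

		/* The running task is never eligible for pushing */
		if (p)
			dequeue_pushable_task(rq, p);

		/*
		 * Detect this state here so we can avoid taking the rq
		 * lock again later if there is nothing to push.
		 */
		rq->post_schedule = has_pushable_tasks(rq);

		return p;
	}

	/* sched.c: called unconditionally at the end of schedule() */
	static inline void post_schedule(struct rq *rq)
	{
		if (rq->post_schedule) {
			unsigned long flags;

			spin_lock_irqsave(&rq->lock, flags);
			if (rq->curr->sched_class->post_schedule)
				rq->curr->sched_class->post_schedule(rq);
			spin_unlock_irqrestore(&rq->lock, flags);

			rq->post_schedule = 0;
		}
	}

That way needs_post_schedule() can go away entirely, and because the
flag lives on the rq rather than in the context-switch path, the push
still happens even when schedule() picks the same task again.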

Regards,
-Greg




Thread overview: 10+ messages
2009-07-29  4:21 [PATCH 0/2] [GIT PULL] sched: fixes for rt-migration-test failures Steven Rostedt
2009-07-29  4:21 ` [PATCH 1/2] sched: check for pushing rt tasks after all scheduling Steven Rostedt
2009-07-29  8:41   ` Peter Zijlstra
2009-07-29 13:14     ` Gregory Haskins [this message]
2009-07-29 15:08     ` [PATCH] sched: enhance the pre/post scheduling logic Gregory Haskins
2009-07-30  7:36       ` Peter Zijlstra
2009-08-02 13:13       ` [tip:sched/core] sched: Enhance " tip-bot for Gregory Haskins
2009-08-02 13:12   ` [tip:sched/core] sched: Check for pushing rt tasks after all scheduling tip-bot for Steven Rostedt
2009-07-29  4:21 ` [PATCH 2/2] sched: add new prio to cpupri before removing old prio Steven Rostedt
2009-08-02 13:13   ` [tip:sched/core] sched: Add " tip-bot for Steven Rostedt

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save this message as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=4A704B41.9000505@gmail.com \
    --to=gregory.haskins@gmail.com \
    --cc=akpm@linux-foundation.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=mingo@elte.hu \
    --cc=peterz@infradead.org \
    --cc=rostedt@goodmis.org \
    --cc=srostedt@redhat.com \
    --cc=tglx@linutronix.de \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html
