linux-rt-users.vger.kernel.org archive mirror
* idle task starvation with rt patch
@ 2009-05-06 19:47 David L
  2009-05-06 21:04 ` Nivedita Singhvi
  0 siblings, 1 reply; 7+ messages in thread
From: David L @ 2009-05-06 19:47 UTC (permalink / raw)
  To: linux-rt-users

Hi,

I'm a newbie trying to see if the rt patch will improve
the performance of a PPC Linux receiver.  We need
to read data from an FPGA correlator about every msec,
although we can tolerate 5-10 msecs from time to time.

Without the real-time patches, we find that we sometimes
miss our timing deadlines especially when we have multiple
processes with "real-time" (SCHED_FIFO) threads.  I tried
to use the 2.6.29.2-rt11 patch to see if it helped, but things
got much worse.  We have about 30-60 percent idle CPU
without the patches... after applying the patches, I found
that we had essentially no idle time.  Below are excerpts
from the kernel sched_switch trace file while our process
is running with and without the real-time patches applied.

I notice there are IRQ-related processes with the real-time
patches that don't exist without the patches.  But the main
thing to notice is that there is no idle time, which eventually
causes the process to crash because low priority threads
are starved for too long.  Am I doing something wrong, or
is it expected that the overhead of the real-time patches will
cause problems like this under this kind of interrupt load?

Thanks,

                 David


       rx-10082 [000]  2123.751923:  10082: 50:R   + [000]   181: 49:D IRQ-151
        rx-10082 [000]  2123.751942:  10082: 50:R ==> [000]   181: 49:R IRQ-151
         IRQ-151-181   [000]  2123.751989:    181: 49:D   + [000] 10081: 48:D rx
         IRQ-151-181   [000]  2123.752006:    181: 49:D ==> [000] 10081: 48:R rx
        rx-10081 [000]  2123.752065:  10081: 48:T ==> [000]   181: 48:D IRQ-151
         IRQ-151-181   [000]  2123.752095:    181: 48:D   + [000] 10081: 48:T rx
         IRQ-151-181   [000]  2123.752116:    181: 49:D ==> [000] 10081: 48:S rx
        rx-10081 [000]  2123.752209:  10081: 48:R   + [000]   900: 49:D IRQ-18
        rx-10081 [000]  2123.752345:  10081: 48:R   + [000]     4: 49:D sirq-timer/0
        rx-10081 [000]  2123.753061:  10081: 48:D ==> [000]   181: 49:D IRQ-151
         IRQ-151-181   [000]  2123.753184:    181: 49:D ==> [000]   900: 49:R IRQ-18
          IRQ-18-900   [000]  2123.753231:    900: 49:R   + [000]   181: 49:D IRQ-151
          IRQ-18-900   [000]  2123.753304:    900: 49:D ==> [000]     4: 49:R sirq-timer/0
    sirq-timer/0-4     [000]  2123.753366:      4: 49:D ==> [000]   181: 49:R IRQ-151
         IRQ-151-181   [000]  2123.753442:    181: 49:D ==> [000] 10082: 50:R rx
        rx-10082 [000]  2123.753482:  10082: 50:R   + [000]   181: 49:D IRQ-151
        rx-10082 [000]  2123.753502:  10082: 50:R ==> [000]   181: 49:R IRQ-151
         IRQ-151-181   [000]  2123.753549:    181: 49:D   + [000] 10081: 48:D rx
         IRQ-151-181   [000]  2123.753565:    181: 49:D ==> [000] 10081: 48:R rx
        rx-10081 [000]  2123.753626:  10081: 48:T ==> [000]   181: 48:D IRQ-151
         IRQ-151-181   [000]  2123.753655:    181: 48:D   + [000] 10081: 48:T rx
         IRQ-151-181   [000]  2123.753676:    181: 49:D ==> [000] 10081: 48:S rx
        rx-10081 [000]  2123.754014:  10081: 48:R   + [000]   900: 49:D IRQ-18
        rx-10081 [000]  2123.754511:  10081: 48:D ==> [000]   181: 49:D IRQ-151
         IRQ-151-181   [000]  2123.754597:    181: 49:D ==> [000]   900: 49:R IRQ-18
          IRQ-18-900   [000]  2123.754654:    900: 49:D   + [000]   181: 49:D IRQ-151
          IRQ-18-900   [000]  2123.754708:    900: 49:D ==> [000]   181: 49:R IRQ-151
         IRQ-151-181   [000]  2123.754787:    181: 49:D ==> [000] 10082: 50:R rx
        rx-10082 [000]  2123.754826:  10082: 50:R   + [000]   181: 49:D IRQ-151
        rx-10082 [000]  2123.754844:  10082: 50:R ==> [000]   181: 49:R IRQ-151
         IRQ-151-181   [000]  2123.754903:    181: 49:D   + [000]   900: 49:D IRQ-18
         IRQ-151-181   [000]  2123.754930:    181: 49:D   + [000] 10081: 48:D rx
         IRQ-151-181   [000]  2123.754946:    181: 49:D ==> [000] 10081: 48:R rx
        rx-10081 [000]  2123.755006:  10081: 48:T ==> [000]   181: 48:D IRQ-151
         IRQ-151-181   [000]  2123.755036:    181: 48:D   + [000] 10081: 48:T rx
         IRQ-151-181   [000]  2123.755058:    181: 49:D ==> [000] 10081: 48:S rx
        rx-10081 [000]  2123.755799:  10081: 48:D ==> [000]   900: 49:R IRQ-18
          IRQ-18-900   [000]  2123.755903:    900: 49:D ==> [000]   181: 49:D IRQ-151
         IRQ-151-181   [000]  2123.755960:    181: 49:D   + [000] 10081: 48:D rx
         IRQ-151-181   [000]  2123.755977:    181: 49:D ==> [000] 10081: 48:R rx
        rx-10081 [000]  2123.756033:  10081: 48:T ==> [000]   181: 48:D IRQ-151
         IRQ-151-181   [000]  2123.756063:    181: 48:D   + [000] 10081: 48:T rx


        rx-2440  [000]   642.061841:   2440: 48:S ==> [000]  2451: 72:R
        rx-2451  [000]   642.061979:   2451: 72:R   + [000]  2447: 50:S
        rx-2451  [000]   642.062015:   2451: 72:R ==> [000]  2447: 50:R
        rx-2447  [000]   642.062205:   2447: 50:S ==> [000]  2451: 72:R
        rx-2451  [000]   642.062303:   2451: 72:S ==> [000]   905:120:R
         telnetd-905   [000]   642.062503:    905:120:R   + [000]  2440: 48:S
         telnetd-905   [000]   642.062560:    905:120:R ==> [000]  2440: 48:R
        rx-2440  [000]   642.062798:   2440: 48:S ==> [000]   905:120:R
         telnetd-905   [000]   642.063368:    905:120:R   + [000]  2440: 48:S
         telnetd-905   [000]   642.063427:    905:120:R ==> [000]  2440: 48:R
        rx-2440  [000]   642.063747:   2440: 48:S ==> [000]   905:120:R
         telnetd-905   [000]   642.063996:    905:120:S ==> [000]     0:140:R
          <idle>-0     [000]   642.064275:      0:140:R   + [000]  2440: 48:S
          <idle>-0     [000]   642.064362:      0:140:R ==> [000]  2440: 48:R
        rx-2440  [000]   642.064580:   2440: 48:S ==> [000]     0:140:R
          <idle>-0     [000]   642.065170:      0:140:R   + [000]  2440: 48:S
          <idle>-0     [000]   642.065257:      0:140:R ==> [000]  2440: 48:R
        rx-2440  [000]   642.065437:   2440: 48:S ==> [000]     0:140:R
          <idle>-0     [000]   642.066070:      0:140:R   + [000]  2440: 48:S
          <idle>-0     [000]   642.066156:      0:140:R ==> [000]  2440: 48:R
        rx-2440  [000]   642.066352:   2440: 48:S ==> [000]     0:140:R
          <idle>-0     [000]   642.066969:      0:140:R   + [000]  2440: 48:S
          <idle>-0     [000]   642.067056:      0:140:R ==> [000]  2440: 48:R
        rx-2440  [000]   642.067364:   2440: 48:S ==> [000]     0:140:R
          <idle>-0     [000]   642.067872:      0:140:R   + [000]  2440: 48:S
          <idle>-0     [000]   642.067958:      0:140:R ==> [000]  2440: 48:R
        rx-2440  [000]   642.068167:   2440: 48:S ==> [000]     0:140:R
          <idle>-0     [000]   642.068770:      0:140:R   + [000]  2440: 48:S
          <idle>-0     [000]   642.068857:      0:140:R ==> [000]  2440: 48:R
        rx-2440  [000]   642.069032:   2440: 48:S ==> [000]     0:140:R
          <idle>-0     [000]   642.069668:      0:140:R   + [000]  2440: 48:S
          <idle>-0     [000]   642.069755:      0:140:R ==> [000]  2440: 48:R
        rx-2440  [000]   642.069956:   2440: 48:S ==> [000]     0:140:R
          <idle>-0     [000]   642.070569:      0:140:R   + [000]  2440: 48:S
          <idle>-0     [000]   642.070655:      0:140:R ==> [000]  2440: 48:R
        rx-2440  [000]   642.070851:   2440: 48:S ==> [000]     0:140:R
          <idle>-0     [000]   642.071529:      0:140:R   + [000]  2440: 48:S
          <idle>-0     [000]   642.071722:      0:140:R ==> [000]  2440: 48:R
        rx-2440  [000]   642.071866:   2440: 48:R   + [000]  2451: 72:S

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: idle task starvation with rt patch
  2009-05-06 19:47 idle task starvation with rt patch David L
@ 2009-05-06 21:04 ` Nivedita Singhvi
  2009-05-06 22:19   ` David L
  0 siblings, 1 reply; 7+ messages in thread
From: Nivedita Singhvi @ 2009-05-06 21:04 UTC (permalink / raw)
  To: David L; +Cc: linux-rt-users

David L wrote:
> Hi,
> 
> I'm a newbie trying to see if the rt patch will improve
> the performance of a PPC Linux receiver.  We need
> to read data from an FPGA correlator about every msec,
> although we can tolerate 5-10 msecs from time to time.
> 
> Without the real-time patches, we find that we sometimes
> miss our timing deadlines especially when we have multiple
> processes with "real-time" (SCHED_FIFO) threads.  I tried

I take it you weren't seeing crashes on mainline?

> to use the 2.6.29.2-rt11 patch to see if it helped, but things
> got much worse.  We have about 30-60 percent idle CPU
> without the patches... after applying the patches, I found
> that we had essentially no idle time.  Below are excerpts
> from the kernel sched_switch trace file while our process
> is running with and without the real-time patches applied.

What priority are you running your real time tasks at?

> I notice there are IRQ-related processes with the real-time
> patches that don't exist without the patches.  But the main
> thing to notice is that there is no idle time which eventually
> causes the process to crash because low priority threads

Which process crashed? 

> are starved for too long.  Am I doing something wrong or
> is it expected that the overhead of the real-time patches will cause
> problems like this under this kind of interrupt load?

Making sure your priorities are set right so you don't
starve essential processes is important...rt makes it
easy to shoot yourself in the foot...

thanks,
Nivedita


* Re: idle task starvation with rt patch
  2009-05-06 21:04 ` Nivedita Singhvi
@ 2009-05-06 22:19   ` David L
  2009-05-06 23:18     ` Nivedita Singhvi
  0 siblings, 1 reply; 7+ messages in thread
From: David L @ 2009-05-06 22:19 UTC (permalink / raw)
  To: Nivedita Singhvi; +Cc: linux-rt-users

On Wed, May 6, 2009 at 2:04 PM, Nivedita Singhvi wrote:
> David L wrote:
>>
>> Hi,
>>
>> I'm a newbie trying to see if the rt patch will improve
>> the performance of a PPC Linux receiver.  We need
>> to read data from an FPGA correlator about every msec,
>> although we can tolerate 5-10 msecs from time to time.
>>
>> Without the real-time patches, we find that we sometimes
>> miss our timing deadlines especially when we have multiple
>> processes with "real-time" (SCHED_FIFO) threads.  I tried
>
> I take it you weren't seeing crashes on mainline?
No, our system works pretty well with the mainline kernel,
but it seems to very occasionally not meet our real-time
timing requirements.


>
>> to use the 2.6.29.2-rt11 patch to see if it helped, but things
>> got much worse.  We have about 30-60 percent idle CPU
>> without the patches... after applying the patches, I found
>> that we had essentially no idle time.  Below are excerpts
>> from the kernel sched_switch trace file while our process
>> is running with and without the real-time patches applied.
>
> What priority are you running your real time tasks at?
In mainline, our receiver application schedules about
6 threads, all with SCHED_FIFO priorities between about 65 and 97.
After applying the real-time patch, I noticed some IRQ handling
processes that appeared to have a real-time priority of about 50.
So I tried adjusting our application's priorities so that only one thread
has a priority higher than 50 (the one with the real-time requirements
that nominally uses about 250 usec per msec of CPU).


>
>> I notice there are IRQ-related processes with the real-time
>> patches that don't exist without the patches.  But the main
>> thing to notice is that there is no idle time which eventually
>> causes the process to crash because low priority threads
>
> Which process crashed?
Our receiver process, which tracks RF signals based on
information from an FPGA that provides baseband accumulated
samples about 1000 times per second.  It has some lower
priority SCHED_FIFO threads that have relatively loose timing
constraints but do need about a second of CPU time every few
seconds to prevent a queue overflow that causes the process
to assert and crash by design.

>>
>> are starved for too long.  Am I doing something wrong or
>> is it expected that the overhead of the real-time patches will cause
>> problems like this under this kind of interrupt load?
>
> Making sure your priorities are set right so you don't
> starve essential processes is important...rt makes it
> easy to shoot yourself in the foot...

The priorities all seem to be set right... without the real-time
patches, it seems that the kernel isn't real-time enough to
meet our timing requirements in some cases.  With the real-time
patches, the CPU loading for the identical process goes from
about 50% to about 100%.  I'm wondering why there is such
a dramatic difference in CPU usage.


Thanks,

                  David
--
To unsubscribe from this list: send the line "unsubscribe linux-rt-users" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: idle task starvation with rt patch
  2009-05-06 22:19   ` David L
@ 2009-05-06 23:18     ` Nivedita Singhvi
  2009-05-07  5:29       ` Sujit Karataparambil
  0 siblings, 1 reply; 7+ messages in thread
From: Nivedita Singhvi @ 2009-05-06 23:18 UTC (permalink / raw)
  To: David L; +Cc: linux-rt-users

David L wrote:

> In mainline, our receiver application schedules about
> 6 threads, all with SCHED_FIFO priority between about 65 and 97.
> After applying the real-time patch, I noticed some IRQ handling
> processes that appeared to have a real-time priority of about 50.
> So I tried adjusting our application's priority to have only one thread
> with priority higher than 50 (the one with the real-time requirements
> that nominally uses about 250 usec per msec of CPU).

Right - the bulk of the interrupt overhead (including
processing of softirqs etc) would not occur (or rather,
would not be allowed to preempt your SCHED_FIFO task at 
higher priority).

> Our receiver process which tracks RF signals based on the
> information from an FPGA that provides baseband accumulated
> samples at about 1000 times per second.  It has some lower
> priority SCHED_FIFO threads that have relatively loose timing
> constraints but do need about a second of CPU time every few
> seconds to prevent a queue overflow that causes the process
> to assert and crash by design.

A second of CPU time every few seconds sounds like a
lot, actually - even if lower priority, it still sounds
like it's a must-happen, and you might need to bump up
its priority(?).

> The priorities all seem to be set right... without the real-time
> patches, it seems that the kernel isn't real-time enough to
> meet our timing requirements in some cases.  With the real-time
> patches, the CPU loading for the identical process goes from
> about 50% to about 100%.  I'm wondering why there is such
> a dramatic difference in CPU usage.

You might need to instrument it further or use ftrace
to figure out what's happening. RT is slower to process
incoming interrupts (kernel threads vs softirqs) so it
might be a factor - do you use affinity to pin threads
and/or interrupts? You might be able to play with that
to perhaps avoid the problem. It should not behave this
differently, but I could be wrong...

thanks,
Nivedita


* Re: idle task starvation with rt patch
  2009-05-06 23:18     ` Nivedita Singhvi
@ 2009-05-07  5:29       ` Sujit Karataparambil
  2009-05-07 13:01         ` David L
  0 siblings, 1 reply; 7+ messages in thread
From: Sujit Karataparambil @ 2009-05-07  5:29 UTC (permalink / raw)
  To: Nivedita Singhvi; +Cc: David L, linux-rt-users

On Thu, May 7, 2009 at 4:48 AM, Nivedita Singhvi <niv@us.ibm.com> wrote:
> David L wrote:
>
>> In mainline, our receiver application schedules about
>> 6 threads, all with SCHED_FIFO priority between about 65 and 97.
>> After applying the real-time patch, I noticed some IRQ handling
>> processes that appeared to have a real-time priority of about 50.
>> So I tried adjusting our application's priority to have only one thread
>> with priority higher than 50 (the one with the real-time requirements
>> that nominally uses about 250 usec per msec of CPU).
>
> Right - the bulk of the interrupt overhead (including
> processing of softirqs etc) would not occur (or rather,
> would not be allowed to preempt your SCHED_FIFO task at higher priority).

Could you be more specific about where in the real-time code this
priority is being set, and whether it can be changed to get the
current application running?  Also, which is it specifically: PPC32 or PPC64?

>
>> Our receiver process which tracks RF signals based on the
>> information from an FPGA that provides baseband accumulated
>> samples at about 1000 times per second.  It has some lower
>> priority SCHED_FIFO threads that have relatively loose timing
>> constraints but do need about a second of CPU time every few
>> seconds to prevent a queue overflow that causes the process
>> to assert and crash by design.
>
> A second of CPU time every few seconds sounds like a
> lot, actually - even if lower priority, it still sounds
> like it's a must-happen, and you might need to bump up
> its priority(?).

Could you suggest a tool, other than the time function, for measuring
the time taken?  Also, how much of the time is required by the
SCHED_FIFO threads?

>
>> The priorities all seem to be set right... without the real-time
>> patches, it seems that the kernel isn't real-time enough to
>> meet our timing requirements in some cases.  With the real-time
>> patches, the CPU loading for the identical process goes from
>> about 50% to about 100%.  I'm wondering why there is such
>> a dramatic difference in CPU usage.
>
> You might need to instrument it further or use ftrace
> to figure out what's happening. RT is slower to process
> incoming interrupts (kernel threads vs softirqs) so it
> might be a factor - do you use affinity to pin threads
> and/or interrupts? You might be able to play with that
> to perhaps avoid the problem. It should not behave this
> differently, but I could be wrong...

Could you tell us whether it is hard real-time or soft real-time?
>
> thanks,
> Nivedita



-- 
-- Sujit K M


* Re: idle task starvation with rt patch
  2009-05-07  5:29       ` Sujit Karataparambil
@ 2009-05-07 13:01         ` David L
  2009-05-07 13:15           ` Sujit Karataparambil
  0 siblings, 1 reply; 7+ messages in thread
From: David L @ 2009-05-07 13:01 UTC (permalink / raw)
  To: Sujit Karataparambil; +Cc: linux-rt-users

On Wed, May 6, 2009 at 10:29 PM, Sujit Karataparambil wrote:
> On Thu, May 7, 2009 at 4:48 AM, Nivedita Singhvi wrote:
>> David L wrote:
>>
>>> In mainline, our receiver application schedules about
>>> 6 threads, all with SCHED_FIFO priority between about 65 and 97.
>>> After applying the real-time patch, I noticed some IRQ handling
>>> processes that appeared to have a real-time priority of about 50.
>>> So I tried adjusting our application's priority to have only one thread
>>> with priority higher than 50 (the one with the real-time requirements
>>> that nominally uses about 250 usec per msec of CPU).
>>
>> Right - the bulk of the interrupt overhead (including
>> processing of softirqs etc) would not occur (or rather,
>> would not be allowed to preempt your SCHED_FIFO task at higher priority).
>
> Could you be more specific about where in the real-time code this
> priority is being set?

I'm not sure I understand this question, but I'll try to
answer it anyway.  There is one process with a handful
of SCHED_FIFO-scheduled threads.  The highest
priority one blocks on a read from a driver that interfaces
with an FPGA and wakes up every ~950 microseconds.
It seems to consume about 250 microseconds per
msec using the mainline kernel.  That thread can tolerate
frequent delays of a few msec with a rare delay of up to
about 7 msec... after that, we lose the RF signals
we're tracking.

There are a few lower priority threads that do some
processing of the data demodulated by the highest
priority thread.  Those have relatively soft real-time
requirements.  Earlier I said they needed a second
of CPU time every few seconds.  Really, it's probably
more like a second every 5 seconds.


> Can it be changed to get the current
> application running?
I don't think so, but I haven't exhaustively tried different
priorities.  I've just tried a few that seem reasonable.  I know
it doesn't work with the same priority levels for which it works
almost perfectly without the real-time patches.  And I know it
doesn't work after lowering all of the priorities such that only
one is above 50.  The symptom is 2x higher CPU loading
relative to the non-real-time-patched kernel.


> Also, which is it specifically: PPC32 or PPC64?
PPC32 (specifically, an MPC5200)

>
>>
>>> Our receiver process which tracks RF signals based on the
>>> information from an FPGA that provides baseband accumulated
>>> samples at about 1000 times per second.  It has some lower
>>> priority SCHED_FIFO threads that have relatively loose timing
>>> constraints but do need about a second of CPU time every few
>>> seconds to prevent a queue overflow that causes the process
>>> to assert and crash by design.
>>
>> A second of CPU time every few seconds sounds like a
>> lot, actually - even if lower priority, it still sounds
>> like it's a must-happen, and you might need to bump up
>> its priority(?).
>
> Could you suggest a tool, other than the time function, for measuring
> the time taken?  Also, how much of the time is required by the
> SCHED_FIFO threads?

Without the real-time patch applied, I used top to watch the CPU
consumption per thread.  I also used the kernel sched_switch
tracing tool and the application monitors its own CPU usage
using clock_gettime with a clock initialized by pthread_getcpuclockid.
With the real-time patch applied, top doesn't run the way I usually
run it because the CPU is starved.  I guess I could run top at a
high priority, but I haven't tried that.  But the kernel trace I showed
with the original email shows what is running when, and it shows
that the CPU is never idle.


>>
>>> The priorities all seem to be set right... without the real-time
>>> patches, it seems that the kernel isn't real-time enough to
>>> meet our timing requirements in some cases.  With the real-time
>>> patches, the CPU loading for the identical process goes from
>>> about 50% to about 100%.  I'm wondering why there is such
>>> a dramatic difference in CPU usage.
>>
>> You might need to instrument it further or use ftrace
>> to figure out what's happening. RT is slower to process
>> incoming interrupts (kernel threads vs softirqs) so it
>> might be a factor - do you use affinity to pin threads
>> and/or interrupts? You might be able to play with that
>> to perhaps avoid the problem. It should not behave this
>> differently, but I could be wrong...
>
Even without the real-time patches, I've found that the
overhead of the kernel ftrace tracing is so high that
the process doesn't run properly.

> Could you tell us whether it is hard real-time or soft real-time?

I guess it depends on the exact definition.  The thread that
interacts with the FPGA needs to read the FPGA data typically
within say 2-3 msec, with a rare excursion to ~7 msec.
The mainline kernel does this under most conditions, but it seems
like it just barely makes it, and some small changes to the system
can cause us to miss those deadlines.

Thanks,

            David


* Re: idle task starvation with rt patch
  2009-05-07 13:01         ` David L
@ 2009-05-07 13:15           ` Sujit Karataparambil
  0 siblings, 0 replies; 7+ messages in thread
From: Sujit Karataparambil @ 2009-05-07 13:15 UTC (permalink / raw)
  To: David L; +Cc: linux-rt-users

On Thu, May 7, 2009 at 6:31 PM, David L <idht4n@gmail.com> wrote:
> On Wed, May 6, 2009 at 10:29 PM, Sujit Karataparambil wrote:

> I'm not sure I understand this question, but I'll try to
> answer it anyway.  There is one process with a handful
> of SCHED_FIFO-scheduled threads.  The highest
> priority one blocks on a read from a driver that interfaces
> with an FPGA and wakes up every ~950 microseconds.
> It seems to consume about 250 microseconds per
> msec using the mainline kernel.  That thread can tolerate
> frequent delays of a few msec with a rare delay of up to
> about 7 msec... after that, we lose track of the RF
> signals we're tracking.

Why can't you split the SCHED_FIFO threads across multiple processes?
Say, one process could run the highest-priority thread alone, with the
other work in separate processes matching their priority requirements,
communicating through shared memory.

That would require a lot of R&D into the sleeps issued by the
application, though.

>
> There are a few lower priority threads that do some
> processing of the data demodulated by the highest
> priority thread.  Those have relatively soft real-time
> requirements.  Earlier I said they needed a second
> of CPU time every few seconds.  Really, it's probably
> more like a second every 5 seconds.

I think this might not be a problem once the first step is taken.

>
>> Whether it can be changed to get the current
>> application running?
> I don't think so, but I haven't exhaustively tried different
> priorities.  I've just tried a few that seem reasonable.  I know
> it doesn't work with the same priority levels for which it works
> almost perfectly without the real-time patches.  And I know it
> doesn't work after lowering all of the priorities such that only
> one is above 50.  The symptom is 2x higher CPU loading
> relative to the non-real-time-patched kernel.

Does the processor support SMP, or does it have SMP capabilities?
>> Also, which is it specifically: PPC32 or PPC64?
> PPC32 (specifically, an MPC5200)

http://kerneltrap.org/Linux/Balancing_Real_Time_Threads


> Without the real-time patch applied, I used top to watch the CPU
> consumption per thread.  I also used the kernel sched_switch
> tracing tool and the application monitors its own CPU usage
> using clock_gettime with a clock initialized by pthread_getcpuclockid.
> With the real-time patch applied, top doesn't run the way I usually
> run it because the CPU is starved.  I guess I could run top at a
> high priority, but I haven't tried that.  But the kernel trace I showed
> with the original email shows what is running when, and it shows
> that the CPU is never idle.

http://kerneltrap.org/Linux/Balancing_Real_Time_Threads


> I guess it depends on the exact definition.  The thread that
> interacts with the FPGA needs to read the FPGA data typically
> within say 2-3 msec, with a rare excursion to ~7 msec.
> The mainline kernel does this under most conditions but it seems
> like it just barely makes it and some small changes to the system can
> cause us to miss those deadlines.

Could the data, or part of it, be preread during boot/initialization?

-- 
-- Sujit K M

