linux-rt-users.vger.kernel.org archive mirror
* Strange behavior of pthread_setaffinity_np
@ 2010-04-19  9:45 Primiano Tucci
  2010-04-19 11:07 ` Sujit K M
  2010-04-19 20:18 ` Primiano Tucci
  0 siblings, 2 replies; 5+ messages in thread
From: Primiano Tucci @ 2010-04-19  9:45 UTC (permalink / raw)
  To: linux-rt-users

Hi all,
I am an Italian researcher and I am working on a Real Time scheduling
infrastructure. I am currently using Linux Kernel 2.6.29.6-rt24-smp
(PREEMPT-RT Patch) running on an Intel Q9550 CPU.
I am experiencing strange behaviors with the pthread_setaffinity_np API.

This is my scenario: I have 4 Real Time Threads (SCHED_FIFO)
distributed as follows:

T0 : CPU 0, Priority 2 (HIGH)
T1 : CPU 1, Priority 2 (HIGH)
T3 : CPU 0, Priority 1 (LOW)
T4 : CPU 1, Priority 1 (LOW)

So T0 and T1 are actually the "big bosses" on CPUs #0 and #1; T3 and
T4, instead, never execute (let's assume that each thread is a simple
busy wait that never sleeps/yields).
Now, at a certain point, from T0's code, I want to migrate T4 from CPU
#1 to CPU #0, keeping its low priority.
Therefore I perform a pthread_setaffinity_np call from T0, changing
T4's mask from CPU #1 to CPU #0.
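
For reference, a minimal sketch of this setup (simplified, without
error handling; spawn_fifo/set_affinity are illustrative helpers, not
my actual test code):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Busy wait that never sleeps/yields. */
static void *busy(void *arg)
{
    (void)arg;
    for (;;)
        ;
    return NULL;
}

/* Create a SCHED_FIFO thread at the given priority (needs root). */
static pthread_t spawn_fifo(int prio)
{
    pthread_t t;
    pthread_attr_t attr;
    struct sched_param sp = { .sched_priority = prio };

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &sp);
    pthread_create(&t, &attr, busy, NULL);
    return t;
}

/* Pin thread t to a single CPU. */
static void set_affinity(pthread_t t, int cpu)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(t, sizeof(set), &set);
}

/*
 * T0 = spawn_fifo(2) pinned to CPU 0, T1 = spawn_fifo(2) to CPU 1,
 * T3 = spawn_fifo(1) to CPU 0, T4 = spawn_fifo(1) to CPU 1.
 * Later, from T0's thread function:  set_affinity(t4, 0);
 */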

In this scenario it happens that T3 (which should never execute, since
T0, with higher priority, is currently running on the same CPU #0)
"emerges" and executes for a bit.
It seems that the pthread_setaffinity_np syscall somehow suspends the
caller for the time needed to migrate T4, letting the scheduler
execute T3 for that stretch of time.

Is this behavior expected (I did not find any documentation about
it)? How can I avoid it?

Thanks in advance,
Primiano

--
 Primiano Tucci
 http://www.primianotucci.com


* Re: Strange behavior of pthread_setaffinity_np
  2010-04-19  9:45 Strange behavior of pthread_setaffinity_np Primiano Tucci
@ 2010-04-19 11:07 ` Sujit K M
  2010-04-19 11:51   ` Primiano Tucci
  2010-04-19 20:18 ` Primiano Tucci
  1 sibling, 1 reply; 5+ messages in thread
From: Sujit K M @ 2010-04-19 11:07 UTC (permalink / raw)
  To: Primiano Tucci; +Cc: linux-rt-users

All these are best guesses.

On Mon, Apr 19, 2010 at 3:15 PM, Primiano Tucci <p.tucci@gmail.com> wrote:
> Hi all,
> I am an Italian researcher and I am working on a Real Time scheduling
> infrastructure. I am currently using Linux Kernel 2.6.29.6-rt24-smp
> (PREEMPT-RT Patch) running on an Intel Q9550 CPU.
> I am experiencing strange behaviors with the pthread_setaffinity_np API.
>
> This is my scenario: I have 4 Real Time Threads (SCHED_FIFO)
> distributed as follows:
>
> T0 : CPU 0, Priority 2 (HIGH)
> T1 : CPU 1, Priority 2 (HIGH)
> T3 : CPU 0, Priority 1 (LOW)
> T4 : CPU 1, Priority 1 (LOW)

Could you check with the manual whether the following documentation
covers your processor:
http://www.intel.com/design/core2quad/documentation.htm

The reason I am asking is that what you are describing above in terms
of thread affinity would not even apply to a Core 2 Duo.

>
> So T0 and T1 are actually the "big bosses" on CPUs #0 and #1; T3 and
> T4, instead, never execute (let's assume that each thread is a simple
> busy wait that never sleeps/yields).
> Now, at a certain point, from T0's code, I want to migrate T4 from CPU
> #1 to CPU #0, keeping its low priority.
> Therefore I perform a pthread_setaffinity_np call from T0, changing
> T4's mask from CPU #1 to CPU #0.

This approach is not at all correct, as the thread affinity should be
set closer to the core than to the processor, if this is supported.

>
> In this scenario it happens that T3 (which should never execute, since
> T0, with higher priority, is currently running on the same CPU #0)
> "emerges" and executes for a bit.
> It seems that the pthread_setaffinity_np syscall somehow suspends the
> caller for the time needed to migrate T4, letting the scheduler
> execute T3 for that stretch of time.

I think what is happening is that once you have scheduled the code on
a per-processor basis, it tends to ignore the core logic and depends
more on the processor logic.

>
> Is this behavior expected (I did not find any documentation about
> it)? How can I avoid it?

I think you will have to set the affinity at the core level rather
than at the processor level.

>
> Thanks in advance,
> Primiano
>
> --
>  Primiano Tucci
>  http://www.primianotucci.com



-- 
-- Sujit K M

blog(http://kmsujit.blogspot.com/)


* Re: Strange behavior of pthread_setaffinity_np
  2010-04-19 11:07 ` Sujit K M
@ 2010-04-19 11:51   ` Primiano Tucci
       [not found]     ` <w2n921ca19c1004190501n36c7f10ch484cda701e261ee9@mail.gmail.com>
  0 siblings, 1 reply; 5+ messages in thread
From: Primiano Tucci @ 2010-04-19 11:51 UTC (permalink / raw)
  To: linux-rt-users

Hi Sujit,
thanks for your reply, but I have not completely understood your point.
Does the kernel make (from the thread's viewpoint) a distinction
between cores of a processor and multiple processors?
In my previous message I generally used the terms CPU #0 and #1 to
refer to two different cores of the same processor (a quad-core Q9550).
Do you mean I need a different API to set affinity on a per-core
basis rather than a per-processor basis? It sounds strange to me, as
to my limited knowledge cores are viewed by the system as different
processors, just as in a regular multi-processor system.
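
For instance, a trivial check along these lines (assuming glibc's
_SC_NPROCESSORS_ONLN extension) should report 4 logical CPUs on the
Q9550:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Each core is exposed to the system as an independent
       logical CPU, so a quad-core Q9550 should report 4. */
    printf("online CPUs: %ld\n", sysconf(_SC_NPROCESSORS_ONLN));
    return 0;
}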

Thanks,
Primiano

On Mon, Apr 19, 2010 at 1:07 PM, Sujit K M <sjt.kar@gmail.com> wrote:
> All these are best guesses.
>
> On Mon, Apr 19, 2010 at 3:15 PM, Primiano Tucci <p.tucci@gmail.com> wrote:
>> Hi all,
>> I am an Italian researcher and I am working on a Real Time scheduling
>> infrastructure. I am currently using Linux Kernel 2.6.29.6-rt24-smp
>> (PREEMPT-RT Patch) running on an Intel Q9550 CPU.
>> I am experiencing strange behaviors with the pthread_setaffinity_np API.
>>
>> This is my scenario: I have 4 Real Time Threads (SCHED_FIFO)
>> distributed as follows:
>>
>> T0 : CPU 0, Priority 2 (HIGH)
>> T1 : CPU 1, Priority 2 (HIGH)
>> T3 : CPU 0, Priority 1 (LOW)
>> T4 : CPU 1, Priority 1 (LOW)
>
> Could you check with the manual whether the following documentation
> covers your processor:
> http://www.intel.com/design/core2quad/documentation.htm
>
> The reason I am asking is that what you are describing above in terms
> of thread affinity would not even apply to a Core 2 Duo.
>
>>
>> So T0 and T1 are actually the "big bosses" on CPUs #0 and #1; T3 and
>> T4, instead, never execute (let's assume that each thread is a simple
>> busy wait that never sleeps/yields).
>> Now, at a certain point, from T0's code, I want to migrate T4 from CPU
>> #1 to CPU #0, keeping its low priority.
>> Therefore I perform a pthread_setaffinity_np call from T0, changing
>> T4's mask from CPU #1 to CPU #0.
>
> This approach is not at all correct, as the thread affinity should be
> set closer to the core than to the processor, if this is supported.
>
>>
>> In this scenario it happens that T3 (which should never execute, since
>> T0, with higher priority, is currently running on the same CPU #0)
>> "emerges" and executes for a bit.
>> It seems that the pthread_setaffinity_np syscall somehow suspends the
>> caller for the time needed to migrate T4, letting the scheduler
>> execute T3 for that stretch of time.
>
> I think what is happening is that once you have scheduled the code on
> a per-processor basis, it tends to ignore the core logic and depends
> more on the processor logic.
>
>>
>> Is this behavior expected (I did not find any documentation about
>> it)? How can I avoid it?
>
> I think you will have to set the affinity at the core level rather
> than at the processor level.
>
>>
>> Thanks in advance,
>> Primiano
>>
>> --
>>  Primiano Tucci
>>  http://www.primianotucci.com
>
>
>
> --
> -- Sujit K M
>
> blog(http://kmsujit.blogspot.com/)
>


* Re: Strange behavior of pthread_setaffinity_np
       [not found]     ` <w2n921ca19c1004190501n36c7f10ch484cda701e261ee9@mail.gmail.com>
@ 2010-04-19 12:01       ` Sujit K M
  0 siblings, 0 replies; 5+ messages in thread
From: Sujit K M @ 2010-04-19 12:01 UTC (permalink / raw)
  To: Primiano Tucci; +Cc: RT

On Mon, Apr 19, 2010 at 5:31 PM, Sujit K M <sjt.kar@gmail.com> wrote:
> On Mon, Apr 19, 2010 at 5:21 PM, Primiano Tucci <p.tucci@gmail.com> wrote:
>> Hi Sujit,
>> thanks for your reply, but I have not completely understood your point.
>> Does the kernel make (from the thread's viewpoint) a distinction
>> between cores of a processor and multiple processors?
>
> Yes, it certainly does for kernel-level routines. But for user-level
> applications, this might not be the case.
>
>> In my previous message I generally used the terms CPU #0 and #1 to
>> refer to two different cores of the same processor (a quad-core Q9550).
>
> What I had in mind for a quad was four cores per processor.
>
>> Do you mean I need a different API to set affinity on a per-core
>> basis rather than a per-processor basis? It sounds strange to me, as
>> to my limited knowledge cores are viewed by the system as different
>> processors, just as in a regular multi-processor system.
>
> I do not think such APIs have been developed.
>



-- 
-- Sujit K M

blog(http://kmsujit.blogspot.com/)


* Re: Strange behavior of pthread_setaffinity_np
  2010-04-19  9:45 Strange behavior of pthread_setaffinity_np Primiano Tucci
  2010-04-19 11:07 ` Sujit K M
@ 2010-04-19 20:18 ` Primiano Tucci
  1 sibling, 0 replies; 5+ messages in thread
From: Primiano Tucci @ 2010-04-19 20:18 UTC (permalink / raw)
  To: linux-rt-users

I think I have solved the question:
pthread_setaffinity_np is based on the sched_setaffinity syscall.
Actually, sched_setaffinity performs a read_lock(&tasklist_lock).
However, with the introduction of the PREEMPT_RT patch, the read_lock
is preemptible; that's why thread T0 yields in favor of T3.

I think sched.c should be revised with respect to the PREEMPT_RT
patch: the scheduling-related syscalls should adopt non-preemptible
spinlocks (e.g. raw_spinlock_t) rather than preemptible ones, in order
to avoid unwanted behaviors like the one shown here.
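
Roughly, the relevant path looks like this (paraphrased from my
reading of kernel/sched.c; not a verbatim quote):

/* sys_sched_setaffinity -> sched_setaffinity(), paraphrased */
read_lock(&tasklist_lock);   /* under PREEMPT_RT, rwlocks are sleeping
                                locks, so T0 can be preempted right
                                here and T3 gets to run */
p = find_process_by_pid(pid);
get_task_struct(p);
read_unlock(&tasklist_lock);
/* ... permission and cpuset checks ... */
set_cpus_allowed_ptr(p, new_mask);  /* may also wait for the migration
                                       thread to push T4 over to CPU #0 */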

Regards,
Primiano


On Mon, Apr 19, 2010 at 11:45 AM, Primiano Tucci <p.tucci@gmail.com> wrote:
> Hi all,
> I am an Italian researcher and I am working on a Real Time scheduling
> infrastructure. I am currently using Linux Kernel 2.6.29.6-rt24-smp
> (PREEMPT-RT Patch) running on an Intel Q9550 CPU.
> I am experiencing strange behaviors with the pthread_setaffinity_np API.
>
> This is my scenario: I have 4 Real Time Threads (SCHED_FIFO)
> distributed as follows:
>
> T0 : CPU 0, Priority 2 (HIGH)
> T1 : CPU 1, Priority 2 (HIGH)
> T3 : CPU 0, Priority 1 (LOW)
> T4 : CPU 1, Priority 1 (LOW)
>
> So T0 and T1 are actually the "big bosses" on CPUs #0 and #1; T3 and
> T4, instead, never execute (let's assume that each thread is a simple
> busy wait that never sleeps/yields).
> Now, at a certain point, from T0's code, I want to migrate T4 from CPU
> #1 to CPU #0, keeping its low priority.
> Therefore I perform a pthread_setaffinity_np call from T0, changing
> T4's mask from CPU #1 to CPU #0.
>
> In this scenario it happens that T3 (which should never execute, since
> T0, with higher priority, is currently running on the same CPU #0)
> "emerges" and executes for a bit.
> It seems that the pthread_setaffinity_np syscall somehow suspends the
> caller for the time needed to migrate T4, letting the scheduler
> execute T3 for that stretch of time.
>
> Is this behavior expected (I did not find any documentation about
> it)? How can I avoid it?
>
> Thanks in advance,
> Primiano
>
> --
>  Primiano Tucci
>  http://www.primianotucci.com
>


end of thread, other threads:[~2010-04-19 20:18 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
2010-04-19  9:45 Strange behavior of pthread_setaffinity_np Primiano Tucci
2010-04-19 11:07 ` Sujit K M
2010-04-19 11:51   ` Primiano Tucci
     [not found]     ` <w2n921ca19c1004190501n36c7f10ch484cda701e261ee9@mail.gmail.com>
2010-04-19 12:01       ` Sujit K M
2010-04-19 20:18 ` Primiano Tucci
