public inbox for linux-rt-users@vger.kernel.org
* Cyclictest results on Sparc64 with PREEMPT_RT
@ 2014-01-27  8:20 Allen Pais
  2014-02-07 12:35 ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 6+ messages in thread
From: Allen Pais @ 2014-01-27  8:20 UTC (permalink / raw)
  To: linux-rt-users, Thomas Gleixner, davem,
	"Sebastian Andrzej Siewior bigeasy"

Hi,

Here's a quick update on how Sparc64 (with PREEMPT_RT) behaved with cyclictest.

./cyclictest -l 10000 -i 1000 -n -p 80 -q

With PREEMPT_RT
kernel version: v3.10.24-rt22

(without load)
Min:6 Act:7 Avg:7 Max:10

(with load, without hackbench)
Min:6 Act:7 Avg:7 Max:46


Without PREEMPT_RT
kernel version: v3.10

(without load)
Min:12 Act:13 Avg:13 Max:16

(with load, without hackbench)
Min:10 Act:16 Avg:15 Max:813

But with load, after a number of repeated tests, the system hit a soft lockup.

<snip>
[ 1143.894099] INFO: rcu_preempt self-detected stall on CPU { 36}  (t=2100 jiffies g=373 c=372 q=61)
[ 1143.894130]   CPU[  0]: TSTATE[0000009980001602] TPC[000000000048d1ac] TNPC[000000000048d1b0] TASK[ksoftirqd/0:3]
[ 1143.894151]              TPC[idle_cpu+0x2c/0x80] O7[cpumask_next_and+0x18/0x80] I7[find_busiest_group+0x21c/0xa40] RPC[load_balance+0xe8/0x880]
.....
<snip>

trace:

[ 1150.135499] BUG: soft lockup - CPU#36 stuck for 23s! [swapper/36:0]
[ 1150.135552] Modules linked in: usb_storage binfmt_misc ehci_pci ehci_hcd sg n2_rng rng_core ext4 jbd2 crc16 sr_mod mpt2sas scsi_transport_sas raid_class sunvnet sunvdc dm_mirror dm_region_hash dm_log dm_mod be2iscsi iscsi_boot_sysfs bnx2i cnic uio ipv6 cxgb4i cxgb4 cxgb3i libcxgbi cxgb3 mdio libiscsi_tcp libiscsi scsi_transport_iscsi
[ 1150.135556] CPU: 36 PID: 0 Comm: swapper/36 Tainted: G        W    3.10.22-rt19+ #9
[ 1150.135559] task: fffff80fd4dc5b00 ti: fffff80fd4dfc000 task.ti: fffff80fd4dfc000
[ 1150.135561] TSTATE: 0000000080001601 TPC: 0000000000404b54 TNPC: 0000000000404b58 Y: 00000000    Tainted: G        W   
[ 1150.135564] TPC: <rtrap_no_irq_enable+0x0/0xc>
[ 1150.135566] g0: 00000000009d4080 g1: fffff80fd4dfc000 g2: 0000000001010001 g3: 0000000001010001
[ 1150.135567] g4: fffff80fd4dc5b00 g5: fffff80fde86c000 g6: fffff80fd4dfc000 g7: 00000000009dc140
[ 1150.135569] o0: 0000000000000001 o1: fffff80fd4dfec80 o2: 0000000000404b58 o3: 0000000000000000
[ 1150.135570] o4: 000000000000004f o5: 0000000000000185 sp: fffff80fd4dfe3c1 ret_pc: 00000000004209f4
[ 1150.135572] RPC: <tl0_irq15+0x14/0x20>
[ 1150.135574] l0: 0000000000001000 l1: 0000000080001600 l2: 00000000004209f0 l3: 000000000000000a
[ 1150.135576] l4: 0000000000000000 l5: 0000000fdea2c000 l6: fffff80fd4dfc000 l7: 0000000080001001
[ 1150.135577] i0: 0000000000000001 i1: fffff80fd4dfede0 i2: 0000000000404b58 i3: 0000000000000000
[ 1150.135578] i4: 000000000000004f i5: 0000000000000185 i6: fffff80fd4dfe521 i7: 00000000004209f4
[ 1150.135581] I7: <tl0_irq15+0x14/0x20>
[ 1150.135582] Call Trace:
[ 1150.135584]  [0000000000404b54] rtrap_no_irq_enable+0x0/0xc
[ 1150.135586]  [0000000000404b54] rtrap_no_irq_enable+0x0/0xc
[ 1150.135588]  [0000000000404b54] rtrap_no_irq_enable+0x0/0xc
[ 1150.135590]  [0000000000404b54] rtrap_no_irq_enable+0x0/0xc
[ 1150.135592]  [0000000000404b54] rtrap_no_irq_enable+0x0/0xc
[ 1150.135594]  [0000000000404b54] rtrap_no_irq_enable+0x0/0xc
[ 1150.135595]  [0000000000404b54] rtrap_no_irq_enable+0x0/0xc
[ 1150.135597]  [0000000000404b54] rtrap_no_irq_enable+0x0/0xc
[ 1150.135599]  [0000000000404b54] rtrap_no_irq_enable+0x0/0xc
[ 1150.135600]  [0000000000404b54] rtrap_no_irq_enable+0x0/0xc
[ 1150.135604]  [00000000004acc00] in_lock_functions+0x0/0x40
[ 1150.135608]  [000000000080a038] add_preempt_count+0xd8/0x140
[ 1150.135610]  [000000000080617c] __schedule+0x1c/0x500
[ 1150.135613]  [0000000000806b7c] schedule+0x1c/0xc0
[ 1150.135615]  [0000000000806f8c] schedule_preempt_disabled+0xc/0x40
[ 1150.135617]  [000000000049dd10] cpu_startup_entry+0x150/0x300
[ 1160.917971] sd 0:0:0:0: attempting task abort! scmd(fffff80fcea88620)
[ 1164.407662] sd 0:0:0:0: [sda] CDB: 
[ 1164.414603] Read(10): 28 00 27 21 3f b3 00 00 08 00
[ 1164.424320] scsi target0:0:0: handle(0x0009), sas_address(0x5000cca025967659), phy(0)
[ 1164.439960] scsi target0:0:0: enclosure_logical_id(0x50800200013890f8), slot(0)

Message from syslogd@localhost at Jan 27 02:51:15 ...
 kernel:[ 1150.135499] BUG: soft lockup - CPU#36 stuck for 23s! [swapper/36:0]
[ 1194.455533] mpt2sas0: mpt2sas_scsih_issue_tm: timeout
[ 1194.465348] mf:
	01000009 00000100 00000000 00000000 00000000 00000000 00000000 00000000 
	00000000 00000000 00000000 00000000 00000362 
[ 1204.487799] mpt2sas0: sending diag reset !!
[ 1205.598040] mpt2sas0: diag reset: SUCCESS

I am yet to debug what went wrong.

- Allen


* Re: Cyclictest results on Sparc64 with PREEMPT_RT
  2014-01-27  8:20 Cyclictest results on Sparc64 with PREEMPT_RT Allen Pais
@ 2014-02-07 12:35 ` Sebastian Andrzej Siewior
  2014-02-07 12:41   ` Allen Pais
  0 siblings, 1 reply; 6+ messages in thread
From: Sebastian Andrzej Siewior @ 2014-02-07 12:35 UTC (permalink / raw)
  To: Allen Pais
  Cc: linux-rt-users, Thomas Gleixner, davem,
	Sebastian Andrzej Siewior

* Allen Pais | 2014-01-27 13:50:43 [+0530]:

>Hi,
Hi,

>[ 1143.894099] INFO: rcu_preempt self-detected stall on CPU { 36}  (t=2100 jiffies g=373 c=372 q=61)
>[ 1143.894130]   CPU[  0]: TSTATE[0000009980001602] TPC[000000000048d1ac] TNPC[000000000048d1b0] TASK[ksoftirqd/0:3]
>[ 1143.894151]              TPC[idle_cpu+0x2c/0x80] O7[cpumask_next_and+0x18/0x80] I7[find_busiest_group+0x21c/0xa40] RPC[load_balance+0xe8/0x880]

So you have a CPU stall on CPU 36.
>
>[ 1150.135499] BUG: soft lockup - CPU#36 stuck for 23s! [swapper/36:0]
>[ 1150.135564] TPC: <rtrap_no_irq_enable+0x0/0xc>
>[ 1150.135572] RPC: <tl0_irq15+0x14/0x20>
>[ 1150.135581] I7: <tl0_irq15+0x14/0x20>

This is your NMI handler where you are right now…

>[ 1150.135582] Call Trace:
>[ 1150.135584]  [0000000000404b54] rtrap_no_irq_enable+0x0/0xc
>[ 1150.135586]  [0000000000404b54] rtrap_no_irq_enable+0x0/0xc
>[ 1150.135588]  [0000000000404b54] rtrap_no_irq_enable+0x0/0xc
>[ 1150.135590]  [0000000000404b54] rtrap_no_irq_enable+0x0/0xc
>[ 1150.135592]  [0000000000404b54] rtrap_no_irq_enable+0x0/0xc
>[ 1150.135594]  [0000000000404b54] rtrap_no_irq_enable+0x0/0xc
>[ 1150.135595]  [0000000000404b54] rtrap_no_irq_enable+0x0/0xc
>[ 1150.135597]  [0000000000404b54] rtrap_no_irq_enable+0x0/0xc
>[ 1150.135599]  [0000000000404b54] rtrap_no_irq_enable+0x0/0xc
>[ 1150.135600]  [0000000000404b54] rtrap_no_irq_enable+0x0/0xc

This is the stack where the NMI got injected. Not sure why it shows up
more than once here, but then this is the sparc64 trace I see :)

>[ 1150.135604]  [00000000004acc00] in_lock_functions+0x0/0x40
>[ 1150.135608]  [000000000080a038] add_preempt_count+0xd8/0x140
>[ 1150.135610]  [000000000080617c] __schedule+0x1c/0x500
>[ 1150.135613]  [0000000000806b7c] schedule+0x1c/0xc0
>[ 1150.135615]  [0000000000806f8c] schedule_preempt_disabled+0xc/0x40
>[ 1150.135617]  [000000000049dd10] cpu_startup_entry+0x150/0x300

And this is where the CPU was before the NMI. It doesn't look blocking:
in_lock_functions() just compares a few values, no locking involved, so
the CPU probably was here when the NMI hit, and a usec later it might be
an instruction further on. What I think is odd is that it is exactly at
the beginning of the function, not an instruction later.
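
For reference, in_lock_functions() in that tree is (roughly, from memory,
so take it as a sketch rather than the exact source) nothing more than an
address-range check against the linker symbols bounding the spinlock text
section:

int in_lock_functions(unsigned long addr)
{
	/* bounds of the __lockfunc (.spinlock.text) section, set by the linker */
	extern char __lock_text_start[], __lock_text_end[];

	return addr >= (unsigned long)__lock_text_start &&
	       addr <  (unsigned long)__lock_text_end;
}

So there is really nothing in there that could spin or block.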

>I am yet to debug what went wrong.
>
>- Allen

Sebastian


* Re: Cyclictest results on Sparc64 with PREEMPT_RT
  2014-02-07 12:35 ` Sebastian Andrzej Siewior
@ 2014-02-07 12:41   ` Allen Pais
  2014-02-07 13:25     ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 6+ messages in thread
From: Allen Pais @ 2014-02-07 12:41 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-rt-users, Thomas Gleixner, davem,
	Sebastian Andrzej Siewior

[-- Attachment #1: Type: text/plain, Size: 3797 bytes --]

Sebastian,
> 
> This is the stack where the NMI got injected. Not sure why it shows up
> more than once here, but then this is the sparc64 trace I see :)
> 
>> [ 1150.135604]  [00000000004acc00] in_lock_functions+0x0/0x40
>> [ 1150.135608]  [000000000080a038] add_preempt_count+0xd8/0x140
>> [ 1150.135610]  [000000000080617c] __schedule+0x1c/0x500
>> [ 1150.135613]  [0000000000806b7c] schedule+0x1c/0xc0
>> [ 1150.135615]  [0000000000806f8c] schedule_preempt_disabled+0xc/0x40
>> [ 1150.135617]  [000000000049dd10] cpu_startup_entry+0x150/0x300
> 
> And this is where the CPU was before the NMI. It doesn't look blocking:
> in_lock_functions() just compares a few values, no locking involved, so
> the CPU probably was here when the NMI hit, and a usec later it might be
> an instruction further on. What I think is odd is that it is exactly at
> the beginning of the function, not an instruction later.
> 

I haven't made much progress yet. These appear when the machine is under
stress (hackbench/dd). There's also another issue that popped up while
I ran hackbench; here's a brief trace:

[ 6694.884398] kernel BUG at kernel/rtmutex.c:738!
[ 6694.884402]               \|/ ____ \|/
[ 6694.884402]               "@'/ .. \`@"
[ 6694.884402]               /_| \__/ |_\
[ 6694.884402]                  \__U_/
[ 6694.884403] hackbench(18821): Kernel bad sw trap 5 [#2]
[ 6694.884408] CPU: 8 PID: 18821 Comm: hackbench Tainted: G      D W    3.10.24-rt22+ #11
[ 6694.884410] task: fffff80f8f4a2580 ti: fffff80f8ebd4000 task.ti: fffff80f8ebd4000
[ 6694.884413] TSTATE: 0000004411001603 TPC: 0000000000878ec4 TNPC: 0000000000878ec8 Y: 00000000    Tainted: G      D W   
[ 6694.884425] TPC: <rt_spin_lock_slowlock+0x304/0x340>
[ 6694.884427] g0: 0000000000000000 g1: 0000000000000000 g2: 0000000000000000 g3: 0000000000de5800
[ 6694.884429] g4: fffff80f8f4a2580 g5: fffff80fd089c000 g6: fffff80f8ebd4000 g7: 726e656c2f72746d
[ 6694.884430] o0: 00000000009bfaf0 o1: 00000000000002e2 o2: 0000000000000000 o3: 0000000000000001
[ 6694.884432] o4: 0000000000000002 o5: 0000000000000000 sp: fffff80fff9b70d1 ret_pc: 0000000000878ebc
[ 6694.884434] RPC: <rt_spin_lock_slowlock+0x2fc/0x340>
[ 6694.884437] l0: fffff80fff9b7990 l1: fffff80f8f4a2580 l2: fffff80f8f4a2bd0 l3: 000001001fb75040
[ 6694.884438] l4: 0000000000000000 l5: 0000000000e25c00 l6: 0000000000000008 l7: 0000000000000008
[ 6694.884440] i0: fffff80f97836070 i1: 0000000000512400 i2: 0000000000000001 i3: 0000000000000000
[ 6694.884441] i4: 0000000000000002 i5: 0000000000000000 i6: fffff80fff9b7211 i7: 00000000008790ac
[ 6694.884444] I7: <rt_spin_lock+0xc/0x40>
[ 6694.884445] Call Trace:
[ 6694.884448]  [00000000008790ac] rt_spin_lock+0xc/0x40
[ 6694.884454]  [000000000052e30c] unmap_single_vma+0x1ec/0x6c0
[ 6694.884456]  [000000000052e808] unmap_vmas+0x28/0x60
[ 6694.884459]  [0000000000530cc8] exit_mmap+0x88/0x160
[ 6694.884465]  [000000000045e0d4] mmput+0x34/0xe0
[ 6694.884469]  [00000000004669fc] do_exit+0x1fc/0xa40
[ 6694.884473]  [000000000087a650] perfctr_irq+0x3d0/0x420
[ 6694.884477]  [00000000004209f4] tl0_irq15+0x14/0x20
[ 6694.884482]  [0000000000671e4c] do_raw_spin_lock+0xac/0x120
[ 6694.884485]  [0000000000879cc8] _raw_spin_lock_irqsave+0x68/0xa0
[ 6694.884488]  [0000000000452074] flush_tsb_user+0x14/0x120
[ 6694.884490]  [00000000004515a8] flush_tlb_pending+0x68/0xe0
[ 6694.884492]  [0000000000451800] tlb_batch_add+0x1e0/0x200
[ 6694.884496]  [000000000053bef8] ptep_clear_flush+0x38/0x60
[ 6694.884498]  [000000000052a9fc] do_wp_page+0x1dc/0x860
[ 6694.884500]  [000000000052b3f8] handle_pte_fault+0x378/0x7c0

These are the two issues I have run into under stress. Otherwise the machine
is quite stable under light load (compress/decompress and building the kernel).

Attached are the graphs of the system under light load.

Thanks,

Allen


[-- Attachment #2: plot_OL.png --]
[-- Type: image/png, Size: 3800 bytes --]

[-- Attachment #3: plot_RT.png --]
[-- Type: image/png, Size: 3548 bytes --]


* Re: Cyclictest results on Sparc64 with PREEMPT_RT
  2014-02-07 12:41   ` Allen Pais
@ 2014-02-07 13:25     ` Sebastian Andrzej Siewior
  2014-02-07 13:30       ` Allen Pais
  0 siblings, 1 reply; 6+ messages in thread
From: Sebastian Andrzej Siewior @ 2014-02-07 13:25 UTC (permalink / raw)
  To: Allen Pais; +Cc: linux-rt-users, Thomas Gleixner, davem

On 02/07/2014 01:41 PM, Allen Pais wrote:
> Sebastian,

Hi Allen,

> I haven't made much progress yet. These appear when the machine is under
> stress (hackbench/dd). There's also another issue that popped up while
> I ran hackbench; here's a brief trace:
> 
> [ 6694.884398] kernel BUG at kernel/rtmutex.c:738!
> [ 6694.884402]               \|/ ____ \|/
> [ 6694.884402]               "@'/ .. \`@"
> [ 6694.884402]               /_| \__/ |_\
> [ 6694.884402]                  \__U_/

I think we need this in generic code. I'm actually a little jealous
that only sparc has this.

> [ 6694.884403] hackbench(18821): Kernel bad sw trap 5 [#2]
> [ 6694.884408] CPU: 8 PID: 18821 Comm: hackbench Tainted: G      D W    3.10.24-rt22+ #11
> [ 6694.884410] task: fffff80f8f4a2580 ti: fffff80f8ebd4000 task.ti: fffff80f8ebd4000
> [ 6694.884413] TSTATE: 0000004411001603 TPC: 0000000000878ec4 TNPC: 0000000000878ec8 Y: 00000000    Tainted: G      D W   
> [ 6694.884425] TPC: <rt_spin_lock_slowlock+0x304/0x340>
> [ 6694.884427] g0: 0000000000000000 g1: 0000000000000000 g2: 0000000000000000 g3: 0000000000de5800
> [ 6694.884429] g4: fffff80f8f4a2580 g5: fffff80fd089c000 g6: fffff80f8ebd4000 g7: 726e656c2f72746d
> [ 6694.884430] o0: 00000000009bfaf0 o1: 00000000000002e2 o2: 0000000000000000 o3: 0000000000000001
> [ 6694.884432] o4: 0000000000000002 o5: 0000000000000000 sp: fffff80fff9b70d1 ret_pc: 0000000000878ebc
> [ 6694.884434] RPC: <rt_spin_lock_slowlock+0x2fc/0x340>
> [ 6694.884437] l0: fffff80fff9b7990 l1: fffff80f8f4a2580 l2: fffff80f8f4a2bd0 l3: 000001001fb75040
> [ 6694.884438] l4: 0000000000000000 l5: 0000000000e25c00 l6: 0000000000000008 l7: 0000000000000008
> [ 6694.884440] i0: fffff80f97836070 i1: 0000000000512400 i2: 0000000000000001 i3: 0000000000000000
> [ 6694.884441] i4: 0000000000000002 i5: 0000000000000000 i6: fffff80fff9b7211 i7: 00000000008790ac
> [ 6694.884444] I7: <rt_spin_lock+0xc/0x40>
> [ 6694.884445] Call Trace:
> [ 6694.884448]  [00000000008790ac] rt_spin_lock+0xc/0x40
> [ 6694.884454]  [000000000052e30c] unmap_single_vma+0x1ec/0x6c0
> [ 6694.884456]  [000000000052e808] unmap_vmas+0x28/0x60
> [ 6694.884459]  [0000000000530cc8] exit_mmap+0x88/0x160
> [ 6694.884465]  [000000000045e0d4] mmput+0x34/0xe0
> [ 6694.884469]  [00000000004669fc] do_exit+0x1fc/0xa40
> [ 6694.884473]  [000000000087a650] perfctr_irq+0x3d0/0x420
> [ 6694.884477]  [00000000004209f4] tl0_irq15+0x14/0x20
> [ 6694.884482]  [0000000000671e4c] do_raw_spin_lock+0xac/0x120
> [ 6694.884485]  [0000000000879cc8] _raw_spin_lock_irqsave+0x68/0xa0
> [ 6694.884488]  [0000000000452074] flush_tsb_user+0x14/0x120
> [ 6694.884490]  [00000000004515a8] flush_tlb_pending+0x68/0xe0
> [ 6694.884492]  [0000000000451800] tlb_batch_add+0x1e0/0x200
> [ 6694.884496]  [000000000053bef8] ptep_clear_flush+0x38/0x60
> [ 6694.884498]  [000000000052a9fc] do_wp_page+0x1dc/0x860
> [ 6694.884500]  [000000000052b3f8] handle_pte_fault+0x378/0x7c0
> 
> These are the two issues I have run into under stress. Otherwise the machine
> is quite stable under light load (compress/decompress and building the kernel).

This is a deadlock. Whatever lock you are going after, you are already
holding it in this context / hackbench. I don't know how you got from
perfctr_irq() to do_exit(), but you shouldn't do this in hardirq
context.

But calling do_exit() is probably error recovery, since it would kill
hackbench, and I assume it wasn't done yet.
I also see tl0_irq15() in your stack trace. This is that evil NMI that
checks whether the system is stalling. I think you are stuck in
flush_tsb_user() on that raw lock: somebody is not letting it go, so
you spin forever. Maybe full lockdep shows you some information about
wrong-context locking etc.
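
To spell out the RT constraint here (a made-up illustration, not code from
your tree): on PREEMPT_RT a raw_spinlock_t stays a real spinning lock while
a plain spinlock_t becomes a sleeping rtmutex, so taking the latter from a
context that cannot sleep (a non-threaded hardirq, or that NMI path) must
not happen, which is part of what goes wrong above:

#include <linux/spinlock.h>
#include <linux/interrupt.h>

static DEFINE_RAW_SPINLOCK(hw_lock);	/* still spins on PREEMPT_RT */
static DEFINE_SPINLOCK(data_lock);	/* becomes a sleeping rtmutex on RT */

/* imagine this runs in a real hardirq (IRQF_NO_THREAD) or NMI context */
static irqreturn_t bad_hardirq_handler(int irq, void *dev_id)
{
	raw_spin_lock(&hw_lock);	/* fine: never sleeps */
	raw_spin_unlock(&hw_lock);

	spin_lock(&data_lock);		/* wrong on RT: this may sleep, and
					   this context must not sleep */
	spin_unlock(&data_lock);

	return IRQ_HANDLED;
}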

> Thanks,
> 
> Allen
> 

Sebastian


* Re: Cyclictest results on Sparc64 with PREEMPT_RT
  2014-02-07 13:25     ` Sebastian Andrzej Siewior
@ 2014-02-07 13:30       ` Allen Pais
  2014-02-11 21:44         ` Kirill Tkhai
  0 siblings, 1 reply; 6+ messages in thread
From: Allen Pais @ 2014-02-07 13:30 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior; +Cc: linux-rt-users, Thomas Gleixner, davem

Sebastian,
> 
> This is a deadlock. Whatever lock you are going after, you are already
> holding it in this context / hackbench. I don't know how you got from
> perfctr_irq() to do_exit(), but you shouldn't do this in hardirq
> context.
> 
> But calling do_exit() is probably error recovery, since it would kill
> hackbench, and I assume it wasn't done yet.
> I also see tl0_irq15() in your stack trace. This is that evil NMI that
> checks whether the system is stalling. I think you are stuck in
> flush_tsb_user() on that raw lock: somebody is not letting it go, so
> you spin forever. Maybe full lockdep shows you some information about
> wrong-context locking etc.
> 
Yes, there's someone holding the lock and not releasing it in
flush_tsb_user(). I'll check with lockdep.

Thanks,
Allen



* Re: Cyclictest results on Sparc64 with PREEMPT_RT
  2014-02-07 13:30       ` Allen Pais
@ 2014-02-11 21:44         ` Kirill Tkhai
  0 siblings, 0 replies; 6+ messages in thread
From: Kirill Tkhai @ 2014-02-11 21:44 UTC (permalink / raw)
  To: Allen Pais, Sebastian Andrzej Siewior
  Cc: linux-rt-users, Thomas Gleixner, davem@davemloft.net



07.02.2014, 17:31, "Allen Pais" <allen.pais@oracle.com>:
> Sebastian,
>
>>  This is a deadlock. Whatever lock you are going after, you are already
>>  holding it in this context / hackbench. I don't know how you got from
>>  perfctr_irq() to do_exit(), but you shouldn't do this in hardirq
>>  context.
>>
>>  But calling do_exit() is probably error recovery, since it would kill
>>  hackbench, and I assume it wasn't done yet.
>>  I also see tl0_irq15() in your stack trace. This is that evil NMI that
>>  checks whether the system is stalling. I think you are stuck in
>>  flush_tsb_user() on that raw lock: somebody is not letting it go, so
>>  you spin forever. Maybe full lockdep shows you some information about
>>  wrong-context locking etc.
>
> Yes, there's someone holding the lock and not releasing it in
> flush_tsb_user(). I'll check with lockdep.

I'm looking at arch_enter_lazy_mmu_mode()/arch_leave_lazy_mmu_mode().
It looks like they are assumed to be called with preemption disabled.
In the !RT case they are called after spin_lock(), which keeps preemption
disabled. In RT the spinlocks are mutexes, so there is preemption.
I think we need something like migrate_disable() here.
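
Something along these lines, going by what I remember of the 3.10 sparc64
code in arch/sparc/mm/tlb.c (so take the function bodies as approximate,
and the migrate_disable()/migrate_enable() pair as the untested idea, not
a patch):

void arch_enter_lazy_mmu_mode(void)
{
	struct tlb_batch *tb;

	migrate_disable();		/* RT: tlb_batch is per-CPU state */
	tb = &__get_cpu_var(tlb_batch);
	tb->active = 1;
}

void arch_leave_lazy_mmu_mode(void)
{
	struct tlb_batch *tb = &__get_cpu_var(tlb_batch);

	if (tb->tlb_nr)
		flush_tlb_pending();
	tb->active = 0;
	migrate_enable();		/* RT: matching enable */
}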

I have no sparc right now and did not check this carefully. Check it if
you are interested :)

Kirill

> Thanks,
> Allen
>


Thread overview: 6+ messages
2014-01-27  8:20 Cyclictest results on Sparc64 with PREEMPT_RT Allen Pais
2014-02-07 12:35 ` Sebastian Andrzej Siewior
2014-02-07 12:41   ` Allen Pais
2014-02-07 13:25     ` Sebastian Andrzej Siewior
2014-02-07 13:30       ` Allen Pais
2014-02-11 21:44         ` Kirill Tkhai
