* 2.6.23.1: mdadm/raid5 hung/d-state
@ 2007-11-04 12:03 Justin Piszcz
2007-11-04 12:39 ` 2.6.23.1: mdadm/raid5 hung/d-state (md3_raid5 stuck in endless loop?) Justin Piszcz
` (3 more replies)
0 siblings, 4 replies; 35+ messages in thread
From: Justin Piszcz @ 2007-11-04 12:03 UTC (permalink / raw)
To: linux-kernel, linux-raid
# ps auxww | grep D
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 273 0.0 0.0 0 0 ? D Oct21 14:40 [pdflush]
root 274 0.0 0.0 0 0 ? D Oct21 13:00 [pdflush]
This is the second time this has happened in a span of several days/weeks:
while doing regular file I/O (decompressing a file), everything touching
the device went into D-state.
# mdadm -D /dev/md3
/dev/md3:
Version : 00.90.03
Creation Time : Wed Aug 22 10:38:53 2007
Raid Level : raid5
Array Size : 1318680576 (1257.59 GiB 1350.33 GB)
Used Dev Size : 146520064 (139.73 GiB 150.04 GB)
Raid Devices : 10
Total Devices : 10
Preferred Minor : 3
Persistence : Superblock is persistent
Update Time : Sun Nov 4 06:38:29 2007
State : active
Active Devices : 10
Working Devices : 10
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 1024K
UUID : e37a12d1:1b0b989a:083fb634:68e9eb49
Events : 0.4309
Number Major Minor RaidDevice State
0 8 33 0 active sync /dev/sdc1
1 8 49 1 active sync /dev/sdd1
2 8 65 2 active sync /dev/sde1
3 8 81 3 active sync /dev/sdf1
4 8 97 4 active sync /dev/sdg1
5 8 113 5 active sync /dev/sdh1
6 8 129 6 active sync /dev/sdi1
7 8 145 7 active sync /dev/sdj1
8 8 161 8 active sync /dev/sdk1
9 8 177 9 active sync /dev/sdl1
If I wanted to find out what is causing this, what type of debugging would
I have to enable to track it down? Any attempt to read or write files on the
device hangs (the process also goes into D-state). Is there any useful
information I can get before rebooting the machine?
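Before enabling any kernel debug options, a first-pass answer to "where is
everything stuck?" is to ask `ps` for the wait channel of each D-state task
(a sketch, not from the original thread; assumes a procps `ps` that supports
the `wchan` format specifier):

```shell
# Show every uninterruptible (D-state) task together with the kernel
# function it is sleeping in; wchan quickly separates "stuck in md"
# from "stuck in XFS" or the block layer.
ps -eo pid,stat,wchan:32,comm | awk '$2 ~ /^D/'
```

On a healthy box this prints nothing; here it would list the pdflush threads
and anything blocked behind md3.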
# pwd
/sys/block/md3/md
# ls
array_state dev-sdj1/ rd2@ stripe_cache_active
bitmap_set_bits dev-sdk1/ rd3@ stripe_cache_size
chunk_size dev-sdl1/ rd4@ suspend_hi
component_size layout rd5@ suspend_lo
dev-sdc1/ level rd6@ sync_action
dev-sdd1/ metadata_version rd7@ sync_completed
dev-sde1/ mismatch_cnt rd8@ sync_speed
dev-sdf1/ new_dev rd9@ sync_speed_max
dev-sdg1/ raid_disks reshape_position sync_speed_min
dev-sdh1/ rd0@ resync_start
dev-sdi1/ rd1@ safe_mode_delay
# cat array_state
active-idle
# cat mismatch_cnt
0
# cat stripe_cache_active
1
# cat stripe_cache_size
16384
# cat sync_action
idle
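All of those per-attribute reads can be collected in one pass, which is
handy when snapshotting the array state for a report before rebooting
(a sketch; `md3` is this array's name, adjust to taste):

```shell
# Dump every plain-file attribute under the array's md sysfs directory,
# one "name: value" pair per line (symlinks like rd0@ and the dev-* and
# bitmap directories are skipped by the -f test).
for f in /sys/block/md3/md/*; do
    if [ -f "$f" ]; then
        printf '%s: %s\n' "${f##*/}" "$(cat "$f" 2>/dev/null)"
    fi
done
```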
# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md1 : active raid1 sdb2[1] sda2[0]
136448 blocks [2/2] [UU]
md2 : active raid1 sdb3[1] sda3[0]
129596288 blocks [2/2] [UU]
md3 : active raid5 sdl1[9] sdk1[8] sdj1[7] sdi1[6] sdh1[5] sdg1[4] sdf1[3]
sde1[2] sdd1[1] sdc1[0]
1318680576 blocks level 5, 1024k chunk, algorithm 2 [10/10]
[UUUUUUUUUU]
md0 : active raid1 sdb1[1] sda1[0]
16787776 blocks [2/2] [UU]
unused devices: <none>
#
Justin.
^ permalink raw reply  [flat|nested] 35+ messages in thread

* Re: 2.6.23.1: mdadm/raid5 hung/d-state (md3_raid5 stuck in endless loop?)
  2007-11-04 12:03 2.6.23.1: mdadm/raid5 hung/d-state Justin Piszcz
@ 2007-11-04 12:39 ` Justin Piszcz
  2007-11-04 12:48 ` 2.6.23.1: mdadm/raid5 hung/d-state Michael Tokarev
  ` (2 subsequent siblings)
  3 siblings, 0 replies; 35+ messages in thread
From: Justin Piszcz @ 2007-11-04 12:39 UTC (permalink / raw)
To: linux-kernel, linux-raid; +Cc: xfs

Time to reboot, before reboot:

top - 07:30:23 up 13 days, 13:33, 10 users,  load average: 16.00, 15.99, 14.96
Tasks: 221 total,   7 running, 209 sleeping,   0 stopped,   5 zombie
Cpu(s):  0.0%us, 25.5%sy,  0.0%ni, 74.5%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   8039432k total,  1744356k used,  6295076k free,      164k buffers
Swap: 16787768k total,      160k used, 16787608k free,   616960k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  688 root      15  -5     0    0    0 R  100  0.0 121:21.43 md3_raid5
  273 root      20   0     0    0    0 D    0  0.0  14:40.68 pdflush
  274 root      20   0     0    0    0 D    0  0.0  13:00.93 pdflush

# cat /proc/fs/xfs/stat
extent_alloc 301974 256068291 310513 240764389
abt 1900173 15346352 738568 731314
blk_map 276979807 235589732 864002 211245834 591619 513439614 0
bmbt 50717 367726 14177 11846
dir 3818065 361561 359723 975628
trans 48452 2648064 570998
ig 6034530 2074424 43153 3960106 0 3869384 460831
log 282781 10454333 3028 399803 173488
push_ail 3267594 0 1620 2611 730365 0 4476 0 10269 0
xstrat 291940 0
rw 61423078 103732605
attr 0 0 0 0
icluster 312958 97323 419837
vnodes 90721 4019823 0 1926744 3929102 3929102 3929102 0
buf 14678900 11027087 3651843 25743 760449 0 0 15775888 280425
xpc 966925905920 1047628533165 1162276949815
debug 0

# cat meminfo
MemTotal:      8039432 kB
MemFree:       6287000 kB
Buffers:           164 kB
Cached:         617072 kB
SwapCached:          0 kB
Active:         178404 kB
Inactive:       589880 kB
SwapTotal:    16787768 kB
SwapFree:     16787608 kB
Dirty:          494280 kB
Writeback:       86004 kB
AnonPages:      151240 kB
Mapped:          17092 kB
Slab:           259696 kB
SReclaimable:   170876 kB
SUnreclaim:      88820 kB
PageTables:      11448 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:  20807484 kB
Committed_AS:   353536 kB
VmallocTotal: 34359738367 kB
VmallocUsed:     15468 kB
VmallocChunk: 34359722699 kB

# echo 3 > /proc/sys/vm/drop_caches
# cat /proc/meminfo
MemTotal:      8039432 kB
MemFree:       6418352 kB
Buffers:            32 kB
Cached:         597908 kB
SwapCached:          0 kB
Active:         172028 kB
Inactive:       579808 kB
SwapTotal:    16787768 kB
SwapFree:     16787608 kB
Dirty:          494312 kB
Writeback:       86004 kB
AnonPages:      154104 kB
Mapped:          17416 kB
Slab:           144072 kB
SReclaimable:    53100 kB
SUnreclaim:      90972 kB
PageTables:      11832 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:  20807484 kB
Committed_AS:   360748 kB
VmallocTotal: 34359738367 kB
VmallocUsed:     15468 kB
VmallocChunk: 34359722699 kB

Nothing is actually happening on the device itself however:

Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sda     0.00   0.00   0.00 0.00 0.00  0.00  0.00     0.00     0.00  0.00  0.00
sdb     0.00   0.00   0.00 0.00 0.00  0.00  0.00     0.00     0.00  0.00  0.00
sdc     0.00   0.00   0.00 0.00 0.00  0.00  0.00     0.00     0.00  0.00  0.00
sdd     0.00   0.00   0.00 0.00 0.00  0.00  0.00     0.00     0.00  0.00  0.00
sde     0.00   0.00   0.00 0.00 0.00  0.00  0.00     0.00     0.00  0.00  0.00
sdf     0.00   0.00   0.00 0.00 0.00  0.00  0.00     0.00     0.00  0.00  0.00
sdg     0.00   0.00   0.00 0.00 0.00  0.00  0.00     0.00     0.00  0.00  0.00
sdh     0.00   0.00   0.00 0.00 0.00  0.00  0.00     0.00     0.00  0.00  0.00
sdi     0.00   0.00   0.00 0.00 0.00  0.00  0.00     0.00     0.00  0.00  0.00
sdj     0.00   0.00   0.00 0.00 0.00  0.00  0.00     0.00     0.00  0.00  0.00
sdk     0.00   0.00   0.00 0.00 0.00  0.00  0.00     0.00     0.00  0.00  0.00
sdl     0.00   0.00   0.00 0.00 0.00  0.00  0.00     0.00     0.00  0.00  0.00
md0     0.00   0.00   0.00 0.00 0.00  0.00  0.00     0.00     0.00  0.00  0.00
md3     0.00   0.00   0.00 0.00 0.00  0.00  0.00     0.00     0.00  0.00  0.00
md2     0.00   0.00   0.00 0.00 0.00  0.00  0.00     0.00     0.00  0.00  0.00
md1     0.00   0.00   0.00 0.00 0.00  0.00  0.00     0.00     0.00  0.00  0.00

# vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd    free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 6  0    160 6420244     32 600092    0    0   221   227    5    1  1  1 98  0
 6  0    160 6420228     32 600120    0    0     0     0 1015  142  0 25 75  0
 6  0    160 6420228     32 600120    0    0     0     0 1005  127  0 25 75  0
 6  0    160 6420228     32 600120    0    0     0    41 1022  151  0 26 74  0
 6  0    160 6420228     32 600120    0    0     0     0 1011  131  0 25 75  0
 6  0    160 6420228     32 600120    0    0     0     0 1013  124  0 25 75  0
 6  0    160 6420228     32 600120    0    0     0     0 1042  129  0 25 75  0

# uname -mr
2.6.23.1 x86_64

# cat /proc/vmstat
nr_free_pages 1598911
nr_inactive 146381
nr_active 42724
nr_anon_pages 37181
nr_mapped 4097
nr_file_pages 151975
nr_dirty 123572
nr_writeback 21501
nr_slab_reclaimable 16152
nr_slab_unreclaimable 24284
nr_page_table_pages 2823
nr_unstable 0
nr_bounce 0
nr_vmscan_write 20712
pgpgin 1015377151
pgpgout 1043634578
pswpin 0
pswpout 40
pgalloc_dma 4
pgalloc_dma32 319052932
pgalloc_normal 621945603
pgalloc_movable 0
pgfree 942598566
pgactivate 31123819
pgdeactivate 18438560
pgfault 360236898
pgmajfault 16158
pgrefill_dma 0
pgrefill_dma32 11683348
pgrefill_normal 18799274
pgrefill_movable 0
pgsteal_dma 0
pgsteal_dma32 176658679
pgsteal_normal 233628315
pgsteal_movable 0
pgscan_kswapd_dma 0
pgscan_kswapd_dma32 164181746
pgscan_kswapd_normal 217338820
pgscan_kswapd_movable 0
pgscan_direct_dma 0
pgscan_direct_dma32 13074075
pgscan_direct_normal 17342937
pgscan_direct_movable 0
pginodesteal 332816
slabs_scanned 12368000
kswapd_steal 380216091
kswapd_inodesteal 9858653
pageoutrun 1167045
allocstall 68454
pgrotated 40

# cat /proc/zoneinfo
Node 0, zone DMA
  pages free 2601
        min 3
        low 3
        high 4
        scanned 0 (a: 11 i: 12)
        spanned 4096
        present 2486
    nr_free_pages 2601
    nr_inactive 0
    nr_active 0
    nr_anon_pages 0
    nr_mapped 1
    nr_file_pages 0
    nr_dirty 0
    nr_writeback 0
    nr_slab_reclaimable 0
    nr_slab_unreclaimable 4
    nr_page_table_pages 0
    nr_unstable 0
    nr_bounce 0
    nr_vmscan_write 0
  protection: (0, 3246, 7917, 7917)
  pagesets
    cpu: 0 pcp: 0 count: 0 high: 0 batch: 1
    cpu: 0 pcp: 1 count: 0 high: 0 batch: 1
    vm stats threshold: 6
    cpu: 1 pcp: 0 count: 0 high: 0 batch: 1
    cpu: 1 pcp: 1 count: 0 high: 0 batch: 1
    vm stats threshold: 6
    cpu: 2 pcp: 0 count: 0 high: 0 batch: 1
    cpu: 2 pcp: 1 count: 0 high: 0 batch: 1
    vm stats threshold: 6
    cpu: 3 pcp: 0 count: 0 high: 0 batch: 1
    cpu: 3 pcp: 1 count: 0 high: 0 batch: 1
    vm stats threshold: 6
  all_unreclaimable: 1
  prev_priority: 12
  start_pfn: 0
Node 0, zone DMA32
  pages free 699197
        min 1166
        low 1457
        high 1749
        scanned 0 (a: 14 i: 0)
        spanned 1044480
        present 831104
    nr_free_pages 699197
    nr_inactive 38507
    nr_active 11855
    nr_anon_pages 11228
    nr_mapped 612
    nr_file_pages 39127
    nr_dirty 38462
    nr_writeback 34
    nr_slab_reclaimable 8164
    nr_slab_unreclaimable 4747
    nr_page_table_pages 756
    nr_unstable 0
    nr_bounce 0
    nr_vmscan_write 6132
  protection: (0, 0, 4671, 4671)
  pagesets
    cpu: 0 pcp: 0 count: 183 high: 186 batch: 31
    cpu: 0 pcp: 1 count: 52 high: 62 batch: 15
    vm stats threshold: 36
    cpu: 1 pcp: 0 count: 23 high: 186 batch: 31
    cpu: 1 pcp: 1 count: 14 high: 62 batch: 15
    vm stats threshold: 36
    cpu: 2 pcp: 0 count: 173 high: 186 batch: 31
    cpu: 2 pcp: 1 count: 61 high: 62 batch: 15
    vm stats threshold: 36
    cpu: 3 pcp: 0 count: 95 high: 186 batch: 31
    cpu: 3 pcp: 1 count: 57 high: 62 batch: 15
    vm stats threshold: 36
  all_unreclaimable: 0
  prev_priority: 12
  start_pfn: 4096
Node 0, zone Normal
  pages free 897091
        min 1678
        low 2097
        high 2517
        scanned 0 (a: 29 i: 0)
        spanned 1212416
        present 1195840
    nr_free_pages 897091
    nr_inactive 107874
    nr_active 30878
    nr_anon_pages 25956
    nr_mapped 3484
    nr_file_pages 112857
    nr_dirty 85110
    nr_writeback 21467
    nr_slab_reclaimable 7988
    nr_slab_unreclaimable 19546
    nr_page_table_pages 2067
    nr_unstable 0
    nr_bounce 0
    nr_vmscan_write 14580
  protection: (0, 0, 0, 0)
  pagesets
    cpu: 0 pcp: 0 count: 124 high: 186 batch: 31
    cpu: 0 pcp: 1 count: 1 high: 62 batch: 15
    vm stats threshold: 42
    cpu: 1 pcp: 0 count: 68 high: 186 batch: 31
    cpu: 1 pcp: 1 count: 9 high: 62 batch: 15
    vm stats threshold: 42
    cpu: 2 pcp: 0 count: 79 high: 186 batch: 31
    cpu: 2 pcp: 1 count: 10 high: 62 batch: 15
    vm stats threshold: 42
    cpu: 3 pcp: 0 count: 47 high: 186 batch: 31
    cpu: 3 pcp: 1 count: 60 high: 62 batch: 15
    vm stats threshold: 42
  all_unreclaimable: 0
  prev_priority: 12
  start_pfn: 1048576

On Sun, 4 Nov 2007, Justin Piszcz wrote:

> # ps auxww | grep D
> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
> root 273 0.0 0.0 0 0 ? D Oct21 14:40 [pdflush]
> root 274 0.0 0.0 0 0 ? D Oct21 13:00 [pdflush]
>
> After several days/weeks, this is the second time this has happened, while
> doing regular file I/O (decompressing a file), everything on the device went
> into D-state.
>
> # mdadm -D /dev/md3
> /dev/md3:
> Version : 00.90.03
> Creation Time : Wed Aug 22 10:38:53 2007
> Raid Level : raid5
> Array Size : 1318680576 (1257.59 GiB 1350.33 GB)
> Used Dev Size : 146520064 (139.73 GiB 150.04 GB)
> Raid Devices : 10
> Total Devices : 10
> Preferred Minor : 3
> Persistence : Superblock is persistent
>
> Update Time : Sun Nov 4 06:38:29 2007
> State : active
> Active Devices : 10
> Working Devices : 10
> Failed Devices : 0
> Spare Devices : 0
>
> Layout : left-symmetric
> Chunk Size : 1024K
>
> UUID : e37a12d1:1b0b989a:083fb634:68e9eb49
> Events : 0.4309
>
> Number Major Minor RaidDevice State
> 0 8 33 0 active sync /dev/sdc1
> 1 8 49 1 active sync /dev/sdd1
> 2 8 65 2 active sync /dev/sde1
> 3 8 81 3 active sync /dev/sdf1
> 4 8 97 4 active sync /dev/sdg1
> 5 8 113 5 active sync /dev/sdh1
> 6 8 129 6 active sync /dev/sdi1
> 7 8 145 7 active sync /dev/sdj1
> 8 8 161 8 active sync /dev/sdk1
> 9 8 177 9 active sync /dev/sdl1
>
> If I wanted to find out what is causing this, what type of debugging would I
> have to enable to track it down? Any attempt to read/write files on the
> devices fails (also going into d-state). Is there any useful information I
> can get currently before rebooting the machine?
>
> # pwd
> /sys/block/md3/md
> # ls
> array_state dev-sdj1/ rd2@ stripe_cache_active
> bitmap_set_bits dev-sdk1/ rd3@ stripe_cache_size
> chunk_size dev-sdl1/ rd4@ suspend_hi
> component_size layout rd5@ suspend_lo
> dev-sdc1/ level rd6@ sync_action
> dev-sdd1/ metadata_version rd7@ sync_completed
> dev-sde1/ mismatch_cnt rd8@ sync_speed
> dev-sdf1/ new_dev rd9@ sync_speed_max
> dev-sdg1/ raid_disks reshape_position sync_speed_min
> dev-sdh1/ rd0@ resync_start
> dev-sdi1/ rd1@ safe_mode_delay
> # cat array_state
> active-idle
> # cat mismatch_cnt
> 0
> # cat stripe_cache_active
> 1
> # cat stripe_cache_size
> 16384
> # cat sync_action
> idle
> # cat /proc/mdstat
> Personalities : [raid1] [raid6] [raid5] [raid4]
> md1 : active raid1 sdb2[1] sda2[0]
> 136448 blocks [2/2] [UU]
>
> md2 : active raid1 sdb3[1] sda3[0]
> 129596288 blocks [2/2] [UU]
>
> md3 : active raid5 sdl1[9] sdk1[8] sdj1[7] sdi1[6] sdh1[5] sdg1[4] sdf1[3]
> sde1[2] sdd1[1] sdc1[0]
> 1318680576 blocks level 5, 1024k chunk, algorithm 2 [10/10]
> [UUUUUUUUUU]
>
> md0 : active raid1 sdb1[1] sda1[0]
> 16787776 blocks [2/2] [UU]
>
> unused devices: <none>
> #
>
> Justin.

^ permalink raw reply  [flat|nested] 35+ messages in thread
* Re: 2.6.23.1: mdadm/raid5 hung/d-state
  2007-11-04 12:03 2.6.23.1: mdadm/raid5 hung/d-state Justin Piszcz
  2007-11-04 12:39 ` 2.6.23.1: mdadm/raid5 hung/d-state (md3_raid5 stuck in endless loop?) Justin Piszcz
@ 2007-11-04 12:48 ` Michael Tokarev
  2007-11-04 12:52   ` Justin Piszcz
  2007-11-04 13:40 ` BERTRAND Joël
  2007-11-04 21:49 ` Neil Brown
  3 siblings, 1 reply; 35+ messages in thread
From: Michael Tokarev @ 2007-11-04 12:48 UTC (permalink / raw)
To: Justin Piszcz; +Cc: linux-kernel, linux-raid

Justin Piszcz wrote:
> # ps auxww | grep D
> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
> root 273 0.0 0.0 0 0 ? D Oct21 14:40 [pdflush]
> root 274 0.0 0.0 0 0 ? D Oct21 13:00 [pdflush]
>
> After several days/weeks, this is the second time this has happened,
> while doing regular file I/O (decompressing a file), everything on the
> device went into D-state.

The next time you come across something like that, do a SysRq-T dump and
post that. It shows a stack trace of all processes - and in particular,
where exactly each task is stuck.

/mjt

^ permalink raw reply  [flat|nested] 35+ messages in thread
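For reference, the SysRq-T dump suggested here can also be produced without a
console keyboard, by writing to /proc (a sketch; assumes CONFIG_MAGIC_SYSRQ=y
and root, and the guard only makes the snippet safe to paste on systems where
it cannot run):

```shell
# Trigger the equivalent of Alt-SysRq-T and save the kernel log.
if [ -w /proc/sysrq-trigger ]; then
    echo 1 > /proc/sys/kernel/sysrq   # allow all SysRq functions
    echo t > /proc/sysrq-trigger      # dump stack traces of all tasks to the kernel log
    dmesg > sysrq-t.txt               # capture the dump for posting
else
    echo "run as root to trigger SysRq-T" >&2
fi
```

This is essentially what Justin did below ("ran that and then dmesg > file").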
* Re: 2.6.23.1: mdadm/raid5 hung/d-state
  2007-11-04 12:48 ` 2.6.23.1: mdadm/raid5 hung/d-state Michael Tokarev
@ 2007-11-04 12:52 ` Justin Piszcz
  2007-11-04 14:55   ` Michael Tokarev
  0 siblings, 1 reply; 35+ messages in thread
From: Justin Piszcz @ 2007-11-04 12:52 UTC (permalink / raw)
To: Michael Tokarev; +Cc: linux-kernel, linux-raid, xfs

On Sun, 4 Nov 2007, Michael Tokarev wrote:

> Justin Piszcz wrote:
>> # ps auxww | grep D
>> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
>> root 273 0.0 0.0 0 0 ? D Oct21 14:40 [pdflush]
>> root 274 0.0 0.0 0 0 ? D Oct21 13:00 [pdflush]
>>
>> After several days/weeks, this is the second time this has happened,
>> while doing regular file I/O (decompressing a file), everything on the
>> device went into D-state.
>
> The next time you come across something like that, do a SysRq-T dump and
> post that. It shows a stack trace of all processes - and in particular,
> where exactly each task is stuck.
>
> /mjt

Yes I got it before I rebooted, ran that and then dmesg > file.
Here it is:

[1172609.665902]  ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172609.668768]  ffffffff80747dc0 ffff81015c3aa918 ffff810091c899b4 ffff810091c899a8
[1172609.668871] Call Trace:
[1172609.674472]  [<ffffffff8059171f>] schedule_timeout+0x5f/0xd0
[1172609.677362]  [<ffffffff8022f920>] process_timeout+0x0/0x10
[1172609.680243]  [<ffffffff8027b528>] do_select+0x468/0x560
[1172609.683105]  [<ffffffff8027bbc0>] __pollwait+0x0/0x130
[1172609.685969]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172609.688851]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172609.691712]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172609.694534]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172609.697324]  [<ffffffff8050bd41>] skb_copy_datagram_iovec+0x1a1/0x260
[1172609.700103]  [<ffffffff80592da9>] _spin_lock_bh+0x9/0x20
[1172609.702856]  [<ffffffff80505743>] release_sock+0x13/0xb0
[1172609.705598]  [<ffffffff80547700>] tcp_recvmsg+0x370/0x940
[1172609.708303]  [<ffffffff80505080>] sock_common_recvmsg+0x30/0x50
[1172609.710999]  [<ffffffff80502b9b>] sock_aio_read+0x11b/0x130
[1172609.713694]  [<ffffffff8027b829>] core_sys_select+0x209/0x300
[1172609.716397]  [<ffffffff8023acc0>] autoremove_wake_function+0x0/0x30
[1172609.719112]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172609.721824]  [<ffffffff8022b51e>] current_fs_time+0x1e/0x30
[1172609.724525]  [<ffffffff803e3582>] tty_ldisc_deref+0x52/0x80
[1172609.727215]  [<ffffffff8027bdc1>] sys_select+0xd1/0x1c0
[1172609.729880]  [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172609.732517]
[1172609.735115] bash          S 0000000000000000     0 30959  30958
[1172609.737742]  ffff810091c8be88 0000000000000086 0000000000000000 ffff8101ea172e20
[1172609.740404]  ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172609.743087]  ffffffff80747dc0 ffff81015c3ab028 ffff810091c8be54 ffff810091c8be48
[1172609.743190] Call Trace:
[1172609.748404]  [<ffffffff80229439>] do_wait+0x599/0xc90
[1172609.751071]  [<ffffffff8021f7e3>] __wake_up+0x43/0x70
[1172609.753714]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172609.756345]  [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172609.758967]
[1172609.761522] sr            S 0000000000000000     0 30966  30959
[1172609.764123]  ffff810122d7de88 0000000000000082 0000000000000000 ffff8101eab3ee20
[1172609.766769]  ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172609.769442]  ffffffff80747dc0 ffff8101ea173028 ffff810122d7de54 ffff810122d7de48
[1172609.769545] Call Trace:
[1172609.774734]  [<ffffffff80229439>] do_wait+0x599/0xc90
[1172609.777369]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172609.779999]  [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172609.782616]
[1172609.785168] screen        S 0000000000000000     0 30972  30966
[1172609.787768]  ffff810144597f68 0000000000000086 ffff810144597f30 00000000ffffffff
[1172609.790416]  ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172609.793085]  ffffffff80747dc0 ffff8101eab3f028 ffff810144597f34 ffff810144597f28
[1172609.793188] Call Trace:
[1172609.798381]  [<ffffffff8022b0c5>] alarm_setitimer+0x35/0x70
[1172609.801049]  [<ffffffff80232a09>] sys_pause+0x19/0x30
[1172609.803705]  [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172609.806361]
[1172609.808980] sshd          S 0000000000000000     0 30973   7582
[1172609.811659]  ffff810084003bf8 0000000000000082 0000000000000000 ffffffff80508e74
[1172609.814376]  ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172609.817104]  ffffffff80747dc0 ffff8101ea172208 ffff810084003bc4 ffff810084003bb8
[1172609.817207] Call Trace:
[1172609.822530]  [<ffffffff80508e74>] skb_queue_tail+0x24/0x60
[1172609.825292]  [<ffffffff80591755>] schedule_timeout+0x95/0xd0
[1172609.828060]  [<ffffffff8023ae53>] prepare_to_wait+0x23/0x80
[1172609.830820]  [<ffffffff80575ac6>] unix_stream_recvmsg+0x386/0x550
[1172609.833587]  [<ffffffff8023acc0>] autoremove_wake_function+0x0/0x30
[1172609.836344]  [<ffffffff80277aa0>] link_path_walk+0x80/0xf0
[1172609.839074]  [<ffffffff80502b9b>] sock_aio_read+0x11b/0x130
[1172609.841794]  [<ffffffff8026b5f9>] get_unused_fd_flags+0x79/0x120
[1172609.844488]  [<ffffffff8026d349>] do_sync_read+0xd9/0x120
[1172609.847161]  [<ffffffff8023acc0>] autoremove_wake_function+0x0/0x30
[1172609.849848]  [<ffffffff8026b7cf>] __dentry_open+0x11f/0x1b0
[1172609.852541]  [<ffffffff8026b96a>] do_filp_open+0x3a/0x50
[1172609.855235]  [<ffffffff8026dcf7>] vfs_read+0x157/0x160
[1172609.857922]  [<ffffffff8026e113>] sys_read+0x53/0x90
[1172609.860620]  [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172609.863343]
[1172609.866063] sshd          S 0000000000000000     0 30975  30973
[1172609.868838]  ffff810175c219e8 0000000000000086 ffff810175c219b0 0000000000000002
[1172609.871649]  ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172609.874490]  ffffffff80747dc0 ffff81021b27d738 ffff810175c219b4 ffff810175c219a8
[1172609.874594] Call Trace:
[1172609.880153]  [<ffffffff8059171f>] schedule_timeout+0x5f/0xd0
[1172609.883020]  [<ffffffff8022f920>] process_timeout+0x0/0x10
[1172609.885890]  [<ffffffff8027b528>] do_select+0x468/0x560
[1172609.888742]  [<ffffffff8027bbc0>] __pollwait+0x0/0x130
[1172609.891581]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172609.894430]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172609.897258]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172609.900060]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172609.902841]  [<ffffffff80267609>] add_partial+0x19/0x60
[1172609.905606]  [<ffffffff80268a0d>] __slab_free+0x15d/0x310
[1172609.908363]  [<ffffffff80592da9>] _spin_lock_bh+0x9/0x20
[1172609.911093]  [<ffffffff80505743>] release_sock+0x13/0xb0
[1172609.913795]  [<ffffffff80547700>] tcp_recvmsg+0x370/0x940
[1172609.916486]  [<ffffffff80505080>] sock_common_recvmsg+0x30/0x50
[1172609.919151]  [<ffffffff80502b9b>] sock_aio_read+0x11b/0x130
[1172609.921799]  [<ffffffff8027b829>] core_sys_select+0x209/0x300
[1172609.924455]  [<ffffffff8023acc0>] autoremove_wake_function+0x0/0x30
[1172609.927122]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172609.929786]  [<ffffffff8022b51e>] current_fs_time+0x1e/0x30
[1172609.932438]  [<ffffffff803e3582>] tty_ldisc_deref+0x52/0x80
[1172609.935083]  [<ffffffff8027bdc1>] sys_select+0xd1/0x1c0
[1172609.937702]  [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172609.940292]
[1172609.942843] bash          S 0000000000000000     0 30976  30975
[1172609.945423]  ffff8101bf371e88 0000000000000082 0000000000000000 ffff81021e322710
[1172609.948037]  ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172609.950671]  ffffffff80747dc0 ffff8101882bf738 ffff8101bf371e54 ffff8101bf371e48
[1172609.950774] Call Trace:
[1172609.955888]  [<ffffffff80229439>] do_wait+0x599/0xc90
[1172609.958505]  [<ffffffff8021f7e3>] __wake_up+0x43/0x70
[1172609.961098]  [<ffffffff8027a520>] vfs_ioctl+0x220/0x2c0
[1172609.963662]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172609.966234]  [<ffffffff8027a609>] sys_ioctl+0x49/0x80
[1172609.968766]  [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172609.971279]
[1172609.973759] screen        S 0000000000000000     0 30991  30976
[1172609.976308]  ffff8101a8329f68 0000000000000086 0000000000000000 00000000ffffffff
[1172609.978892]  ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172609.981501]  ffffffff80747dc0 ffff81021e322918 ffff8101a8329f34 ffff8101a8329f28
[1172609.981605] Call Trace:
[1172609.986634]  [<ffffffff8022b0c5>] alarm_setitimer+0x35/0x70
[1172609.989220]  [<ffffffff80232a09>] sys_pause+0x19/0x30
[1172609.991766]  [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172609.994292]
[1172609.996787] screen        D ffff8100a18ff800     0 30992  30991
[1172609.999344]  ffff8101a854dd28 0000000000000086 ffff81022854ddb7 ffff8101a854dcd8
[1172610.001953]  ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172610.004574]  ffffffff80747dc0 ffff810170233028 ffffffff80656bcb ffffffff8021f8bc
[1172610.004677] Call Trace:
[1172610.009752]  [<ffffffff8021f8bc>] task_rq_lock+0x4c/0x90
[1172610.012366]  [<ffffffff8021fb88>] try_to_wake_up+0x68/0x3b0
[1172610.014981]  [<ffffffff80590f9d>] wait_for_completion+0x7d/0xc0
[1172610.017594]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172610.020208]  [<ffffffff8023706a>] flush_cpu_workqueue+0x6a/0x90
[1172610.022828]  [<ffffffff802371b0>] wq_barrier_func+0x0/0x10
[1172610.025447]  [<ffffffff80237183>] flush_workqueue+0x33/0x50
[1172610.028076]  [<ffffffff803e633f>] release_dev+0x44f/0x750
[1172610.030710]  [<ffffffff80284847>] mntput_no_expire+0x27/0xb0
[1172610.033339]  [<ffffffff803e6651>] tty_release+0x11/0x20
[1172610.035958]  [<ffffffff8026e531>] __fput+0xb1/0x1a0
[1172610.038547]  [<ffffffff8026b544>] filp_close+0x54/0x90
[1172610.041106]  [<ffffffff8026ccc6>] sys_close+0x96/0x100
[1172610.043652]  [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172610.046160]
[1172610.048618] bash          ? 0000000000000000     0 30993  30992
[1172610.051135]  ffff8101aa2a3ee8 0000000000000046 ffff8101aa2a3eb0 0000000000000011
[1172610.053708]  ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172610.056312]  ffffffff80747dc0 ffff810170233738 ffff8101aa2a3eb4 ffff8101aa2a3ea8
[1172610.056415] Call Trace:
[1172610.061510]  [<ffffffff8022a18e>] do_exit+0x5be/0x8a0
[1172610.064172]  [<ffffffff8022a49c>] do_group_exit+0x2c/0x80
[1172610.066859]  [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172610.069537]
[1172610.072190] sshd          S 0000000000000000     0  7001   7582
[1172610.074908]  ffff8100792b1bf8 0000000000000082 0000000000000000 ffff8101e9c51b80
[1172610.077679]  ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172610.080477]  ffffffff80747dc0 ffff8102234ff738 ffff8100792b1bc4 ffff8100792b1bb8
[1172610.080580] Call Trace:
[1172610.086042]  [<ffffffff80591755>] schedule_timeout+0x95/0xd0
[1172610.088861]  [<ffffffff8023ae53>] prepare_to_wait+0x23/0x80
[1172610.091673]  [<ffffffff80575ac6>] unix_stream_recvmsg+0x386/0x550
[1172610.094492]  [<ffffffff8023acc0>] autoremove_wake_function+0x0/0x30
[1172610.097318]  [<ffffffff80277aa0>] link_path_walk+0x80/0xf0
[1172610.100148]  [<ffffffff80502b9b>] sock_aio_read+0x11b/0x130
[1172610.102976]  [<ffffffff8026b5f9>] get_unused_fd_flags+0x79/0x120
[1172610.105822]  [<ffffffff8026d349>] do_sync_read+0xd9/0x120
[1172610.108651]  [<ffffffff8023acc0>] autoremove_wake_function+0x0/0x30
[1172610.111495]  [<ffffffff8026b7cf>] __dentry_open+0x11f/0x1b0
[1172610.114319]  [<ffffffff8026b96a>] do_filp_open+0x3a/0x50
[1172610.117118]  [<ffffffff8026dcf7>] vfs_read+0x157/0x160
[1172610.119902]  [<ffffffff8026e113>] sys_read+0x53/0x90
[1172610.122638]  [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172610.125360]
[1172610.128056] sshd          S 0000000000000000     0  7003   7001
[1172610.130818]  ffff8100675a39e8 0000000000000082 ffff8100675a39b0 0000000000000002
[1172610.133623]  ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172610.136446]  ffffffff80747dc0 ffff810225459028 ffff8100675a39b4 ffff8100675a39a8
[1172610.136549] Call Trace:
[1172610.142064]  [<ffffffff8059171f>] schedule_timeout+0x5f/0xd0
[1172610.144899]  [<ffffffff8022f920>] process_timeout+0x0/0x10
[1172610.147716]  [<ffffffff8027b528>] do_select+0x468/0x560
[1172610.150495]  [<ffffffff8027bbc0>] __pollwait+0x0/0x130
[1172610.153260]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172610.156005]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172610.158707]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172610.161378]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172610.164026]  [<ffffffff8050bd41>] skb_copy_datagram_iovec+0x1a1/0x260
[1172610.166675]  [<ffffffff80592da9>] _spin_lock_bh+0x9/0x20
[1172610.169315]  [<ffffffff80505743>] release_sock+0x13/0xb0
[1172610.171917]  [<ffffffff80547700>] tcp_recvmsg+0x370/0x940
[1172610.174494]  [<ffffffff80505080>] sock_common_recvmsg+0x30/0x50
[1172610.177085]  [<ffffffff80502b9b>] sock_aio_read+0x11b/0x130
[1172610.179638]  [<ffffffff8027b829>] core_sys_select+0x209/0x300
[1172610.182178]  [<ffffffff8023acc0>] autoremove_wake_function+0x0/0x30
[1172610.184734]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172610.187290]  [<ffffffff8022b51e>] current_fs_time+0x1e/0x30
[1172610.189837]  [<ffffffff803e3582>] tty_ldisc_deref+0x52/0x80
[1172610.192370]  [<ffffffff8027bdc1>] sys_select+0xd1/0x1c0
[1172610.194900]  [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172610.197426]
[1172610.199919] bash          S 000000000000000e     0  7004   7003
[1172610.202470]  ffff8100cc263e88 0000000000000082 80000000804ca065 ffff81022367f530
[1172610.205071]  ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172610.207699]  ffffffff80747dc0 ffff8102234fe918 ffff8100cc263e38 ffff810035a16348
[1172610.207802] Call Trace:
[1172610.212949]  [<ffffffff8021cb22>] do_page_fault+0x202/0x890
[1172610.215618]  [<ffffffff80229439>] do_wait+0x599/0xc90
[1172610.218263]  [<ffffffff8021f7e3>] __wake_up+0x43/0x70
[1172610.220900]  [<ffffffff8027a520>] vfs_ioctl+0x220/0x2c0
[1172610.223509]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172610.226109]  [<ffffffff8027a609>] sys_ioctl+0x49/0x80
[1172610.228693]  [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172610.231240]
[1172610.233746] aur           S 0000000000000000     0  7014   7004
[1172610.236319]  ffff810098071e88 0000000000000086 ffff810098071e50 ffffffff80232c93
[1172610.238941]  ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172610.241566]  ffffffff80747dc0 ffff81022367f738 ffff810098071e54 ffff810098071e48
[1172610.241669] Call Trace:
[1172610.246766]  [<ffffffff80232c93>] get_signal_to_deliver+0x73/0x470
[1172610.249380]  [<ffffffff80229439>] do_wait+0x599/0xc90
[1172610.251983]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172610.254563]  [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172610.257122]
[1172610.259648] aur           S 0000000000000004     0  7066   7014
[1172610.262226]  ffff810085231e88 0000000000000086 ffff8101ea314ce8 ffffffff80232c93
[1172610.264844]  ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172610.267471]  ffffffff80747dc0 ffff8101ea314918 ffffffff802302ce ffffffff8020b3d6
[1172610.267574] Call Trace:
[1172610.272674]  [<ffffffff80232c93>] get_signal_to_deliver+0x73/0x470
[1172610.275315]  [<ffffffff802302ce>] recalc_sigpending+0xe/0x30
[1172610.277948]  [<ffffffff8020b3d6>] do_notify_resume+0x536/0x7a0
[1172610.280577]  [<ffffffff80229439>] do_wait+0x599/0xc90
[1172610.283199]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172610.285840]  [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172610.288491]
[1172610.291116] unrar         D ffff8100aa785c80     0  7135   7066
[1172610.293792]  ffff8101ecf4ddb8 0000000000000086 ffff8101ecf4dd80 0000000000000000
[1172610.296525]  ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172610.299256]  ffffffff80747dc0 ffff81021e53f028 ffff8101ecf4dd84 ffff8101ecf4dd78
[1172610.299359] Call Trace:
[1172610.304629]  [<ffffffff80385635>] vn_iowait+0x75/0xa0
[1172610.307301]  [<ffffffff8023acc0>] autoremove_wake_function+0x0/0x30
[1172610.309979]  [<ffffffff8036d19c>] xfs_trans_alloc+0x9c/0xb0
[1172610.312653]  [<ffffffff80357fb5>] xfs_itruncate_start+0x35/0xe0
[1172610.315340]  [<ffffffff80372c3a>] xfs_free_eofblocks+0x17a/0x280
[1172610.318032]  [<ffffffff80377e74>] xfs_release+0x134/0x1e0
[1172610.320711]  [<ffffffff8037ed7a>] xfs_file_release+0x1a/0x30
[1172610.323417]  [<ffffffff8026e531>] __fput+0xb1/0x1a0
[1172610.326144]  [<ffffffff8026b544>] filp_close+0x54/0x90
[1172610.328895]  [<ffffffff8026ccc6>] sys_close+0x96/0x100
[1172610.331631]  [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172610.334353]
[1172610.337050] sshd          D 0000000000000000     0  7187   7582
[1172610.339811]  ffff81002b62fd28 0000000000000086 ffff81002b62fcf0 ffff81002b62fcd8
[1172610.342618]  ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172610.345448]  ffffffff80747dc0 ffff8101ccda3028 ffff81002b62fcf4 ffff81002b62fce8
[1172610.345551] Call Trace:
[1172610.351072]  [<ffffffff80590f9d>] wait_for_completion+0x7d/0xc0
[1172610.353915]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172610.356765]  [<ffffffff8023706a>] flush_cpu_workqueue+0x6a/0x90
[1172610.359622]  [<ffffffff802371b0>] wq_barrier_func+0x0/0x10
[1172610.362477]  [<ffffffff80237183>] flush_workqueue+0x33/0x50
[1172610.365337]  [<ffffffff803e633f>] release_dev+0x44f/0x750
[1172610.368184]  [<ffffffff8026c04a>] sys_fchmodat+0x6a/0x120
[1172610.371026]  [<ffffffff803e6651>] tty_release+0x11/0x20
[1172610.373843]  [<ffffffff8026e531>] __fput+0xb1/0x1a0
[1172610.376628]  [<ffffffff8026b544>] filp_close+0x54/0x90
[1172610.379402]  [<ffffffff8026ccc6>] sys_close+0x96/0x100
[1172610.382135]  [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172610.384846]
[1172610.387529] sshd          ? 0000000000000000     0  7218   7187
[1172610.390280]  ffff81013bd7bee8 0000000000000046 ffff81013bd7beb0 0000000000000011
[1172610.393084]  ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172610.395907]  ffffffff80747dc0 ffff8101bf6ae918 ffff81013bd7beb4 ffff81013bd7bea8
[1172610.396010] Call Trace:
[1172610.401528]  [<ffffffff8022246c>] __cond_resched+0x1c/0x50
[1172610.404362]  [<ffffffff8022a18e>] do_exit+0x5be/0x8a0
[1172610.407192]  [<ffffffff8022a49c>] do_group_exit+0x2c/0x80
[1172610.409993]  [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172610.412776]
[1172610.415520] sshd          S 0000000000000000     0  7236   7582
[1172610.418293]  ffff8101e4a89bf8 0000000000000082 ffff8101e4a89bc0 ffff81013bf542c0
[1172610.421090]  ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172610.423898]  ffffffff80747dc0 ffff810168684208 ffff8101e4a89bc4 ffff8101e4a89bb8
[1172610.424001] Call Trace:
[1172610.429474]  [<ffffffff80591755>] schedule_timeout+0x95/0xd0
[1172610.432284]  [<ffffffff8023ae53>] prepare_to_wait+0x23/0x80
[1172610.435096]  [<ffffffff80575ac6>] unix_stream_recvmsg+0x386/0x550
[1172610.437896]  [<ffffffff8023acc0>] autoremove_wake_function+0x0/0x30
[1172610.440690]  [<ffffffff80277aa0>] link_path_walk+0x80/0xf0
[1172610.443487]  [<ffffffff80502b9b>] sock_aio_read+0x11b/0x130
[1172610.446249]  [<ffffffff8026b5f9>] get_unused_fd_flags+0x79/0x120
[1172610.448997]  [<ffffffff8026d349>] do_sync_read+0xd9/0x120
[1172610.451737]  [<ffffffff8023acc0>] autoremove_wake_function+0x0/0x30
[1172610.454491]  [<ffffffff8026b7cf>] __dentry_open+0x11f/0x1b0
[1172610.457244]  [<ffffffff8026b96a>] do_filp_open+0x3a/0x50
[1172610.459989]  [<ffffffff8026dcf7>] vfs_read+0x157/0x160
[1172610.462724]  [<ffffffff8026e113>] sys_read+0x53/0x90
[1172610.465430]  [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172610.468131]
[1172610.470765] sshd          S 0000000000000000     0  7238   7236
[1172610.473440]  ffff810046e1f9e8 0000000000000082 ffff810046e1f9b0 0000000000000002
[1172610.476161]  ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172610.478873]  ffffffff80747dc0 ffff810168685028 ffff810046e1f9b4 ffff810046e1f9a8
[1172610.478975] Call Trace:
[1172610.484236]  [<ffffffff8059171f>] schedule_timeout+0x5f/0xd0
[1172610.486940]  [<ffffffff8022f920>] process_timeout+0x0/0x10
[1172610.489645]  [<ffffffff8027b528>] do_select+0x468/0x560
[1172610.492340]  [<ffffffff8027bbc0>] __pollwait+0x0/0x130
[1172610.495030]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172610.497738]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172610.500417]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172610.503076]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172610.505711]  [<ffffffff8050bd41>] skb_copy_datagram_iovec+0x1a1/0x260
[1172610.508366]  [<ffffffff80592da9>] _spin_lock_bh+0x9/0x20
[1172610.511004]  [<ffffffff80505743>] release_sock+0x13/0xb0
[1172610.513638]  [<ffffffff80547700>] tcp_recvmsg+0x370/0x940
[1172610.516245]  [<ffffffff80505080>] sock_common_recvmsg+0x30/0x50
[1172610.518841]  [<ffffffff80502b9b>] sock_aio_read+0x11b/0x130
[1172610.521423]  [<ffffffff8027b829>] core_sys_select+0x209/0x300
[1172610.523974]  [<ffffffff8023acc0>] autoremove_wake_function+0x0/0x30
[1172610.526518]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172610.529058]  [<ffffffff8022b51e>] current_fs_time+0x1e/0x30
[1172610.531592]  [<ffffffff803e3582>] tty_ldisc_deref+0x52/0x80
[1172610.534118]  [<ffffffff8027bdc1>] sys_select+0xd1/0x1c0
[1172610.536645]  [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172610.539162]
[1172610.541651] bash          S 000000000000000e     0  7239   7238
[1172610.544203]  ffff8100aae5fe88 0000000000000082 80000001bab2c065 ffff810145b6ae20
[1172610.546809]  ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172610.549446]  ffffffff80747dc0 ffff810168685738 ffff8100aae5fe38 ffff810065785f18
[1172610.549550] Call Trace:
[1172610.554709]  [<ffffffff8021cb22>] do_page_fault+0x202/0x890
[1172610.557368]  [<ffffffff8021ed69>] update_curr+0x109/0x120
[1172610.560022]  [<ffffffff80229439>] do_wait+0x599/0xc90
[1172610.562647]  [<ffffffff805907c6>] __sched_text_start+0x166/0x23d
[1172610.565267]  [<ffffffff8021f7e3>] __wake_up+0x43/0x70
[1172610.567871]  [<ffffffff8027a520>] vfs_ioctl+0x220/0x2c0
[1172610.570435]  [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172610.572998]  [<ffffffff8027a609>] sys_ioctl+0x49/0x80
[1172610.575555]  [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172610.578118]
[1172610.580652] sshd          S 0000000000000000     0  7248   7582
[1172610.583235]  ffff8101120d5bf8 0000000000000082 ffff8101120d5bc0 ffff81001e998dc0
[1172610.585865]  ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172610.588489]  ffffffff80747dc0 ffff810130906208 ffff8101120d5bc4 ffff8101120d5bb8
[1172610.588592] Call Trace:
[1172610.593666]  [<ffffffff80591755>] schedule_timeout+0x95/0xd0
[1172610.596253]  [<ffffffff8023ae53>] prepare_to_wait+0x23/0x80
[1172610.598824]  [<ffffffff80575ac6>] unix_stream_recvmsg+0x386/0x550
[1172610.601405]  [<ffffffff8023acc0>] autoremove_wake_function+0x0/0x30
[1172610.603992]  [<ffffffff80277aa0>] link_path_walk+0x80/0xf0
[1172610.606571]  [<ffffffff80502b9b>] sock_aio_read+0x11b/0x130
[1172610.609138]
[<ffffffff8026b5f9>] get_unused_fd_flags+0x79/0x120 [1172610.611720] [<ffffffff8026d349>] do_sync_read+0xd9/0x120 [1172610.614293] [<ffffffff8023acc0>] autoremove_wake_function+0x0/0x30 [1172610.616883] [<ffffffff8026b7cf>] __dentry_open+0x11f/0x1b0 [1172610.619463] [<ffffffff8026b96a>] do_filp_open+0x3a/0x50 [1172610.622029] [<ffffffff8026dcf7>] vfs_read+0x157/0x160 [1172610.624594] [<ffffffff8026e113>] sys_read+0x53/0x90 [1172610.627144] [<ffffffff8020bb4e>] system_call+0x7e/0x83 [1172610.629703] [1172610.632237] sshd S 0000000000000000 0 7250 7248 [1172610.634822] ffff810126f3d9e8 0000000000000086 ffff810126f3d9b0 0000000000000002 [1172610.637453] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80 [1172610.640086] ffffffff80747dc0 ffff810130907028 ffff810126f3d9b4 ffff810126f3d9a8 [1172610.640190] Call Trace: [1172610.645268] [<ffffffff8059171f>] schedule_timeout+0x5f/0xd0 [1172610.647857] [<ffffffff8022f920>] process_timeout+0x0/0x10 [1172610.650429] [<ffffffff8027b528>] do_select+0x468/0x560 [1172610.652990] [<ffffffff8027bbc0>] __pollwait+0x0/0x130 [1172610.655552] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172610.658131] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172610.660680] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172610.663230] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172610.665779] [<ffffffff8050bd41>] skb_copy_datagram_iovec+0x1a1/0x260 [1172610.668368] [<ffffffff80592da9>] _spin_lock_bh+0x9/0x20 [1172610.670946] [<ffffffff80505743>] release_sock+0x13/0xb0 [1172610.673510] [<ffffffff80547700>] tcp_recvmsg+0x370/0x940 [1172610.676075] [<ffffffff80505080>] sock_common_recvmsg+0x30/0x50 [1172610.678653] [<ffffffff80502b9b>] sock_aio_read+0x11b/0x130 [1172610.681222] [<ffffffff8027b829>] core_sys_select+0x209/0x300 [1172610.683798] [<ffffffff8023acc0>] autoremove_wake_function+0x0/0x30 [1172610.686386] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172610.688970] [<ffffffff8022b51e>] 
current_fs_time+0x1e/0x30 [1172610.691546] [<ffffffff803e3582>] tty_ldisc_deref+0x52/0x80 [1172610.694114] [<ffffffff8027bdc1>] sys_select+0xd1/0x1c0 [1172610.696683] [<ffffffff8020bb4e>] system_call+0x7e/0x83 [1172610.699244] [1172610.701782] bash S 000000000000000e 0 7251 7250 [1172610.704370] ffff810121e8de88 0000000000000086 800000008e47c065 ffff8101afbec710 [1172610.707005] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80 [1172610.709641] ffffffff80747dc0 ffff810130907738 ffff810121e8de38 ffff8101e65ef9d8 [1172610.709744] Call Trace: [1172610.714827] [<ffffffff8021cb22>] do_page_fault+0x202/0x890 [1172610.717419] [<ffffffff80229439>] do_wait+0x599/0xc90 [1172610.719979] [<ffffffff8021f7e3>] __wake_up+0x43/0x70 [1172610.722535] [<ffffffff8027a520>] vfs_ioctl+0x220/0x2c0 [1172610.725088] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172610.727650] [<ffffffff8027a609>] sys_ioctl+0x49/0x80 [1172610.730203] [<ffffffff8020bb4e>] system_call+0x7e/0x83 [1172610.732759] [1172610.735278] su S 0000000000000000 0 7269 7251 [1172610.737850] ffff8101a5007e88 0000000000000086 ffff8101a5007e50 ffff8100219c0e20 [1172610.740475] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80 [1172610.743107] ffffffff80747dc0 ffff8101afbec918 ffff8101a5007e54 ffff8101a5007e48 [1172610.743210] Call Trace: [1172610.748316] [<ffffffff80229439>] do_wait+0x599/0xc90 [1172610.750913] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172610.753518] [<ffffffff8020bb4e>] system_call+0x7e/0x83 [1172610.756084] [1172610.758600] bash S 0000000000000000 0 7270 7269 [1172610.761175] ffff81014bc9be88 0000000000000086 ffff81014bc9be50 ffff810139e7c000 [1172610.763792] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80 [1172610.766419] ffffffff80747dc0 ffff8100219c1028 ffff81014bc9be54 ffff81014bc9be48 [1172610.766521] Call Trace: [1172610.771636] [<ffffffff80229439>] do_wait+0x599/0xc90 [1172610.774264] [<ffffffff8021f7e3>] __wake_up+0x43/0x70 
[1172610.776885] [<ffffffff8027a520>] vfs_ioctl+0x220/0x2c0 [1172610.779492] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172610.782107] [<ffffffff8027a609>] sys_ioctl+0x49/0x80 [1172610.784719] [<ffffffff8020bb4e>] system_call+0x7e/0x83 [1172610.787329] [1172610.789920] sshd S 0000000000000000 0 7278 7582 [1172610.792579] ffff810194cf5bf8 0000000000000086 ffff810194cf5bc0 ffff810010755600 [1172610.795276] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80 [1172610.797987] ffffffff80747dc0 ffff81002d667738 ffff810194cf5bc4 ffff810194cf5bb8 [1172610.798090] Call Trace: [1172610.803311] [<ffffffff80591755>] schedule_timeout+0x95/0xd0 [1172610.805992] [<ffffffff8023ae53>] prepare_to_wait+0x23/0x80 [1172610.808641] [<ffffffff80575ac6>] unix_stream_recvmsg+0x386/0x550 [1172610.811284] [<ffffffff8023acc0>] autoremove_wake_function+0x0/0x30 [1172610.813937] [<ffffffff80277aa0>] link_path_walk+0x80/0xf0 [1172610.816593] [<ffffffff80502b9b>] sock_aio_read+0x11b/0x130 [1172610.819250] [<ffffffff8026b5f9>] get_unused_fd_flags+0x79/0x120 [1172610.821914] [<ffffffff8026d349>] do_sync_read+0xd9/0x120 [1172610.824602] [<ffffffff8023acc0>] autoremove_wake_function+0x0/0x30 [1172610.827337] [<ffffffff8026b7cf>] __dentry_open+0x11f/0x1b0 [1172610.830101] [<ffffffff8026b96a>] do_filp_open+0x3a/0x50 [1172610.832855] [<ffffffff8026dcf7>] vfs_read+0x157/0x160 [1172610.835593] [<ffffffff8026e113>] sys_read+0x53/0x90 [1172610.838321] [<ffffffff8020bb4e>] system_call+0x7e/0x83 [1172610.841049] [1172610.843744] sshd S 0000000000000000 0 7280 7278 [1172610.846501] ffff81013acb39e8 0000000000000082 ffff81013acb39b0 0000000000000002 [1172610.849305] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80 [1172610.852125] ffffffff80747dc0 ffff81011060e918 ffff81013acb39b4 ffff81013acb39a8 [1172610.852228] Call Trace: [1172610.857719] [<ffffffff8059171f>] schedule_timeout+0x5f/0xd0 [1172610.860553] [<ffffffff8022f920>] process_timeout+0x0/0x10 
[1172610.863393] [<ffffffff8027b528>] do_select+0x468/0x560 [1172610.866228] [<ffffffff8027bbc0>] __pollwait+0x0/0x130 [1172610.869046] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172610.871874] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172610.874652] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172610.877377] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172610.880068] [<ffffffff80267609>] add_partial+0x19/0x60 [1172610.882722] [<ffffffff80268a0d>] __slab_free+0x15d/0x310 [1172610.885354] [<ffffffff80592da9>] _spin_lock_bh+0x9/0x20 [1172610.887984] [<ffffffff80505743>] release_sock+0x13/0xb0 [1172610.890616] [<ffffffff80547700>] tcp_recvmsg+0x370/0x940 [1172610.893251] [<ffffffff80505080>] sock_common_recvmsg+0x30/0x50 [1172610.895887] [<ffffffff80502b9b>] sock_aio_read+0x11b/0x130 [1172610.898512] [<ffffffff8027b829>] core_sys_select+0x209/0x300 [1172610.901144] [<ffffffff8023acc0>] autoremove_wake_function+0x0/0x30 [1172610.903780] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172610.906421] [<ffffffff8022b51e>] current_fs_time+0x1e/0x30 [1172610.909040] [<ffffffff803e3582>] tty_ldisc_deref+0x52/0x80 [1172610.911632] [<ffffffff8027bdc1>] sys_select+0xd1/0x1c0 [1172610.914215] [<ffffffff8020bb4e>] system_call+0x7e/0x83 [1172610.916754] [1172610.919253] bash S 000000000000000e 0 7281 7280 [1172610.921808] ffff8101919e3e88 0000000000000082 80000001542be065 ffff8100867c7530 [1172610.924409] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80 [1172610.927021] ffffffff80747dc0 ffff81011060f028 ffff8101919e3e38 ffff8101aae8a930 [1172610.927124] Call Trace: [1172610.932186] [<ffffffff8021cb22>] do_page_fault+0x202/0x890 [1172610.934771] [<ffffffff80229439>] do_wait+0x599/0xc90 [1172610.937337] [<ffffffff8021f7e3>] __wake_up+0x43/0x70 [1172610.939863] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172610.942391] [<ffffffff8020bb4e>] system_call+0x7e/0x83 [1172610.944923] [1172610.947429] su S 
0000000000000000 0 7288 7281 [1172610.949987] ffff81004e873e88 0000000000000086 ffff81004e873e50 ffff81011060f530 [1172610.952588] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80 [1172610.955214] ffffffff80747dc0 ffff8100867c7738 ffff81004e873e54 ffff81004e873e48 [1172610.955317] Call Trace: [1172610.960412] [<ffffffff80229439>] do_wait+0x599/0xc90 [1172610.963007] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172610.965602] [<ffffffff8020bb4e>] system_call+0x7e/0x83 [1172610.968186] [1172610.970703] bash S 0000000000000000 0 7289 7288 [1172610.973262] ffff810043dbfdb8 0000000000000082 ffff810043dbfd80 0000000000000fee [1172610.975867] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80 [1172610.978487] ffffffff80747dc0 ffff81011060f738 ffff810043dbfd84 ffff810043dbfd78 [1172610.978590] Call Trace: [1172610.983666] [<ffffffff80591755>] schedule_timeout+0x95/0xd0 [1172610.986293] [<ffffffff8023af8c>] add_wait_queue+0x1c/0x60 [1172610.988916] [<ffffffff803e9bf8>] read_chan+0x228/0x6f0 [1172610.991531] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172610.994156] [<ffffffff803e6710>] tty_read+0xb0/0x100 [1172610.996766] [<ffffffff8026dc65>] vfs_read+0xc5/0x160 [1172610.999359] [<ffffffff8026e113>] sys_read+0x53/0x90 [1172611.001936] [<ffffffff8020bb4e>] system_call+0x7e/0x83 [1172611.004527] [1172611.007092] strace S 0000000000000000 0 7319 7270 [1172611.009707] ffff8101534a9e88 0000000000000086 ffff8101534a9e50 0000000000000092 [1172611.012368] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80 [1172611.015032] ffffffff80747dc0 ffff810139e7c208 ffff8101534a9e54 ffff8101534a9e48 [1172611.015135] Call Trace: [1172611.020281] [<ffffffff80231215>] __group_send_sig_info+0x75/0xa0 [1172611.022914] [<ffffffff80229439>] do_wait+0x599/0xc90 [1172611.025526] [<ffffffff80231aa1>] kill_pid_info+0x51/0x90 [1172611.028133] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172611.030760] [<ffffffff8020bb4e>] 
system_call+0x7e/0x83 [1172611.033389] [1172611.035983] rm D 0000000000000000 0 7463 7239 [1172611.038664] ffff8101254a3b08 0000000000000086 ffff8101254a3ad0 ffffffff80592c6c [1172611.041422] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80 [1172611.044223] ffffffff80747dc0 ffff810145b6b028 ffff8101254a3ad4 ffff8101254a3ac8 [1172611.044326] Call Trace: [1172611.049792] [<ffffffff80592c6c>] __down+0x10c/0x11f [1172611.052604] [<ffffffff80592c07>] __down+0xa7/0x11f [1172611.055405] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172611.058227] [<ffffffff80592886>] __down_failed+0x35/0x3a [1172611.061044] [<ffffffff8037c09e>] xfs_buf_lock+0x3e/0x40 [1172611.063866] [<ffffffff80367be5>] xfs_getsb+0x15/0x40 [1172611.066676] [<ffffffff8036e2da>] xfs_trans_getsb+0x5a/0xb0 [1172611.069478] [<ffffffff8036d1bf>] xfs_trans_apply_sb_deltas+0xf/0x370 [1172611.072281] [<ffffffff8036d5be>] _xfs_trans_commit+0x9e/0x3c0 [1172611.075085] [<ffffffff8039c301>] __up_read+0x21/0xb0 [1172611.077884] [<ffffffff80327522>] xfs_free_extent+0xe2/0x110 [1172611.080690] [<ffffffff80379e9c>] kmem_zone_alloc+0x5c/0xd0 [1172611.083499] [<ffffffff80379e9c>] kmem_zone_alloc+0x5c/0xd0 [1172611.086267] [<ffffffff80379f42>] kmem_zone_zalloc+0x32/0x50 [1172611.089024] [<ffffffff80357c9b>] xfs_itruncate_finish+0xdb/0x320 [1172611.091768] [<ffffffff80378311>] xfs_inactive+0x3f1/0x520 [1172611.094486] [<ffffffff80384629>] xfs_fs_clear_inode+0xa9/0x100 [1172611.097203] [<ffffffff802821e8>] clear_inode+0x58/0xf0 [1172611.099883] [<ffffffff80282369>] generic_delete_inode+0xe9/0xf0 [1172611.102557] [<ffffffff802783fa>] do_unlinkat+0x14a/0x1c0 [1172611.105235] [<ffffffff8059301d>] error_exit+0x0/0x84 [1172611.107916] [<ffffffff8020bb4e>] system_call+0x7e/0x83 [1172611.110592] [1172611.113232] pickup S 0000000000000000 0 7573 30580 [1172611.115922] ffff81021d34be58 0000000000000086 ffff81021d34be20 0000000000000000 [1172611.118661] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 
ffffffff80744d80 [1172611.121398] ffffffff80747dc0 ffff8101964af028 ffff81021d34be24 ffff81021d34be18 [1172611.121501] Call Trace: [1172611.126801] [<ffffffff8059171f>] schedule_timeout+0x5f/0xd0 [1172611.129501] [<ffffffff8022f920>] process_timeout+0x0/0x10 [1172611.132189] [<ffffffff8029af2d>] sys_epoll_wait+0x1bd/0x4e0 [1172611.134877] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172611.137570] [<ffffffff8020bb4e>] system_call+0x7e/0x83 [1172611.140247] [1172611.142890] bash D 0000000000000000 0 8896 1 [1172611.145570] ffff8101cdf07ac8 0000000000000046 ffff8101cdf07a90 ffff810226a79800 [1172611.148276] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80 [1172611.150995] ffffffff80747dc0 ffff810114417738 ffff8101cdf07a94 ffff8101cdf07a88 [1172611.151098] Call Trace: [1172611.156317] [<ffffffff80590f9d>] wait_for_completion+0x7d/0xc0 [1172611.159006] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172611.161706] [<ffffffff8023706a>] flush_cpu_workqueue+0x6a/0x90 [1172611.164401] [<ffffffff802371b0>] wq_barrier_func+0x0/0x10 [1172611.167086] [<ffffffff80237183>] flush_workqueue+0x33/0x50 [1172611.169785] [<ffffffff803e633f>] release_dev+0x44f/0x750 [1172611.172499] [<ffffffff805907c6>] __sched_text_start+0x166/0x23d [1172611.175232] [<ffffffff803e6651>] tty_release+0x11/0x20 [1172611.177948] [<ffffffff8026e531>] __fput+0xb1/0x1a0 [1172611.180652] [<ffffffff8026b544>] filp_close+0x54/0x90 [1172611.183331] [<ffffffff80228971>] put_files_struct+0xb1/0xd0 [1172611.185991] [<ffffffff80229d79>] do_exit+0x1a9/0x8a0 [1172611.188636] [<ffffffff80230c85>] __dequeue_signal+0x165/0x1f0 [1172611.191258] [<ffffffff8022a49c>] do_group_exit+0x2c/0x80 [1172611.193857] [<ffffffff80232ee7>] get_signal_to_deliver+0x2c7/0x470 [1172611.196464] [<ffffffff8020af65>] do_notify_resume+0xc5/0x7a0 [1172611.199077] [<ffffffff80230992>] send_signal+0x62/0x1f0 [1172611.201678] [<ffffffff80231215>] __group_send_sig_info+0x75/0xa0 [1172611.204289] 
[<ffffffff80231a2e>] group_send_sig_info+0x6e/0x90 [1172611.206890] [<ffffffff8020b984>] sys_rt_sigreturn+0x324/0x3d0 [1172611.209498] [<ffffffff802322be>] sys_rt_sigaction+0x8e/0xc0 [1172611.212068] [<ffffffff8020bd66>] int_signal+0x12/0x17 [1172611.214618] [1172611.217129] su ? 0000000000000000 0 8903 8896 [1172611.219666] ffff8101e685dee8 0000000000000046 ffff8101e685deb0 0000000000000011 [1172611.222241] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80 [1172611.224859] ffffffff80747dc0 ffff8101158e9028 ffff8101e685deb4 ffff8101e685dea8 [1172611.224962] Call Trace: [1172611.230051] [<ffffffff8022a18e>] do_exit+0x5be/0x8a0 [1172611.232666] [<ffffffff8022a49c>] do_group_exit+0x2c/0x80 [1172611.235284] [<ffffffff8020bb4e>] system_call+0x7e/0x83 [1172611.237904] [1172611.240493] bash D ffff8101bfb7e600 0 8977 1 [1172611.243132] ffff810106e37ac8 0000000000000046 ffff810106e37c08 ffff810226a79800 [1172611.245831] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80 [1172611.248548] ffffffff80747dc0 ffff81018e79d738 0000000000000000 0000000000000000 [1172611.248652] Call Trace: [1172611.253996] [<ffffffff80590f9d>] wait_for_completion+0x7d/0xc0 [1172611.256787] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172611.259581] [<ffffffff8023706a>] flush_cpu_workqueue+0x6a/0x90 [1172611.262374] [<ffffffff802371b0>] wq_barrier_func+0x0/0x10 [1172611.265167] [<ffffffff80237183>] flush_workqueue+0x33/0x50 [1172611.267952] [<ffffffff803e633f>] release_dev+0x44f/0x750 [1172611.270733] [<ffffffff803e6651>] tty_release+0x11/0x20 [1172611.273502] [<ffffffff8026e531>] __fput+0xb1/0x1a0 [1172611.276259] [<ffffffff8026b544>] filp_close+0x54/0x90 [1172611.279016] [<ffffffff80228971>] put_files_struct+0xb1/0xd0 [1172611.281764] [<ffffffff80229d79>] do_exit+0x1a9/0x8a0 [1172611.284504] [<ffffffff80230c85>] __dequeue_signal+0x165/0x1f0 [1172611.287232] [<ffffffff8022a49c>] do_group_exit+0x2c/0x80 [1172611.289939] [<ffffffff80232ee7>] 
get_signal_to_deliver+0x2c7/0x470 [1172611.292654] [<ffffffff8020af65>] do_notify_resume+0xc5/0x7a0 [1172611.295338] [<ffffffff80230992>] send_signal+0x62/0x1f0 [1172611.298000] [<ffffffff80231215>] __group_send_sig_info+0x75/0xa0 [1172611.300678] [<ffffffff80231a2e>] group_send_sig_info+0x6e/0x90 [1172611.303357] [<ffffffff8020b984>] sys_rt_sigreturn+0x324/0x3d0 [1172611.306036] [<ffffffff802322be>] sys_rt_sigaction+0x8e/0xc0 [1172611.308696] [<ffffffff8020bd66>] int_signal+0x12/0x17 [1172611.311338] [1172611.313951] su ? 0000000000000000 0 8984 8977 [1172611.316601] ffff810151203ee8 0000000000000046 ffff810151203eb0 0000000000000011 [1172611.319282] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80 [1172611.321981] ffffffff80747dc0 ffff81021a284918 ffff810151203eb4 ffff810151203ea8 [1172611.322083] Call Trace: [1172611.327263] [<ffffffff8022a18e>] do_exit+0x5be/0x8a0 [1172611.329910] [<ffffffff8022a49c>] do_group_exit+0x2c/0x80 [1172611.332547] [<ffffffff8020bb4e>] system_call+0x7e/0x83 [1172611.335180] [1172611.337787] sshd S 0000000000000000 0 9072 7582 [1172611.340453] ffff81012ee91bf8 0000000000000082 ffff81012ee91bc0 ffff8101b0d95080 [1172611.343161] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80 [1172611.345879] ffffffff80747dc0 ffff8101cb862918 ffff81012ee91bc4 ffff81012ee91bb8 [1172611.345982] Call Trace: [1172611.351307] [<ffffffff80591755>] schedule_timeout+0x95/0xd0 [1172611.354060] [<ffffffff8023ae53>] prepare_to_wait+0x23/0x80 [1172611.356818] [<ffffffff80575ac6>] unix_stream_recvmsg+0x386/0x550 [1172611.359584] [<ffffffff8023acc0>] autoremove_wake_function+0x0/0x30 [1172611.362359] [<ffffffff80277aa0>] link_path_walk+0x80/0xf0 [1172611.365126] [<ffffffff80502b9b>] sock_aio_read+0x11b/0x130 [1172611.367882] [<ffffffff8026b5f9>] get_unused_fd_flags+0x79/0x120 [1172611.370649] [<ffffffff8026d349>] do_sync_read+0xd9/0x120 [1172611.373404] [<ffffffff8023acc0>] autoremove_wake_function+0x0/0x30 [1172611.376174] 
[<ffffffff8021e992>] pick_next_task_fair+0x42/0x70 [1172611.378939] [<ffffffff805907c6>] __sched_text_start+0x166/0x23d [1172611.381719] [<ffffffff8026b96a>] do_filp_open+0x3a/0x50 [1172611.384497] [<ffffffff8026dcf7>] vfs_read+0x157/0x160 [1172611.387260] [<ffffffff8026e113>] sys_read+0x53/0x90 [1172611.390008] [<ffffffff8020bb4e>] system_call+0x7e/0x83 [1172611.392729] [1172611.395395] sshd S 0000000000000000 0 9074 9072 [1172611.398114] ffff8101677179e8 0000000000000086 ffff8101677179b0 0000000000000002 [1172611.400847] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80 [1172611.403591] ffffffff80747dc0 ffff8101cb863738 ffff8101677179b4 ffff8101677179a8 [1172611.403694] Call Trace: [1172611.409063] [<ffffffff8059171f>] schedule_timeout+0x5f/0xd0 [1172611.411822] [<ffffffff8022f920>] process_timeout+0x0/0x10 [1172611.414587] [<ffffffff8027b528>] do_select+0x468/0x560 [1172611.417307] [<ffffffff8027bbc0>] __pollwait+0x0/0x130 [1172611.420004] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172611.422704] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172611.425339] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172611.427923] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172611.430472] [<ffffffff80267609>] add_partial+0x19/0x60 [1172611.433013] [<ffffffff80268a0d>] __slab_free+0x15d/0x310 [1172611.435547] [<ffffffff80592da9>] _spin_lock_bh+0x9/0x20 [1172611.438080] [<ffffffff80505743>] release_sock+0x13/0xb0 [1172611.440607] [<ffffffff80547700>] tcp_recvmsg+0x370/0x940 [1172611.443136] [<ffffffff80505080>] sock_common_recvmsg+0x30/0x50 [1172611.445679] [<ffffffff80502b9b>] sock_aio_read+0x11b/0x130 [1172611.448224] [<ffffffff8027b829>] core_sys_select+0x209/0x300 [1172611.450769] [<ffffffff8023acc0>] autoremove_wake_function+0x0/0x30 [1172611.453330] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172611.455892] [<ffffffff8022b51e>] current_fs_time+0x1e/0x30 [1172611.458451] [<ffffffff803e3582>] 
tty_ldisc_deref+0x52/0x80 [1172611.460996] [<ffffffff8027bdc1>] sys_select+0xd1/0x1c0 [1172611.463530] [<ffffffff8020bb4e>] system_call+0x7e/0x83 [1172611.466056] [1172611.468545] bash S 0000000000000000 0 9075 9074 [1172611.471088] ffff8101a8d01db8 0000000000000086 ffff8101a8d01d80 0000000000000ff5 [1172611.473676] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80 [1172611.476296] ffffffff80747dc0 ffff81010f26c208 ffff8101a8d01d84 ffff8101a8d01d78 [1172611.476398] Call Trace: [1172611.481491] [<ffffffff80591755>] schedule_timeout+0x95/0xd0 [1172611.484118] [<ffffffff8023af8c>] add_wait_queue+0x1c/0x60 [1172611.486724] [<ffffffff803e9bf8>] read_chan+0x228/0x6f0 [1172611.489303] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172611.491890] [<ffffffff803e6710>] tty_read+0xb0/0x100 [1172611.494437] [<ffffffff8026dc65>] vfs_read+0xc5/0x160 [1172611.496960] [<ffffffff8026e113>] sys_read+0x53/0x90 [1172611.499471] [<ffffffff8020bb4e>] system_call+0x7e/0x83 [1172611.501978] [1172611.504443] sshd S 0000000000000000 0 9477 7582 [1172611.506967] ffff810122bb5bf8 0000000000000082 ffff810122bb5bc0 ffff810102e23600 [1172611.509518] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80 [1172611.512071] ffffffff80747dc0 ffff81019b72e918 ffff810122bb5bc4 ffff810122bb5bb8 [1172611.512174] Call Trace: [1172611.517102] [<ffffffff80591755>] schedule_timeout+0x95/0xd0 [1172611.519615] [<ffffffff8023ae53>] prepare_to_wait+0x23/0x80 [1172611.522128] [<ffffffff80575ac6>] unix_stream_recvmsg+0x386/0x550 [1172611.524648] [<ffffffff8023acc0>] autoremove_wake_function+0x0/0x30 [1172611.527170] [<ffffffff80277aa0>] link_path_walk+0x80/0xf0 [1172611.529681] [<ffffffff80502b9b>] sock_aio_read+0x11b/0x130 [1172611.532193] [<ffffffff8026b5f9>] get_unused_fd_flags+0x79/0x120 [1172611.534712] [<ffffffff8026d349>] do_sync_read+0xd9/0x120 [1172611.537222] [<ffffffff8023acc0>] autoremove_wake_function+0x0/0x30 [1172611.539741] [<ffffffff8021e992>] 
pick_next_task_fair+0x42/0x70 [1172611.542257] [<ffffffff805907c6>] __sched_text_start+0x166/0x23d [1172611.544776] [<ffffffff8026b96a>] do_filp_open+0x3a/0x50 [1172611.547286] [<ffffffff8026dcf7>] vfs_read+0x157/0x160 [1172611.549793] [<ffffffff8026e113>] sys_read+0x53/0x90 [1172611.552298] [<ffffffff8020bb4e>] system_call+0x7e/0x83 [1172611.554805] [1172611.557268] sshd S 0000000000000000 0 9479 9477 [1172611.559791] ffff8101d7f7b9e8 0000000000000082 ffff8101d7f7b9b0 0000000000000002 [1172611.562340] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80 [1172611.564892] ffffffff80747dc0 ffff81019b72f738 ffff8101d7f7b9b4 ffff8101d7f7b9a8 [1172611.564995] Call Trace: [1172611.569926] [<ffffffff8059171f>] schedule_timeout+0x5f/0xd0 [1172611.572443] [<ffffffff8022f920>] process_timeout+0x0/0x10 [1172611.574957] [<ffffffff8027b528>] do_select+0x468/0x560 [1172611.577469] [<ffffffff8027bbc0>] __pollwait+0x0/0x130 [1172611.579979] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172611.582500] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172611.585023] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172611.587546] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172611.590069] [<ffffffff8050bd41>] skb_copy_datagram_iovec+0x1a1/0x260 [1172611.592602] [<ffffffff80592da9>] _spin_lock_bh+0x9/0x20 [1172611.595136] [<ffffffff80505743>] release_sock+0x13/0xb0 [1172611.597669] [<ffffffff80547700>] tcp_recvmsg+0x370/0x940 [1172611.600206] [<ffffffff80505080>] sock_common_recvmsg+0x30/0x50 [1172611.602755] [<ffffffff80502b9b>] sock_aio_read+0x11b/0x130 [1172611.605295] [<ffffffff8027b829>] core_sys_select+0x209/0x300 [1172611.607838] [<ffffffff8023acc0>] autoremove_wake_function+0x0/0x30 [1172611.610396] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172611.612949] [<ffffffff8022b51e>] current_fs_time+0x1e/0x30 [1172611.615496] [<ffffffff803e3582>] tty_ldisc_deref+0x52/0x80 [1172611.618033] [<ffffffff8027bdc1>] sys_select+0xd1/0x1c0 
[1172611.620569] [<ffffffff8020bb4e>] system_call+0x7e/0x83 [1172611.623100] [1172611.625606] bash S 7fffffffffffffff 0 9480 9479 [1172611.628160] ffff8101d7ed1db8 0000000000000086 000000000000000b 0000000000000ff5 [1172611.630773] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80 [1172611.633395] ffffffff80747dc0 ffff8101cb7ee208 0000000000000000 ffff8101657c8018 [1172611.633497] Call Trace: [1172611.638557] [<ffffffff80591755>] schedule_timeout+0x95/0xd0 [1172611.641136] [<ffffffff8023af8c>] add_wait_queue+0x1c/0x60 [1172611.643699] [<ffffffff803e9bf8>] read_chan+0x228/0x6f0 [1172611.646256] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172611.648829] [<ffffffff803e6710>] tty_read+0xb0/0x100 [1172611.651389] [<ffffffff8026dc65>] vfs_read+0xc5/0x160 [1172611.653928] [<ffffffff8026e113>] sys_read+0x53/0x90 [1172611.656463] [<ffffffff8020bb4e>] system_call+0x7e/0x83 [1172611.659013] [1172611.661536] su S 0000000000000000 0 9613 1 [1172611.664103] ffff8101c3c57e88 0000000000000086 ffff8101c3c57e50 ffff810117ac0000 [1172611.666717] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80 [1172611.669327] ffffffff80747dc0 ffff810106581028 ffff8101c3c57e54 ffff8101c3c57e48 [1172611.669430] Call Trace: [1172611.674472] [<ffffffff80229439>] do_wait+0x599/0xc90 [1172611.677029] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172611.679584] [<ffffffff8020bb4e>] system_call+0x7e/0x83 [1172611.682132] [1172611.684643] bash S 000000000000000e 0 9614 9613 [1172611.687205] ffff8101ebc27e88 0000000000000082 80000001df3f8065 ffff8101a86f6710 [1172611.689809] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80 [1172611.692439] ffffffff80747dc0 ffff810117ac0208 ffff8101ebc27e38 ffff8101786d7a80 [1172611.692542] Call Trace: [1172611.697656] [<ffffffff8021cb22>] do_page_fault+0x202/0x890 [1172611.700289] [<ffffffff8021ed69>] update_curr+0x109/0x120 [1172611.702917] [<ffffffff80229439>] do_wait+0x599/0xc90 [1172611.705533] 
[<ffffffff805907c6>] __sched_text_start+0x166/0x23d [1172611.708163] [<ffffffff8021f7e3>] __wake_up+0x43/0x70 [1172611.710787] [<ffffffff8027a520>] vfs_ioctl+0x220/0x2c0 [1172611.713422] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172611.716078] [<ffffffff8027a609>] sys_ioctl+0x49/0x80 [1172611.718716] [<ffffffff8020bb4e>] system_call+0x7e/0x83 [1172611.721349] [1172611.723928] bash D ffff81017bb82900 0 9632 1 [1172611.726540] ffff8101514abac8 0000000000000046 ffff8101514abc08 ffff810226a79800 [1172611.729205] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80 [1172611.731857] ffffffff80747dc0 ffff810136f4c918 ffff810007139a50 ffff810004cd4a50 [1172611.731960] Call Trace: [1172611.737092] [<ffffffff80590f9d>] wait_for_completion+0x7d/0xc0 [1172611.739754] [<ffffffff8021fed0>] default_wake_function+0x0/0x10 [1172611.742431] [<ffffffff8023706a>] flush_cpu_workqueue+0x6a/0x90 [1172611.745109] [<ffffffff802371b0>] wq_barrier_func+0x0/0x10 [1172611.747812] [<ffffffff80237183>] flush_workqueue+0x33/0x50 [1172611.750540] [<ffffffff803e633f>] release_dev+0x44f/0x750 [1172611.753296] [<ffffffff805907c6>] __sched_text_start+0x166/0x23d [1172611.756058] [<ffffffff803e6651>] tty_release+0x11/0x20 [1172611.758805] [<ffffffff8026e531>] __fput+0xb1/0x1a0 [1172611.761549] [<ffffffff8026b544>] filp_close+0x54/0x90 [1172611.764295] [<ffffffff80228971>] put_files_struct+0xb1/0xd0 [1172611.767039] [<ffffffff80229d79>] do_exit+0x1a9/0x8a0 [1172611.769781] [<ffffffff80230c85>] __dequeue_signal+0x165/0x1f0 [1172611.772533] [<ffffffff8022a49c>] do_group_exit+0x2c/0x80 [1172611.775279] [<ffffffff80232ee7>] get_signal_to_deliver+0x2c7/0x470 [1172611.778032] [<ffffffff8020af65>] do_notify_resume+0xc5/0x7a0 [1172611.780784] [<ffffffff80230992>] send_signal+0x62/0x1f0 [1172611.783537] [<ffffffff80231215>] __group_send_sig_info+0x75/0xa0 [1172611.786308] [<ffffffff80231a2e>] group_send_sig_info+0x6e/0x90 [1172611.789086] [<ffffffff8020b984>] 
sys_rt_sigreturn+0x324/0x3d0
[1172611.791858] [<ffffffff802322be>] sys_rt_sigaction+0x8e/0xc0
[1172611.794615] [<ffffffff8020bd66>] int_signal+0x12/0x17
[1172611.797334]
[1172611.799993] su ? 0000000000000000 0 9639 9632
[1172611.802704] ffff8101b98afee8 0000000000000046 ffff8101b98afeb0 0000000000000011
[1172611.805431] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172611.808166] ffffffff80747dc0 ffff8101243a7028 ffff8101b98afeb4 ffff8101b98afea8
[1172611.808269] Call Trace:
[1172611.813600] [<ffffffff8022a18e>] do_exit+0x5be/0x8a0
[1172611.816333] [<ffffffff8022a49c>] do_group_exit+0x2c/0x80
[1172611.819057] [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172611.821794]
[1172611.824519] mdadm D 0000000000000000 0 9783 9614
[1172611.827312] ffff8101aea09a18 0000000000000082 ffff8101aea099e0 ffff8101aea09998
[1172611.830142] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172611.832994] ffffffff80747dc0 ffff8101a86f6918 ffff8101aea099e4 ffff8101aea099d8
[1172611.833098] Call Trace:
[1172611.838611] [<ffffffff8021f7e3>] __wake_up+0x43/0x70
[1172611.841427] [<ffffffff80247ee0>] sync_page+0x0/0x50
[1172611.844200] [<ffffffff80591348>] io_schedule+0x28/0x40
[1172611.846961] [<ffffffff80247f1b>] sync_page+0x3b/0x50
[1172611.849716] [<ffffffff8059181a>] __wait_on_bit_lock+0x4a/0x80
[1172611.852476] [<ffffffff80247ebf>] __lock_page+0x5f/0x70
[1172611.855220] [<ffffffff8023acf0>] wake_bit_function+0x0/0x30
[1172611.857968] [<ffffffff8024f08a>] pagevec_lookup_tag+0x1a/0x30
[1172611.860699] [<ffffffff8024d991>] write_cache_pages+0x191/0x340
[1172611.863407] [<ffffffff8024d370>] __writepage+0x0/0x30
[1172611.866104] [<ffffffff8024db90>] do_writepages+0x20/0x40
[1172611.868763] [<ffffffff8028c179>] __writeback_single_inode+0x2d9/0x400
[1172611.871430] [<ffffffff8021f7e3>] __wake_up+0x43/0x70
[1172611.874087] [<ffffffff8028c6ca>] sync_sb_inodes+0x21a/0x300
[1172611.876755] [<ffffffff8028c851>] sync_inodes_sb+0xa1/0xc0
[1172611.879405] [<ffffffff8026f08b>] __fsync_super+0xb/0x70
[1172611.882049] [<ffffffff8026f0f9>] fsync_super+0x9/0x20
[1172611.884692] [<ffffffff80291486>] fsync_bdev+0x26/0x60
[1172611.887318] [<ffffffff803929a7>] blkdev_ioctl+0x1c7/0x7a0
[1172611.889939] [<ffffffff802560f1>] handle_mm_fault+0x1a1/0x8a0
[1172611.892573] [<ffffffff804b072a>] md_open+0x6a/0x90
[1172611.895186] [<ffffffff802962e0>] blkdev_open+0x0/0x90
[1172611.897799] [<ffffffff8039c301>] __up_read+0x21/0xb0
[1172611.900374] [<ffffffff8021cb22>] do_page_fault+0x202/0x890
[1172611.902936] [<ffffffff8029631c>] blkdev_open+0x3c/0x90
[1172611.905489] [<ffffffff8029548b>] block_ioctl+0x1b/0x30
[1172611.907994] [<ffffffff8027a28f>] do_ioctl+0x2f/0xa0
[1172611.910470] [<ffffffff8027a520>] vfs_ioctl+0x220/0x2c0
[1172611.912938] [<ffffffff8027a609>] sys_ioctl+0x49/0x80
[1172611.915381] [<ffffffff8059301d>] error_exit+0x0/0x84
[1172611.917816] [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172611.920256]
[1172611.922661] sshd S 0000000000000000 0 9793 7582
[1172611.925122] ffff8101a7fabbf8 0000000000000086 ffff8101a7fabbc0 ffff8101cd1f1600
[1172611.927626] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172611.930154] ffffffff80747dc0 ffff8101536d0918 ffff8101a7fabbc4 ffff8101a7fabbb8
[1172611.930258] Call Trace:
[1172611.935198] [<ffffffff80591755>] schedule_timeout+0x95/0xd0
[1172611.937753] [<ffffffff8023ae53>] prepare_to_wait+0x23/0x80
[1172611.940308] [<ffffffff80575ac6>] unix_stream_recvmsg+0x386/0x550
[1172611.942871] [<ffffffff8023acc0>] autoremove_wake_function+0x0/0x30
[1172611.945439] [<ffffffff80277aa0>] link_path_walk+0x80/0xf0
[1172611.947997] [<ffffffff80502b9b>] sock_aio_read+0x11b/0x130
[1172611.950553] [<ffffffff8026b5f9>] get_unused_fd_flags+0x79/0x120
[1172611.953111] [<ffffffff8026d349>] do_sync_read+0xd9/0x120
[1172611.955662] [<ffffffff8023acc0>] autoremove_wake_function+0x0/0x30
[1172611.958225] [<ffffffff8021e992>] pick_next_task_fair+0x42/0x70
[1172611.960800] [<ffffffff805907c6>] __sched_text_start+0x166/0x23d
[1172611.963379] [<ffffffff8026b96a>] do_filp_open+0x3a/0x50
[1172611.965946] [<ffffffff8026dcf7>] vfs_read+0x157/0x160
[1172611.968507] [<ffffffff8026e113>] sys_read+0x53/0x90
[1172611.971029] [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172611.973523]
[1172611.975978] sshd S 0000000000000000 0 9795 9793
[1172611.978461] ffff81021f41d9e8 0000000000000082 ffff81021f41d9b0 0000000000000002
[1172611.980981] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172611.983533] ffffffff80747dc0 ffff8101536d1738 ffff81021f41d9b4 ffff81021f41d9a8
[1172611.983636] Call Trace:
[1172611.988598] [<ffffffff8059171f>] schedule_timeout+0x5f/0xd0
[1172611.991157] [<ffffffff8022f920>] process_timeout+0x0/0x10
[1172611.993698] [<ffffffff8027b528>] do_select+0x468/0x560
[1172611.996208] [<ffffffff8027bbc0>] __pollwait+0x0/0x130
[1172611.998705] [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172612.001178] [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172612.003617] [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172612.006034] [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172612.008424] [<ffffffff80267609>] add_partial+0x19/0x60
[1172612.010813] [<ffffffff80268a0d>] __slab_free+0x15d/0x310
[1172612.013194] [<ffffffff80592da9>] _spin_lock_bh+0x9/0x20
[1172612.015567] [<ffffffff80505743>] release_sock+0x13/0xb0
[1172612.017935] [<ffffffff80547700>] tcp_recvmsg+0x370/0x940
[1172612.020296] [<ffffffff80505080>] sock_common_recvmsg+0x30/0x50
[1172612.022667] [<ffffffff80502b9b>] sock_aio_read+0x11b/0x130
[1172612.025029] [<ffffffff8027b829>] core_sys_select+0x209/0x300
[1172612.027401] [<ffffffff8023acc0>] autoremove_wake_function+0x0/0x30
[1172612.029774] [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172612.032143] [<ffffffff8022b51e>] current_fs_time+0x1e/0x30
[1172612.034510] [<ffffffff803e3582>] tty_ldisc_deref+0x52/0x80
[1172612.036880] [<ffffffff8027bdc1>] sys_select+0xd1/0x1c0
[1172612.039245] [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172612.041607]
[1172612.043951] bash S 000000000000000e 0 9796 9795
[1172612.046358] ffff81013de09e88 0000000000000086 8000000104441065 ffff8101125a2710
[1172612.048809] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172612.051282] ffffffff80747dc0 ffff81014da80208 ffff81013de09e38 ffff8101eab7d5e8
[1172612.051385] Call Trace:
[1172612.056215] [<ffffffff8021cb22>] do_page_fault+0x202/0x890
[1172612.058715] [<ffffffff8021ed69>] update_curr+0x109/0x120
[1172612.061212] [<ffffffff80229439>] do_wait+0x599/0xc90
[1172612.063714] [<ffffffff805907c6>] __sched_text_start+0x166/0x23d
[1172612.066238] [<ffffffff8021f7e3>] __wake_up+0x43/0x70
[1172612.068760] [<ffffffff8027a520>] vfs_ioctl+0x220/0x2c0
[1172612.071291] [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172612.073847] [<ffffffff8027a609>] sys_ioctl+0x49/0x80
[1172612.076399] [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172612.078935]
[1172612.081438] su S 0000000000000000 0 9804 9796
[1172612.083976] ffff810184fdbe88 0000000000000082 ffff810184fdbe50 ffff810120808000
[1172612.086547] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172612.089126] ffffffff80747dc0 ffff8101125a2918 ffff810184fdbe54 ffff810184fdbe48
[1172612.089229] Call Trace:
[1172612.094170] [<ffffffff80229439>] do_wait+0x599/0xc90
[1172612.096703] [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172612.099244] [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172612.101772]
[1172612.104264] bash S 0000000000000000 0 9805 9804
[1172612.106820] ffff8101e88f7db8 0000000000000082 ffff8101e88f7d80 0000000000000ff9
[1172612.109419] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172612.112036] ffffffff80747dc0 ffff810120808208 ffff8101e88f7d84 ffff8101e88f7d78
[1172612.112139] Call Trace:
[1172612.117216] [<ffffffff80591755>] schedule_timeout+0x95/0xd0
[1172612.119833] [<ffffffff8023af8c>] add_wait_queue+0x1c/0x60
[1172612.122446] [<ffffffff803e9bf8>] read_chan+0x228/0x6f0
[1172612.125058] [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172612.127700] [<ffffffff803e6710>] tty_read+0xb0/0x100
[1172612.130342] [<ffffffff8026dc65>] vfs_read+0xc5/0x160
[1172612.132958] [<ffffffff8026e113>] sys_read+0x53/0x90
[1172612.135554] [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172612.138121]
[1172612.140634] smtpd S 0000000000000000 0 9847 30580
[1172612.143203] ffff8101a6e25e58 0000000000000086 ffff8101a6e25e20 ffff81022583d318
[1172612.145786] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172612.148371] ffffffff80747dc0 ffff8101d859a208 ffff8101a6e25e24 ffff8101a6e25e18
[1172612.148474] Call Trace:
[1172612.153514] [<ffffffff8059171f>] schedule_timeout+0x5f/0xd0
[1172612.156129] [<ffffffff8022f920>] process_timeout+0x0/0x10
[1172612.158743] [<ffffffff8029af2d>] sys_epoll_wait+0x1bd/0x4e0
[1172612.161385] [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172612.164066] [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172612.166774]
[1172612.169446] smtpd S ffff81022583d318 0 9963 30580
[1172612.172187] ffff8101c5f69eb8 0000000000000082 0000000000000000 ffffffff00000001
[1172612.174990] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172612.177819] ffffffff80747dc0 ffff810105b04918 0000000000000000 000000008bcec672
[1172612.177922] Call Trace:
[1172612.183467] [<ffffffff8022b4c9>] ns_to_timeval+0x9/0x40
[1172612.186321] [<ffffffff8027da7d>] flock_lock_file_wait+0x14d/0x300
[1172612.189190] [<ffffffff8023acc0>] autoremove_wake_function+0x0/0x30
[1172612.192057] [<ffffffff8027e7ab>] sys_flock+0x16b/0x180
[1172612.194913] [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172612.197759]
[1172612.200578] cleanup S 0000000000000000 0 9966 30580
[1172612.203466] ffff8101b50b7e58 0000000000000082 ffff8101b50b7e20 ffff8101a496a828
[1172612.206409] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172612.209358] ffffffff80747dc0 ffff810132074208 ffff8101b50b7e24 ffff8101b50b7e18
[1172612.209460] Call Trace:
[1172612.215203] [<ffffffff8059171f>] schedule_timeout+0x5f/0xd0
[1172612.218127] [<ffffffff8022f920>] process_timeout+0x0/0x10
[1172612.221046] [<ffffffff8029af2d>] sys_epoll_wait+0x1bd/0x4e0
[1172612.223934] [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172612.226813] [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172612.229693]
[1172612.232543] local S 0000000000000000 0 9967 30580
[1172612.235450] ffff8101c7bf9e58 0000000000000086 ffff8101c7bf9e20 0000000000000000
[1172612.238401] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
[1172612.241380] ffffffff80747dc0 ffff8101b11bd738 ffff8101c7bf9e24 ffff8101c7bf9e18
[1172612.241483] Call Trace:
[1172612.247296] [<ffffffff8059171f>] schedule_timeout+0x5f/0xd0
[1172612.250292] [<ffffffff8022f920>] process_timeout+0x0/0x10
[1172612.253278] [<ffffffff8029af2d>] sys_epoll_wait+0x1bd/0x4e0
[1172612.256265] [<ffffffff8021fed0>] default_wake_function+0x0/0x10
[1172612.259237] [<ffffffff8020bb4e>] system_call+0x7e/0x83
[1172612.262188]

^ permalink raw reply	[flat|nested] 35+ messages in thread
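[Editorial aside, not part of the original thread: a dump this size is easier to triage if you first filter for the tasks actually stuck in D (uninterruptible) state, since those are the ones that matter for a hang like this. A minimal sketch, assuming the dmesg task-header format shown above; `sample.log` is a stand-in for /var/log/kern.log or wherever the capture landed.]

```shell
# Build a small sample in the same "[timestamp] name STATE ..." header
# format as the dump above, then pull out only the D-state task headers.
cat > sample.log <<'EOF'
[1172611.799993] su ? 0000000000000000 0 9639 9632
[1172611.824519] mdadm D 0000000000000000 0 9783 9614
[1172611.922661] sshd S 0000000000000000 0 9793 7582
EOF
# The second column of a task-header line is the scheduler state letter.
grep -E '^\[[0-9.]+\] [^ ]+ +D ' sample.log
# -> [1172611.824519] mdadm D 0000000000000000 0 9783 9614
```

In practice you would point the grep at the full kern.log instead of the heredoc sample.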
* Re: 2.6.23.1: mdadm/raid5 hung/d-state
  2007-11-04 12:52             ` Justin Piszcz
@ 2007-11-04 14:55               ` Michael Tokarev
  2007-11-04 14:59                 ` Justin Piszcz
                                   ` (2 more replies)
  0 siblings, 3 replies; 35+ messages in thread
From: Michael Tokarev @ 2007-11-04 14:55 UTC (permalink / raw)
  To: Justin Piszcz; +Cc: linux-kernel, linux-raid, xfs

Justin Piszcz wrote:
> On Sun, 4 Nov 2007, Michael Tokarev wrote:
[]
>> The next time you come across something like that, do a SysRq-T dump and
>> post that.  It shows a stack trace of all processes - and in particular,
>> where exactly each task is stuck.

> Yes I got it before I rebooted, ran that and then dmesg > file.
>
> Here it is:
>
> [1172609.665902] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
> [1172609.668768] ffffffff80747dc0 ffff81015c3aa918 ffff810091c899b4 ffff810091c899a8

That's only a partial list.  All the kernel threads - which are the most
important in this context - aren't shown.  You ran out of dmesg buffer, and
the most interesting entries were at the beginning.  If your /var/log
partition is working, the stuff should be in /var/log/kern.log or
equivalent.  If it's not working, there is still a way to capture the info:
stop syslogd, cat /proc/kmsg to some tmpfs file, and scp it elsewhere.

/mjt

^ permalink raw reply	[flat|nested] 35+ messages in thread
* Re: 2.6.23.1: mdadm/raid5 hung/d-state
  2007-11-04 14:55               ` Michael Tokarev
@ 2007-11-04 14:59                 ` Justin Piszcz
  2007-11-04 18:17                 ` BERTRAND Joël
  2007-11-04 21:40                 ` David Greaves
  2 siblings, 0 replies; 35+ messages in thread
From: Justin Piszcz @ 2007-11-04 14:59 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: linux-kernel, linux-raid, xfs

On Sun, 4 Nov 2007, Michael Tokarev wrote:

> Justin Piszcz wrote:
>> On Sun, 4 Nov 2007, Michael Tokarev wrote:
> []
>>> The next time you come across something like that, do a SysRq-T dump and
>>> post that.  It shows a stack trace of all processes - and in particular,
>>> where exactly each task is stuck.
>
>> Yes I got it before I rebooted, ran that and then dmesg > file.
>>
>> Here it is:
>>
>> [1172609.665902] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
>> [1172609.668768] ffffffff80747dc0 ffff81015c3aa918 ffff810091c899b4 ffff810091c899a8
>
> That's only partial list.  All the kernel threads - which are most important
> in this context - aren't shown.  You ran out of dmesg buffer, and the most
> interesting entries was at the beginning.  If your /var/log partition is
> working, the stuff should be in /var/log/kern.log or equivalent.  If it's
> not working, there is a way to capture the info still, by stopping syslogd,
> cat'ing /proc/kmsg to some tmpfs file and scp'ing it elsewhere.
>
> /mjt

Will do that the next time it happens, thanks.

^ permalink raw reply	[flat|nested] 35+ messages in thread
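[Editorial aside, not part of the original thread: the capture procedure Michael describes can be sketched roughly as below. The init-script name, the tmpfs path, and the remote host are assumptions, and the root-only steps are left as comments so the sketch stays runnable as a plain function definition.]

```shell
# Sketch of the capture path: stop syslogd so it releases /proc/kmsg, drain
# the kernel messages to a tmpfs file, then copy the file off the machine.
capture_kmsg() {
    kmsg=${1:-/proc/kmsg}           # kernel message stream (root-only to read)
    out=${2:-/dev/shm/kmsg.cap}     # tmpfs target, usable even if /var/log is wedged
    # /etc/init.d/sysklogd stop     # root: make syslogd let go of /proc/kmsg
    cat "$kmsg" > "$out" &          # drain messages in the background
    catpid=$!
    # echo t > /proc/sysrq-trigger  # root: SysRq-T, dump all task stacks
    sleep 1                         # give the dump time to land in $out
    kill "$catpid" 2>/dev/null || true   # /proc/kmsg never hits EOF; stop the cat
    wait "$catpid" 2>/dev/null || true
    # scp "$out" user@otherhost:    # root/network: copy the capture elsewhere
}
```

With the root-only lines uncommented and run as root, `capture_kmsg` would leave the SysRq-T dump in /dev/shm/kmsg.cap; the function parameters exist only so the drain step can be exercised against an ordinary file.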
* Re: 2.6.23.1: mdadm/raid5 hung/d-state
  2007-11-04 14:55               ` Michael Tokarev
  2007-11-04 14:59                 ` Justin Piszcz
@ 2007-11-04 18:17                 ` BERTRAND Joël
  2007-11-04 21:40                 ` David Greaves
  2 siblings, 0 replies; 35+ messages in thread
From: BERTRAND Joël @ 2007-11-04 18:17 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: Justin Piszcz, linux-kernel, linux-raid, xfs

Michael Tokarev wrote:
> Justin Piszcz wrote:
>> On Sun, 4 Nov 2007, Michael Tokarev wrote:
> []
>>> The next time you come across something like that, do a SysRq-T dump and
>>> post that.  It shows a stack trace of all processes - and in particular,
>>> where exactly each task is stuck.
>
>> Yes I got it before I rebooted, ran that and then dmesg > file.
>>
>> Here it is:
>>
>> [1172609.665902] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
>> [1172609.668768] ffffffff80747dc0 ffff81015c3aa918 ffff810091c899b4 ffff810091c899a8
>
> That's only partial list.  All the kernel threads - which are most important
> in this context - aren't shown.  You ran out of dmesg buffer, and the most
> interesting entries was at the beginning.  If your /var/log partition is
> working, the stuff should be in /var/log/kern.log or equivalent.  If it's
> not working, there is a way to capture the info still, by stopping syslogd,
> cat'ing /proc/kmsg to some tmpfs file and scp'ing it elsewhere.

	I reported the same bug some days ago, and I can reproduce it without
any trouble :-(.  Configuration: 2.6.23 Linux kernel with iscsi-target on
sparc64/smp (sun4v).  The following output was created by
echo t > /proc/sysrq-trigger and echo x > /proc/sysrq-trigger.  It is a cut
and paste from /var/log/syslog, and I hope I haven't made any mistakes...
Nov 4 18:55:56 poulenc kernel: SysRq : Show State
Nov 4 18:55:56 poulenc kernel: task PC stack pid father
Nov 4 18:55:56 poulenc kernel: init S 00000000004c7d68 0 1 0
Nov 4 18:55:56 poulenc kernel: Call Trace:
Nov 4 18:55:56 poulenc kernel: [000000000067ec0c] schedule_timeout+0x54/0xc0
Nov 4 18:55:56 poulenc kernel: [00000000004c7d68] do_select+0x3d0/0x420
Nov 4 18:55:56 poulenc kernel: [00000000004eccb8] compat_core_sys_select+0x160/0x200
Nov 4 18:55:56 poulenc kernel: [00000000004eef74] compat_sys_select+0xbc/0x1a0
Nov 4 18:55:56 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40
Nov 4 18:55:56 poulenc kernel: [00000000000150b8] 0x150c0
Nov 4 18:55:56 poulenc kernel: kthreadd S 00000000004273d0 0 2 0
Nov 4 18:55:56 poulenc kernel: Call Trace:
Nov 4 18:55:56 poulenc kernel: [0000000000478fe8] kthreadd+0x1b0/0x1c0
Nov 4 18:55:56 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:55:56 poulenc kernel: [000000000067d404] rest_init+0x2c/0x60
Nov 4 18:55:56 poulenc kernel: migration/0 S 0000000000478ce0 0 3 2
Nov 4 18:55:56 poulenc kernel: Call Trace:
Nov 4 18:55:56 poulenc kernel: [000000000045e60c] migration_thread+0x174/0x360
Nov 4 18:55:56 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:55:56 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:55:56 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:55:56 poulenc kernel: ksoftirqd/0 S 0000000000478ce0 0 4 2
Nov 4 18:55:56 poulenc kernel: Call Trace:
Nov 4 18:55:56 poulenc kernel: [00000000004683c0] ksoftirqd+0xa8/0xc0
Nov 4 18:55:56 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:55:56 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:55:57 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:55:57 poulenc kernel: watchdog/0 S 0000000000478ce0 0 5 2
Nov 4 18:55:57 poulenc kernel: Call Trace:
Nov 4 18:55:57 poulenc kernel: [000000000048f9e0] watchdog+0x48/0x80
Nov 4 18:55:57 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:55:57 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:55:57 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:55:57 poulenc kernel: migration/1 S 0000000000478ce0 0 6 2
Nov 4 18:55:57 poulenc kernel: Call Trace:
Nov 4 18:55:57 poulenc kernel: [000000000045e60c] migration_thread+0x174/0x360
Nov 4 18:55:57 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:55:57 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:55:57 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:55:57 poulenc kernel: ksoftirqd/1 S 0000000000478ce0 0 7 2
Nov 4 18:55:57 poulenc kernel: Call Trace:
Nov 4 18:55:57 poulenc kernel: [00000000004683c0] ksoftirqd+0xa8/0xc0
Nov 4 18:55:57 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:55:57 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:55:57 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:55:57 poulenc kernel: watchdog/1 S 0000000000478ce0 0 8 2
Nov 4 18:55:57 poulenc kernel: Call Trace:
Nov 4 18:55:57 poulenc kernel: [000000000048f9e0] watchdog+0x48/0x80
Nov 4 18:55:57 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:55:57 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:55:57 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:55:57 poulenc kernel: migration/2 S 0000000000478ce0 0 9 2
Nov 4 18:55:57 poulenc kernel: Call Trace:
Nov 4 18:55:57 poulenc kernel: [000000000045e60c] migration_thread+0x174/0x360
Nov 4 18:55:57 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:55:57 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:55:57 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:55:57 poulenc kernel: ksoftirqd/2 S 0000000000478ce0 0 10 2
Nov 4 18:55:57 poulenc kernel: Call Trace:
Nov 4 18:55:57 poulenc kernel: [00000000004683c0] ksoftirqd+0xa8/0xc0
Nov 4 18:55:57 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:55:57 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:55:57 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:55:57 poulenc kernel: watchdog/2 S 0000000000478ce0 0 11 2
Nov 4 18:55:57 poulenc kernel: Call Trace:
Nov 4 18:55:57 poulenc kernel: [000000000048f9e0] watchdog+0x48/0x80
Nov 4 18:55:57 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:55:57 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:55:57 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:55:57 poulenc kernel: migration/3 R running task 0 12 2
Nov 4 18:55:57 poulenc kernel: ksoftirqd/3 S 0000000000478ce0 0 13 2
Nov 4 18:55:57 poulenc kernel: Call Trace:
Nov 4 18:55:57 poulenc kernel: [00000000004683c0] ksoftirqd+0xa8/0xc0
Nov 4 18:55:57 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:55:57 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:55:58 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:55:58 poulenc kernel: watchdog/3 S 0000000000478ce0 0 14 2
Nov 4 18:55:58 poulenc kernel: Call Trace:
Nov 4 18:55:58 poulenc kernel: [000000000048f9e0] watchdog+0x48/0x80
Nov 4 18:55:58 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:55:58 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:55:58 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:55:58 poulenc kernel: migration/4 S 0000000000478ce0 0 15 2
Nov 4 18:55:58 poulenc kernel: Call Trace:
Nov 4 18:55:58 poulenc kernel: [000000000045e60c] migration_thread+0x174/0x360
Nov 4 18:55:58 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:55:58 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:55:58 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:55:58 poulenc kernel: ksoftirqd/4 S 0000000000478ce0 0 16 2
Nov 4 18:55:58 poulenc kernel: Call Trace:
Nov 4 18:55:58 poulenc kernel: [00000000004683c0] ksoftirqd+0xa8/0xc0
Nov 4 18:55:58 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:55:58 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:55:58 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:55:58 poulenc kernel: watchdog/4 S 0000000000478ce0 0 17 2
Nov 4 18:55:58 poulenc kernel: Call Trace:
Nov 4 18:55:58 poulenc kernel: [000000000048f9e0] watchdog+0x48/0x80
Nov 4 18:55:58 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:55:58 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:55:58 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:55:58 poulenc kernel: migration/5 S 0000000000478ce0 0 18 2
Nov 4 18:55:58 poulenc kernel: Call Trace:
Nov 4 18:55:58 poulenc kernel: [000000000045e60c] migration_thread+0x174/0x360
Nov 4 18:55:58 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:55:58 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:55:58 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:55:58 poulenc kernel: ksoftirqd/5 S 0000000000478ce0 0 19 2
Nov 4 18:55:58 poulenc kernel: Call Trace:
Nov 4 18:55:58 poulenc kernel: [00000000004683c0] ksoftirqd+0xa8/0xc0
Nov 4 18:55:58 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:55:58 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:55:58 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:55:58 poulenc kernel: watchdog/5 S 0000000000478ce0 0 20 2
Nov 4 18:55:58 poulenc kernel: Call Trace:
Nov 4 18:55:58 poulenc kernel: [000000000048f9e0] watchdog+0x48/0x80
Nov 4 18:55:58 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:55:58 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:55:58 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:55:58 poulenc kernel: migration/6 S 0000000000478ce0 0 21 2
Nov 4 18:55:58 poulenc kernel: Call Trace:
Nov 4 18:55:58 poulenc kernel: [000000000045e60c] migration_thread+0x174/0x360
Nov 4 18:55:58 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:55:58 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:55:59 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:55:59 poulenc kernel: ksoftirqd/6 S 0000000000478ce0 0 22 2
Nov 4 18:55:59 poulenc kernel: Call Trace:
Nov 4 18:55:59 poulenc kernel: [00000000004683c0] ksoftirqd+0xa8/0xc0
Nov 4 18:55:59 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:55:59 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:55:59 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:55:59 poulenc kernel: watchdog/6 S 0000000000478ce0 0 23 2
Nov 4 18:55:59 poulenc kernel: Call Trace:
Nov 4 18:55:59 poulenc kernel: [000000000048f9e0] watchdog+0x48/0x80
Nov 4 18:55:59 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:55:59 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:55:59 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:55:59 poulenc kernel: migration/7 S 0000000000478ce0 0 24 2
Nov 4 18:55:59 poulenc kernel: Call Trace:
Nov 4 18:55:59 poulenc kernel: [000000000045e60c] migration_thread+0x174/0x360
Nov 4 18:55:59 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:55:59 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:55:59 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:55:59 poulenc kernel: ksoftirqd/7 S 0000000000478ce0 0 25 2
Nov 4 18:55:59 poulenc kernel: Call Trace:
Nov 4 18:55:59 poulenc kernel: [00000000004683c0] ksoftirqd+0xa8/0xc0
Nov 4 18:55:59 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:55:59 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:55:59 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:55:59 poulenc kernel: watchdog/7 S 0000000000478ce0 0 26 2
Nov 4 18:55:59 poulenc kernel: Call Trace:
Nov 4 18:55:59 poulenc kernel: [000000000048f9e0] watchdog+0x48/0x80
Nov 4 18:55:59 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:55:59 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:55:59 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:55:59 poulenc kernel: migration/8 S 0000000000478ce0 0 27 2
Nov 4 18:55:59 poulenc kernel: Call Trace:
Nov 4 18:55:59 poulenc kernel: [000000000045e60c] migration_thread+0x174/0x360
Nov 4 18:55:59 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:55:59 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:55:59 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:55:59 poulenc kernel: ksoftirqd/8 S 0000000000478ce0 0 28 2
Nov 4 18:55:59 poulenc kernel: Call Trace:
Nov 4 18:55:59 poulenc kernel: [00000000004683c0] ksoftirqd+0xa8/0xc0
Nov 4 18:55:59 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:55:59 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:55:59 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:55:59 poulenc kernel: watchdog/8 S 0000000000478ce0 0 29 2
Nov 4 18:55:59 poulenc kernel: Call Trace:
Nov 4 18:55:59 poulenc kernel: [000000000048f9e0] watchdog+0x48/0x80
Nov 4 18:55:59 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:55:59 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:55:59 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:00 poulenc kernel: migration/9 S 0000000000478ce0 0 30 2
Nov 4 18:56:00 poulenc kernel: Call Trace:
Nov 4 18:56:00 poulenc kernel: [000000000045e60c] migration_thread+0x174/0x360
Nov 4 18:56:00 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:00 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:00 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:00 poulenc kernel: ksoftirqd/9 S 0000000000478ce0 0 31 2
Nov 4 18:56:00 poulenc kernel: Call Trace:
Nov 4 18:56:00 poulenc kernel: [00000000004683c0] ksoftirqd+0xa8/0xc0
Nov 4 18:56:00 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:00 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:00 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:00 poulenc kernel: watchdog/9 S 0000000000478ce0 0 32 2
Nov 4 18:56:00 poulenc kernel: Call Trace:
Nov 4 18:56:00 poulenc kernel: [000000000048f9e0] watchdog+0x48/0x80
Nov 4 18:56:00 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:00 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:00 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:00 poulenc kernel: migration/10 S 0000000000478ce0 0 33 2
Nov 4 18:56:00 poulenc kernel: Call Trace:
Nov 4 18:56:00 poulenc kernel: [000000000045e60c] migration_thread+0x174/0x360
Nov 4 18:56:00 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:00 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:00 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:00 poulenc kernel: ksoftirqd/10 S 0000000000478ce0 0 34 2
Nov 4 18:56:00 poulenc kernel: Call Trace:
Nov 4 18:56:00 poulenc kernel: [00000000004683c0] ksoftirqd+0xa8/0xc0
Nov 4 18:56:00 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:00 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:00 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:00 poulenc kernel: watchdog/10 S 0000000000478ce0 0 35 2
Nov 4 18:56:00 poulenc kernel: Call Trace:
Nov 4 18:56:00 poulenc kernel: [000000000048f9e0] watchdog+0x48/0x80
Nov 4 18:56:00 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:00 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:00 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:00 poulenc kernel: migration/11 S 0000000000478ce0 0 36 2
Nov 4 18:56:00 poulenc kernel: Call Trace:
Nov 4 18:56:00 poulenc kernel: [000000000045e60c] migration_thread+0x174/0x360
Nov 4 18:56:00 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:00 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:00 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:00 poulenc kernel: ksoftirqd/11 S 0000000000478ce0 0 37 2
Nov 4 18:56:00 poulenc kernel: Call Trace:
Nov 4 18:56:00 poulenc kernel: [00000000004683c0] ksoftirqd+0xa8/0xc0
Nov 4 18:56:00 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:01 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:01 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:01 poulenc kernel: watchdog/11 S 0000000000478ce0 0 38 2
Nov 4 18:56:01 poulenc kernel: Call Trace:
Nov 4 18:56:01 poulenc kernel: [000000000048f9e0] watchdog+0x48/0x80
Nov 4 18:56:01 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:01 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:01 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:01 poulenc kernel: migration/12 S 0000000000478ce0 0 39 2
Nov 4 18:56:01 poulenc kernel: Call Trace:
Nov 4 18:56:01 poulenc kernel: [000000000045e60c] migration_thread+0x174/0x360
Nov 4 18:56:01 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:01 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:01 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:01 poulenc kernel: ksoftirqd/12 S 0000000000478ce0 0 40 2
Nov 4 18:56:01 poulenc kernel: Call Trace:
Nov 4 18:56:01 poulenc kernel: [00000000004683c0] ksoftirqd+0xa8/0xc0
Nov 4 18:56:01 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:01 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:01 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:01 poulenc kernel: watchdog/12 S 0000000000478ce0 0 41 2
Nov 4 18:56:01 poulenc kernel: Call Trace:
Nov 4 18:56:01 poulenc kernel: [000000000048f9e0] watchdog+0x48/0x80
Nov 4 18:56:01 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:01 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:01 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:01 poulenc kernel: migration/13 S 0000000000478ce0 0 42 2
Nov 4 18:56:01 poulenc kernel: Call Trace:
Nov 4 18:56:01 poulenc kernel: [000000000045e60c] migration_thread+0x174/0x360
Nov 4 18:56:01 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:01 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:01 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:01 poulenc kernel: ksoftirqd/13 S 0000000000478ce0 0 43 2
Nov 4 18:56:01 poulenc kernel: Call Trace:
Nov 4 18:56:01 poulenc kernel: [00000000004683c0] ksoftirqd+0xa8/0xc0
Nov 4 18:56:01 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:01 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:01 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:01 poulenc kernel: watchdog/13 S 0000000000478ce0 0 44 2
Nov 4 18:56:01 poulenc kernel: Call Trace:
Nov 4 18:56:01 poulenc kernel: [000000000048f9e0] watchdog+0x48/0x80
Nov 4 18:56:01 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:01 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:01 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:02 poulenc kernel: migration/14 S 0000000000478ce0 0 45 2
Nov 4 18:56:02 poulenc kernel: Call Trace:
Nov 4 18:56:02 poulenc kernel: [000000000045e60c] migration_thread+0x174/0x360
Nov 4 18:56:02 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:02 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:02 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:02 poulenc kernel: ksoftirqd/14 S 0000000000478ce0 0 46 2
Nov 4 18:56:02 poulenc kernel: Call Trace:
Nov 4 18:56:02 poulenc kernel: [00000000004683c0] ksoftirqd+0xa8/0xc0
Nov 4 18:56:02 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:02 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:02 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:02 poulenc kernel: watchdog/14 S 0000000000478ce0 0 47 2
Nov 4 18:56:02 poulenc kernel: Call Trace:
Nov 4 18:56:02 poulenc kernel: [000000000048f9e0] watchdog+0x48/0x80
Nov 4 18:56:02 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:02 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:02 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:02 poulenc kernel: migration/15 S 0000000000478ce0 0 48 2
Nov 4 18:56:02 poulenc kernel: Call Trace:
Nov 4 18:56:02 poulenc kernel: [000000000045e60c] migration_thread+0x174/0x360
Nov 4 18:56:02 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:02 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:02 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:02 poulenc kernel: ksoftirqd/15 S 0000000000478ce0 0 49 2
Nov 4 18:56:02 poulenc kernel: Call Trace:
Nov 4 18:56:02 poulenc kernel: [00000000004683c0] ksoftirqd+0xa8/0xc0
Nov 4 18:56:02 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:02 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:02 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:02 poulenc kernel: watchdog/15 S 0000000000478ce0 0 50 2
Nov 4 18:56:02 poulenc kernel: Call Trace:
Nov 4 18:56:02 poulenc kernel: [000000000048f9e0] watchdog+0x48/0x80
Nov 4 18:56:02 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:02 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:02 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:02 poulenc kernel: migration/16 S 0000000000478ce0 0 51 2
Nov 4 18:56:03 poulenc kernel: Call Trace:
Nov 4 18:56:03 poulenc kernel: [000000000045e60c] migration_thread+0x174/0x360
Nov 4 18:56:03 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:03 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:03 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:03 poulenc kernel: ksoftirqd/16 S 0000000000478ce0 0 52 2
Nov 4 18:56:03 poulenc kernel: Call Trace:
Nov 4 18:56:03 poulenc kernel: [00000000004683c0] ksoftirqd+0xa8/0xc0
Nov 4 18:56:03 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:03 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:03 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:03 poulenc kernel: watchdog/16 S 0000000000478ce0 0 53 2
Nov 4 18:56:03 poulenc kernel: Call Trace:
Nov 4 18:56:03 poulenc kernel: [000000000048f9e0] watchdog+0x48/0x80
Nov 4 18:56:03 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:03 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:03 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:03 poulenc kernel: migration/17 R running task 0 54 2
Nov 4 18:56:03 poulenc kernel: ksoftirqd/17 R running task 0 55 2
Nov 4 18:56:03 poulenc kernel: watchdog/17 S 0000000000478ce0 0 56 2
Nov 4 18:56:03 poulenc kernel: Call Trace:
Nov 4 18:56:03 poulenc kernel: [000000000048f9e0] watchdog+0x48/0x80
Nov 4 18:56:03 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:03 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:03 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:03 poulenc kernel: migration/18 S 0000000000478ce0 0 57 2
Nov 4 18:56:03 poulenc kernel: Call Trace:
Nov 4 18:56:03 poulenc kernel: [000000000045e60c] migration_thread+0x174/0x360
Nov 4 18:56:03 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:03 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:03 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:03 poulenc kernel: ksoftirqd/18 S 0000000000478ce0 0 58 2
Nov 4 18:56:03 poulenc kernel: Call Trace:
Nov 4 18:56:03 poulenc kernel: [00000000004683c0] ksoftirqd+0xa8/0xc0
Nov 4 18:56:03 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:04 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:04 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:04 poulenc kernel: watchdog/18 S 0000000000478ce0 0 59 2
Nov 4 18:56:04 poulenc kernel: Call Trace:
Nov 4 18:56:04 poulenc kernel: [000000000048f9e0] watchdog+0x48/0x80
Nov 4 18:56:04 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:04 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:04 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:04 poulenc kernel: migration/19 S 0000000000478ce0 0 60 2
Nov 4 18:56:04 poulenc kernel: Call Trace:
Nov 4 18:56:04 poulenc kernel: [000000000045e60c] migration_thread+0x174/0x360
Nov 4 18:56:04 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:04 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:04 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:04 poulenc kernel: ksoftirqd/19 S 0000000000478ce0 0 61 2
Nov 4 18:56:04 poulenc kernel: Call Trace:
Nov 4 18:56:04 poulenc kernel: [00000000004683c0] ksoftirqd+0xa8/0xc0
Nov 4 18:56:04 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:04 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:04 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:04 poulenc kernel: watchdog/19 S 0000000000478ce0 0 62 2
Nov 4 18:56:04 poulenc kernel: Call Trace:
Nov 4 18:56:04 poulenc kernel: [000000000048f9e0] watchdog+0x48/0x80
Nov 4 18:56:04 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:04 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:04 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:04 poulenc kernel: migration/20 S
0000000000478ce0 0 63 2 Nov 4 18:56:04 poulenc kernel: Call Trace: Nov 4 18:56:04 poulenc kernel: [000000000045e60c] migration_thread+0x174/0x360 Nov 4 18:56:04 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:04 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:04 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:04 poulenc kernel: ksoftirqd/20 S 0000000000478ce0 0 64 2 Nov 4 18:56:04 poulenc kernel: Call Trace: Nov 4 18:56:04 poulenc kernel: [00000000004683c0] ksoftirqd+0xa8/0xc0 Nov 4 18:56:04 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:04 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:04 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:04 poulenc kernel: watchdog/20 S 0000000000478ce0 0 65 2 Nov 4 18:56:04 poulenc kernel: Call Trace: Nov 4 18:56:04 poulenc kernel: [000000000048f9e0] watchdog+0x48/0x80 Nov 4 18:56:04 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:04 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:04 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:04 poulenc kernel: migration/21 S 0000000000478ce0 0 66 2 Nov 4 18:56:04 poulenc kernel: Call Trace: Nov 4 18:56:04 poulenc kernel: [000000000045e60c] migration_thread+0x174/0x360 Nov 4 18:56:04 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:04 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:05 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:05 poulenc kernel: ksoftirqd/21 S 0000000000478ce0 0 67 2 Nov 4 18:56:05 poulenc kernel: Call Trace: Nov 4 18:56:05 poulenc kernel: [00000000004683c0] ksoftirqd+0xa8/0xc0 Nov 4 18:56:05 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:05 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:05 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:05 poulenc kernel: watchdog/21 S 
0000000000478ce0 0 68 2 Nov 4 18:56:05 poulenc kernel: Call Trace: Nov 4 18:56:05 poulenc kernel: [000000000048f9e0] watchdog+0x48/0x80 Nov 4 18:56:05 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:05 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:05 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:05 poulenc kernel: migration/22 S 0000000000478ce0 0 69 2 Nov 4 18:56:05 poulenc kernel: Call Trace: Nov 4 18:56:05 poulenc kernel: [000000000045e60c] migration_thread+0x174/0x360 Nov 4 18:56:05 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:05 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:05 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:05 poulenc kernel: ksoftirqd/22 S 0000000000478ce0 0 70 2 Nov 4 18:56:05 poulenc kernel: Call Trace: Nov 4 18:56:05 poulenc kernel: [00000000004683c0] ksoftirqd+0xa8/0xc0 Nov 4 18:56:05 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:05 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:05 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:05 poulenc kernel: watchdog/22 S 0000000000478ce0 0 71 2 Nov 4 18:56:05 poulenc kernel: Call Trace: Nov 4 18:56:05 poulenc kernel: [000000000048f9e0] watchdog+0x48/0x80 Nov 4 18:56:05 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:05 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:05 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:05 poulenc kernel: migration/23 S 0000000000478ce0 0 72 2 Nov 4 18:56:05 poulenc kernel: Call Trace: Nov 4 18:56:05 poulenc kernel: [000000000045e60c] migration_thread+0x174/0x360 Nov 4 18:56:05 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:05 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:05 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:05 poulenc kernel: ksoftirqd/23 S 
0000000000478ce0 0 73 2 Nov 4 18:56:05 poulenc kernel: Call Trace: Nov 4 18:56:05 poulenc kernel: [00000000004683c0] ksoftirqd+0xa8/0xc0 Nov 4 18:56:05 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:05 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:06 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:06 poulenc kernel: watchdog/23 S 0000000000478ce0 0 74 2 Nov 4 18:56:06 poulenc kernel: Call Trace: Nov 4 18:56:06 poulenc kernel: [000000000048f9e0] watchdog+0x48/0x80 Nov 4 18:56:06 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:06 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:06 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:06 poulenc kernel: events/0 S 0000000000478ce0 0 75 2 Nov 4 18:56:06 poulenc kernel: Call Trace: Nov 4 18:56:06 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:06 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:06 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:06 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:06 poulenc kernel: events/1 S 0000000000478ce0 0 76 2 Nov 4 18:56:06 poulenc kernel: Call Trace: Nov 4 18:56:06 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:06 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:06 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:06 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:06 poulenc kernel: events/2 S 0000000000478ce0 0 77 2 Nov 4 18:56:06 poulenc kernel: Call Trace: Nov 4 18:56:06 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:06 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:06 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:06 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:06 poulenc kernel: events/3 R running task 0 78 2 Nov 4 
18:56:06 poulenc kernel: events/4 S 0000000000478ce0 0 79 2 Nov 4 18:56:06 poulenc kernel: Call Trace: Nov 4 18:56:06 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:06 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:06 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:06 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:06 poulenc kernel: events/5 S 0000000000478ce0 0 80 2 Nov 4 18:56:06 poulenc kernel: Call Trace: Nov 4 18:56:06 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:06 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:06 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:06 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:06 poulenc kernel: events/6 S 0000000000478ce0 0 81 2 Nov 4 18:56:06 poulenc kernel: Call Trace: Nov 4 18:56:06 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:06 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:06 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:06 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:06 poulenc kernel: events/7 S 0000000000478ce0 0 82 2 Nov 4 18:56:06 poulenc kernel: Call Trace: Nov 4 18:56:06 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:06 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:06 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:06 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:07 poulenc kernel: events/8 S 0000000000478ce0 0 83 2 Nov 4 18:56:07 poulenc kernel: Call Trace: Nov 4 18:56:07 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:07 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:07 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:07 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:07 poulenc 
kernel: events/9 S 0000000000478ce0 0 84 2 Nov 4 18:56:07 poulenc kernel: Call Trace: Nov 4 18:56:07 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:07 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:07 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:07 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:07 poulenc kernel: events/10 S 0000000000478ce0 0 85 2 Nov 4 18:56:07 poulenc kernel: Call Trace: Nov 4 18:56:07 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:07 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:07 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:07 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:07 poulenc kernel: events/11 S 0000000000478ce0 0 86 2 Nov 4 18:56:07 poulenc kernel: Call Trace: Nov 4 18:56:07 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:07 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:07 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:07 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:07 poulenc kernel: events/12 S 0000000000478ce0 0 87 2 Nov 4 18:56:07 poulenc kernel: Call Trace: Nov 4 18:56:07 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:07 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:07 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:07 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:07 poulenc kernel: events/13 S 0000000000478ce0 0 88 2 Nov 4 18:56:07 poulenc kernel: Call Trace: Nov 4 18:56:07 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:07 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:07 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:07 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:07 poulenc kernel: 
events/14 S 0000000000478ce0 0 89 2 Nov 4 18:56:07 poulenc kernel: Call Trace: Nov 4 18:56:07 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:07 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:07 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:07 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:07 poulenc kernel: events/15 S 0000000000478ce0 0 90 2 Nov 4 18:56:07 poulenc kernel: Call Trace: Nov 4 18:56:07 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:07 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:07 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:07 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:07 poulenc kernel: events/16 S 0000000000478ce0 0 91 2 Nov 4 18:56:08 poulenc kernel: Call Trace: Nov 4 18:56:08 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:08 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:08 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:08 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:08 poulenc kernel: events/17 S 0000000000478ce0 0 92 2 Nov 4 18:56:08 poulenc kernel: Call Trace: Nov 4 18:56:08 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:08 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:08 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:08 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:08 poulenc kernel: events/18 S 0000000000478ce0 0 93 2 Nov 4 18:56:08 poulenc kernel: Call Trace: Nov 4 18:56:08 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:08 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:08 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:08 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:08 poulenc kernel: events/19 S 
0000000000478ce0 0 94 2 Nov 4 18:56:08 poulenc kernel: Call Trace: Nov 4 18:56:08 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:08 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:08 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:08 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:08 poulenc kernel: events/20 S 0000000000478ce0 0 95 2 Nov 4 18:56:08 poulenc kernel: Call Trace: Nov 4 18:56:08 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:08 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:08 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:08 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:08 poulenc kernel: events/21 S 0000000000478ce0 0 96 2 Nov 4 18:56:08 poulenc kernel: Call Trace: Nov 4 18:56:08 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:08 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:08 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:08 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:08 poulenc kernel: events/22 S 0000000000478ce0 0 97 2 Nov 4 18:56:08 poulenc kernel: Call Trace: Nov 4 18:56:08 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:08 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:08 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:08 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:08 poulenc kernel: events/23 S 0000000000478ce0 0 98 2 Nov 4 18:56:08 poulenc kernel: Call Trace: Nov 4 18:56:08 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:08 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:08 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:08 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:08 poulenc kernel: khelper S 0000000000478ce0 
0 99 2 Nov 4 18:56:08 poulenc kernel: Call Trace: Nov 4 18:56:08 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:08 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:08 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:08 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:09 poulenc kernel: kblockd/0 S 0000000000478ce0 0 247 2 Nov 4 18:56:09 poulenc kernel: Call Trace: Nov 4 18:56:09 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:09 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:09 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:09 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:09 poulenc kernel: kblockd/1 S 0000000000478ce0 0 248 2 Nov 4 18:56:09 poulenc kernel: Call Trace: Nov 4 18:56:09 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:09 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:09 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:09 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:09 poulenc kernel: kblockd/2 S 0000000000478ce0 0 249 2 Nov 4 18:56:09 poulenc kernel: Call Trace: Nov 4 18:56:09 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:09 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:09 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:09 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:09 poulenc kernel: kblockd/3 R running task 0 250 2 Nov 4 18:56:09 poulenc kernel: kblockd/4 S 0000000000478ce0 0 251 2 Nov 4 18:56:09 poulenc kernel: Call Trace: Nov 4 18:56:09 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:09 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:09 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:09 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 
18:56:09 poulenc kernel: kblockd/5 S 0000000000478ce0 0 252 2 Nov 4 18:56:09 poulenc kernel: Call Trace: Nov 4 18:56:09 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:09 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:09 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:09 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:09 poulenc kernel: kblockd/6 S 0000000000478ce0 0 253 2 Nov 4 18:56:09 poulenc kernel: Call Trace: Nov 4 18:56:09 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:09 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:09 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:10 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:10 poulenc kernel: kblockd/7 S 0000000000478ce0 0 254 2 Nov 4 18:56:10 poulenc kernel: Call Trace: Nov 4 18:56:10 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:10 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:10 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:10 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:10 poulenc kernel: kblockd/8 S 0000000000478ce0 0 255 2 Nov 4 18:56:10 poulenc kernel: Call Trace: Nov 4 18:56:10 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:10 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:10 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:10 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:10 poulenc kernel: kblockd/9 S 0000000000478ce0 0 256 2 Nov 4 18:56:10 poulenc kernel: Call Trace: Nov 4 18:56:10 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:10 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:10 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:10 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:10 
poulenc kernel: kblockd/10 S 0000000000478ce0 0 257 2 Nov 4 18:56:10 poulenc kernel: Call Trace: Nov 4 18:56:10 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:10 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:10 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:10 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:10 poulenc kernel: kblockd/11 S 0000000000478ce0 0 258 2 Nov 4 18:56:10 poulenc kernel: Call Trace: Nov 4 18:56:10 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:10 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:10 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:10 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:10 poulenc kernel: kblockd/12 S 0000000000478ce0 0 259 2 Nov 4 18:56:10 poulenc kernel: Call Trace: Nov 4 18:56:10 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:10 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:10 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:10 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:10 poulenc kernel: kblockd/13 S 0000000000478ce0 0 260 2 Nov 4 18:56:10 poulenc kernel: Call Trace: Nov 4 18:56:10 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:10 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:10 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:10 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:10 poulenc kernel: kblockd/14 S 0000000000478ce0 0 261 2 Nov 4 18:56:10 poulenc kernel: Call Trace: Nov 4 18:56:10 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:10 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:10 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:11 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:11 
poulenc kernel: kblockd/15 S 0000000000478ce0 0 262 2 Nov 4 18:56:11 poulenc kernel: Call Trace: Nov 4 18:56:11 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:11 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:11 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:11 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:11 poulenc kernel: kblockd/16 S 0000000000478ce0 0 263 2 Nov 4 18:56:11 poulenc kernel: Call Trace: Nov 4 18:56:11 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:11 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:11 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:11 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:11 poulenc kernel: kblockd/17 S 0000000000478ce0 0 264 2 Nov 4 18:56:11 poulenc kernel: Call Trace: Nov 4 18:56:11 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:11 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:11 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:11 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:11 poulenc kernel: kblockd/18 S 0000000000478ce0 0 265 2 Nov 4 18:56:11 poulenc kernel: Call Trace: Nov 4 18:56:11 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:11 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:11 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:11 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:11 poulenc kernel: kblockd/19 S 0000000000478ce0 0 266 2 Nov 4 18:56:11 poulenc kernel: Call Trace: Nov 4 18:56:11 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:11 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:11 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:11 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:11 
poulenc kernel: kblockd/20 S 0000000000478ce0 0 267 2 Nov 4 18:56:11 poulenc kernel: Call Trace: Nov 4 18:56:11 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:11 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:11 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:11 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:11 poulenc kernel: kblockd/21 S 0000000000478ce0 0 268 2 Nov 4 18:56:11 poulenc kernel: Call Trace: Nov 4 18:56:11 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:12 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:12 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:12 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:12 poulenc kernel: kblockd/22 S 0000000000478ce0 0 269 2 Nov 4 18:56:12 poulenc kernel: Call Trace: Nov 4 18:56:12 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:12 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:12 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:12 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:12 poulenc kernel: kblockd/23 S 0000000000478ce0 0 270 2 Nov 4 18:56:12 poulenc kernel: Call Trace: Nov 4 18:56:12 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:12 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:12 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:12 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:12 poulenc kernel: pdflush S 0000000000478ce0 0 294 2 Nov 4 18:56:12 poulenc kernel: Call Trace: Nov 4 18:56:12 poulenc kernel: [000000000049a420] pdflush+0xc8/0x200 Nov 4 18:56:12 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:12 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:12 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:12 poulenc 
kernel: pdflush S 0000000000478ce0 0 295 2 Nov 4 18:56:12 poulenc kernel: Call Trace: Nov 4 18:56:12 poulenc kernel: [000000000049a420] pdflush+0xc8/0x200 Nov 4 18:56:12 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:12 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:12 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:12 poulenc kernel: kswapd0 S 0000000000478ce0 0 296 2 Nov 4 18:56:12 poulenc kernel: Call Trace: Nov 4 18:56:12 poulenc kernel: [000000000049e778] kswapd+0x540/0x560 Nov 4 18:56:12 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:12 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:12 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:12 poulenc kernel: aio/0 S 0000000000478ce0 0 297 2 Nov 4 18:56:12 poulenc kernel: Call Trace: Nov 4 18:56:12 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:12 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:12 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:12 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:12 poulenc kernel: aio/1 S 0000000000478ce0 0 298 2 Nov 4 18:56:12 poulenc kernel: Call Trace: Nov 4 18:56:12 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:13 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:13 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:13 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:13 poulenc kernel: aio/2 S 0000000000478ce0 0 299 2 Nov 4 18:56:13 poulenc kernel: Call Trace: Nov 4 18:56:13 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:13 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:13 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:13 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:13 poulenc kernel: aio/3 S 0000000000478ce0 0 
300 2 Nov 4 18:56:13 poulenc kernel: Call Trace: Nov 4 18:56:13 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:13 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:13 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:13 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:13 poulenc kernel: aio/4 S 0000000000478ce0 0 301 2 Nov 4 18:56:13 poulenc kernel: Call Trace: Nov 4 18:56:13 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:13 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:13 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:13 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:13 poulenc kernel: aio/5 S 0000000000478ce0 0 302 2 Nov 4 18:56:13 poulenc kernel: Call Trace: Nov 4 18:56:13 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:13 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:13 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:13 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:13 poulenc kernel: aio/6 S 0000000000478ce0 0 303 2 Nov 4 18:56:13 poulenc kernel: Call Trace: Nov 4 18:56:13 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:13 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:13 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:13 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:13 poulenc kernel: aio/7 S 0000000000478ce0 0 304 2 Nov 4 18:56:13 poulenc kernel: Call Trace: Nov 4 18:56:13 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:13 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:13 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:13 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:14 poulenc kernel: aio/8 S 0000000000478ce0 0 305 2 Nov 4 18:56:14 poulenc 
kernel: Call Trace:
Nov 4 18:56:14 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:14 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:14 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:14 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:14 poulenc kernel: aio/9 S 0000000000478ce0 0 306 2
Nov 4 18:56:14 poulenc kernel: Call Trace:
Nov 4 18:56:14 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:14 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:14 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:14 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:14 poulenc kernel: aio/10 S 0000000000478ce0 0 307 2
Nov 4 18:56:14 poulenc kernel: Call Trace:
Nov 4 18:56:14 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:14 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:14 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:14 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:14 poulenc kernel: aio/11 S 0000000000478ce0 0 308 2
Nov 4 18:56:14 poulenc kernel: Call Trace:
Nov 4 18:56:14 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:14 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:14 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:14 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:14 poulenc kernel: aio/12 S 0000000000478ce0 0 309 2
Nov 4 18:56:14 poulenc kernel: Call Trace:
Nov 4 18:56:15 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:15 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:15 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:15 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:15 poulenc kernel: aio/13 S 0000000000478ce0 0 310 2
Nov 4 18:56:15 poulenc kernel: Call Trace:
Nov 4 18:56:15 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:15 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:15 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:15 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:15 poulenc kernel: aio/14 S 0000000000478ce0 0 311 2
Nov 4 18:56:15 poulenc kernel: Call Trace:
Nov 4 18:56:15 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:15 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:15 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:15 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:15 poulenc kernel: aio/15 S 0000000000478ce0 0 312 2
Nov 4 18:56:15 poulenc kernel: Call Trace:
Nov 4 18:56:15 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:15 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:15 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:15 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:15 poulenc kernel: aio/16 S 0000000000478ce0 0 313 2
Nov 4 18:56:15 poulenc kernel: Call Trace:
Nov 4 18:56:15 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:15 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:15 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:15 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:15 poulenc kernel: aio/17 S 0000000000478ce0 0 314 2
Nov 4 18:56:15 poulenc kernel: Call Trace:
Nov 4 18:56:15 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:15 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:15 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:15 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:15 poulenc kernel: aio/18 S 0000000000478ce0 0 315 2
Nov 4 18:56:15 poulenc kernel: Call Trace:
Nov 4 18:56:15 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:15 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:15 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:16 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:16 poulenc kernel: aio/19 S 0000000000478ce0 0 316 2
Nov 4 18:56:16 poulenc kernel: Call Trace:
Nov 4 18:56:16 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:16 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:16 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:16 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:16 poulenc kernel: aio/20 S 0000000000478ce0 0 317 2
Nov 4 18:56:16 poulenc kernel: Call Trace:
Nov 4 18:56:16 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:16 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:16 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:16 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:16 poulenc kernel: aio/21 S 0000000000478ce0 0 318 2
Nov 4 18:56:16 poulenc kernel: Call Trace:
Nov 4 18:56:16 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:16 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:16 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:16 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:16 poulenc kernel: aio/22 S 0000000000478ce0 0 319 2
Nov 4 18:56:16 poulenc kernel: Call Trace:
Nov 4 18:56:16 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:16 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:16 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:16 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:16 poulenc kernel: aio/23 S 0000000000478ce0 0 320 2
Nov 4 18:56:16 poulenc kernel: Call Trace:
Nov 4 18:56:16 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:16 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:16 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:16 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:16 poulenc kernel: scsi_tgtd/0 S 0000000000478ce0 0 911 2
Nov 4 18:56:16 poulenc kernel: Call Trace:
Nov 4 18:56:16 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:16 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:16 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:17 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:17 poulenc kernel: scsi_tgtd/1 S 0000000000478ce0 0 912 2
Nov 4 18:56:17 poulenc kernel: Call Trace:
Nov 4 18:56:17 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:17 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:17 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:17 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:17 poulenc kernel: scsi_tgtd/2 S 0000000000478ce0 0 913 2
Nov 4 18:56:17 poulenc kernel: Call Trace:
Nov 4 18:56:17 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:17 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:17 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:17 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:17 poulenc kernel: scsi_tgtd/3 S 0000000000478ce0 0 914 2
Nov 4 18:56:17 poulenc kernel: Call Trace:
Nov 4 18:56:17 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:17 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:17 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:17 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:17 poulenc kernel: scsi_tgtd/4 S 0000000000478ce0 0 915 2
Nov 4 18:56:17 poulenc kernel: Call Trace:
Nov 4 18:56:17 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:17 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:17 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:17 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:17 poulenc kernel: scsi_tgtd/5 S 0000000000478ce0 0 916 2
Nov 4 18:56:17 poulenc kernel: Call Trace:
Nov 4 18:56:17 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:17 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:17 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:17 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:17 poulenc kernel: scsi_tgtd/6 S 0000000000478ce0 0 917 2
Nov 4 18:56:17 poulenc kernel: Call Trace:
Nov 4 18:56:17 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:17 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:17 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:17 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:17 poulenc kernel: scsi_tgtd/7 S 0000000000478ce0 0 918 2
Nov 4 18:56:17 poulenc kernel: Call Trace:
Nov 4 18:56:17 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:17 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:17 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:17 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:17 poulenc kernel: scsi_tgtd/8 S 0000000000478ce0 0 919 2
Nov 4 18:56:17 poulenc kernel: Call Trace:
Nov 4 18:56:17 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:17 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:17 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:17 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:18 poulenc kernel: scsi_tgtd/9 S 0000000000478ce0 0 920 2
Nov 4 18:56:18 poulenc kernel: Call Trace:
Nov 4 18:56:18 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:18 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:18 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:18 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:18 poulenc kernel: scsi_tgtd/10 S 0000000000478ce0 0 921 2
Nov 4 18:56:18 poulenc kernel: Call Trace:
Nov 4 18:56:18 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:18 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:18 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:18 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:18 poulenc kernel: scsi_tgtd/11 S 0000000000478ce0 0 922 2
Nov 4 18:56:18 poulenc kernel: Call Trace:
Nov 4 18:56:18 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:18 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:18 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:18 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:18 poulenc kernel: scsi_tgtd/12 S 0000000000478ce0 0 923 2
Nov 4 18:56:18 poulenc kernel: Call Trace:
Nov 4 18:56:18 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:18 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:18 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:18 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:18 poulenc kernel: scsi_tgtd/13 S 0000000000478ce0 0 924 2
Nov 4 18:56:18 poulenc kernel: Call Trace:
Nov 4 18:56:18 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:18 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:18 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:18 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:18 poulenc kernel: scsi_tgtd/14 S 0000000000478ce0 0 925 2
Nov 4 18:56:18 poulenc kernel: Call Trace:
Nov 4 18:56:18 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:18 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:18 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:18 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:18 poulenc kernel: scsi_tgtd/15 S 0000000000478ce0 0 926 2
Nov 4 18:56:18 poulenc kernel: Call Trace:
Nov 4 18:56:18 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:18 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:18 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:18 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:18 poulenc kernel: scsi_tgtd/16 S 0000000000478ce0 0 927 2
Nov 4 18:56:19 poulenc kernel: Call Trace:
Nov 4 18:56:19 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:19 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:19 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:19 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:19 poulenc kernel: scsi_tgtd/17 S 0000000000478ce0 0 928 2
Nov 4 18:56:19 poulenc kernel: Call Trace:
Nov 4 18:56:19 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:19 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:19 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:19 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:19 poulenc kernel: scsi_tgtd/18 S 0000000000478ce0 0 929 2
Nov 4 18:56:19 poulenc kernel: Call Trace:
Nov 4 18:56:19 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:19 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:19 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:19 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:19 poulenc kernel: scsi_tgtd/19 S 0000000000478ce0 0 930 2
Nov 4 18:56:19 poulenc kernel: Call Trace:
Nov 4 18:56:19 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:19 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:19 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:19 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:19 poulenc kernel: scsi_tgtd/20 S 0000000000478ce0 0 931 2
Nov 4 18:56:19 poulenc kernel: Call Trace:
Nov 4 18:56:19 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:19 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:19 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:19 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:19 poulenc kernel: scsi_tgtd/21 S 0000000000478ce0 0 932 2
Nov 4 18:56:19 poulenc kernel: Call Trace:
Nov 4 18:56:19 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:19 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:19 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:19 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:19 poulenc kernel: scsi_tgtd/22 S 0000000000478ce0 0 933 2
Nov 4 18:56:19 poulenc kernel: Call Trace:
Nov 4 18:56:20 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:20 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:20 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:20 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:20 poulenc kernel: scsi_tgtd/23 S 0000000000478ce0 0 934 2
Nov 4 18:56:20 poulenc kernel: Call Trace:
Nov 4 18:56:20 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:20 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:20 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:20 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:20 poulenc kernel: scsi_eh_0 S 0000000000478ce0 0 947 2
Nov 4 18:56:20 poulenc kernel: Call Trace:
Nov 4 18:56:20 poulenc kernel: [00000000005ae0a0] scsi_error_handler+0x48/0x5a0
Nov 4 18:56:20 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:20 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:20 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:20 poulenc kernel: md0_raid1 S 00000000005f2ee8 0 991 2
Nov 4 18:56:20 poulenc kernel: Call Trace:
Nov 4 18:56:20 poulenc kernel: [000000000067ec30] schedule_timeout+0x78/0xc0
Nov 4 18:56:20 poulenc kernel: [00000000005f2ee8] md_thread+0xf0/0x140
Nov 4 18:56:20 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:20 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:20 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:20 poulenc kernel: kjournald S 0000000000478ce0 0 993 2
Nov 4 18:56:20 poulenc kernel: Call Trace:
Nov 4 18:56:20 poulenc kernel: [000000000052da18] kjournald+0x1c0/0x1e0
Nov 4 18:56:20 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:20 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:20 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:20 poulenc kernel: udevd S 00000000004c7d68 0 1091 1
Nov 4 18:56:20 poulenc kernel: Call Trace:
Nov 4 18:56:20 poulenc kernel: [000000000067ec30] schedule_timeout+0x78/0xc0
Nov 4 18:56:20 poulenc kernel: [00000000004c7d68] do_select+0x3d0/0x420
Nov 4 18:56:20 poulenc kernel: [00000000004eccb8] compat_core_sys_select+0x160/0x200
Nov 4 18:56:20 poulenc kernel: [00000000004eeee4] compat_sys_select+0x2c/0x1a0
Nov 4 18:56:20 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40
Nov 4 18:56:20 poulenc kernel: [0000000000013590] 0x13598
Nov 4 18:56:20 poulenc kernel: scsi_eh_1 S 0000000000478ce0 0 1985 2
Nov 4 18:56:20 poulenc kernel: Call Trace:
Nov 4 18:56:20 poulenc kernel: [00000000005ae0a0] scsi_error_handler+0x48/0x5a0
Nov 4 18:56:20 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:20 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:20 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:20 poulenc kernel: scsi_eh_2 S 0000000000478ce0 0 2093 2
Nov 4 18:56:20 poulenc kernel: Call Trace:
Nov 4 18:56:21 poulenc kernel: [00000000005ae0a0] scsi_error_handler+0x48/0x5a0
Nov 4 18:56:21 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:21 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:21 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:21 poulenc kernel: ksnapd S 0000000000478ce0 0 2718 2
Nov 4 18:56:21 poulenc kernel: Call Trace:
Nov 4 18:56:21 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:21 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:21 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:21 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:21 poulenc kernel: md6_raid1 S 00000000005f2ee8 0 2731 2
Nov 4 18:56:21 poulenc kernel: Call Trace:
Nov 4 18:56:21 poulenc kernel: [000000000067ec30] schedule_timeout+0x78/0xc0
Nov 4 18:56:21 poulenc kernel: [00000000005f2ee8] md_thread+0xf0/0x140
Nov 4 18:56:21 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:21 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:21 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:21 poulenc kernel: md1_raid1 S 00000000005f2ee8 0 2748 2
Nov 4 18:56:21 poulenc kernel: Call Trace:
Nov 4 18:56:21 poulenc kernel: [000000000067ec30] schedule_timeout+0x78/0xc0
Nov 4 18:56:21 poulenc kernel: [00000000005f2ee8] md_thread+0xf0/0x140
Nov 4 18:56:21 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:21 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:21 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:21 poulenc kernel: md2_raid1 S 00000000005f2ee8 0 2753 2
Nov 4 18:56:21 poulenc kernel: Call Trace:
Nov 4 18:56:21 poulenc kernel: [000000000067ec30] schedule_timeout+0x78/0xc0
Nov 4 18:56:21 poulenc kernel: [00000000005f2ee8] md_thread+0xf0/0x140
Nov 4 18:56:21 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:21 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:21 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:21 poulenc kernel: md3_raid1 S 00000000005f2ee8 0 2758 2
Nov 4 18:56:21 poulenc kernel: Call Trace:
Nov 4 18:56:21 poulenc kernel: [000000000067ec30] schedule_timeout+0x78/0xc0
Nov 4 18:56:21 poulenc kernel: [00000000005f2ee8] md_thread+0xf0/0x140
Nov 4 18:56:21 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:21 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:21 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:22 poulenc kernel: md4_raid1 S 00000000005f2ee8 0 2763 2
Nov 4 18:56:22 poulenc kernel: Call Trace:
Nov 4 18:56:22 poulenc kernel: [000000000067ec30] schedule_timeout+0x78/0xc0
Nov 4 18:56:22 poulenc kernel: [00000000005f2ee8] md_thread+0xf0/0x140
Nov 4 18:56:22 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:22 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:22 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:22 poulenc kernel: md5_raid1 S 00000000005f2ee8 0 2768 2
Nov 4 18:56:22 poulenc kernel: Call Trace:
Nov 4 18:56:22 poulenc kernel: [000000000067ec30] schedule_timeout+0x78/0xc0
Nov 4 18:56:22 poulenc kernel: [00000000005f2ee8] md_thread+0xf0/0x140
Nov 4 18:56:22 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:22 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:22 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:22 poulenc kernel: kjournald S 0000000000478ce0 0 2857 2
Nov 4 18:56:22 poulenc kernel: Call Trace:
Nov 4 18:56:22 poulenc kernel: [000000000052da18] kjournald+0x1c0/0x1e0
Nov 4 18:56:22 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:22 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:22 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:22 poulenc kernel: kjournald S 0000000000478ce0 0 2870 2
Nov 4 18:56:22 poulenc kernel: Call Trace:
Nov 4 18:56:22 poulenc kernel: [000000000052da18] kjournald+0x1c0/0x1e0
Nov 4 18:56:22 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:22 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:22 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:22 poulenc kernel: kjournald S 0000000000478ce0 0 2871 2
Nov 4 18:56:22 poulenc kernel: Call Trace:
Nov 4 18:56:22 poulenc kernel: [000000000052da18] kjournald+0x1c0/0x1e0
Nov 4 18:56:22 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:22 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:22 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:22 poulenc kernel: kjournald S 0000000000478ce0 0 2872 2
Nov 4 18:56:22 poulenc kernel: Call Trace:
Nov 4 18:56:22 poulenc kernel: [000000000052da18] kjournald+0x1c0/0x1e0
Nov 4 18:56:22 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:22 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:22 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:23 poulenc kernel: kjournald S 0000000000478ce0 0 2873 2
Nov 4 18:56:23 poulenc kernel: Call Trace:
Nov 4 18:56:23 poulenc kernel: [000000000052da18] kjournald+0x1c0/0x1e0
Nov 4 18:56:23 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:23 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:23 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:23 poulenc kernel: portmap S 00000000004c776c 0 2994 1
Nov 4 18:56:23 poulenc kernel: Call Trace:
Nov 4 18:56:23 poulenc kernel: [000000000067ec0c] schedule_timeout+0x54/0xc0
Nov 4 18:56:23 poulenc kernel: [00000000004c776c] do_sys_poll+0x234/0x400
Nov 4 18:56:23 poulenc kernel: [00000000004c7960] sys_poll+0x28/0x60
Nov 4 18:56:23 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40
Nov 4 18:56:23 poulenc kernel: [00000000700025b8] 0x700025c0
Nov 4 18:56:23 poulenc kernel: rpc.statd S 00000000004c7d68 0 3006 1
Nov 4 18:56:23 poulenc kernel: Call Trace:
Nov 4 18:56:23 poulenc kernel: [000000000067ec30] schedule_timeout+0x78/0xc0
Nov 4 18:56:23 poulenc kernel: [00000000004c7d68] do_select+0x3d0/0x420
Nov 4 18:56:23 poulenc kernel: [00000000004eccb8] compat_core_sys_select+0x160/0x200
Nov 4 18:56:23 poulenc kernel: [00000000004eeee4] compat_sys_select+0x2c/0x1a0
Nov 4 18:56:23 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40
Nov 4 18:56:23 poulenc kernel: [00000000000143f4] 0x143fc
Nov 4 18:56:23 poulenc kernel: rpciod/0 S 0000000000478ce0 0 3035 2
Nov 4 18:56:23 poulenc kernel: Call Trace:
Nov 4 18:56:23 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:23 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:23 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:23 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:23 poulenc kernel: rpciod/1 S 0000000000478ce0 0 3036 2
Nov 4 18:56:23 poulenc kernel: Call Trace:
Nov 4 18:56:23 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:23 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:23 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:23 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:23 poulenc kernel: rpciod/2 S 0000000000478ce0 0 3037 2
Nov 4 18:56:23 poulenc kernel: Call Trace:
Nov 4 18:56:23 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:23 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:23 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:23 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:23 poulenc kernel: rpciod/3 S 0000000000478ce0 0 3038 2
Nov 4 18:56:23 poulenc kernel: Call Trace:
Nov 4 18:56:23 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:23 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:23 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:23 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:24 poulenc kernel: rpciod/4 S 0000000000478ce0 0 3039 2
Nov 4 18:56:24 poulenc kernel: Call Trace:
Nov 4 18:56:24 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:24 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:24 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:24 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:24 poulenc kernel: rpciod/5 S 0000000000478ce0 0 3040 2
Nov 4 18:56:24 poulenc kernel: Call Trace:
Nov 4 18:56:24 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:24 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:24 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:24 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:24 poulenc kernel: rpciod/6 S 0000000000478ce0 0 3041 2
Nov 4 18:56:24 poulenc kernel: Call Trace:
Nov 4 18:56:24 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:24 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:24 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:24 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:24 poulenc kernel: rpciod/7 S 0000000000478ce0 0 3042 2
Nov 4 18:56:24 poulenc kernel: Call Trace:
Nov 4 18:56:24 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:24 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:24 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:24 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:24 poulenc kernel: rpciod/8 S 0000000000478ce0 0 3043 2
Nov 4 18:56:24 poulenc kernel: Call Trace:
Nov 4 18:56:24 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:24 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:24 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:24 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:24 poulenc kernel: rpciod/9 S 0000000000478ce0 0 3044 2
Nov 4 18:56:24 poulenc kernel: Call Trace:
Nov 4 18:56:24 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:24 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:24 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:24 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:24 poulenc kernel: rpciod/10 S 0000000000478ce0 0 3045 2
Nov 4 18:56:24 poulenc kernel: Call Trace:
Nov 4 18:56:24 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:24 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:24 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:24 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:24 poulenc kernel: rpciod/11 S 0000000000478ce0 0 3046 2
Nov 4 18:56:24 poulenc kernel: Call Trace:
Nov 4 18:56:24 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:24 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:24 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:24 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:24 poulenc kernel: rpciod/12 S 0000000000478ce0 0 3047 2
Nov 4 18:56:24 poulenc kernel: Call Trace:
Nov 4 18:56:25 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:25 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:25 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:25 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:25 poulenc kernel: rpciod/13 S 0000000000478ce0 0 3048 2
Nov 4 18:56:25 poulenc kernel: Call Trace:
Nov 4 18:56:25 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:25 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:25 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:25 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:25 poulenc kernel: rpciod/14 S 0000000000478ce0 0 3049 2
Nov 4 18:56:25 poulenc kernel: Call Trace:
Nov 4 18:56:25 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:25 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:25 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:25 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:25 poulenc kernel: rpciod/15 S 0000000000478ce0 0 3050 2
Nov 4 18:56:25 poulenc kernel: Call Trace:
Nov 4 18:56:25 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:25 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:25 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:25 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:25 poulenc kernel: rpciod/16 S 0000000000478ce0 0 3051 2
Nov 4 18:56:25 poulenc kernel: Call Trace:
Nov 4 18:56:25 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:25 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:25 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:25 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:25 poulenc kernel: rpciod/17 S 0000000000478ce0 0 3052 2
Nov 4 18:56:25 poulenc kernel: Call Trace:
Nov 4 18:56:25 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:25 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:25 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:25 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:25 poulenc kernel: rpciod/18 S 0000000000478ce0 0 3053 2
Nov 4 18:56:25 poulenc kernel: Call Trace:
Nov 4 18:56:25 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:25 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:25 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:25 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:25 poulenc kernel: rpciod/19 S 0000000000478ce0 0 3054 2
Nov 4 18:56:25 poulenc kernel: Call Trace:
Nov 4 18:56:25 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:25 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:25 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:25 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:26 poulenc kernel: rpciod/20 S 0000000000478ce0 0 3055 2
Nov 4 18:56:26 poulenc kernel: Call Trace:
Nov 4 18:56:26 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:26 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:26 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:26 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:26 poulenc kernel: rpciod/21 S 0000000000478ce0 0 3056 2
Nov 4 18:56:26 poulenc kernel: Call Trace:
Nov 4 18:56:26 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:26 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:26 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:26 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:26 poulenc kernel: rpciod/22 S 0000000000478ce0 0 3057 2
Nov 4 18:56:26 poulenc kernel: Call Trace:
Nov 4 18:56:26 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:26 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:26 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:26 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:26 poulenc kernel: rpciod/23 S 0000000000478ce0 0 3058 2
Nov 4 18:56:26 poulenc kernel: Call Trace:
Nov 4 18:56:26 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0
Nov 4 18:56:26 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:26 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:26 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:26 poulenc kernel: rpc.idmapd S 00000000004ea8fc 0 3139 1
Nov 4 18:56:26 poulenc kernel: Call Trace:
Nov 4 18:56:26 poulenc kernel: [000000000067ec30] schedule_timeout+0x78/0xc0
Nov 4 18:56:26 poulenc kernel: [00000000004ea8fc] sys_epoll_wait+0x144/0x480
Nov 4 18:56:26 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40
Nov 4 18:56:26 poulenc kernel: [00000000f7f2515c] 0xf7f25164
Nov 4 18:56:26 poulenc kernel: syslogd S 00000000004c7d68 0 3244 1
Nov 4 18:56:26 poulenc kernel: Call Trace:
Nov 4 18:56:26 poulenc kernel: [000000000067ec30] schedule_timeout+0x78/0xc0
Nov 4 18:56:26 poulenc kernel: [00000000004c7d68] do_select+0x3d0/0x420
Nov 4 18:56:26 poulenc kernel: [00000000004eccb8] compat_core_sys_select+0x160/0x200
Nov 4 18:56:26 poulenc kernel: [00000000004eeee4] compat_sys_select+0x2c/0x1a0
Nov 4 18:56:26 poulenc kernel: [000000000002a32c] 0x2a334
Nov 4 18:56:26 poulenc kernel: [0000000000014910] 0x14918
Nov 4 18:56:26 poulenc kernel: klogd R running task 0 3254 1
Nov 4 18:56:26 poulenc kernel: named S 00000000004061d4 0 3270 1
Nov 4 18:56:26 poulenc kernel: Call Trace:
Nov 4 18:56:26 poulenc kernel: [000000000048b6d0] compat_sys_rt_sigsuspend+0x98/0xe0
Nov 4 18:56:26 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40
Nov 4 18:56:26 poulenc kernel: [00000000f79bb0b0] 0xf79bb0b8
Nov 4 18:56:26 poulenc kernel: named S 00000000004833bc 0 3271 1
Nov 4 18:56:26 poulenc kernel: Call Trace:
Nov 4 18:56:26 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0
Nov 4 18:56:27 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0
Nov 4 18:56:27 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120
Nov 4 18:56:27 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40
Nov 4 18:56:27 poulenc kernel: [00000000f7afb2e4] 0xf7afb2ec
Nov 4 18:56:27 poulenc kernel: named S 00000000004833bc 0 3272 1
Nov 4 18:56:27 poulenc kernel: Call Trace:
Nov 4 18:56:27 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0
Nov 4 18:56:27 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0
Nov 4 18:56:27 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120
Nov 4 18:56:27 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40
Nov 4 18:56:27 poulenc kernel: [00000000f7afb2e4] 0xf7afb2ec
Nov 4 18:56:27 poulenc kernel: named S 00000000004833bc 0 3273 1
Nov 4 18:56:27 poulenc kernel: Call Trace:
Nov 4 18:56:27 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0
Nov 4 18:56:27 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0
Nov 4 18:56:27 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120
Nov 4 18:56:27 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40
Nov 4 18:56:27 poulenc kernel: [00000000f7afb2e4] 0xf7afb2ec
Nov 4 18:56:27 poulenc kernel: named S 00000000004833bc 0 3274 1
Nov 4 18:56:27 poulenc kernel: Call Trace:
Nov 4 18:56:27 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0
Nov 4 18:56:27 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0
Nov 4 18:56:27 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120
Nov 4 18:56:27 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40
Nov 4 18:56:27 poulenc kernel: [00000000f7afb2e4] 0xf7afb2ec
Nov 4 18:56:27 poulenc kernel: named S 00000000004833bc 0 3275 1
Nov 4 18:56:27 poulenc kernel: Call Trace:
Nov 4 18:56:27 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0
Nov 4 18:56:27 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0
Nov 4 18:56:27 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120
Nov 4 18:56:27 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40
Nov 4 18:56:27 poulenc kernel: [00000000f7afb2e4] 0xf7afb2ec
Nov 4 18:56:27 poulenc kernel: named S 00000000004833bc 0 3276 1
Nov 4 18:56:27 poulenc kernel: Call Trace:
Nov 4 18:56:27 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0
Nov 4 18:56:27 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0
Nov 4 18:56:27 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120
Nov 4 18:56:27 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40
Nov 4 18:56:27 poulenc kernel: [00000000f7afb2e4] 0xf7afb2ec
Nov 4 18:56:27 poulenc kernel: named S 00000000004833bc 0 3277 1
Nov 4 18:56:27 poulenc kernel: Call Trace:
Nov 4 18:56:27 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0
Nov 4 18:56:28 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0
Nov 4 18:56:28 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120
Nov 4 18:56:28 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40
Nov 4 18:56:28 poulenc kernel: [00000000f7afb2e4] 0xf7afb2ec
Nov 4 18:56:28 poulenc kernel: named S 00000000004833bc 0 3278 1
Nov 4 18:56:28 poulenc kernel: Call Trace:
Nov 4 18:56:28 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0
Nov 4 18:56:28 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0
Nov 4 18:56:28 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120
Nov 4 18:56:28 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40
Nov 4 18:56:28 poulenc kernel: [00000000f7afb2e4] 0xf7afb2ec
Nov 4 18:56:28 poulenc kernel: named S 00000000004833bc 0 3279 1
Nov 4 18:56:28 poulenc kernel: Call Trace:
Nov 4 18:56:28 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0
Nov 4 18:56:28 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0
Nov 4 18:56:28 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120 Nov 4 18:56:28 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:28 poulenc kernel: [00000000f7afb2e4] 0xf7afb2ec Nov 4 18:56:28 poulenc kernel: named S 00000000004833bc 0 3280 1 Nov 4 18:56:28 poulenc kernel: Call Trace: Nov 4 18:56:28 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0 Nov 4 18:56:28 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0 Nov 4 18:56:28 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120 Nov 4 18:56:28 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:28 poulenc kernel: [00000000f7afb2e4] 0xf7afb2ec Nov 4 18:56:28 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:28 poulenc kernel: [00000000f7afb2e4] 0xf7afb2ec Nov 4 18:56:28 poulenc kernel: named S 00000000004833bc 0 3281 1 Nov 4 18:56:28 poulenc kernel: Call Trace: Nov 4 18:56:28 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0 Nov 4 18:56:28 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0 Nov 4 18:56:28 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120 Nov 4 18:56:28 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:28 poulenc kernel: [00000000f7afb2e4] 0xf7afb2ec Nov 4 18:56:28 poulenc kernel: named S 00000000004833bc 0 3282 1 Nov 4 18:56:28 poulenc kernel: Call Trace: Nov 4 18:56:28 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0 Nov 4 18:56:28 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0 Nov 4 18:56:28 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120 Nov 4 18:56:29 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:29 poulenc kernel: [00000000f7afb2e4] 0xf7afb2ec Nov 4 18:56:29 poulenc kernel: named S 00000000004833bc 0 3283 1 Nov 4 18:56:29 poulenc kernel: Call Trace: Nov 4 18:56:29 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0 Nov 4 18:56:29 poulenc kernel: 
[00000000004833bc] do_futex+0x64/0xbc0 Nov 4 18:56:29 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120 Nov 4 18:56:29 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:29 poulenc kernel: [00000000f7afb2e4] 0xf7afb2ec Nov 4 18:56:29 poulenc kernel: named S 00000000004833bc 0 3284 1 Nov 4 18:56:29 poulenc kernel: Call Trace: Nov 4 18:56:29 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0 Nov 4 18:56:29 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0 Nov 4 18:56:29 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120 Nov 4 18:56:29 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:29 poulenc kernel: [00000000f7afb2e4] 0xf7afb2ec Nov 4 18:56:29 poulenc kernel: named S 00000000004833bc 0 3285 1 Nov 4 18:56:29 poulenc kernel: Call Trace: Nov 4 18:56:29 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0 Nov 4 18:56:29 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0 Nov 4 18:56:29 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120 Nov 4 18:56:29 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:29 poulenc kernel: [00000000f7afb2e4] 0xf7afb2ec Nov 4 18:56:29 poulenc kernel: named S 00000000004833bc 0 3286 1 Nov 4 18:56:29 poulenc kernel: Call Trace: Nov 4 18:56:29 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0 Nov 4 18:56:29 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0 Nov 4 18:56:29 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120 Nov 4 18:56:29 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:29 poulenc kernel: [00000000f7afb2e4] 0xf7afb2ec Nov 4 18:56:29 poulenc kernel: [00000000f7afb2e4] 0xf7afb2ec Nov 4 18:56:29 poulenc kernel: named S 00000000004833bc 0 3287 1 Nov 4 18:56:29 poulenc kernel: Call Trace: Nov 4 18:56:29 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0 Nov 4 18:56:29 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0 Nov 4 
18:56:29 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120 Nov 4 18:56:29 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:29 poulenc kernel: [00000000f7afb2e4] 0xf7afb2ec Nov 4 18:56:29 poulenc kernel: named S 00000000004833bc 0 3288 1 Nov 4 18:56:29 poulenc kernel: Call Trace: Nov 4 18:56:29 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0 Nov 4 18:56:29 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0 Nov 4 18:56:29 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120 Nov 4 18:56:29 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:30 poulenc kernel: [00000000f7afb2e4] 0xf7afb2ec Nov 4 18:56:30 poulenc kernel: named S 00000000004833bc 0 3289 1 Nov 4 18:56:30 poulenc kernel: Call Trace: Nov 4 18:56:30 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0 Nov 4 18:56:30 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0 Nov 4 18:56:30 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120 Nov 4 18:56:30 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:30 poulenc kernel: [00000000f7afb2e4] 0xf7afb2ec Nov 4 18:56:30 poulenc kernel: named S 00000000004833bc 0 3290 1 Nov 4 18:56:30 poulenc kernel: Call Trace: Nov 4 18:56:30 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0 Nov 4 18:56:30 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0 Nov 4 18:56:30 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120 Nov 4 18:56:30 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:30 poulenc kernel: [00000000f7afb2e4] 0xf7afb2ec Nov 4 18:56:30 poulenc kernel: named S 00000000004833bc 0 3291 1 Nov 4 18:56:30 poulenc kernel: Call Trace: Nov 4 18:56:30 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0 Nov 4 18:56:30 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0 Nov 4 18:56:30 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120 Nov 4 18:56:30 poulenc kernel: 
[00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:30 poulenc kernel: [00000000f7afb2e4] 0xf7afb2ec Nov 4 18:56:30 poulenc kernel: named S 00000000004833bc 0 3292 1 Nov 4 18:56:30 poulenc kernel: Call Trace: Nov 4 18:56:30 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0 Nov 4 18:56:30 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0 Nov 4 18:56:30 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120 Nov 4 18:56:30 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:30 poulenc kernel: [00000000f7afb2e4] 0xf7afb2ec Nov 4 18:56:30 poulenc kernel: [00000000f7afb2e4] 0xf7afb2ec Nov 4 18:56:30 poulenc kernel: named S 00000000004833bc 0 3293 1 Nov 4 18:56:30 poulenc kernel: Call Trace: Nov 4 18:56:30 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0 Nov 4 18:56:30 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0 Nov 4 18:56:30 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120 Nov 4 18:56:30 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:30 poulenc kernel: [00000000f7afb2e4] 0xf7afb2ec Nov 4 18:56:30 poulenc kernel: named S 00000000004833bc 0 3294 1 Nov 4 18:56:30 poulenc kernel: Call Trace: Nov 4 18:56:30 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0 Nov 4 18:56:30 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0 Nov 4 18:56:31 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120 Nov 4 18:56:31 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:31 poulenc kernel: [00000000f7afb2e4] 0xf7afb2ec Nov 4 18:56:31 poulenc kernel: named S 00000000004833bc 0 3295 1 Nov 4 18:56:31 poulenc kernel: Call Trace: Nov 4 18:56:31 poulenc kernel: [0000000000482e1c] futex_wait+0x1c4/0x2c0 Nov 4 18:56:31 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0 Nov 4 18:56:31 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120 Nov 4 18:56:31 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 
Nov 4 18:56:31 poulenc kernel: [00000000f7afb638] 0xf7afb640 Nov 4 18:56:31 poulenc kernel: named S 00000000004c7d68 0 3296 1 Nov 4 18:56:31 poulenc kernel: Call Trace: Nov 4 18:56:31 poulenc kernel: [000000000067ec30] schedule_timeout+0x78/0xc0 Nov 4 18:56:31 poulenc kernel: [00000000004c7d68] do_select+0x3d0/0x420 Nov 4 18:56:31 poulenc kernel: [00000000004eccb8] compat_core_sys_select+0x160/0x200 Nov 4 18:56:31 poulenc kernel: [00000000004eeee4] compat_sys_select+0x2c/0x1a0 Nov 4 18:56:31 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:31 poulenc kernel: [00000000f7a60070] 0xf7a60078 Nov 4 18:56:31 poulenc kernel: rpc.bootparam S 00000000004c776c 0 3518 1 Nov 4 18:56:31 poulenc kernel: Call Trace: Nov 4 18:56:31 poulenc kernel: [000000000067ec0c] schedule_timeout+0x54/0xc0 Nov 4 18:56:31 poulenc kernel: [00000000004c776c] do_sys_poll+0x234/0x400 Nov 4 18:56:31 poulenc kernel: [00000000004c7960] sys_poll+0x28/0x60 Nov 4 18:56:31 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:31 poulenc kernel: [0000000000011bbc] 0x11bc4 Nov 4 18:56:31 poulenc kernel: hddtemp S 00000000004c7d68 0 3576 1 Nov 4 18:56:31 poulenc kernel: Call Trace: Nov 4 18:56:31 poulenc kernel: [000000000067ec30] schedule_timeout+0x78/0xc0 Nov 4 18:56:31 poulenc kernel: [00000000004c7d68] do_select+0x3d0/0x420 Nov 4 18:56:31 poulenc kernel: [00000000004eccb8] compat_core_sys_select+0x160/0x200 Nov 4 18:56:31 poulenc kernel: [00000000004eeee4] compat_sys_select+0x2c/0x1a0 Nov 4 18:56:31 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:31 poulenc kernel: [000000000001223c] 0x12244 Nov 4 18:56:31 poulenc kernel: lockd S 000000001008aff4 0 3681 2 Nov 4 18:56:31 poulenc kernel: Call Trace: Nov 4 18:56:31 poulenc kernel: [000000000067ec30] schedule_timeout+0x78/0xc0 Nov 4 18:56:31 poulenc kernel: [000000001008aff4] svc_recv+0x21c/0x4e0 [sunrpc] Nov 4 18:56:31 poulenc kernel: [00000000100baef8] lockd+0x120/0x300 
[lockd] Nov 4 18:56:31 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:32 poulenc kernel: [0000000010087e10] __svc_create_thread+0x118/0x220 [sunrpc] Nov 4 18:56:32 poulenc kernel: nfsd4 S 0000000000478ce0 0 3682 2 Nov 4 18:56:32 poulenc kernel: Call Trace: Nov 4 18:56:32 poulenc kernel: [0000000000474c5c] worker_thread+0xa4/0xe0 Nov 4 18:56:32 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:32 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:32 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:32 poulenc kernel: nfsd S 000000001008aff4 0 3683 2 Nov 4 18:56:32 poulenc kernel: Call Trace: Nov 4 18:56:32 poulenc kernel: [000000000067ec0c] schedule_timeout+0x54/0xc0 Nov 4 18:56:32 poulenc kernel: [000000001008aff4] svc_recv+0x21c/0x4e0 [sunrpc] Nov 4 18:56:32 poulenc kernel: [0000000010150aac] nfsd+0xb4/0x300 [nfsd] Nov 4 18:56:32 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:32 poulenc kernel: [0000000010087e10] __svc_create_thread+0x118/0x220 [sunrpc] Nov 4 18:56:32 poulenc kernel: rpc.mountd S 00000000004c7d68 0 3694 1 Nov 4 18:56:32 poulenc kernel: Call Trace: Nov 4 18:56:32 poulenc kernel: [000000000067ec30] schedule_timeout+0x78/0xc0 Nov 4 18:56:32 poulenc kernel: [00000000004c7d68] do_select+0x3d0/0x420 Nov 4 18:56:32 poulenc kernel: [00000000004eccb8] compat_core_sys_select+0x160/0x200 Nov 4 18:56:32 poulenc kernel: [00000000004eeee4] compat_sys_select+0x2c/0x1a0 Nov 4 18:56:32 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:32 poulenc kernel: [0000000000016708] 0x16710 Nov 4 18:56:32 poulenc kernel: iscsid S 000000000048c0d4 0 3785 1 Nov 4 18:56:32 poulenc kernel: Call Trace: Nov 4 18:56:32 poulenc kernel: [000000000067ec0c] schedule_timeout+0x54/0xc0 Nov 4 18:56:32 poulenc kernel: [000000000048c0d4] compat_sys_nanosleep+0x7c/0xe0 Nov 4 18:56:32 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:32 
poulenc kernel: [00000000f7ec168c] 0xf7ec1694 Nov 4 18:56:32 poulenc kernel: iscsid S 00000000004c776c 0 3786 1 Nov 4 18:56:32 poulenc kernel: Call Trace: Nov 4 18:56:32 poulenc kernel: [000000000067ec0c] schedule_timeout+0x54/0xc0 Nov 4 18:56:32 poulenc kernel: [00000000004c776c] do_sys_poll+0x234/0x400 Nov 4 18:56:32 poulenc kernel: [00000000004c7960] sys_poll+0x28/0x60 Nov 4 18:56:32 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:32 poulenc kernel: [0000000000045bf0] 0x45bf8 Nov 4 18:56:32 poulenc kernel: inetd S 00000000004c7d68 0 3802 1 Nov 4 18:56:32 poulenc kernel: Call Trace: Nov 4 18:56:32 poulenc kernel: [000000000067ec30] schedule_timeout+0x78/0xc0 Nov 4 18:56:32 poulenc kernel: [00000000004c7d68] do_select+0x3d0/0x420 Nov 4 18:56:32 poulenc kernel: [00000000004eccb8] compat_core_sys_select+0x160/0x200 Nov 4 18:56:32 poulenc kernel: [00000000004eeee4] compat_sys_select+0x2c/0x1a0 Nov 4 18:56:32 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:32 poulenc kernel: [00000000000155d8] 0x155e0 Nov 4 18:56:32 poulenc kernel: rarpd S 00000000004c776c 0 3809 1 Nov 4 18:56:32 poulenc kernel: Call Trace: Nov 4 18:56:32 poulenc kernel: [000000000067ec0c] schedule_timeout+0x54/0xc0 Nov 4 18:56:33 poulenc kernel: [00000000004c776c] do_sys_poll+0x234/0x400 Nov 4 18:56:33 poulenc kernel: [00000000004c7960] sys_poll+0x28/0x60 Nov 4 18:56:33 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:33 poulenc kernel: [00000000f7da7234] 0xf7da723c Nov 4 18:56:33 poulenc kernel: smartd S 000000000048c0d4 0 3814 1 Nov 4 18:56:33 poulenc kernel: Call Trace: Nov 4 18:56:33 poulenc kernel: [000000000067ec0c] schedule_timeout+0x54/0xc0 Nov 4 18:56:33 poulenc kernel: [000000000048c0d4] compat_sys_nanosleep+0x7c/0xe0 Nov 4 18:56:33 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:33 poulenc kernel: [00000000f7c5d68c] 0xf7c5d694 Nov 4 18:56:33 poulenc kernel: snmpd S 
00000000004c7d68 0 3823 1 Nov 4 18:56:33 poulenc kernel: Call Trace: Nov 4 18:56:33 poulenc kernel: [000000000067ec0c] schedule_timeout+0x54/0xc0 Nov 4 18:56:33 poulenc kernel: [00000000004c7d68] do_select+0x3d0/0x420 Nov 4 18:56:33 poulenc kernel: [00000000004eccb8] compat_core_sys_select+0x160/0x200 Nov 4 18:56:33 poulenc kernel: [00000000004eef74] compat_sys_select+0xbc/0x1a0 Nov 4 18:56:33 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:33 poulenc kernel: [0000000000012e64] 0x12e6c Nov 4 18:56:33 poulenc kernel: sendmail-mta S 00000000004c7d68 0 3880 1 Nov 4 18:56:33 poulenc kernel: Call Trace: Nov 4 18:56:33 poulenc kernel: [000000000067ec0c] schedule_timeout+0x54/0xc0 Nov 4 18:56:33 poulenc kernel: [00000000004c7d68] do_select+0x3d0/0x420 Nov 4 18:56:33 poulenc kernel: [00000000004eccb8] compat_core_sys_select+0x160/0x200 Nov 4 18:56:33 poulenc kernel: [00000000004eef74] compat_sys_select+0xbc/0x1a0 Nov 4 18:56:33 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:33 poulenc kernel: [000000007003b1d8] 0x7003b1e0 Nov 4 18:56:33 poulenc kernel: ntpd S 00000000004c7d68 0 3909 1 Nov 4 18:56:33 poulenc kernel: Call Trace: Nov 4 18:56:33 poulenc kernel: [000000000067ec30] schedule_timeout+0x78/0xc0 Nov 4 18:56:33 poulenc kernel: [00000000004c7d68] do_select+0x3d0/0x420 Nov 4 18:56:33 poulenc kernel: [00000000004eccb8] compat_core_sys_select+0x160/0x200 Nov 4 18:56:33 poulenc kernel: [00000000004eeee4] compat_sys_select+0x2c/0x1a0 Nov 4 18:56:33 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:33 poulenc kernel: [000000000001aea8] 0x1aeb0 Nov 4 18:56:33 poulenc kernel: mdadm S 00000000004c7d68 0 3922 1 Nov 4 18:56:33 poulenc kernel: Call Trace: Nov 4 18:56:33 poulenc kernel: [000000000067ec0c] schedule_timeout+0x54/0xc0 Nov 4 18:56:33 poulenc kernel: [00000000004c7d68] do_select+0x3d0/0x420 Nov 4 18:56:33 poulenc kernel: [00000000004eccb8] compat_core_sys_select+0x160/0x200 Nov 
4 18:56:33 poulenc kernel: [00000000004eef74] compat_sys_select+0xbc/0x1a0 Nov 4 18:56:33 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:33 poulenc kernel: [0000000000016cc0] 0x16cc8 Nov 4 18:56:33 poulenc kernel: rsync S 00000000004c7d68 0 3943 1 Nov 4 18:56:34 poulenc kernel: Call Trace: Nov 4 18:56:34 poulenc kernel: [000000000067ec30] schedule_timeout+0x78/0xc0 Nov 4 18:56:34 poulenc kernel: [00000000004c7d68] do_select+0x3d0/0x420 Nov 4 18:56:34 poulenc kernel: [00000000004eccb8] compat_core_sys_select+0x160/0x200 Nov 4 18:56:34 poulenc kernel: [00000000004eeee4] compat_sys_select+0x2c/0x1a0 Nov 4 18:56:34 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:34 poulenc kernel: [0000000000035418] 0x35420 Nov 4 18:56:34 poulenc kernel: atd S 000000000048c0d4 0 3959 1 Nov 4 18:56:34 poulenc kernel: Call Trace: Nov 4 18:56:34 poulenc kernel: [000000000067ec0c] schedule_timeout+0x54/0xc0 Nov 4 18:56:34 poulenc kernel: [000000000048c0d4] compat_sys_nanosleep+0x7c/0xe0 Nov 4 18:56:34 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:34 poulenc kernel: [00000000f7e6d68c] 0xf7e6d694 Nov 4 18:56:34 poulenc kernel: cron S 000000000048c0d4 0 3966 1 Nov 4 18:56:34 poulenc kernel: Call Trace: Nov 4 18:56:34 poulenc kernel: [000000000067ec0c] schedule_timeout+0x54/0xc0 Nov 4 18:56:34 poulenc kernel: [000000000048c0d4] compat_sys_nanosleep+0x7c/0xe0 Nov 4 18:56:34 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:34 poulenc kernel: [00000000f7df968c] 0xf7df9694 Nov 4 18:56:34 poulenc kernel: watchdog S 000000000048c0d4 0 3976 1 Nov 4 18:56:34 poulenc kernel: Call Trace: Nov 4 18:56:34 poulenc kernel: [000000000067ec0c] schedule_timeout+0x54/0xc0 Nov 4 18:56:34 poulenc kernel: [000000000048c0d4] compat_sys_nanosleep+0x7c/0xe0 Nov 4 18:56:34 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:34 poulenc kernel: [00000000f7e4568c] 
0xf7e45694 Nov 4 18:56:34 poulenc kernel: apache2 S 00000000004c7d68 0 3994 1 Nov 4 18:56:34 poulenc kernel: Call Trace: Nov 4 18:56:34 poulenc kernel: [000000000067ec0c] schedule_timeout+0x54/0xc0 Nov 4 18:56:34 poulenc kernel: [00000000004c7d68] do_select+0x3d0/0x420 Nov 4 18:56:34 poulenc kernel: [00000000004eccb8] compat_core_sys_select+0x160/0x200 Nov 4 18:56:34 poulenc kernel: [00000000004eef74] compat_sys_select+0xbc/0x1a0 Nov 4 18:56:34 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:34 poulenc kernel: [00000000f7be6c74] 0xf7be6c7c Nov 4 18:56:34 poulenc kernel: fail2ban-serv S 00000000004c7d68 0 4011 1 Nov 4 18:56:34 poulenc kernel: Call Trace: Nov 4 18:56:34 poulenc kernel: [000000000067ec0c] schedule_timeout+0x54/0xc0 Nov 4 18:56:34 poulenc kernel: [00000000004c7d68] do_select+0x3d0/0x420 Nov 4 18:56:34 poulenc kernel: [00000000004eccb8] compat_core_sys_select+0x160/0x200 Nov 4 18:56:34 poulenc kernel: [00000000004eef74] compat_sys_select+0xbc/0x1a0 Nov 4 18:56:34 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:34 poulenc kernel: [00000000f7dc0070] 0xf7dc0078 Nov 4 18:56:34 poulenc kernel: fail2ban-serv S 00000000004c776c 0 4012 1 Nov 4 18:56:34 poulenc kernel: Call Trace: Nov 4 18:56:34 poulenc kernel: [000000000067ec0c] schedule_timeout+0x54/0xc0 Nov 4 18:56:34 poulenc kernel: [00000000004c776c] do_sys_poll+0x234/0x400 Nov 4 18:56:34 poulenc kernel: [00000000004c7960] sys_poll+0x28/0x60 Nov 4 18:56:34 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:34 poulenc kernel: [00000000f7dbd0d4] 0xf7dbd0dc Nov 4 18:56:34 poulenc kernel: fail2ban-serv S 00000000004c7d68 0 4038 1 Nov 4 18:56:34 poulenc kernel: Call Trace: Nov 4 18:56:34 poulenc kernel: [000000000067ec0c] schedule_timeout+0x54/0xc0 Nov 4 18:56:34 poulenc kernel: [00000000004c7d68] do_select+0x3d0/0x420 Nov 4 18:56:35 poulenc kernel: [00000000004eccb8] compat_core_sys_select+0x160/0x200 Nov 4 18:56:35 
poulenc kernel: [00000000004eef74] compat_sys_select+0xbc/0x1a0 Nov 4 18:56:35 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:35 poulenc kernel: [00000000f7dc0070] 0xf7dc0078 Nov 4 18:56:35 poulenc kernel: fail2ban-serv S 00000000004c7d68 0 4039 1 Nov 4 18:56:35 poulenc kernel: Call Trace: Nov 4 18:56:35 poulenc kernel: [000000000067ec0c] schedule_timeout+0x54/0xc0 Nov 4 18:56:35 poulenc kernel: [00000000004c7d68] do_select+0x3d0/0x420 Nov 4 18:56:35 poulenc kernel: [00000000004eccb8] compat_core_sys_select+0x160/0x200 Nov 4 18:56:35 poulenc kernel: [00000000004eef74] compat_sys_select+0xbc/0x1a0 Nov 4 18:56:35 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:35 poulenc kernel: [00000000f7dc0070] 0xf7dc0078 Nov 4 18:56:35 poulenc kernel: apache2 S 00000000004ea8fc 0 4071 3994 Nov 4 18:56:35 poulenc kernel: Call Trace: Nov 4 18:56:35 poulenc kernel: [000000000067ec30] schedule_timeout+0x78/0xc0 Nov 4 18:56:35 poulenc kernel: [00000000004ea8fc] sys_epoll_wait+0x144/0x480 Nov 4 18:56:35 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:35 poulenc kernel: [00000000f7be2710] 0xf7be2718 Nov 4 18:56:35 poulenc kernel: apache2 S 00000000004061d4 0 4072 3994 Nov 4 18:56:35 poulenc kernel: Call Trace: Nov 4 18:56:35 poulenc kernel: [00000000005333bc] sys_semtimedop+0x564/0x660 Nov 4 18:56:35 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:35 poulenc kernel: [00000000f7bdaea8] 0xf7bdaeb0 Nov 4 18:56:35 poulenc kernel: apache2 S 00000000004061d4 0 4073 3994 Nov 4 18:56:35 poulenc kernel: Call Trace: Nov 4 18:56:35 poulenc kernel: [00000000005333bc] sys_semtimedop+0x564/0x660 Nov 4 18:56:35 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:35 poulenc kernel: [00000000f7bdaea8] 0xf7bdaeb0 Nov 4 18:56:35 poulenc kernel: apache2 S 00000000004061d4 0 4074 3994 Nov 4 18:56:35 poulenc kernel: Call Trace: Nov 4 18:56:35 poulenc kernel: 
[00000000005333bc] sys_semtimedop+0x564/0x660 Nov 4 18:56:35 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:35 poulenc kernel: [00000000f7bdaea8] 0xf7bdaeb0 Nov 4 18:56:35 poulenc kernel: apache2 S 00000000004061d4 0 4075 3994 Nov 4 18:56:35 poulenc kernel: Call Trace: Nov 4 18:56:35 poulenc kernel: [00000000005333bc] sys_semtimedop+0x564/0x660 Nov 4 18:56:35 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:35 poulenc kernel: [00000000f7bdaea8] 0xf7bdaeb0 Nov 4 18:56:35 poulenc kernel: login S 000000000048bda8 0 4146 1 Nov 4 18:56:35 poulenc kernel: Call Trace: Nov 4 18:56:35 poulenc kernel: [00000000004656dc] do_wait+0x264/0xda0 Nov 4 18:56:36 poulenc kernel: [000000000048bda8] compat_sys_wait4+0xb0/0xc0 Nov 4 18:56:36 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:36 poulenc kernel: [00000000f7df3234] 0xf7df323c Nov 4 18:56:36 poulenc kernel: bash R running task 0 4173 4146 Nov 4 18:56:36 poulenc kernel: sshd S 00000000004c7d68 0 4330 1 Nov 4 18:56:36 poulenc kernel: Call Trace: Nov 4 18:56:36 poulenc kernel: [000000000067ec30] schedule_timeout+0x78/0xc0 Nov 4 18:56:36 poulenc kernel: [00000000004c7d68] do_select+0x3d0/0x420 Nov 4 18:56:36 poulenc kernel: [00000000004eccb8] compat_core_sys_select+0x160/0x200 Nov 4 18:56:36 poulenc kernel: [00000000004eeee4] compat_sys_select+0x2c/0x1a0 Nov 4 18:56:36 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:36 poulenc kernel: [0000000070017e1c] 0x70017e24 Nov 4 18:56:36 poulenc kernel: md_d0_raid5 R running task 0 4462 2 Nov 4 18:56:36 poulenc kernel: ietd S 000000000067dac4 0 8227 1 Nov 4 18:56:36 poulenc kernel: Call Trace: Nov 4 18:56:36 poulenc kernel: [000000000067d9dc] __down_interruptible+0xa4/0x1c0 Nov 4 18:56:36 poulenc kernel: [000000000067dac4] __down_interruptible+0x18c/0x1c0 Nov 4 18:56:36 poulenc kernel: [00000000102204d0] ioctl+0x58/0x5e0 [iscsi_trgt] Nov 4 18:56:36 poulenc kernel: 
[00000000004f0484] compat_sys_ioctl+0x14c/0x460 Nov 4 18:56:36 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40 Nov 4 18:56:36 poulenc kernel: [000000000001532c] 0x15334 Nov 4 18:56:36 poulenc kernel: istd1 R running task 0 8228 2 Nov 4 18:56:36 poulenc kernel: istiod1 D 00000000102262c8 0 8229 2 Nov 4 18:56:36 poulenc kernel: Call Trace: Nov 4 18:56:36 poulenc kernel: [000000000067e4cc] wait_for_completion+0x74/0xe0 Nov 4 18:56:36 poulenc kernel: [00000000102262c8] blockio_make_request+0x1d0/0x24c [iscsi_trgt] Nov 4 18:56:36 poulenc kernel: [000000001021a140] tio_write+0x28/0x80 [iscsi_trgt] Nov 4 18:56:36 poulenc kernel: [0000000010223df8] build_write_response+0x40/0xe0 [iscsi_trgt] Nov 4 18:56:36 poulenc kernel: [000000001021e444] send_scsi_rsp+0xc/0x120 [iscsi_trgt] Nov 4 18:56:36 poulenc kernel: [0000000010223c30] disk_execute_cmnd+0x158/0x220 [iscsi_trgt] Nov 4 18:56:36 poulenc kernel: [0000000010220330] worker_thread+0x118/0x1a0 [iscsi_trgt] Nov 4 18:56:36 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 18:56:36 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60 Nov 4 18:56:36 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0 Nov 4 18:56:36 poulenc kernel: istiod1 D 00000000102262c8 0 8230 2 Nov 4 18:56:36 poulenc kernel: Call Trace: Nov 4 18:56:36 poulenc kernel: [000000000067e4cc] wait_for_completion+0x74/0xe0 Nov 4 18:56:36 poulenc kernel: [00000000102262c8] blockio_make_request+0x1d0/0x24c [iscsi_trgt] Nov 4 18:56:36 poulenc kernel: [000000001021a140] tio_write+0x28/0x80 [iscsi_trgt] Nov 4 18:56:36 poulenc kernel: [0000000010223df8] build_write_response+0x40/0xe0 [iscsi_trgt] Nov 4 18:56:36 poulenc kernel: [000000001021e444] send_scsi_rsp+0xc/0x120 [iscsi_trgt] Nov 4 18:56:36 poulenc kernel: [0000000010223c30] disk_execute_cmnd+0x158/0x220 [iscsi_trgt] Nov 4 18:56:36 poulenc kernel: [0000000010220330] worker_thread+0x118/0x1a0 [iscsi_trgt] Nov 4 18:56:37 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80 Nov 4 
18:56:37 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:37 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:37 poulenc kernel: istiod1 D 00000000102262c8 0 8231 2
Nov 4 18:56:37 poulenc kernel: Call Trace:
Nov 4 18:56:37 poulenc kernel: [000000000067e4cc] wait_for_completion+0x74/0xe0
Nov 4 18:56:37 poulenc kernel: [00000000102262c8] blockio_make_request+0x1d0/0x24c [iscsi_trgt]
Nov 4 18:56:37 poulenc kernel: [000000001021a140] tio_write+0x28/0x80 [iscsi_trgt]
Nov 4 18:56:37 poulenc kernel: [0000000010223df8] build_write_response+0x40/0xe0 [iscsi_trgt]
Nov 4 18:56:37 poulenc kernel: [000000001021e444] send_scsi_rsp+0xc/0x120 [iscsi_trgt]
Nov 4 18:56:37 poulenc kernel: [0000000010223c30] disk_execute_cmnd+0x158/0x220 [iscsi_trgt]
Nov 4 18:56:37 poulenc kernel: [0000000010220330] worker_thread+0x118/0x1a0 [iscsi_trgt]
Nov 4 18:56:37 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:37 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:37 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:37 poulenc kernel: istiod1 D 00000000102262c8 0 8232 2
Nov 4 18:56:37 poulenc kernel: Call Trace:
Nov 4 18:56:37 poulenc kernel: [000000000067e4cc] wait_for_completion+0x74/0xe0
Nov 4 18:56:37 poulenc kernel: [00000000102262c8] blockio_make_request+0x1d0/0x24c [iscsi_trgt]
Nov 4 18:56:37 poulenc kernel: [000000001021a140] tio_write+0x28/0x80 [iscsi_trgt]
Nov 4 18:56:37 poulenc kernel: [0000000010223df8] build_write_response+0x40/0xe0 [iscsi_trgt]
Nov 4 18:56:37 poulenc kernel: [000000001021e444] send_scsi_rsp+0xc/0x120 [iscsi_trgt]
Nov 4 18:56:37 poulenc kernel: [0000000010223c30] disk_execute_cmnd+0x158/0x220 [iscsi_trgt]
Nov 4 18:56:37 poulenc kernel: [0000000010220330] worker_thread+0x118/0x1a0 [iscsi_trgt]
Nov 4 18:56:37 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:37 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:37 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:37 poulenc kernel: istiod1 D 00000000102262c8 0 8233 2
Nov 4 18:56:37 poulenc kernel: Call Trace:
Nov 4 18:56:37 poulenc kernel: [000000000067e4cc] wait_for_completion+0x74/0xe0
Nov 4 18:56:37 poulenc kernel: [00000000102262c8] blockio_make_request+0x1d0/0x24c [iscsi_trgt]
Nov 4 18:56:37 poulenc kernel: [000000001021a140] tio_write+0x28/0x80 [iscsi_trgt]
Nov 4 18:56:37 poulenc kernel: [0000000010223df8] build_write_response+0x40/0xe0 [iscsi_trgt]
Nov 4 18:56:37 poulenc kernel: [000000001021e444] send_scsi_rsp+0xc/0x120 [iscsi_trgt]
Nov 4 18:56:37 poulenc kernel: [0000000010223c30] disk_execute_cmnd+0x158/0x220 [iscsi_trgt]
Nov 4 18:56:37 poulenc kernel: [0000000010220330] worker_thread+0x118/0x1a0 [iscsi_trgt]
Nov 4 18:56:37 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:37 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:37 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:37 poulenc kernel: istiod1 D 00000000102262c8 0 8234 2
Nov 4 18:56:37 poulenc kernel: Call Trace:
Nov 4 18:56:37 poulenc kernel: [000000000067e4cc] wait_for_completion+0x74/0xe0
Nov 4 18:56:37 poulenc kernel: [00000000102262c8] blockio_make_request+0x1d0/0x24c [iscsi_trgt]
Nov 4 18:56:37 poulenc kernel: [000000001021a140] tio_write+0x28/0x80 [iscsi_trgt]
Nov 4 18:56:37 poulenc kernel: [0000000010223df8] build_write_response+0x40/0xe0 [iscsi_trgt]
Nov 4 18:56:37 poulenc kernel: [000000001021e444] send_scsi_rsp+0xc/0x120 [iscsi_trgt]
Nov 4 18:56:37 poulenc kernel: [0000000010223c30] disk_execute_cmnd+0x158/0x220 [iscsi_trgt]
Nov 4 18:56:37 poulenc kernel: [0000000010220330] worker_thread+0x118/0x1a0 [iscsi_trgt]
Nov 4 18:56:38 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:38 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:38 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:38 poulenc kernel: istiod1 D 00000000102262c8 0 8235 2
Nov 4 18:56:38 poulenc kernel: Call Trace:
Nov 4 18:56:38 poulenc kernel: [000000000067e4cc] wait_for_completion+0x74/0xe0
Nov 4 18:56:38 poulenc kernel: [00000000102262c8] blockio_make_request+0x1d0/0x24c [iscsi_trgt]
Nov 4 18:56:38 poulenc kernel: [000000001021a140] tio_write+0x28/0x80 [iscsi_trgt]
Nov 4 18:56:38 poulenc kernel: [0000000010223df8] build_write_response+0x40/0xe0 [iscsi_trgt]
Nov 4 18:56:38 poulenc kernel: [000000001021e444] send_scsi_rsp+0xc/0x120 [iscsi_trgt]
Nov 4 18:56:38 poulenc kernel: [0000000010223c30] disk_execute_cmnd+0x158/0x220 [iscsi_trgt]
Nov 4 18:56:38 poulenc kernel: [0000000010220330] worker_thread+0x118/0x1a0 [iscsi_trgt]
Nov 4 18:56:38 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:38 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:38 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:38 poulenc kernel: istiod1 D 00000000102262c8 0 8236 2
Nov 4 18:56:38 poulenc kernel: Call Trace:
Nov 4 18:56:38 poulenc kernel: [000000000067e4cc] wait_for_completion+0x74/0xe0
Nov 4 18:56:38 poulenc kernel: [00000000102262c8] blockio_make_request+0x1d0/0x24c [iscsi_trgt]
Nov 4 18:56:38 poulenc kernel: [000000001021a140] tio_write+0x28/0x80 [iscsi_trgt]
Nov 4 18:56:38 poulenc kernel: [0000000010223df8] build_write_response+0x40/0xe0 [iscsi_trgt]
Nov 4 18:56:38 poulenc kernel: [000000001021e444] send_scsi_rsp+0xc/0x120 [iscsi_trgt]
Nov 4 18:56:38 poulenc kernel: [0000000010223c30] disk_execute_cmnd+0x158/0x220 [iscsi_trgt]
Nov 4 18:56:38 poulenc kernel: [0000000010220330] worker_thread+0x118/0x1a0 [iscsi_trgt]
Nov 4 18:56:38 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:38 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:38 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:38 poulenc kernel: identd S 00000000004c776c 0 8394 3802
Nov 4 18:56:38 poulenc kernel: Call Trace:
Nov 4 18:56:38 poulenc kernel: [000000000067ec0c] schedule_timeout+0x54/0xc0
Nov 4 18:56:38 poulenc kernel: [00000000004c776c] do_sys_poll+0x234/0x400
Nov 4 18:56:38 poulenc kernel: [00000000004c7960] sys_poll+0x28/0x60
Nov 4 18:56:38 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40
Nov 4 18:56:38 poulenc kernel: [00000000f7ce50d4] 0xf7ce50dc
Nov 4 18:56:38 poulenc kernel: identd S 00000000004833bc 0 8395 3802
Nov 4 18:56:38 poulenc kernel: Call Trace:
Nov 4 18:56:38 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0
Nov 4 18:56:38 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0
Nov 4 18:56:38 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120
Nov 4 18:56:38 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40
Nov 4 18:56:38 poulenc kernel: [00000000f7ee72e4] 0xf7ee72ec
Nov 4 18:56:38 poulenc kernel: identd S 00000000004833bc 0 8396 3802
Nov 4 18:56:38 poulenc kernel: Call Trace:
Nov 4 18:56:38 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0
Nov 4 18:56:38 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0
Nov 4 18:56:38 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120
Nov 4 18:56:38 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40
Nov 4 18:56:38 poulenc kernel: [00000000f7ee72e4] 0xf7ee72ec
Nov 4 18:56:38 poulenc kernel: identd S 00000000004833bc 0 8397 3802
Nov 4 18:56:38 poulenc kernel: Call Trace:
Nov 4 18:56:39 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0
Nov 4 18:56:39 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0
Nov 4 18:56:39 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120
Nov 4 18:56:39 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40
Nov 4 18:56:39 poulenc kernel: [00000000f7ee72e4] 0xf7ee72ec
Nov 4 18:56:39 poulenc kernel: identd S 00000000004833bc 0 8398 3802
Nov 4 18:56:39 poulenc kernel: Call Trace:
Nov 4 18:56:39 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0
Nov 4 18:56:39 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0
Nov 4 18:56:39 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120
Nov 4 18:56:39 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40
Nov 4 18:56:39 poulenc kernel: [00000000f7ee72e4] 0xf7ee72ec
Nov 4 18:56:39 poulenc kernel: identd S 00000000004833bc 0 8399 3802
Nov 4 18:56:39 poulenc kernel: Call Trace:
Nov 4 18:56:39 poulenc kernel: [0000000000482ec0] futex_wait+0x268/0x2c0
Nov 4 18:56:39 poulenc kernel: [00000000004833bc] do_futex+0x64/0xbc0
Nov 4 18:56:39 poulenc kernel: [00000000004843fc] compat_sys_futex+0x64/0x120
Nov 4 18:56:39 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40
Nov 4 18:56:39 poulenc kernel: [00000000f7ee72e4] 0xf7ee72ec
Nov 4 18:56:39 poulenc kernel: watchdog ? 0000000000466d4c 0 8412 3976
Nov 4 18:56:39 poulenc kernel: Call Trace:
Nov 4 18:56:39 poulenc kernel: [00000000004669f0] do_exit+0x6d8/0xa00
Nov 4 18:56:39 poulenc kernel: [0000000000466d4c] do_group_exit+0x34/0xa0
Nov 4 18:56:39 poulenc kernel: [00000000004061d4] linux_sparc_syscall32+0x3c/0x40
Nov 4 18:56:39 poulenc kernel: [000000000001b264] 0x1b26c
Nov 4 18:56:46 poulenc kernel: SysRq : Show Blocked State
Nov 4 18:56:46 poulenc kernel: task PC stack pid father
Nov 4 18:56:46 poulenc kernel: istiod1 D 00000000102262c8 0 8229 2
Nov 4 18:56:46 poulenc kernel: Call Trace:
Nov 4 18:56:46 poulenc kernel: [000000000067e4cc] wait_for_completion+0x74/0xe0
Nov 4 18:56:46 poulenc kernel: [00000000102262c8] blockio_make_request+0x1d0/0x24c [iscsi_trgt]
Nov 4 18:56:46 poulenc kernel: [000000001021a140] tio_write+0x28/0x80 [iscsi_trgt]
Nov 4 18:56:46 poulenc kernel: [0000000010223df8] build_write_response+0x40/0xe0 [iscsi_trgt]
Nov 4 18:56:47 poulenc kernel: [000000001021e444] send_scsi_rsp+0xc/0x120 [iscsi_trgt]
Nov 4 18:56:47 poulenc kernel: [0000000010223c30] disk_execute_cmnd+0x158/0x220 [iscsi_trgt]
Nov 4 18:56:47 poulenc kernel: [0000000010220330] worker_thread+0x118/0x1a0 [iscsi_trgt]
Nov 4 18:56:47 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:47 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:47 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:47 poulenc kernel: istiod1 D 00000000102262c8 0 8230 2
Nov 4 18:56:47 poulenc kernel: Call Trace:
Nov 4 18:56:47 poulenc kernel: [000000000067e4cc] wait_for_completion+0x74/0xe0
Nov 4 18:56:47 poulenc kernel: [00000000102262c8] blockio_make_request+0x1d0/0x24c [iscsi_trgt]
Nov 4 18:56:47 poulenc kernel: [000000001021a140] tio_write+0x28/0x80 [iscsi_trgt]
Nov 4 18:56:47 poulenc kernel: [0000000010223df8] build_write_response+0x40/0xe0 [iscsi_trgt]
Nov 4 18:56:47 poulenc kernel: [000000001021e444] send_scsi_rsp+0xc/0x120 [iscsi_trgt]
Nov 4 18:56:47 poulenc kernel: [0000000010223c30] disk_execute_cmnd+0x158/0x220 [iscsi_trgt]
Nov 4 18:56:47 poulenc kernel: [0000000010220330] worker_thread+0x118/0x1a0 [iscsi_trgt]
Nov 4 18:56:47 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:47 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:47 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:47 poulenc kernel: istiod1 D 00000000102262c8 0 8231 2
Nov 4 18:56:47 poulenc kernel: Call Trace:
Nov 4 18:56:47 poulenc kernel: [000000000067e4cc] wait_for_completion+0x74/0xe0
Nov 4 18:56:47 poulenc kernel: [00000000102262c8] blockio_make_request+0x1d0/0x24c [iscsi_trgt]
Nov 4 18:56:47 poulenc kernel: [000000001021a140] tio_write+0x28/0x80 [iscsi_trgt]
Nov 4 18:56:47 poulenc kernel: [0000000010223df8] build_write_response+0x40/0xe0 [iscsi_trgt]
Nov 4 18:56:47 poulenc kernel: [000000001021e444] send_scsi_rsp+0xc/0x120 [iscsi_trgt]
Nov 4 18:56:47 poulenc kernel: [0000000010223c30] disk_execute_cmnd+0x158/0x220 [iscsi_trgt]
Nov 4 18:56:47 poulenc kernel: [0000000010220330] worker_thread+0x118/0x1a0 [iscsi_trgt]
Nov 4 18:56:47 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:47 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:47 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:47 poulenc kernel: istiod1 D 00000000102262c8 0 8232 2
Nov 4 18:56:47 poulenc kernel: Call Trace:
Nov 4 18:56:47 poulenc kernel: [000000000067e4cc] wait_for_completion+0x74/0xe0
Nov 4 18:56:47 poulenc kernel: [00000000102262c8] blockio_make_request+0x1d0/0x24c [iscsi_trgt]
Nov 4 18:56:47 poulenc kernel: [000000001021a140] tio_write+0x28/0x80 [iscsi_trgt]
Nov 4 18:56:47 poulenc kernel: [0000000010223df8] build_write_response+0x40/0xe0 [iscsi_trgt]
Nov 4 18:56:47 poulenc kernel: [000000001021e444] send_scsi_rsp+0xc/0x120 [iscsi_trgt]
Nov 4 18:56:48 poulenc kernel: [0000000010223c30] disk_execute_cmnd+0x158/0x220 [iscsi_trgt]
Nov 4 18:56:48 poulenc kernel: [0000000010220330] worker_thread+0x118/0x1a0 [iscsi_trgt]
Nov 4 18:56:48 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:48 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:48 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:48 poulenc kernel: istiod1 D 00000000102262c8 0 8233 2
Nov 4 18:56:48 poulenc kernel: Call Trace:
Nov 4 18:56:48 poulenc kernel: [000000000067e4cc] wait_for_completion+0x74/0xe0
Nov 4 18:56:48 poulenc kernel: [00000000102262c8] blockio_make_request+0x1d0/0x24c [iscsi_trgt]
Nov 4 18:56:48 poulenc kernel: [000000001021a140] tio_write+0x28/0x80 [iscsi_trgt]
Nov 4 18:56:48 poulenc kernel: [0000000010223df8] build_write_response+0x40/0xe0 [iscsi_trgt]
Nov 4 18:56:48 poulenc kernel: [000000001021e444] send_scsi_rsp+0xc/0x120 [iscsi_trgt]
Nov 4 18:56:48 poulenc kernel: [0000000010223c30] disk_execute_cmnd+0x158/0x220 [iscsi_trgt]
Nov 4 18:56:48 poulenc kernel: [0000000010220330] worker_thread+0x118/0x1a0 [iscsi_trgt]
Nov 4 18:56:48 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:48 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:48 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:48 poulenc kernel: istiod1 D 00000000102262c8 0 8234 2
Nov 4 18:56:48 poulenc kernel: Call Trace:
Nov 4 18:56:48 poulenc kernel: [000000000067e4cc] wait_for_completion+0x74/0xe0
Nov 4 18:56:48 poulenc kernel: [00000000102262c8] blockio_make_request+0x1d0/0x24c [iscsi_trgt]
Nov 4 18:56:48 poulenc kernel: [000000001021a140] tio_write+0x28/0x80 [iscsi_trgt]
Nov 4 18:56:48 poulenc kernel: [0000000010223df8] build_write_response+0x40/0xe0 [iscsi_trgt]
Nov 4 18:56:48 poulenc kernel: [000000001021e444] send_scsi_rsp+0xc/0x120 [iscsi_trgt]
Nov 4 18:56:48 poulenc kernel: [0000000010223c30] disk_execute_cmnd+0x158/0x220 [iscsi_trgt]
Nov 4 18:56:48 poulenc kernel: [0000000010220330] worker_thread+0x118/0x1a0 [iscsi_trgt]
Nov 4 18:56:48 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:48 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:48 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:48 poulenc kernel: istiod1 D 00000000102262c8 0 8235 2
Nov 4 18:56:48 poulenc kernel: Call Trace:
Nov 4 18:56:48 poulenc kernel: [000000000067e4cc] wait_for_completion+0x74/0xe0
Nov 4 18:56:48 poulenc kernel: [00000000102262c8] blockio_make_request+0x1d0/0x24c [iscsi_trgt]
Nov 4 18:56:48 poulenc kernel: [000000001021a140] tio_write+0x28/0x80 [iscsi_trgt]
Nov 4 18:56:48 poulenc kernel: [0000000010223df8] build_write_response+0x40/0xe0 [iscsi_trgt]
Nov 4 18:56:48 poulenc kernel: [000000001021e444] send_scsi_rsp+0xc/0x120 [iscsi_trgt]
Nov 4 18:56:48 poulenc kernel: [0000000010223c30] disk_execute_cmnd+0x158/0x220 [iscsi_trgt]
Nov 4 18:56:48 poulenc kernel: [0000000010220330] worker_thread+0x118/0x1a0 [iscsi_trgt]
Nov 4 18:56:48 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:48 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:48 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0
Nov 4 18:56:49 poulenc kernel: istiod1 D 00000000102262c8 0 8236 2
Nov 4 18:56:49 poulenc kernel: Call Trace:
Nov 4 18:56:49 poulenc kernel: [000000000067e4cc] wait_for_completion+0x74/0xe0
Nov 4 18:56:49 poulenc kernel: [00000000102262c8] blockio_make_request+0x1d0/0x24c [iscsi_trgt]
Nov 4 18:56:49 poulenc kernel: [000000001021a140] tio_write+0x28/0x80 [iscsi_trgt]
Nov 4 18:56:49 poulenc kernel: [0000000010223df8] build_write_response+0x40/0xe0 [iscsi_trgt]
Nov 4 18:56:49 poulenc kernel: [000000001021e444] send_scsi_rsp+0xc/0x120 [iscsi_trgt]
Nov 4 18:56:49 poulenc kernel: [0000000010223c30] disk_execute_cmnd+0x158/0x220 [iscsi_trgt]
Nov 4 18:56:49 poulenc kernel: [0000000010220330] worker_thread+0x118/0x1a0 [iscsi_trgt]
Nov 4 18:56:49 poulenc kernel: [0000000000478ce0] kthread+0x48/0x80
Nov 4 18:56:49 poulenc kernel: [00000000004273d0] kernel_thread+0x38/0x60
Nov 4 18:56:49 poulenc kernel: [0000000000478f80] kthreadd+0x148/0x1c0

^ permalink raw reply	[flat|nested] 35+ messages in thread
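[Editorial sketch, not part of the archived thread.] The dump above is dominated by istiod1 threads all blocked in the same iscsi_trgt write path; a summary is easier to read than the raw flood. This hypothetical helper tallies tasks per (command, state) pair from Show-Blocked-State output; the regex matches the sparc64 header format seen above and may need adjusting for other dumps.

```python
import re
from collections import Counter

# Task header lines look like:
#   ... kernel: istiod1 D 00000000102262c8 0 8231 2
# i.e. command, one-letter state, PC, stack, pid, parent pid.
HEADER = re.compile(
    r"kernel: (\S+)\s+([DSRTZW?])\s+[0-9a-f]+\s+\d+\s+(\d+)\s+\d+")

def summarize(log_lines):
    """Return a Counter of (command, state) pairs seen in the dump."""
    tally = Counter()
    for line in log_lines:
        m = HEADER.search(line)
        if m:
            tally[(m.group(1), m.group(2))] += 1
    return tally

sample = [
    "Nov 4 18:56:37 poulenc kernel: istiod1 D 00000000102262c8 0 8231 2",
    "Nov 4 18:56:38 poulenc kernel: identd S 00000000004833bc 0 8395 3802",
    "Nov 4 18:56:38 poulenc kernel: identd S 00000000004833bc 0 8396 3802",
]
print(summarize(sample))  # one istiod1 in D, two identd in S
```

Running it over the full dump above would show every istiod1 worker in D and the identd threads harmlessly sleeping in S.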
* Re: 2.6.23.1: mdadm/raid5 hung/d-state
  2007-11-04 14:55 ` Michael Tokarev
  2007-11-04 14:59   ` Justin Piszcz
  2007-11-04 18:17   ` BERTRAND Joël
@ 2007-11-04 21:40   ` David Greaves
  2 siblings, 0 replies; 35+ messages in thread
From: David Greaves @ 2007-11-04 21:40 UTC (permalink / raw)
To: Michael Tokarev; +Cc: Justin Piszcz, linux-kernel, linux-raid, xfs

Michael Tokarev wrote:
> Justin Piszcz wrote:
>> On Sun, 4 Nov 2007, Michael Tokarev wrote:
> []
>>> The next time you come across something like that, do a SysRq-T dump and
>>> post that.  It shows a stack trace of all processes - and in particular,
>>> where exactly each task is stuck.
>
>> Yes I got it before I rebooted, ran that and then dmesg > file.
>>
>> Here it is:
>>
>> [1172609.665902] ffffffff80747dc0 ffffffff80747dc0 ffffffff80747dc0 ffffffff80744d80
>> [1172609.668768] ffffffff80747dc0 ffff81015c3aa918 ffff810091c899b4 ffff810091c899a8
>
> That's only partial list.  All the kernel threads - which are most important
> in this context - aren't shown.  You ran out of dmesg buffer, and the most
> interesting entries was at the beginning.  If your /var/log partition is
> working, the stuff should be in /var/log/kern.log or equivalent.  If it's
> not working, there is a way to capture the info still, by stopping syslogd,
> cat'ing /proc/kmsg to some tmpfs file and scp'ing it elsewhere.

or netconsole is actually pretty easy and incredibly useful in this kind of
situation even if there's no disk at all :)

David

^ permalink raw reply	[flat|nested] 35+ messages in thread
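[Editorial sketch, not part of the archived thread.] Michael's suggestion above (stop syslogd, copy /proc/kmsg to a tmpfs file, scp it off) can be done with `cat`, but a small loop that flushes after every line keeps the capture current even if the box wedges mid-copy. The paths below are the conventional ones; `/dev/shm` as the tmpfs destination is an assumption.

```python
def drain_kmsg(src_path="/proc/kmsg", dst_path="/dev/shm/kmsg.cap",
               max_lines=None):
    """Copy kernel messages line by line to a tmpfs file.

    Reading /proc/kmsg consumes messages, so syslogd must be stopped
    first or the two readers will race for them.
    """
    copied = 0
    with open(src_path, "r") as src, open(dst_path, "w") as dst:
        for line in src:
            dst.write(line)
            dst.flush()  # survive a crash mid-capture
            copied += 1
            if max_lines is not None and copied >= max_lines:
                break
    return copied
```

On a live system this blocks until messages arrive, exactly like `cat /proc/kmsg > /dev/shm/kmsg.cap`; netconsole, as David notes, avoids needing any local storage at all.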
* Re: 2.6.23.1: mdadm/raid5 hung/d-state
  2007-11-04 12:03 2.6.23.1: mdadm/raid5 hung/d-state Justin Piszcz
  2007-11-04 12:39 ` 2.6.23.1: mdadm/raid5 hung/d-state (md3_raid5 stuck in endless loop?) Justin Piszcz
  2007-11-04 12:48 ` 2.6.23.1: mdadm/raid5 hung/d-state Michael Tokarev
@ 2007-11-04 13:40 ` BERTRAND Joël
  2007-11-04 13:42   ` Justin Piszcz
  2007-11-04 21:49 ` Neil Brown
  3 siblings, 1 reply; 35+ messages in thread
From: BERTRAND Joël @ 2007-11-04 13:40 UTC (permalink / raw)
To: Justin Piszcz; +Cc: linux-kernel, linux-raid

Justin Piszcz wrote:
> # ps auxww | grep D
> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
> root 273 0.0 0.0 0 0 ? D Oct21 14:40 [pdflush]
> root 274 0.0 0.0 0 0 ? D Oct21 13:00 [pdflush]
>
> After several days/weeks, this is the second time this has happened,
> while doing regular file I/O (decompressing a file), everything on the
> device went into D-state.

	Same observation here (kernel 2.6.23). I can see this bug when I try
to synchronize a raid1 volume over iSCSI (each element is a raid5 volume),
or sometimes only with a 1.5 TB raid5 volume. When this bug occurs, the md
subsystem eats 100% of one CPU and pdflush remains in D state too. What is
your architecture? I use two 32-thread T1000s (sparc64), and I'm trying to
determine if this bug is arch specific.

	Regards,

	JKB

^ permalink raw reply	[flat|nested] 35+ messages in thread
* Re: 2.6.23.1: mdadm/raid5 hung/d-state
  2007-11-04 13:40 ` BERTRAND Joël
@ 2007-11-04 13:42   ` Justin Piszcz
  0 siblings, 0 replies; 35+ messages in thread
From: Justin Piszcz @ 2007-11-04 13:42 UTC (permalink / raw)
To: BERTRAND Joël; +Cc: linux-kernel, linux-raid

[-- Attachment #1: Type: TEXT/PLAIN, Size: 1032 bytes --]

On Sun, 4 Nov 2007, BERTRAND Joël wrote:

> Justin Piszcz wrote:
>> # ps auxww | grep D
>> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
>> root 273 0.0 0.0 0 0 ? D Oct21 14:40 [pdflush]
>> root 274 0.0 0.0 0 0 ? D Oct21 13:00 [pdflush]
>>
>> After several days/weeks, this is the second time this has happened, while
>> doing regular file I/O (decompressing a file), everything on the device
>> went into D-state.
>
> 	Same observation here (kernel 2.6.23). I can see this bug when I try
> to synchronize a raid1 volume over iSCSI (each element is a raid5 volume), or
> sometimes only with a 1,5 TB raid5 volume. When this bug occurs, md subsystem
> eats 100% of one CPU and pdflush remains in D state too. What is your
> architecture ? I use two 32-threads T1000 (sparc64), and I'm trying to
> determine if this bug is arch specific.
>
> 	Regards,
>
> 	JKB
>

Using x86_64 here (Q6600/Intel DG965WH).

Justin.

^ permalink raw reply	[flat|nested] 35+ messages in thread
* Re: 2.6.23.1: mdadm/raid5 hung/d-state
  2007-11-04 12:03 2.6.23.1: mdadm/raid5 hung/d-state Justin Piszcz
                   ` (2 preceding siblings ...)
  2007-11-04 13:40 ` BERTRAND Joël
@ 2007-11-04 21:49 ` Neil Brown
  2007-11-04 21:51   ` Justin Piszcz
  2007-11-05  8:36   ` BERTRAND Joël
  3 siblings, 2 replies; 35+ messages in thread
From: Neil Brown @ 2007-11-04 21:49 UTC (permalink / raw)
To: Justin Piszcz; +Cc: linux-kernel, linux-raid

On Sunday November 4, jpiszcz@lucidpixels.com wrote:
> # ps auxww | grep D
> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
> root 273 0.0 0.0 0 0 ? D Oct21 14:40 [pdflush]
> root 274 0.0 0.0 0 0 ? D Oct21 13:00 [pdflush]
>
> After several days/weeks, this is the second time this has happened, while
> doing regular file I/O (decompressing a file), everything on the device
> went into D-state.

At a guess (I haven't looked closely) I'd say it is the bug that was
meant to be fixed by

  commit 4ae3f847e49e3787eca91bced31f8fd328d50496

except that patch applied badly and needed to be fixed with
the following patch (not in git yet).
These have been sent to stable@ and should be in the queue for 2.6.23.2

NeilBrown

Fix misapplied patch in raid5.c

commit 4ae3f847e49e3787eca91bced31f8fd328d50496 did not get applied
correctly, presumably due to substantial similarities between
handle_stripe5 and handle_stripe6.

This patch (with lots of context) moves the chunk of new code from
handle_stripe6 (where it isn't needed (yet)) to handle_stripe5.

Signed-off-by: Neil Brown <neilb@suse.de>

### Diffstat output
 ./drivers/md/raid5.c |   14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff .prev/drivers/md/raid5.c ./drivers/md/raid5.c
--- .prev/drivers/md/raid5.c	2007-11-02 12:10:49.000000000 +1100
+++ ./drivers/md/raid5.c	2007-11-02 12:25:31.000000000 +1100
@@ -2607,40 +2607,47 @@ static void handle_stripe5(struct stripe
 	struct bio *return_bi = NULL;
 	struct stripe_head_state s;
 	struct r5dev *dev;
 	unsigned long pending = 0;
 
 	memset(&s, 0, sizeof(s));
 	pr_debug("handling stripe %llu, state=%#lx cnt=%d, pd_idx=%d "
 		"ops=%lx:%lx:%lx\n", (unsigned long long)sh->sector, sh->state,
 		atomic_read(&sh->count), sh->pd_idx, sh->ops.pending,
 		sh->ops.ack, sh->ops.complete);
 
 	spin_lock(&sh->lock);
 	clear_bit(STRIPE_HANDLE, &sh->state);
 	clear_bit(STRIPE_DELAYED, &sh->state);
 
 	s.syncing = test_bit(STRIPE_SYNCING, &sh->state);
 	s.expanding = test_bit(STRIPE_EXPAND_SOURCE, &sh->state);
 	s.expanded = test_bit(STRIPE_EXPAND_READY, &sh->state);
 	/* Now to look around and see what can be done */
 
+	/* clean-up completed biofill operations */
+	if (test_bit(STRIPE_OP_BIOFILL, &sh->ops.complete)) {
+		clear_bit(STRIPE_OP_BIOFILL, &sh->ops.pending);
+		clear_bit(STRIPE_OP_BIOFILL, &sh->ops.ack);
+		clear_bit(STRIPE_OP_BIOFILL, &sh->ops.complete);
+	}
+
 	rcu_read_lock();
 	for (i=disks; i--; ) {
 		mdk_rdev_t *rdev;
 		struct r5dev *dev = &sh->dev[i];
 		clear_bit(R5_Insync, &dev->flags);
 
 		pr_debug("check %d: state 0x%lx toread %p read %p write %p "
 			"written %p\n", i, dev->flags, dev->toread, dev->read,
 			dev->towrite, dev->written);
 
 		/* maybe we can request a biofill operation
 		 *
 		 * new wantfill requests are only permitted while
 		 * STRIPE_OP_BIOFILL is clear
 		 */
 		if (test_bit(R5_UPTODATE, &dev->flags) && dev->toread &&
		    !test_bit(STRIPE_OP_BIOFILL, &sh->ops.pending))
 			set_bit(R5_Wantfill, &dev->flags);
 
 		/* now count some things */
@@ -2880,47 +2887,40 @@ static void handle_stripe6(struct stripe
 	struct stripe_head_state s;
 	struct r6_state r6s;
 	struct r5dev *dev, *pdev, *qdev;
 
 	r6s.qd_idx = raid6_next_disk(pd_idx, disks);
 	pr_debug("handling stripe %llu, state=%#lx cnt=%d, "
 		"pd_idx=%d, qd_idx=%d\n",
 		(unsigned long long)sh->sector, sh->state,
 		atomic_read(&sh->count), pd_idx, r6s.qd_idx);
 	memset(&s, 0, sizeof(s));
 
 	spin_lock(&sh->lock);
 	clear_bit(STRIPE_HANDLE, &sh->state);
 	clear_bit(STRIPE_DELAYED, &sh->state);
 
 	s.syncing = test_bit(STRIPE_SYNCING, &sh->state);
 	s.expanding = test_bit(STRIPE_EXPAND_SOURCE, &sh->state);
 	s.expanded = test_bit(STRIPE_EXPAND_READY, &sh->state);
 	/* Now to look around and see what can be done */
 
-	/* clean-up completed biofill operations */
-	if (test_bit(STRIPE_OP_BIOFILL, &sh->ops.complete)) {
-		clear_bit(STRIPE_OP_BIOFILL, &sh->ops.pending);
-		clear_bit(STRIPE_OP_BIOFILL, &sh->ops.ack);
-		clear_bit(STRIPE_OP_BIOFILL, &sh->ops.complete);
-	}
-
 	rcu_read_lock();
 	for (i=disks; i--; ) {
 		mdk_rdev_t *rdev;
 		dev = &sh->dev[i];
 		clear_bit(R5_Insync, &dev->flags);
 
 		pr_debug("check %d: state 0x%lx read %p write %p written %p\n",
 			i, dev->flags, dev->toread, dev->towrite, dev->written);
 
 		/* maybe we can reply to a read */
 		if (test_bit(R5_UPTODATE, &dev->flags) && dev->toread) {
 			struct bio *rbi, *rbi2;
 			pr_debug("Return read for disc %d\n", i);
 			spin_lock_irq(&conf->device_lock);
 			rbi = dev->toread;
 			dev->toread = NULL;
 			if (test_and_clear_bit(R5_Overlap, &dev->flags))
 				wake_up(&conf->wait_for_overlap);
 			spin_unlock_irq(&conf->device_lock);
 			while (rbi && rbi->bi_sector < dev->sector + STRIPE_SECTORS) {
 				copy_data(0, rbi, dev->page, dev->sector);

^ permalink raw reply	[flat|nested] 35+ messages in thread
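[Editorial sketch, not part of the archived thread.] The mechanism behind Neil's fix can be modelled in a few lines. The misapplied patch left the STRIPE_OP_BIOFILL cleanup in handle_stripe6 instead of handle_stripe5, so on RAID5 the completed operation's pending bit was never cleared, and the "only request a new biofill while the op is not pending" test suppressed progress forever. This toy model (names are illustrative, not kernel code) shows that behaviour:

```python
# Toy model of the STRIPE_OP_BIOFILL pending/ack/complete bit lifecycle.
PENDING, ACK, COMPLETE = "pending", "ack", "complete"

def handle_stripe(ops, cleanup_completed):
    """One pass of the stripe handler over an ops bit-set."""
    # The fix: clear all three bits once the operation has completed.
    if cleanup_completed and COMPLETE in ops:
        ops.clear()
    # New wantfill requests are only permitted while the op is not pending.
    if PENDING not in ops:
        ops.update({PENDING, ACK, COMPLETE})  # issue op; assume it finishes
        return "issued"
    return "stuck"

buggy = {PENDING, ACK, COMPLETE}   # completed op, never cleaned up
fixed = {PENDING, ACK, COMPLETE}
print(handle_stripe(buggy, cleanup_completed=False))  # "stuck"
print(handle_stripe(fixed, cleanup_completed=True))   # "issued"
```

Without the cleanup, every pass over the stripe returns "stuck", which is consistent with md3_raid5 spinning at 100% CPU while every writer sits in D state.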
* Re: 2.6.23.1: mdadm/raid5 hung/d-state
  2007-11-04 21:49 ` Neil Brown
@ 2007-11-04 21:51   ` Justin Piszcz
  2007-11-05 18:35     ` Dan Williams
  2007-11-05  8:36   ` BERTRAND Joël
  1 sibling, 1 reply; 35+ messages in thread
From: Justin Piszcz @ 2007-11-04 21:51 UTC (permalink / raw)
To: Neil Brown; +Cc: linux-kernel, linux-raid

On Mon, 5 Nov 2007, Neil Brown wrote:

> On Sunday November 4, jpiszcz@lucidpixels.com wrote:
>> # ps auxww | grep D
>> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
>> root 273 0.0 0.0 0 0 ? D Oct21 14:40 [pdflush]
>> root 274 0.0 0.0 0 0 ? D Oct21 13:00 [pdflush]
>>
>> After several days/weeks, this is the second time this has happened, while
>> doing regular file I/O (decompressing a file), everything on the device
>> went into D-state.
>
> At a guess (I haven't looked closely) I'd say it is the bug that was
> meant to be fixed by
>
>   commit 4ae3f847e49e3787eca91bced31f8fd328d50496
>
> except that patch applied badly and needed to be fixed with
> the following patch (not in git yet).
> These have been sent to stable@ and should be in the queue for 2.6.23.2
>

Ah, thanks Neil, will be updating as soon as it is released, thanks.

Justin.

^ permalink raw reply	[flat|nested] 35+ messages in thread
* Re: 2.6.23.1: mdadm/raid5 hung/d-state
  2007-11-04 21:51 ` Justin Piszcz
@ 2007-11-05 18:35   ` Dan Williams
  2007-11-05 18:35     ` Justin Piszcz
  2007-11-06 23:18     ` Jeff Lessem
  0 siblings, 2 replies; 35+ messages in thread
From: Dan Williams @ 2007-11-05 18:35 UTC (permalink / raw)
To: Justin Piszcz; +Cc: Neil Brown, linux-kernel, linux-raid, BERTRAND Joël

On 11/4/07, Justin Piszcz <jpiszcz@lucidpixels.com> wrote:
>
>
> On Mon, 5 Nov 2007, Neil Brown wrote:
>
> > On Sunday November 4, jpiszcz@lucidpixels.com wrote:
> >> # ps auxww | grep D
> >> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
> >> root 273 0.0 0.0 0 0 ? D Oct21 14:40 [pdflush]
> >> root 274 0.0 0.0 0 0 ? D Oct21 13:00 [pdflush]
> >>
> >> After several days/weeks, this is the second time this has happened, while
> >> doing regular file I/O (decompressing a file), everything on the device
> >> went into D-state.
> >
> > At a guess (I haven't looked closely) I'd say it is the bug that was
> > meant to be fixed by
> >
> >   commit 4ae3f847e49e3787eca91bced31f8fd328d50496
> >
> > except that patch applied badly and needed to be fixed with
> > the following patch (not in git yet).
> > These have been sent to stable@ and should be in the queue for 2.6.23.2
> >
>
> Ah, thanks Neil, will be updating as soon as it is released, thanks.
>

Are you seeing the same "md thread takes 100% of the CPU" that Joël is
reporting?
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 35+ messages in thread
* Re: 2.6.23.1: mdadm/raid5 hung/d-state
  2007-11-05 18:35 ` Dan Williams
@ 2007-11-05 18:35   ` Justin Piszcz
  2007-11-06  0:19     ` Dan Williams
  2007-11-06 23:18     ` Jeff Lessem
  1 sibling, 1 reply; 35+ messages in thread
From: Justin Piszcz @ 2007-11-05 18:35 UTC (permalink / raw)
To: Dan Williams; +Cc: Neil Brown, linux-kernel, linux-raid, BERTRAND Joël

[-- Attachment #1: Type: TEXT/PLAIN, Size: 1294 bytes --]

On Mon, 5 Nov 2007, Dan Williams wrote:

> On 11/4/07, Justin Piszcz <jpiszcz@lucidpixels.com> wrote:
>>
>>
>> On Mon, 5 Nov 2007, Neil Brown wrote:
>>
>>> On Sunday November 4, jpiszcz@lucidpixels.com wrote:
>>>> # ps auxww | grep D
>>>> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
>>>> root 273 0.0 0.0 0 0 ? D Oct21 14:40 [pdflush]
>>>> root 274 0.0 0.0 0 0 ? D Oct21 13:00 [pdflush]
>>>>
>>>> After several days/weeks, this is the second time this has happened, while
>>>> doing regular file I/O (decompressing a file), everything on the device
>>>> went into D-state.
>>>
>>> At a guess (I haven't looked closely) I'd say it is the bug that was
>>> meant to be fixed by
>>>
>>>   commit 4ae3f847e49e3787eca91bced31f8fd328d50496
>>>
>>> except that patch applied badly and needed to be fixed with
>>> the following patch (not in git yet).
>>> These have been sent to stable@ and should be in the queue for 2.6.23.2
>>>
>>
>> Ah, thanks Neil, will be updating as soon as it is released, thanks.
>>
>
> Are you seeing the same "md thread takes 100% of the CPU" that Joël is
> reporting?
>

Yes, in another e-mail I posted the top output with md3_raid5 at 100%.

Justin.

^ permalink raw reply	[flat|nested] 35+ messages in thread
* Re: 2.6.23.1: mdadm/raid5 hung/d-state
  2007-11-05 18:35 ` Justin Piszcz
@ 2007-11-06  0:19   ` Dan Williams
  2007-11-06 10:19     ` BERTRAND Joël
  0 siblings, 1 reply; 35+ messages in thread
From: Dan Williams @ 2007-11-06  0:19 UTC (permalink / raw)
To: Justin Piszcz, BERTRAND Joël; +Cc: Neil Brown, linux-kernel, linux-raid

[-- Attachment #1: Type: text/plain, Size: 963 bytes --]

On 11/5/07, Justin Piszcz <jpiszcz@lucidpixels.com> wrote:
[..]
> > Are you seeing the same "md thread takes 100% of the CPU" that Joël is
> > reporting?
> >
>
> Yes, in another e-mail I posted the top output with md3_raid5 at 100%.
>

This seems too similar to Joël's situation for them not to be
correlated, and it shows that iscsi is not a necessary component of
the failure.

The attached patch allows the debug statements in MD to be enabled via
sysfs. Joël, since it is easier for you to reproduce can you capture
the kernel log output after the raid thread goes into the spin? It
will help if you have CONFIG_PRINTK_TIME=y set in your kernel
configuration. After the failure run:

echo 1 > /sys/block/md_d0/md/debug_print_enable; sleep 5; echo 0 > /sys/block/md_d0/md/debug_print_enable

...to enable the print messages for a few seconds. Please send the
output in a private message if it proves too big for the mailing list.

[-- Attachment #2: raid5-debug-print-enable.patch --]
[-- Type: application/octet-stream, Size: 1805 bytes --]

raid5: debug print enable

From: Dan Williams <dan.j.williams@intel.com>

---

 drivers/md/raid5.c |   36 ++++++++++++++++++++++++++++++++++++
 1 files changed, 36 insertions(+), 0 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 3808f52..496b9a3 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -54,6 +54,10 @@
 #include <linux/raid/bitmap.h>
 #include <linux/async_tx.h>
 
+static int debug_print_enable;
+#undef pr_debug
+#define pr_debug(x...) ((void)(debug_print_enable && printk(x)))
+
 /*
  * Stripe cache
  */
@@ -4023,6 +4027,37 @@ raid5_stripecache_size = __ATTR(stripe_cache_size, S_IRUGO | S_IWUSR,
 				raid5_store_stripe_cache_size);
 
 static ssize_t
+raid5_show_debug_print_enable(mddev_t *mddev, char *page)
+{
+	return sprintf(page, "%d\n", debug_print_enable);
+}
+
+static ssize_t
+raid5_store_debug_print_enable(mddev_t *mddev, const char *page, size_t len)
+{
+	raid5_conf_t *conf = mddev_to_conf(mddev);
+	char *end;
+	int new;
+	if (len >= PAGE_SIZE)
+		return -EINVAL;
+
+	new = simple_strtoul(page, &end, 10);
+	if (!*page || (*end && *end != '\n') )
+		return -EINVAL;
+	if (new < 0 || new > 1)
+		return -EINVAL;
+
+	debug_print_enable = new;
+
+	return len;
+}
+
+static struct md_sysfs_entry
+raid5_debug_print = __ATTR(debug_print_enable, S_IRUGO | S_IWUSR,
+			   raid5_show_debug_print_enable,
+			   raid5_store_debug_print_enable);
+
+static ssize_t
 stripe_cache_active_show(mddev_t *mddev, char *page)
 {
 	raid5_conf_t *conf = mddev_to_conf(mddev);
@@ -4038,6 +4073,7 @@ raid5_stripecache_active = __ATTR_RO(stripe_cache_active);
 
 static struct attribute *raid5_attrs[] = {
 	&raid5_stripecache_size.attr,
 	&raid5_stripecache_active.attr,
+	&raid5_debug_print.attr,
 	NULL,
 };
 static struct attribute_group raid5_attrs_group = {

^ permalink raw reply related	[flat|nested] 35+ messages in thread
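[Editorial sketch, not part of the archived thread.] Dan's capture procedure is the shell one-liner above: write 1 to the sysfs attribute, wait a few seconds, write 0. The same timed pulse can be expressed as a tiny function; the sysfs path is taken from his message and the injectable `sleep` argument is an editorial addition so the logic can be exercised without a real device.

```python
import time

def pulse_attr(path="/sys/block/md_d0/md/debug_print_enable",
               seconds=5.0, sleep=time.sleep):
    """Write '1' to a sysfs attribute, wait, then write '0' again.

    Equivalent to: echo 1 > path; sleep N; echo 0 > path
    """
    with open(path, "w") as f:
        f.write("1")   # enable the raid5 pr_debug() output
    sleep(seconds)
    with open(path, "w") as f:
        f.write("0")   # disable it before the log drowns everything
```

Keeping the window short matters: with the attached patch applied, the per-stripe pr_debug() lines are emitted on every handler pass, and a spinning raid thread generates them continuously.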
* Re: 2.6.23.1: mdadm/raid5 hung/d-state
  2007-11-06  0:19 ` Dan Williams
@ 2007-11-06 10:19   ` BERTRAND Joël
  2007-11-06 11:29     ` Justin Piszcz
  2007-11-07  1:25     ` Dan Williams
  0 siblings, 2 replies; 35+ messages in thread
From: BERTRAND Joël @ 2007-11-06 10:19 UTC (permalink / raw)
To: Dan Williams; +Cc: Justin Piszcz, Neil Brown, linux-kernel, linux-raid

	Done. Here is the obtained output:

[ 1260.967796] for sector 7629696, rmw=0 rcw=0
[ 1260.969314] handling stripe 7629696, state=0x14 cnt=1, pd_idx=2 ops=0:0:0
[ 1260.980606] check 5: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ffcffcc0 written 0000000000000000
[ 1260.994808] check 4: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fdd4e360 written 0000000000000000
[ 1261.009325] check 3: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000
[ 1261.244478] check 2: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000
[ 1261.270821] check 1: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ff517e40 written 0000000000000000
[ 1261.312320] check 0: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fd4cae60 written 0000000000000000
[ 1261.361030] locked=4 uptodate=2 to_read=0 to_write=4 failed=0 failed_num=0
[ 1261.443120] for sector 7629696, rmw=0 rcw=0
[ 1261.453348] handling stripe 7629696, state=0x14 cnt=1, pd_idx=2 ops=0:0:0
[ 1261.491538] check 5: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ffcffcc0 written 0000000000000000
[ 1261.529120] check 4: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fdd4e360 written 0000000000000000
[ 1261.560151] check 3: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000
[ 1261.599180] check 2: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000
[ 1261.637138] check 1: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ff517e40 written 0000000000000000
[ 1261.674502] check 0: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fd4cae60 written 0000000000000000
[ 1261.712589] locked=4 uptodate=2 to_read=0 to_write=4 failed=0 failed_num=0
[ 1261.864338] for sector 7629696, rmw=0 rcw=0
[ 1261.873475] handling stripe 7629696, state=0x14 cnt=1, pd_idx=2 ops=0:0:0
[ 1261.907840] check 5: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ffcffcc0 written 0000000000000000
[ 1261.950770] check 4: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fdd4e360 written 0000000000000000
[ 1261.989003] check 3: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000
[ 1262.019621] check 2: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000
[ 1262.068705] check 1: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ff517e40 written 0000000000000000
[ 1262.113265] check 0: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fd4cae60 written 0000000000000000
[ 1262.150511] locked=4 uptodate=2 to_read=0 to_write=4 failed=0 failed_num=0
[ 1262.171143] for sector 7629696, rmw=0 rcw=0
[ 1262.179142] handling stripe 7629696, state=0x14 cnt=1, pd_idx=2 ops=0:0:0
[ 1262.201905] check 5: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ffcffcc0 written 0000000000000000
[ 1262.252750] check 4: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fdd4e360 written 0000000000000000
[ 1262.289631] check 3: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000
[ 1262.344709] check 2: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000
[ 1262.400411] check 1: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ff517e40 written 0000000000000000
[ 1262.437353] check 0: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fd4cae60 written 0000000000000000
[ 1262.492561] locked=4 uptodate=2 to_read=0 to_write=4 failed=0 failed_num=0
[ 1262.524993] for sector 7629696, rmw=0 rcw=0
[ 1262.533314] handling stripe 7629696, state=0x14 cnt=1, pd_idx=2 ops=0:0:0
[ 1262.561900] check 5: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ffcffcc0 written 0000000000000000
[ 1262.588986] check 4: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fdd4e360 written 0000000000000000
[ 1262.619455] check 3: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000
[ 1262.671006] check 2: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000
[ 1262.709065] check 1: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ff517e40 written 0000000000000000
[ 1262.746904] check 0: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fd4cae60 written 0000000000000000
[ 1262.780203] locked=4 uptodate=2 to_read=0 to_write=4 failed=0 failed_num=0
[ 1262.805941] for sector 7629696, rmw=0 rcw=0
[ 1262.815759] handling stripe 7629696, state=0x14 cnt=1, pd_idx=2 ops=0:0:0
[ 1262.850115] check 5: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ffcffcc0 written 0000000000000000
[ 1262.893254] check 4: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fdd4e360 written 0000000000000000
[ 1262.931227] check 3: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000
[ 1262.979417] check 2: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000
[ 1263.017059] check 1: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ff517e40 written 0000000000000000
[ 1263.067023] check 0: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fd4cae60 written 0000000000000000
[ 1263.104531] locked=4 uptodate=2 to_read=0 to_write=4 failed=0 failed_num=0
[ 1263.452465] for sector 7629696, rmw=0 rcw=0
[ 1263.460875] handling stripe 7629696, state=0x14 cnt=1, pd_idx=2 ops=0:0:0
[ 1263.490828] check 5: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ffcffcc0 written 0000000000000000
[ 1263.518608] check 4: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fdd4e360 written 0000000000000000
[ 1263.555348] check 3: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000
[ 1263.593250] check 2: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000
[ 1263.655904] check 1: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ff517e40 written 0000000000000000
[ 1263.707175] check 0: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fd4cae60 written 0000000000000000
[ 1263.744877] locked=4 uptodate=2 to_read=0 to_write=4 failed=0 failed_num=0
[ 1263.764939] for sector 7629696, rmw=0 rcw=0
[ 1263.773640] handling stripe 7629696, state=0x14 cnt=1, pd_idx=2 ops=0:0:0
[ 1263.802799] check 5: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ffcffcc0 written 0000000000000000
[ 1263.840684] check 4: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fdd4e360 written 0000000000000000
[ 1263.879844] check 3: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000
[ 1263.917788] check 2: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000
[ 1263.963288] check 1: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ff517e40 written 0000000000000000
[ 1264.007020] check 0: state 0x6 toread
0000000000000000 read 0000000000000000 write fffff800fd4cae60 written 0000000000000000 [ 1264.044718] locked=4 uptodate=2 to_read=0 to_write=4 failed=0 failed_num=0 [ 1264.063690] for sector 7629696, rmw=0 rcw=0 [ 1264.071938] handling stripe 7629696, state=0x14 cnt=1, pd_idx=2 ops=0:0:0 [ 1264.100608] check 5: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ffcffcc0 written 0000000000000000 [ 1264.138977] check 4: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fdd4e360 written 0000000000000000 [ 1264.170593] check 3: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000 [ 1264.214718] check 2: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000 [ 1264.259371] check 1: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ff517e40 written 0000000000000000 [ 1264.296140] check 0: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fd4cae60 written 0000000000000000 [ 1264.335221] locked=4 uptodate=2 to_read=0 to_write=4 failed=0 failed_num=0 [ 1264.354767] for sector 7629696, rmw=0 rcw=0 [ 1264.363279] handling stripe 7629696, state=0x14 cnt=1, pd_idx=2 ops=0:0:0 [ 1264.399971] check 5: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ffcffcc0 written 0000000000000000 [ 1264.454607] check 4: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fdd4e360 written 0000000000000000 [ 1264.510498] check 3: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000 [ 1264.548240] check 2: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000 [ 1264.585633] check 1: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ff517e40 written 0000000000000000 [ 1264.622707] check 0: state 0x6 toread 0000000000000000 read 0000000000000000 write 
fffff800fd4cae60 written 0000000000000000 [ 1264.660464] locked=4 uptodate=2 to_read=0 to_write=4 failed=0 failed_num=0 [ 1264.680185] for sector 7629696, rmw=0 rcw=0 [ 1264.688775] handling stripe 7629696, state=0x14 cnt=1, pd_idx=2 ops=0:0:0 [ 1264.717231] check 5: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ffcffcc0 written 0000000000000000 [ 1264.760881] check 4: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fdd4e360 written 0000000000000000 [ 1264.797532] check 3: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000 [ 1264.833996] check 2: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000 [ 1264.870709] check 1: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ff517e40 written 0000000000000000 [ 1264.901594] check 0: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fd4cae60 written 0000000000000000 [ 1264.940015] locked=4 uptodate=2 to_read=0 to_write=4 failed=0 failed_num=0 [ 1264.959415] for sector 7629696, rmw=0 rcw=0 [ 1264.967595] handling stripe 7629696, state=0x14 cnt=1, pd_idx=2 ops=0:0:0 [ 1264.996217] check 5: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ffcffcc0 written 0000000000000000 [ 1265.046572] check 4: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fdd4e360 written 0000000000000000 [ 1265.083599] check 3: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000 [ 1265.109803] check 2: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000 [ 1265.139780] check 1: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ff517e40 written 0000000000000000 [ 1265.170751] check 0: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fd4cae60 written 0000000000000000 [ 1265.517286] 
locked=4 uptodate=2 to_read=0 to_write=4 failed=0 failed_num=0 [ 1265.533341] for sector 7629696, rmw=0 rcw=0 [ 1265.541329] handling stripe 7629696, state=0x14 cnt=1, pd_idx=2 ops=0:0:0 [ 1265.568846] check 5: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ffcffcc0 written 0000000000000000 [ 1265.606657] check 4: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fdd4e360 written 0000000000000000 [ 1265.649175] check 3: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000 [ 1265.685075] check 2: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000 [ 1265.727835] check 1: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ff517e40 written 0000000000000000 [ 1265.764432] check 0: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fd4cae60 written 0000000000000000 [ 1265.806241] locked=4 uptodate=2 to_read=0 to_write=4 failed=0 failed_num=0 [ 1265.825835] for sector 7629696, rmw=0 rcw=0 [ 1265.833817] handling stripe 7629696, state=0x14 cnt=1, pd_idx=2 ops=0:0:0 [ 1265.862460] check 5: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ffcffcc0 written 0000000000000000 [ 1265.899068] check 4: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fdd4e360 written 0000000000000000 [ 1265.941328] check 3: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000 [ 1265.972129] check 2: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000 For information, after crash, I have : Root poulenc:[/sys/block] > cat /proc/mdstat Personalities : [raid1] [raid6] [raid5] [raid4] md_d0 : active raid5 sdc1[0] sdh1[5] sdg1[4] sdf1[3] sde1[2] sdd1[1] 1464725760 blocks level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU] Regards, JKB ^ permalink raw reply [flat|nested] 35+ messages in 
thread
* Re: 2.6.23.1: mdadm/raid5 hung/d-state
2007-11-06 10:19 ` BERTRAND Joël
@ 2007-11-06 11:29 ` Justin Piszcz
2007-11-06 11:39 ` BERTRAND Joël
2007-11-07 1:25 ` Dan Williams
1 sibling, 1 reply; 35+ messages in thread
From: Justin Piszcz @ 2007-11-06 11:29 UTC (permalink / raw)
To: BERTRAND Joël; +Cc: Dan Williams, Neil Brown, linux-kernel, linux-raid

[-- Attachment #1: Type: TEXT/PLAIN, Size: 868 bytes --]

On Tue, 6 Nov 2007, BERTRAND Joël wrote:

> Done. Here is the obtained output:
>
> [ 1265.899068] check 4: state 0x6 toread 0000000000000000 read
> 0000000000000000 write fffff800fdd4e360 written 0000000000000000
> [ 1265.941328] check 3: state 0x1 toread 0000000000000000 read
> 0000000000000000 write 0000000000000000 written 0000000000000000
> [ 1265.972129] check 2: state 0x1 toread 0000000000000000 read
> 0000000000000000 write 0000000000000000 written 0000000000000000
>
> For information, after the crash, I have:
>
> Root poulenc:[/sys/block] > cat /proc/mdstat
> Personalities : [raid1] [raid6] [raid5] [raid4]
> md_d0 : active raid5 sdc1[0] sdh1[5] sdg1[4] sdf1[3] sde1[2] sdd1[1]
> 1464725760 blocks level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
>
> Regards,
>
> JKB

After the crash it is not 'resyncing'?

Justin.

^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: 2.6.23.1: mdadm/raid5 hung/d-state
2007-11-06 11:29 ` Justin Piszcz
@ 2007-11-06 11:39 ` BERTRAND Joël
2007-11-06 11:42 ` Justin Piszcz
0 siblings, 1 reply; 35+ messages in thread
From: BERTRAND Joël @ 2007-11-06 11:39 UTC (permalink / raw)
To: Justin Piszcz; +Cc: Dan Williams, Neil Brown, linux-kernel, linux-raid

Justin Piszcz wrote:
>
> On Tue, 6 Nov 2007, BERTRAND Joël wrote:
>
>> Done. Here is the obtained output:
>>
>> [ 1265.899068] check 4: state 0x6 toread 0000000000000000 read
>> 0000000000000000 write fffff800fdd4e360 written 0000000000000000
>> [ 1265.941328] check 3: state 0x1 toread 0000000000000000 read
>> 0000000000000000 write 0000000000000000 written 0000000000000000
>> [ 1265.972129] check 2: state 0x1 toread 0000000000000000 read
>> 0000000000000000 write 0000000000000000 written 0000000000000000
>>
>> For information, after the crash, I have:
>>
>> Root poulenc:[/sys/block] > cat /proc/mdstat
>> Personalities : [raid1] [raid6] [raid5] [raid4]
>> md_d0 : active raid5 sdc1[0] sdh1[5] sdg1[4] sdf1[3] sde1[2] sdd1[1]
>> 1464725760 blocks level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
>>
>> Regards,
>>
>> JKB
>
> After the crash it is not 'resyncing'?

No, it isn't...

JKB

-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: 2.6.23.1: mdadm/raid5 hung/d-state
2007-11-06 11:39 ` BERTRAND Joël
@ 2007-11-06 11:42 ` Justin Piszcz
2007-11-06 12:20 ` BERTRAND Joël
0 siblings, 1 reply; 35+ messages in thread
From: Justin Piszcz @ 2007-11-06 11:42 UTC (permalink / raw)
To: BERTRAND Joël; +Cc: Dan Williams, Neil Brown, linux-kernel, linux-raid

[-- Attachment #1: Type: TEXT/PLAIN, Size: 2813 bytes --]

On Tue, 6 Nov 2007, BERTRAND Joël wrote:

> Justin Piszcz wrote:
>>
>> On Tue, 6 Nov 2007, BERTRAND Joël wrote:
>>
>>> Done. Here is the obtained output:
>>>
>>> [ 1265.899068] check 4: state 0x6 toread 0000000000000000 read
>>> 0000000000000000 write fffff800fdd4e360 written 0000000000000000
>>> [ 1265.941328] check 3: state 0x1 toread 0000000000000000 read
>>> 0000000000000000 write 0000000000000000 written 0000000000000000
>>> [ 1265.972129] check 2: state 0x1 toread 0000000000000000 read
>>> 0000000000000000 write 0000000000000000 written 0000000000000000
>>>
>>> For information, after the crash, I have:
>>>
>>> Root poulenc:[/sys/block] > cat /proc/mdstat
>>> Personalities : [raid1] [raid6] [raid5] [raid4]
>>> md_d0 : active raid5 sdc1[0] sdh1[5] sdg1[4] sdf1[3] sde1[2] sdd1[1]
>>> 1464725760 blocks level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
>>>
>>> Regards,
>>>
>>> JKB
>>
>> After the crash it is not 'resyncing'?
>
> No, it isn't...
>
> JKB

After any crash/unclean shutdown the RAID should resync; if it doesn't,
that's not good, and I'd suggest running a raid check.

The 'repair' is supposed to clean it; in some cases (md0=swap) it gets
dirty again.

Tue May 8 09:19:54 EDT 2007: Executing RAID health check for /dev/md0...
Tue May 8 09:19:55 EDT 2007: Executing RAID health check for /dev/md1...
Tue May 8 09:19:56 EDT 2007: Executing RAID health check for /dev/md2...
Tue May 8 09:19:57 EDT 2007: Executing RAID health check for /dev/md3...
Tue May 8 10:09:58 EDT 2007: cat /sys/block/md0/md/mismatch_cnt
Tue May 8 10:09:58 EDT 2007: 2176
Tue May 8 10:09:58 EDT 2007: cat /sys/block/md1/md/mismatch_cnt
Tue May 8 10:09:58 EDT 2007: 0
Tue May 8 10:09:58 EDT 2007: cat /sys/block/md2/md/mismatch_cnt
Tue May 8 10:09:58 EDT 2007: 0
Tue May 8 10:09:58 EDT 2007: cat /sys/block/md3/md/mismatch_cnt
Tue May 8 10:09:58 EDT 2007: 0
Tue May 8 10:09:58 EDT 2007: The meta-device /dev/md0 has 2176 mismatched sectors.
Tue May 8 10:09:58 EDT 2007: Executing repair on /dev/md0
Tue May 8 10:09:59 EDT 2007: The meta-device /dev/md1 has no mismatched sectors.
Tue May 8 10:10:00 EDT 2007: The meta-device /dev/md2 has no mismatched sectors.
Tue May 8 10:10:01 EDT 2007: The meta-device /dev/md3 has no mismatched sectors.
Tue May 8 10:20:02 EDT 2007: All devices are clean...
Tue May 8 10:20:02 EDT 2007: cat /sys/block/md0/md/mismatch_cnt
Tue May 8 10:20:02 EDT 2007: 2176
Tue May 8 10:20:02 EDT 2007: cat /sys/block/md1/md/mismatch_cnt
Tue May 8 10:20:02 EDT 2007: 0
Tue May 8 10:20:02 EDT 2007: cat /sys/block/md2/md/mismatch_cnt
Tue May 8 10:20:02 EDT 2007: 0
Tue May 8 10:20:02 EDT 2007: cat /sys/block/md3/md/mismatch_cnt
Tue May 8 10:20:02 EDT 2007: 0

^ permalink raw reply [flat|nested] 35+ messages in thread
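The check/repair cycle that produced the log above boils down to a few sysfs reads and writes. A minimal sketch of that control flow follows; it is not Justin's actual script, and it writes to a throwaway mock directory with a pre-seeded mismatch count so it can run without a real md array (on a real system `MD` would be `/sys/block/md0/md`, and `mismatch_cnt` is only meaningful after a check completes):

```shell
#!/bin/sh
# Sketch of an md health-check cycle: request a "check", read the
# mismatch count, and request a "repair" only if mismatches were found.
MD=$(mktemp -d)/md0            # stand-in for /sys/block/md0/md
mkdir -p "$MD"
echo 2176 > "$MD/mismatch_cnt" # mock: pretend the scrub found mismatches

echo check > "$MD/sync_action" # on a real array this starts a scrub
count=$(cat "$MD/mismatch_cnt")
if [ "$count" -gt 0 ]; then
    echo "The meta-device md0 has $count mismatched sectors."
    echo repair > "$MD/sync_action"   # on a real array: rewrite parity
else
    echo "The meta-device md0 has no mismatched sectors."
fi
```

On a real array the check runs asynchronously, so a script like Justin's must wait for `sync_action` to return to `idle` before trusting `mismatch_cnt`.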
* Re: 2.6.23.1: mdadm/raid5 hung/d-state
2007-11-06 11:42 ` Justin Piszcz
@ 2007-11-06 12:20 ` BERTRAND Joël
0 siblings, 0 replies; 35+ messages in thread
From: BERTRAND Joël @ 2007-11-06 12:20 UTC (permalink / raw)
To: Justin Piszcz; +Cc: Dan Williams, Neil Brown, linux-kernel, linux-raid

Justin Piszcz wrote:
>
> On Tue, 6 Nov 2007, BERTRAND Joël wrote:
>
>> Justin Piszcz wrote:
>>>
>>> On Tue, 6 Nov 2007, BERTRAND Joël wrote:
>>>
>>>> Done. Here is the obtained output:
>>>>
>>>> [ 1265.899068] check 4: state 0x6 toread 0000000000000000 read
>>>> 0000000000000000 write fffff800fdd4e360 written 0000000000000000
>>>> [ 1265.941328] check 3: state 0x1 toread 0000000000000000 read
>>>> 0000000000000000 write 0000000000000000 written 0000000000000000
>>>> [ 1265.972129] check 2: state 0x1 toread 0000000000000000 read
>>>> 0000000000000000 write 0000000000000000 written 0000000000000000
>>>>
>>>> For information, after the crash, I have:
>>>>
>>>> Root poulenc:[/sys/block] > cat /proc/mdstat
>>>> Personalities : [raid1] [raid6] [raid5] [raid4]
>>>> md_d0 : active raid5 sdc1[0] sdh1[5] sdg1[4] sdf1[3] sde1[2] sdd1[1]
>>>> 1464725760 blocks level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
>>>>
>>>> Regards,
>>>>
>>>> JKB
>>>
>>> After the crash it is not 'resyncing'?
>>
>> No, it isn't...
>>
>> JKB
>
> After any crash/unclean shutdown the RAID should resync; if it doesn't,
> that's not good, and I'd suggest running a raid check.
>
> The 'repair' is supposed to clean it; in some cases (md0=swap) it gets
> dirty again.
>
> Tue May 8 09:19:54 EDT 2007: Executing RAID health check for /dev/md0...
> Tue May 8 09:19:55 EDT 2007: Executing RAID health check for /dev/md1...
> Tue May 8 09:19:56 EDT 2007: Executing RAID health check for /dev/md2...
> Tue May 8 09:19:57 EDT 2007: Executing RAID health check for /dev/md3...
[..]

I cannot repair this raid volume. I cannot reboot the server without
sending stop+A. init 6 stops at "INIT:". After reboot, md0 is
resynchronized.

Regards,

JKB

^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: 2.6.23.1: mdadm/raid5 hung/d-state
2007-11-06 10:19 ` BERTRAND Joël
2007-11-06 11:29 ` Justin Piszcz
@ 2007-11-07 1:25 ` Dan Williams
2007-11-07 5:00 ` Jeff Lessem
2007-11-07 11:20 ` BERTRAND Joël
1 sibling, 2 replies; 35+ messages in thread
From: Dan Williams @ 2007-11-07 1:25 UTC (permalink / raw)
To: BERTRAND Joël
Cc: Justin Piszcz, Neil Brown, linux-kernel, linux-raid, Jeff

[-- Attachment #1: Type: text/plain, Size: 4104 bytes --]

On Tue, 2007-11-06 at 03:19 -0700, BERTRAND Joël wrote:
> Done. Here is the obtained output:

Much appreciated.

> [ 1260.969314] handling stripe 7629696, state=0x14 cnt=1, pd_idx=2 ops=0:0:0
> [ 1260.980606] check 5: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ffcffcc0 written 0000000000000000
> [ 1260.994808] check 4: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fdd4e360 written 0000000000000000
> [ 1261.009325] check 3: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000
> [ 1261.244478] check 2: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000
> [ 1261.270821] check 1: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ff517e40 written 0000000000000000
> [ 1261.312320] check 0: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fd4cae60 written 0000000000000000
> [ 1261.361030] locked=4 uptodate=2 to_read=0 to_write=4 failed=0 failed_num=0
> [ 1261.443120] for sector 7629696, rmw=0 rcw=0
[..]

This looks as if the blocks were prepared to be written out, but were
never handled in ops_run_biodrain(), so they remain locked forever. The
operations flags are all clear, which means handle_stripe thinks nothing
else needs to be done.

The following patch, also attached, cleans up cases where the code looks
at sh->ops.pending when it should be looking at the consistent
stack-based snapshot of the operations flags.

---

 drivers/md/raid5.c |   16 +++++++++-------
 1 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 496b9a3..e1a3942 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -693,7 +693,8 @@ ops_run_prexor(struct stripe_head *sh, struct dma_async_tx_descriptor *tx)
 }
 
 static struct dma_async_tx_descriptor *
-ops_run_biodrain(struct stripe_head *sh, struct dma_async_tx_descriptor *tx)
+ops_run_biodrain(struct stripe_head *sh, struct dma_async_tx_descriptor *tx,
+		 unsigned long pending)
 {
 	int disks = sh->disks;
 	int pd_idx = sh->pd_idx, i;
@@ -701,7 +702,7 @@ ops_run_biodrain(struct stripe_head *sh, struct dma_async_tx_descriptor *tx)
 	/* check if prexor is active which means only process blocks
 	 * that are part of a read-modify-write (Wantprexor)
 	 */
-	int prexor = test_bit(STRIPE_OP_PREXOR, &sh->ops.pending);
+	int prexor = test_bit(STRIPE_OP_PREXOR, &pending);
 
 	pr_debug("%s: stripe %llu\n", __FUNCTION__,
 		(unsigned long long)sh->sector);
@@ -778,7 +779,8 @@ static void ops_complete_write(void *stripe_head_ref)
 }
 
 static void
-ops_run_postxor(struct stripe_head *sh, struct dma_async_tx_descriptor *tx)
+ops_run_postxor(struct stripe_head *sh, struct dma_async_tx_descriptor *tx,
+		 unsigned long pending)
 {
 	/* kernel stack size limits the total number of disks */
 	int disks = sh->disks;
@@ -786,7 +788,7 @@ ops_run_postxor(struct stripe_head *sh, struct dma_async_tx_descriptor *tx)
 
 	int count = 0, pd_idx = sh->pd_idx, i;
 	struct page *xor_dest;
-	int prexor = test_bit(STRIPE_OP_PREXOR, &sh->ops.pending);
+	int prexor = test_bit(STRIPE_OP_PREXOR, &pending);
 	unsigned long flags;
 	dma_async_tx_callback callback;
 
@@ -813,7 +815,7 @@ ops_run_postxor(struct stripe_head *sh, struct dma_async_tx_descriptor *tx)
 	}
 
 	/* check whether this postxor is part of a write */
-	callback = test_bit(STRIPE_OP_BIODRAIN, &sh->ops.pending) ?
+	callback = test_bit(STRIPE_OP_BIODRAIN, &pending) ?
 		ops_complete_write : ops_complete_postxor;
 
 	/* 1/ if we prexor'd then the dest is reused as a source
@@ -901,12 +903,12 @@ static void raid5_run_ops(struct stripe_head *sh, unsigned long pending)
 		tx = ops_run_prexor(sh, tx);
 
 	if (test_bit(STRIPE_OP_BIODRAIN, &pending)) {
-		tx = ops_run_biodrain(sh, tx);
+		tx = ops_run_biodrain(sh, tx, pending);
 		overlap_clear++;
 	}
 
 	if (test_bit(STRIPE_OP_POSTXOR, &pending))
-		ops_run_postxor(sh, tx);
+		ops_run_postxor(sh, tx, pending);
 
 	if (test_bit(STRIPE_OP_CHECK, &pending))
 		ops_run_check(sh);

[-- Attachment #2: raid5-fix-unending-write-sequence.patch --]
[-- Type: text/x-patch, Size: 2650 bytes --]

raid5: fix unending write sequence

From: Dan Williams <dan.j.williams@intel.com>

^ permalink raw reply related [flat|nested] 35+ messages in thread
* Re: 2.6.23.1: mdadm/raid5 hung/d-state
2007-11-07 1:25 ` Dan Williams
@ 2007-11-07 5:00 ` Jeff Lessem
2007-11-08 17:45 ` Bill Davidsen
2007-11-08 21:40 ` Carlos Carvalho
1 sibling, 2 replies; 35+ messages in thread
From: Jeff Lessem @ 2007-11-07 5:00 UTC (permalink / raw)
To: Dan Williams
Cc: BERTRAND Joël, Justin Piszcz, Neil Brown, linux-kernel, linux-raid

Dan Williams wrote:
> The following patch, also attached, cleans up cases where the code looks
> at sh->ops.pending when it should be looking at the consistent
> stack-based snapshot of the operations flags.

I tried this patch (against a stock 2.6.23), and it did not work for
me. Not only did I/O to the affected RAID5 & XFS partition stop, but
also I/O to all other disks. I was not able to capture any debugging
information, but I should be able to do that tomorrow when I can hook
a serial console to the machine.

I'm not sure if my problem is identical to these others, as mine only
seems to manifest with RAID5+XFS. The RAID rebuilds with no problem,
and I've not had any problems with RAID5+ext3.

> ---
>
>  drivers/md/raid5.c |   16 +++++++++-------
>  1 files changed, 9 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index 496b9a3..e1a3942 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
[..]

^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: 2.6.23.1: mdadm/raid5 hung/d-state
2007-11-07 5:00 ` Jeff Lessem
@ 2007-11-08 17:45 ` Bill Davidsen
2007-11-08 18:02 ` Dan Williams
1 sibling, 1 reply; 35+ messages in thread
From: Bill Davidsen @ 2007-11-08 17:45 UTC (permalink / raw)
To: Jeff Lessem
Cc: Dan Williams, BERTRAND Joël, Justin Piszcz, Neil Brown, linux-kernel, linux-raid

Jeff Lessem wrote:
> Dan Williams wrote:
>> The following patch, also attached, cleans up cases where the code looks
>> at sh->ops.pending when it should be looking at the consistent
>> stack-based snapshot of the operations flags.
>
> I tried this patch (against a stock 2.6.23), and it did not work for
> me. Not only did I/O to the affected RAID5 & XFS partition stop, but
> also I/O to all other disks. I was not able to capture any debugging
> information, but I should be able to do that tomorrow when I can hook
> a serial console to the machine.

That can't be good! This is worrisome because Joel is giddy with joy
because it fixes his iSCSI problems. I was going to try it with nbd, but
perhaps I'll wait a week or so and see if others have more information.
Applying patches before a holiday weekend is a good way to avoid time
off. :-(

> I'm not sure if my problem is identical to these others, as mine only
> seems to manifest with RAID5+XFS. The RAID rebuilds with no problem,
> and I've not had any problems with RAID5+ext3.

Hopefully it's not the raid which is the issue.

-- 
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979

^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: 2.6.23.1: mdadm/raid5 hung/d-state
2007-11-08 17:45 ` Bill Davidsen
@ 2007-11-08 18:02 ` Dan Williams
2007-11-09 20:36 ` Jeff Lessem
0 siblings, 1 reply; 35+ messages in thread
From: Dan Williams @ 2007-11-08 18:02 UTC (permalink / raw)
To: Bill Davidsen
Cc: Jeff Lessem, BERTRAND Joël, Justin Piszcz, Neil Brown, linux-kernel, linux-raid

On 11/8/07, Bill Davidsen <davidsen@tmr.com> wrote:
> Jeff Lessem wrote:
>> Dan Williams wrote:
>>> The following patch, also attached, cleans up cases where the code looks
>>> at sh->ops.pending when it should be looking at the consistent
>>> stack-based snapshot of the operations flags.
>>
>> I tried this patch (against a stock 2.6.23), and it did not work for
>> me. Not only did I/O to the affected RAID5 & XFS partition stop, but
>> also I/O to all other disks. I was not able to capture any debugging
>> information, but I should be able to do that tomorrow when I can hook
>> a serial console to the machine.
>
> That can't be good! This is worrisome because Joel is giddy with joy
> because it fixes his iSCSI problems. I was going to try it with nbd, but
> perhaps I'll wait a week or so and see if others have more information.
> Applying patches before a holiday weekend is a good way to avoid time
> off. :-(

We need to see more information on the failure that Jeff is seeing,
and whether it goes away with the two known patches applied. He
applied this most recent patch against stock 2.6.23 which means that
the platform was still open to the first biofill flags issue.

--
Dan

^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: 2.6.23.1: mdadm/raid5 hung/d-state 2007-11-08 18:02 ` Dan Williams @ 2007-11-09 20:36 ` Jeff Lessem 0 siblings, 0 replies; 35+ messages in thread
From: Jeff Lessem @ 2007-11-09 20:36 UTC (permalink / raw)
To: Dan Williams
Cc: Bill Davidsen, BERTRAND Joël, Justin Piszcz, Neil Brown, linux-kernel, linux-raid

Dan Williams wrote:
> On 11/8/07, Bill Davidsen <davidsen@tmr.com> wrote:
>> Jeff Lessem wrote:
>>> Dan Williams wrote:
>>>> The following patch, also attached, cleans up cases where the code looks
>>>> at sh->ops.pending when it should be looking at the consistent
>>>> stack-based snapshot of the operations flags.
>>>
>>> I tried this patch (against a stock 2.6.23), and it did not work for
>>> me. Not only did I/O to the effected RAID5 & XFS partition stop, but
>>> also I/O to all other disks. I was not able to capture any debugging
>>> information, but I should be able to do that tomorrow when I can hook
>>> a serial console to the machine.
>>
>> That can't be good! This is worrisome because Joel is giddy with joy
>> because it fixes his iSCSI problems. I was going to try it with nbd, but
>> perhaps I'll wait a week or so and see if others have more information.
>> Applying patches before a holiday weekend is a good way to avoid time
>> off. :-(
>
> We need to see more information on the failure that Jeff is seeing,
> and whether it goes away with the two known patches applied. He
> applied this most recent patch against stock 2.6.23 which means that
> the platform was still open to the first biofill flags issue.

I applied both of the patches. The biofill one did not apply cleanly,
as it was adding biofill to one section and removing it from another,
but it appears that biofill does not need to be removed from a stock
2.6.23 kernel. The second patch applies with a slight offset, but no
errors.

I can report success so far with both patches applied. I created a
1100GB RAID5, formatted it as XFS, and successfully "tar c | tar x"
895GB of data onto it.
I'm also in the process of rsync-ing the 895GB of data from the
(slightly changed) original. In the past, I would always get a hang
within 0-50GB of data transfer.

For each drive in the RAID I also:

echo 128 > /sys/block/"$i"/queue/max_sectors_kb
echo 512 > /sys/block/"$i"/queue/nr_requests
echo 1 > /sys/block/"$i"/device/queue_depth
blockdev --setra 65536 /dev/md3
echo 16384 > /sys/block/md3/md/stripe_cache_size

These changes appear to improve performance, along with a RAID5 chunk
size of 1024k, but these changes alone (without the patches) do not
fix the problem.

^ permalink raw reply	[flat|nested] 35+ messages in thread
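As a rough sanity check on that last setting (our arithmetic, not from the thread): the raid5 stripe cache keeps one 4 KiB page per member disk for each cached stripe head, so a large `stripe_cache_size` on a wide array pins a significant amount of RAM. The helper name below is ours; the 16384-entry, 10-disk figures are the settings quoted above.

```shell
# Estimate RAM pinned by the raid5 stripe cache: one 4 KiB page per
# member disk for each cached stripe head.
stripe_cache_kib() {
    entries=$1; disks=$2
    echo $(( entries * 4 * disks ))
}

stripe_cache_kib 16384 10    # 655360 KiB, i.e. 640 MiB
stripe_cache_kib 256 10      # the default 256-entry cache: 10240 KiB
```

In kernels of this era the stripe cache is a fixed allocation, resized only through sysfs, so that 640 MiB stays pinned until it is lowered again.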
* Re: 2.6.23.1: mdadm/raid5 hung/d-state 2007-11-07 5:00 ` Jeff Lessem 2007-11-08 17:45 ` Bill Davidsen @ 2007-11-08 21:40 ` Carlos Carvalho 2007-11-09 9:14 ` Justin Piszcz 1 sibling, 1 reply; 35+ messages in thread
From: Carlos Carvalho @ 2007-11-08 21:40 UTC (permalink / raw)
To: Jeff Lessem, root
Cc: Dan Williams, BERTRAND Joël, Justin Piszcz, Neil Brown, linux-kernel, linux-raid

Jeff Lessem (Jeff@Lessem.org) wrote on 6 November 2007 22:00:
 >Dan Williams wrote:
 > > The following patch, also attached, cleans up cases where the code looks
 > > at sh->ops.pending when it should be looking at the consistent
 > > stack-based snapshot of the operations flags.
 >
 >I tried this patch (against a stock 2.6.23), and it did not work for
 >me. Not only did I/O to the effected RAID5 & XFS partition stop, but
 >also I/O to all other disks. I was not able to capture any debugging
 >information, but I should be able to do that tomorrow when I can hook
 >a serial console to the machine.
 >
 >I'm not sure if my problem is identical to these others, as mine only
 >seems to manifest with RAID5+XFS. The RAID rebuilds with no problem,
 >and I've not had any problems with RAID5+ext3.

Us too! We're stuck trying to build a disk server with several disks
in a raid5 array, and the rsync from the old machine stops writing to
the new filesystem. It only happens under heavy IO. We can make it
lock without rsync, using 8 simultaneous dd's to the array. All IO
stops, including the resync after a newly created raid or after an
unclean reboot.

We could not trigger the problem with ext3 or reiser3; it only happens
with xfs.

^ permalink raw reply	[flat|nested] 35+ messages in thread
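The "8 simultaneous dd's" load Carlos describes can be sketched as below. The mount point and per-writer size are illustrative placeholders, not taken from his setup; the guard keeps the sketch harmless when the path does not exist.

```shell
# Eight parallel sequential writers against the array, roughly the
# kind of load described above.  TARGET is a placeholder mount point.
TARGET=${TARGET:-/mnt/raid}
if [ -d "$TARGET" ]; then
    for i in 0 1 2 3 4 5 6 7; do
        dd bs=1M count=100000 if=/dev/zero of="$TARGET/f$i" &
    done
    wait    # all eight writers finish, or hang, reproducing the bug
fi

# Aggregate volume of that load: writers x MiB each, reported in GiB.
total_gib() {
    echo $(( $1 * $2 / 1024 ))
}
total_gib 8 100000    # about 781 GiB of sequential writes in flight
```

That is far more dirty data than the page cache can absorb, which is presumably why only sustained heavy IO tickles the stripe-handling race.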
* Re: 2.6.23.1: mdadm/raid5 hung/d-state 2007-11-08 21:40 ` Carlos Carvalho @ 2007-11-09 9:14 ` Justin Piszcz 2007-11-09 14:09 ` Fabiano Silva 0 siblings, 1 reply; 35+ messages in thread
From: Justin Piszcz @ 2007-11-09 9:14 UTC (permalink / raw)
To: Carlos Carvalho
Cc: Jeff Lessem, root, Dan Williams, BERTRAND Joël, Neil Brown, linux-kernel, linux-raid, xfs

On Thu, 8 Nov 2007, Carlos Carvalho wrote:

> Jeff Lessem (Jeff@Lessem.org) wrote on 6 November 2007 22:00:
> >Dan Williams wrote:
> > > The following patch, also attached, cleans up cases where the code looks
> > > at sh->ops.pending when it should be looking at the consistent
> > > stack-based snapshot of the operations flags.
> >
> >I tried this patch (against a stock 2.6.23), and it did not work for
> >me. Not only did I/O to the effected RAID5 & XFS partition stop, but
> >also I/O to all other disks. I was not able to capture any debugging
> >information, but I should be able to do that tomorrow when I can hook
> >a serial console to the machine.
> >
> >I'm not sure if my problem is identical to these others, as mine only
> >seems to manifest with RAID5+XFS. The RAID rebuilds with no problem,
> >and I've not had any problems with RAID5+ext3.
>
> Us too! We're stuck trying to build a disk server with several disks
> in a raid5 array, and the rsync from the old machine stops writing to
> the new filesystem. It only happens under heavy IO. We can make it
> lock without rsync, using 8 simultaneous dd's to the array. All IO
> stops, including the resync after a newly created raid or after an
> unclean reboot.
>
> We could not trigger the problem with ext3 or reiser3; it only happens
> with xfs.
> -
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>

I am including the XFS mailing list as well; can you provide more information to them?
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: 2.6.23.1: mdadm/raid5 hung/d-state 2007-11-09 9:14 ` Justin Piszcz @ 2007-11-09 14:09 ` Fabiano Silva 0 siblings, 0 replies; 35+ messages in thread
From: Fabiano Silva @ 2007-11-09 14:09 UTC (permalink / raw)
To: Justin Piszcz
Cc: Carlos Carvalho, Jeff Lessem, root, Dan Williams, BERTRAND Joël, Neil Brown, linux-kernel, linux-raid, xfs

[-- Attachment #1: Type: text/plain, Size: 2201 bytes --]

On Nov 9, 2007 7:14 AM, Justin Piszcz <jpiszcz@lucidpixels.com> wrote:
>
> On Thu, 8 Nov 2007, Carlos Carvalho wrote:
>
> > Jeff Lessem (Jeff@Lessem.org) wrote on 6 November 2007 22:00:
> > >Dan Williams wrote:
> > > > The following patch, also attached, cleans up cases where the code looks
> > > > at sh->ops.pending when it should be looking at the consistent
> > > > stack-based snapshot of the operations flags.
> > >
> > >I tried this patch (against a stock 2.6.23), and it did not work for
> > >me. Not only did I/O to the effected RAID5 & XFS partition stop, but
> > >also I/O to all other disks. I was not able to capture any debugging
> > >information, but I should be able to do that tomorrow when I can hook
> > >a serial console to the machine.
> > >
> > >I'm not sure if my problem is identical to these others, as mine only
> > >seems to manifest with RAID5+XFS. The RAID rebuilds with no problem,
> > >and I've not had any problems with RAID5+ext3.
> >
> > Us too! We're stuck trying to build a disk server with several disks
> > in a raid5 array, and the rsync from the old machine stops writing to
> > the new filesystem. It only happens under heavy IO. We can make it
> > lock without rsync, using 8 simultaneous dd's to the array. All IO
> > stops, including the resync after a newly created raid or after an
> > unclean reboot.
> >
> > We could not trigger the problem with ext3 or reiser3; it only happens
> > with xfs.

In our case all processes using md4, including md4_resync, stay in D state.
Call Trace:
 [<ffffffff803615ac>] __generic_unplug_device+0x13/0x24
 [<ffffffff803622cf>] generic_unplug_device+0x18/0x28
 [<ffffffff803f2cf7>] get_active_stripe+0x22b/0x472
 ...

see dmesg (sysrq t) attached.

We can reproduce this problem on two machines with the same configuration:
- 2 x Dual-Core Opteron 2.8GHz
- 8GB memory
- 3ware 9000 with 10 x 750GB sata disks
- Debian Etch x86_64
- raid5 + xfs (/dev/md4)

in all of these stock kernels:
- 2.6.22.11, 2.6.22.12, 2.6.23.1, 2.6.24-rc2

running:
- for i in f{0..7}; do (dd bs=1M count=100000 if=/dev/zero of=$i &); done

If we increase /sys/block/md4/md/stripe_cache_size, the device and the processes go back to work.

[-- Attachment #2: dmesg_sysrq_t.txt.gz --]
[-- Type: application/x-gzip, Size: 114342 bytes --]

^ permalink raw reply	[flat|nested] 35+ messages in thread
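A lighter-weight look than a full sysrq-t dump, for anyone chasing the same symptom: `ps` can report the wait channel of D-state tasks directly, and for the hangs in this thread it shows `get_active_stripe`, the same symbol as in the trace above. The field widths and helper name below are our choice, not from the thread.

```shell
# Show uninterruptible (D-state) tasks and the kernel symbol they are
# blocked in, without a serial console:
#
#   ps axo pid,stat,wchan:32,comm | awk '$2 ~ /^D/'
#
# The awk filter itself, demonstrated on canned ps-style output:
d_state() { awk '$2 ~ /^D/'; }

printf '%s\n%s\n' \
    '273 D get_active_stripe pdflush' \
    '274 S -                 pdflush' | d_state
```

This only needs /proc, so it works even when the hung filesystem makes the rest of the machine sluggish; the full sysrq-t dump is still needed to see the complete stacks.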
* Re: 2.6.23.1: mdadm/raid5 hung/d-state 2007-11-07 1:25 ` Dan Williams 2007-11-07 5:00 ` Jeff Lessem @ 2007-11-07 11:20 ` BERTRAND Joël 1 sibling, 0 replies; 35+ messages in thread
From: BERTRAND Joël @ 2007-11-07 11:20 UTC (permalink / raw)
To: Dan Williams; +Cc: Justin Piszcz, Neil Brown, linux-kernel, linux-raid, Jeff

Dan Williams wrote:
> On Tue, 2007-11-06 at 03:19 -0700, BERTRAND Joël wrote:
>> Done. Here is obtained ouput :
>
> Much appreciated.
>
>> [ 1260.969314] handling stripe 7629696, state=0x14 cnt=1, pd_idx=2 ops=0:0:0
>> [ 1260.980606] check 5: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ffcffcc0 written 0000000000000000
>> [ 1260.994808] check 4: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fdd4e360 written 0000000000000000
>> [ 1261.009325] check 3: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000
>> [ 1261.244478] check 2: state 0x1 toread 0000000000000000 read 0000000000000000 write 0000000000000000 written 0000000000000000
>> [ 1261.270821] check 1: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800ff517e40 written 0000000000000000
>> [ 1261.312320] check 0: state 0x6 toread 0000000000000000 read 0000000000000000 write fffff800fd4cae60 written 0000000000000000
>> [ 1261.361030] locked=4 uptodate=2 to_read=0 to_write=4 failed=0 failed_num=0
>> [ 1261.443120] for sector 7629696, rmw=0 rcw=0
> [..]
>
> This looks as if the blocks were prepared to be written out, but were
> never handled in ops_run_biodrain(), so they remain locked forever. The
> operations flags are all clear, which means handle_stripe thinks nothing
> else needs to be done.
>
> The following patch, also attached, cleans up cases where the code looks
> at sh->ops.pending when it should be looking at the consistent
> stack-based snapshot of the operations flags.

Thanks for this patch. I have been testing it for three hours.
I'm rebuilding a 1.5 TB raid1 array over iSCSI without any trouble:

gershwin:[/usr/scripts] > cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md7 : active raid1 sdi1[2] md_d0p1[0]
      1464725632 blocks [2/1] [U_]
      [=>...................]  recovery =  6.7% (99484736/1464725632) finish=1450.9min speed=15679K/sec

Without your patch, I never reached 1%... I hope it fixes this bug,
and I shall report back when my raid1 volume is resynchronized.

Regards,

JKB

^ permalink raw reply	[flat|nested] 35+ messages in thread
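The mdstat figures above are self-consistent, which is a handy sanity check when a resync seems stuck: blocks and speed are both in KiB, so remaining divided by speed gives seconds. The helper below is ours, recomputing md's own estimate.

```shell
# Recompute md's resync ETA from the /proc/mdstat line above.
# Block counts are 1 KiB units; speed is KiB/s.
resync_eta_min() {
    total=$1; done_=$2; speed=$3
    echo $(( (total - done_) / speed / 60 ))
}

resync_eta_min 1464725632 99484736 15679   # ~1451 min, matching finish=1450.9min
```

If that number stops shrinking while the speed field reads 0K/sec, the resync itself has wedged, which is exactly the symptom reported elsewhere in this thread.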
* Re: 2.6.23.1: mdadm/raid5 hung/d-state 2007-11-05 18:35 ` Dan Williams 2007-11-05 18:35 ` Justin Piszcz @ 2007-11-06 23:18 ` Jeff Lessem 1 sibling, 0 replies; 35+ messages in thread
From: Jeff Lessem @ 2007-11-06 23:18 UTC (permalink / raw)
To: Dan Williams
Cc: Justin Piszcz, Neil Brown, linux-kernel, linux-raid, BERTRAND Joël

Dan Williams wrote:
> On 11/4/07, Justin Piszcz <jpiszcz@lucidpixels.com> wrote:
>>
>> On Mon, 5 Nov 2007, Neil Brown wrote:
>>
>>> On Sunday November 4, jpiszcz@lucidpixels.com wrote:
>>>> # ps auxww | grep D
>>>> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
>>>> root 273 0.0 0.0 0 0 ? D Oct21 14:40 [pdflush]
>>>> root 274 0.0 0.0 0 0 ? D Oct21 13:00 [pdflush]
>>>>
>>>> After several days/weeks, this is the second time this has happened, while
>>>> doing regular file I/O (decompressing a file), everything on the device
>>>> went into D-state.
>>>
>>> At a guess (I haven't looked closely) I'd say it is the bug that was
>>> meant to be fixed by
>>>
>>> commit 4ae3f847e49e3787eca91bced31f8fd328d50496
>>>
>>> except that patch applied badly and needed to be fixed with
>>> the following patch (not in git yet).
>>> These have been sent to stable@ and should be in the queue for 2.6.23.2
>>>
>> Ah, thanks Neil, will be updating as soon as it is released, thanks.
>>
>
> Are you seeing the same "md thread takes 100% of the CPU" that Joël is
> reporting?

I'm also seeing something similar, but it only seems to cause a
problem if the file system is xfs. Once I observed the md thread at
100% cpu, but usually the machine is just idle, with processes stuck
in the D state.

The system: Quad Xeon with 4GB of ram running stock 2.6.23, x86_64.
Drives are attached to an Adaptec AIC-9410W with aic94xx driver 1.0.3
and firmware 1.1 (V17/10c6). Unfortunately I can't try earlier
kernels, because the aic94xx driver didn't support SATA disks until
2.6.23.

I have 4 750GB drives in a RAID5 with LVM and an LV formatted ext3
that works without problem.
I can't do extensive testing on those drives because they contain
important data. I also have 4 400GB drives attached to the same
controller. I created 10GB partitions on the 4 drives and a 30GB
RAID5 across the drives. Formatting this RAID as XFS and then running
bonnie++ on it causes a hang (stack trace at the bottom of this
message). Rebooting, letting the RAID resync, and reformatting the
partition as ext3 allows bonnie++ to complete successfully. bonnie++
completes successfully on an xfs formatted non-RAID partition on one
of the drives. bonnie++ completes successfully on an xfs formatted
RAID0 across the 4 drives.

I should be able to provide any additional debugging information. I
can also test any patches, either the previous one from this thread,
or new ones.

The following stack trace was captured after the RAID5 hung. The hung
RAID is md3:

SysRq : Show State task PC stack pid father init S 0000000000000000 0 1 0 ffff81012fc47a18 0000000000000082 0000000000000000 ffff81012c73e9c0 ffff81012c8b6478 ffff81012fc44000 ffff81012fcc2000 ffff81012fc44208 0000000300001000 ffff81012fc47a28 00000000ffffffff 000000010030fe11 Call Trace: [<ffffffff80448a20>] schedule_timeout+0x8a/0xad [<ffffffff80237ba4>] process_timeout+0x0/0x5 [<ffffffff80284031>] do_select+0x3fc/0x45f [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff80259d96>] generic_file_buffered_write+0x575/0x695 [<ffffffff80287d9d>] __d_lookup+0xb0/0x100 [<ffffffff8027ec7d>] do_lookup+0x63/0x1ae [<ffffffff80287198>] dput+0x1c/0x10b [<ffffffff80280eb1>] __link_path_walk+0xbb7/0xd0c [<ffffffff8028bba8>] mntput_no_expire+0x1c/0x80 [<ffffffff802810d4>] link_path_walk+0xce/0xe0 [<ffffffff80284250>] core_sys_select+0x1bc/0x265 [<ffffffff8028bba8>] mntput_no_expire+0x1c/0x80 [<ffffffff8027ada4>] cp_new_stat+0xe7/0xff [<ffffffff80284755>] sys_select+0x15a/0x183 [<ffffffff8020bdee>] system_call+0x7e/0x83 kthreadd S 0000000000000000 0 2 0 ffff81012fc4bf20 0000000000000046
0000000000000000 0000000000000001 ffff81011598bb98 ffff81012fc44720 ffffffff805354c0 ffff81012fc44928 0000000000000000 ffff81011598bb90 00000000ffffffff 0000000000000286 Call Trace: [<ffffffff802410c4>] kthreadd+0x73/0x12e [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff803075b0>] acpi_ds_init_one_object+0x0/0x7c [<ffffffff80241051>] kthreadd+0x0/0x12e [<ffffffff8020cbfe>] child_rip+0x0/0x12 migration/0 S 0000000000000000 0 3 2 ffff81012fc4feb0 0000000000000046 0000000000000001 0000000000000001 ffff8100a4e67e90 ffff81012fc44e40 ffff810129824000 ffff81012fc45048 0000000000000000 ffff8100a4e67e88 ffff8100a4e67e90 0000000000000286 Call Trace: [<ffffffff8022c553>] migration_thread+0x185/0x21d [<ffffffff8022c3ce>] migration_thread+0x0/0x21d [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 ksoftirqd/0 S 0000000000000000 0 4 2 ffff81012fc51f10 0000000000000046 0000000000000000 ffff81012fc45770 0000000000000001 ffff81012fc45560 ffffffff805354c0 ffff81012fc45768 000000002fc44720 ffff81012ea47560 00000000ffffffff 0000000000000000 Call Trace: [<ffffffff80234540>] ksoftirqd+0x0/0x9b [<ffffffff80234557>] ksoftirqd+0x17/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 migration/1 S 0000000000000001 0 5 2 ffff81012fc55eb0 0000000000000046 0000000000000001 0000000000000001 ffff8100a49e9e90 ffff81012fc52000 ffff810129a82e40 ffff81012fc52208 0000000100000000 ffff8100a49e9e88 ffff8100a49e9e90 0000000000000286 Call Trace: [<ffffffff8022c553>] migration_thread+0x185/0x21d [<ffffffff8022c3ce>] migration_thread+0x0/0x21d [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 ksoftirqd/1 S 0000000000000000 0 6 2 ffff81012fc5df10 0000000000000046 0000000000000000 
ffff81012fc52930 0000000100000001 ffff81012fc52720 ffff81012fc52e40 ffff81012fc52928 00000001ffffffff ffff81012ea47560 00000000ffffffff 0000000000000001 Call Trace: [<ffffffff80234540>] ksoftirqd+0x0/0x9b [<ffffffff80234557>] ksoftirqd+0x17/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 migration/2 S 0000000000000002 0 7 2 ffff81012fc87eb0 0000000000000046 0000000000000001 0000000000000001 ffff8100a489be90 ffff81012fc53560 ffff810129824000 ffff81012fc53768 0000000200000000 ffff8100a489be88 ffff8100a489be90 0000000000000286 Call Trace: [<ffffffff8022c553>] migration_thread+0x185/0x21d [<ffffffff8022c3ce>] migration_thread+0x0/0x21d [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 ksoftirqd/2 S 0000000000000000 0 8 2 ffff81012fc91f10 0000000000000046 0000000000000000 ffff81012fc8a210 0000000200000001 ffff81012fc8a000 ffff81012fc8a720 ffff81012fc8a208 000000022fc44720 ffff81012ea47560 00000000ffffffff 0000000000000002 Call Trace: [<ffffffff80234540>] ksoftirqd+0x0/0x9b [<ffffffff80234557>] ksoftirqd+0x17/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 migration/3 S 0000000000000000 0 9 2 ffff81012fcb9eb0 0000000000000046 0000000000000000 0000000000000001 ffff8100a49e7e90 ffff81012fc8ae40 ffff81012fcc2000 ffff81012fc8b048 0000000300000000 ffff8100a49e7e88 00000000ffffffff 0000000000000286 Call Trace: [<ffffffff8022c553>] migration_thread+0x185/0x21d [<ffffffff8022c3ce>] migration_thread+0x0/0x21d [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 ksoftirqd/3 S 0000000000000003 0 10 2 ffff81012fcc1f10 0000000000000046 ffff81012d4cd560 
ffff81012fc8b770 0000000300000001 ffff81012fc8b560 ffff81012ea47560 ffff81012fc8b768 000000032fc44720 ffff81012ea47560 00000000ffffffff 0000000000000003 Call Trace: [<ffffffff80234540>] ksoftirqd+0x0/0x9b [<ffffffff80234557>] ksoftirqd+0x17/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 events/0 S 0000000000000000 0 11 2 ffff81012fceded0 0000000000000046 0000000000000000 ffffffff80237ddc ffffffff802147e3 ffff81012fcc2720 ffffffff805354c0 ffff81012fcc2928 00000000ffffffff 00000000000000fa 00000000ffffffff ffffffff8023e5fe Call Trace: [<ffffffff80237ddc>] __mod_timer+0xb6/0xc4 [<ffffffff802147e3>] mcheck_timer+0x0/0x7c [<ffffffff8023e5fe>] queue_delayed_work_on+0xae/0xbe [<ffffffff80261c6a>] vmstat_update+0x0/0x32 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 events/1 S 0000000000000000 0 12 2 ffff81012fcf1ed0 0000000000000046 0000000000000000 ffffffff80237ddc ffff81012c74a9a0 ffff81012fcc2e40 ffff81012fc52e40 ffff81012fcc3048 00000001ffffffff 00000000000000fa 00000000ffffffff ffffffff8023e5fe Call Trace: [<ffffffff80237ddc>] __mod_timer+0xb6/0xc4 [<ffffffff8023e5fe>] queue_delayed_work_on+0xae/0xbe [<ffffffff80261c6a>] vmstat_update+0x0/0x32 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 events/2 S 0000000000000000 0 13 2 ffff81012fcf3ed0 0000000000000046 0000000000000000 ffffffff80237ddc ffff81012d73cec8 ffff81012fcc3560 ffff81012fc8a720 
ffff81012fcc3768 00000002ffffffff 00000000000000fa 00000000ffffffff ffffffff8023e5fe Call Trace: [<ffffffff80237ddc>] __mod_timer+0xb6/0xc4 [<ffffffff8023e5fe>] queue_delayed_work_on+0xae/0xbe [<ffffffff80261c6a>] vmstat_update+0x0/0x32 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 events/3 S 0000000000000000 0 14 2 ffff81012fcf7ed0 0000000000000046 0000000000000000 ffffffff80237ddc 0000000000000286 ffff81012fcf4000 ffff81012fcc2000 ffff81012fcf4208 00000003ffffffff 00000000000000fa 00000000ffffffff ffffffff8023e5fe Call Trace: [<ffffffff80237ddc>] __mod_timer+0xb6/0xc4 [<ffffffff8023e5fe>] queue_delayed_work_on+0xae/0xbe [<ffffffff80261c6a>] vmstat_update+0x0/0x32 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 khelper S 0000000000000002 0 15 2 ffff81012fcfbed0 0000000000000046 0000000000001351 ffff81012b6bc188 0000000000000611 ffff81012fcf4720 ffff810129d44e40 ffff81012fcf4928 000000028020cbfe 0000000000000010 0000000000000200 0000000000000000 Call Trace: [<ffffffff8023d7f4>] __call_usermodehelper+0x41/0x61 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 kblockd/0 S 0000000000000000 0 54 2 ffff81012fd83ed0 0000000000000046 0000000000000000 ffffffff802ec753 ffff81012ee54000 ffff81012fd80000 ffffffff805354c0 
ffff81012fd80208 000000002ee54000 ffffffff802e8fbd 00000000ffffffff ffffffffffffffff Call Trace: [<ffffffff802ec753>] kobject_get+0x12/0x17 [<ffffffff802e8fbd>] cfq_kick_queue+0x0/0x35 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 kblockd/1 S 0000000000000001 0 55 2 ffff81012fd85ed0 0000000000000046 ffff81012ec9e1d8 ffffffff802ec753 ffff81012ee54000 ffff81012fd80720 ffff810129824000 ffff81012fd80928 000000012ee54000 ffffffff802e8fbd 0000000000000286 ffffffffffffffff Call Trace: [<ffffffff802ec753>] kobject_get+0x12/0x17 [<ffffffff802e8fbd>] cfq_kick_queue+0x0/0x35 [<ffffffff802e8fdf>] cfq_kick_queue+0x22/0x35 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 kblockd/2 S 0000000000000000 0 56 2 ffff81012fd89ed0 0000000000000046 0000000000000000 ffffffff802ec753 ffff81012e7505e0 ffff81012fd80e40 ffff81012fc8a720 ffff81012fd81048 000000022e7505e0 ffffffff802e8fbd 00000000ffffffff ffffffffffffffff Call Trace: [<ffffffff802ec753>] kobject_get+0x12/0x17 [<ffffffff802e8fbd>] cfq_kick_queue+0x0/0x35 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 kblockd/3 S 0000000000000003 0 57 2 ffff81012fd8bed0 0000000000000046 ffff81012e8fd1d8 ffffffff802ec753 ffff81012ee551a0 ffff81012fd81560 ffff81012ec75560 ffff81012fd81768 
000000032ee551a0 ffffffff802e8fbd 0000000000000286 ffffffffffffffff Call Trace: [<ffffffff802ec753>] kobject_get+0x12/0x17 [<ffffffff802e8fbd>] cfq_kick_queue+0x0/0x35 [<ffffffff802e8fdf>] cfq_kick_queue+0x22/0x35 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 kacpid S 0000000000000000 0 59 2 ffff81012fd97ed0 0000000000000046 ffff81012fd97e60 0000000000000000 ffff81012fd90720 ffff81012fd90720 ffff81012fd90e40 ffff81012fd90928 0000000000000000 ffffffff80448110 ffff81012fd97f20 0000000000000046 Call Trace: [<ffffffff80448110>] thread_return+0x0/0xf9 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 kacpi_notify S 0000000000000000 0 60 2 ffff81012fd99ed0 0000000000000046 ffff81012fd99e60 0000000000000000 ffff81012fd90e40 ffff81012fd90e40 ffff81012fd91560 ffff81012fd91048 0000000000000000 ffffffff80448110 ffff81012fd99f20 0000000000000046 Call Trace: [<ffffffff80448110>] thread_return+0x0/0xf9 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 ata/0 S 0000000000000000 0 189 2 ffff81012fd9fed0 0000000000000046 0000000000000000 000000005800176d ffff81012fd9fee0 ffff81012fda3560 ffffffff805354c0 ffff81012fda3768 0000000000000004 ffff81012ecd69e0 00000000ffffffff ffffffff805dd920 Call Trace: [<ffffffff8023e6c7>] 
worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 ata/1 S 0000000000000000 0 190 2 ffff81012fd4bed0 0000000000000046 0000000000000000 00000000580017a9 ffff81012fd4bee0 ffff81012fd48000 ffff81012fc52e40 ffff81012fd48208 0000000100000004 ffff81012ecd69e0 00000000ffffffff ffffffff805dd920 Call Trace: [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 ata/2 S 0000000000000000 0 191 2 ffff81012fd53ed0 0000000000000046 0000000000000000 00000000580016c8 ffff81012fd53ee0 ffff81012fd48720 ffff81012fc8a720 ffff81012fd48928 0000000200000004 ffff81012ecd69e0 00000000ffffffff ffffffff805dd920 Call Trace: [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 ata/3 S 0000000000000000 0 192 2 ffff81012fd11ed0 0000000000000046 0000000000000000 00000000580017ea ffff81012fd11ee0 ffff81012fd48e40 ffff81012fcc2000 ffff81012fd49048 0000000300000004 ffff81012ecd4ae0 00000000ffffffff ffffffff805dd920 Call Trace: [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 ata_aux S 0000000000000000 0 193 2 ffff81012fd13ed0 0000000000000046 
  ffff81012fd13e60 0000000000000000 ffff81012fd49560 ffff81012fd49560 ffff81012fd36000 ffff81012fd49768 0000000000000000 ffffffff80448110 ffff81012fd13f20 0000000000000046
Call Trace:
  [<ffffffff80448110>] thread_return+0x0/0xf9 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
ksuspend_usbd S 0000000000000000 0 194 2
  ffff81012fd05ed0 0000000000000046 ffff81012fd05e60 0000000000000000 ffff81012fd36000 ffff81012fd36000 ffff81012fd36720 ffff81012fd36208 0000000000000000 ffffffff80448110 ffff81012fd05f20 0000000000000046
Call Trace:
  [<ffffffff80448110>] thread_return+0x0/0xf9 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
khubd S 0000000000000000 0 200 2
  ffff81012fe03e40 0000000000000046 0000000000000000 ffffffff80228f46 ffff81012ea8c000 ffff81012fe14e40 ffff81012fcc2000 ffff81012fe15048 0000000300000003 ffff81012ea8c000 00000000ffffffff ffffffff8044985b
Call Trace:
  [<ffffffff80228f46>] __wake_up+0x38/0x4e [<ffffffff8044985b>] __up_wakeup+0x35/0x67 [<ffffffff80393d35>] hub_thread+0xb33/0xb93 [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff80393202>] hub_thread+0x0/0xb93 [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
kseriod S 0000000000000000 0 203 2
  ffff81012fdefed0 0000000000000046 0000000000000000 ffffffff80446d57 0000000000000000 ffff81012fdea720 ffffffff805354c0 ffff81012fdea928 00000000805dd920 ffffffff80345f1d 00000000ffffffff ffffffff80346044
Call Trace:
  [<ffffffff80446d57>] klist_next+0x2d/0x83 [<ffffffff80345f1d>] next_device+0x9/0x1f [<ffffffff80346044>] bus_for_each_dev+0x61/0x6e [<ffffffff803a4a0d>] serio_thread+0x2ac/0x2e3 [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff803a4761>] serio_thread+0x0/0x2e3 [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
pdflush D 0000000000000000 0 261 2
  ffff81012ecc7930 0000000000000046 ffff81012ecc78f8 0000000000000001 0000000000000082 ffff81012ecbf560 ffff81012fc52e40 ffff81012ecbf768 000000012cb33800 0000000000000046 00000000ffffffff ffffffff802e27e0
Call Trace:
  [<ffffffff802e27e0>] __generic_unplug_device+0x13/0x24 [<ffffffff803b12cf>] get_active_stripe+0x22f/0x4ca [<ffffffff80228f46>] __wake_up+0x38/0x4e [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff803b6b3b>] make_request+0x3f3/0x577 [<ffffffff8025acf6>] mempool_alloc+0x24/0xda [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff802e1728>] generic_make_request+0x1be/0x1f5 [<ffffffff802e3caf>] submit_bio+0xb4/0xbb [<ffffffff80298e2f>] __bio_add_page+0x109/0x1b9 [<ffffffff8827dcb4>] :xfs:xfs_submit_ioend_bio+0x1e/0x27 [<ffffffff8827e0ce>] :xfs:xfs_submit_ioend+0x88/0xc6 [<ffffffff8827ef51>] :xfs:xfs_page_state_convert+0x51e/0x56d [<ffffffff8827f0f2>] :xfs:xfs_vm_writepage+0xa7/0xe1 [<ffffffff8025d022>] __writepage+0xa/0x23 [<ffffffff8025d552>] write_cache_pages+0x176/0x2a3 [<ffffffff8025d018>] __writepage+0x0/0x23 [<ffffffff8025d6bb>] do_writepages+0x20/0x2d [<ffffffff80291fd1>] __writeback_single_inode+0x1d6/0x3a7 [<ffffffff80228561>] update_curr+0xdf/0xfe [<ffffffff802924fa>] sync_sb_inodes+0x1cb/0x2af [<ffffffff80292961>] writeback_inodes+0x7d/0xd3 [<ffffffff8025dafb>] background_writeout+0x84/0xb7 [<ffffffff8025df26>] pdflush+0x0/0x1d8 [<ffffffff8025e054>] pdflush+0x12e/0x1d8 [<ffffffff8025da77>] background_writeout+0x0/0xb7 [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
pdflush D 0000000000000000 0 262 2
  ffff81012eccbdb0 0000000000000046 0000000000000000 ffff81012ecc8000 0000000000000286 ffff81012ecc8000 ffffffff805354c0 ffff81012ecc8208 000000002eccbe90 ffff81012eccbdc0 00000000ffffffff 000000010030fe0d
Call Trace:
  [<ffffffff80448a20>] schedule_timeout+0x8a/0xad [<ffffffff80237ba4>] process_timeout+0x0/0x5 [<ffffffff804483fe>] io_schedule_timeout+0x28/0x33 [<ffffffff802622f4>] congestion_wait+0x66/0x80 [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff80292987>] writeback_inodes+0xa3/0xd3 [<ffffffff8025dc1c>] wb_kupdate+0xba/0x111 [<ffffffff8025df26>] pdflush+0x0/0x1d8 [<ffffffff8025e054>] pdflush+0x12e/0x1d8 [<ffffffff8025db62>] wb_kupdate+0x0/0x111 [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
kswapd0 S 0000000000000000 0 263 2
  ffff81012eccfe20 0000000000000046 0000000000000000 ffffffff8022bc95 ffff81012eccfde0 ffff81012ecc8720 ffff81012fcc2000 ffff81012ecc8928 0000000300000001 ffff81012eccfe10 00000000ffffffff ffffffff80228561
Call Trace:
  [<ffffffff8022bc95>] set_cpus_allowed+0xa5/0xb2 [<ffffffff80228561>] update_curr+0xdf/0xfe [<ffffffff80260c38>] kswapd+0x0/0x429 [<ffffffff80260d0b>] kswapd+0xd3/0x429 [<ffffffff80448110>] thread_return+0x0/0xf9 [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff80260c38>] kswapd+0x0/0x429 [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
aio/0 S 0000000000000000 0 318 2
  ffff81012ed81ed0 0000000000000046 0000000000000000 0000000000000000 ffff81012ecc8e40 ffff81012ecc8e40 ffffffff805354c0 ffff81012ecc9048 0000000000000000 ffffffff80448110 00000000ffffffff 0000000000000046
Call Trace:
  [<ffffffff80448110>] thread_return+0x0/0xf9 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
aio/1 S 0000000000000000 0 319 2
  ffff81012ed83ed0 0000000000000046 0000000000000000 0000000000000000 ffff81012ecc9560 ffff81012ecc9560 ffff81012fc52e40 ffff81012ecc9768 0000000100000000 ffffffff80448110 00000000ffffffff 0000000000000046
Call Trace:
  [<ffffffff80448110>] thread_return+0x0/0xf9 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
aio/2 S 0000000000000000 0 320 2
  ffff81012ed89ed0 0000000000000046 0000000000000000 0000000000000000 ffff81012ed86000 ffff81012ed86000 ffff81012fc8a720 ffff81012ed86208 0000000200000000 ffffffff80448110 00000000ffffffff 0000000000000046
Call Trace:
  [<ffffffff80448110>] thread_return+0x0/0xf9 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
aio/3 S 0000000000000000 0 321 2
  ffff81012ed8bed0 0000000000000046 0000000000000000 0000000000000000 ffff81012ed86720 ffff81012ed86720 ffff81012fcc2000 ffff81012ed86928 000000032ecae000 ffffffff80448110 00000000ffffffff 0000000000000046
Call Trace:
  [<ffffffff80448110>] thread_return+0x0/0xf9 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
scsi_eh_0 S 0000000000000000 0 497 2
  ffff81012ecb9e80 0000000000000046 ffff81012ecb9e48 0000000000000003 ffff81012ecb9e40 ffff81012ed86e40 ffffffff805354c0 ffff81012ed87048 0000000000000000 0000000000000001 00000000ffffffff ffffffff8035db5d
Call Trace:
  [<ffffffff8035db5d>] __scsi_iterate_devices+0x56/0x6f [<ffffffff803619ad>] scsi_error_handler+0x59/0x4b7 [<ffffffff80361954>] scsi_error_handler+0x0/0x4b7 [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
scsi_eh_1 S 0000000000000000 0 499 2
  ffff81012ecb3e80 0000000000000046 0000000000000000 0000000000000003 ffff81012ecb3e40 ffff81012ed87560 ffff81012fc52e40 ffff81012ed87768 0000000100000000 0000000000000001 00000000ffffffff ffffffff8035db5d
Call Trace:
  [<ffffffff8035db5d>] __scsi_iterate_devices+0x56/0x6f [<ffffffff803619ad>] scsi_error_handler+0x59/0x4b7 [<ffffffff80361954>] scsi_error_handler+0x0/0x4b7 [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
scsi_eh_2 S 0000000000000003 0 501 2
  ffff81012ee25e80 0000000000000046 0000000000000082 0000000000000003 ffff81012ee25e40 ffff81012fdeb560 ffff81012fc44000 ffff81012fdeb768 0000000300000000 0000000000000001 0000000000000246 ffffffff8035db5d
Call Trace:
  [<ffffffff8035db5d>] __scsi_iterate_devices+0x56/0x6f [<ffffffff803619ad>] scsi_error_handler+0x59/0x4b7 [<ffffffff80361954>] scsi_error_handler+0x0/0x4b7 [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
scsi_eh_3 S 0000000000000003 0 503 2
  ffff81012ed93e80 0000000000000046 0000000000000082 0000000000000003 ffff81012ed93e40 ffff81012fdeae40 ffff81012fc44000 ffff81012fdeb048 0000000300000000 0000000000000001 0000000000000246 ffffffff8035db5d
Call Trace:
  [<ffffffff8035db5d>] __scsi_iterate_devices+0x56/0x6f [<ffffffff803619ad>] scsi_error_handler+0x59/0x4b7 [<ffffffff80361954>] scsi_error_handler+0x0/0x4b7 [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
scsi_eh_4 S 0000000000000003 0 505 2
  ffff81012eca7e80 0000000000000046 0000000000000082 0000000000000003 ffff81012eca7e40 ffff81012fdea000 ffff81012fc44000 ffff81012fdea208 0000000300000000 0000000000000001 0000000000000246 ffffffff8035db5d
Call Trace:
  [<ffffffff8035db5d>] __scsi_iterate_devices+0x56/0x6f [<ffffffff803619ad>] scsi_error_handler+0x59/0x4b7 [<ffffffff80361954>] scsi_error_handler+0x0/0x4b7 [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
scsi_eh_5 S 0000000000000003 0 507 2
  ffff81012ec97e80 0000000000000046 0000000000000082 0000000000000003 ffff81012ec97e40 ffff81012fd36720 ffff81012fc44000 ffff81012fd36928 0000000300000000 0000000000000001 0000000000000246 ffffffff8035db5d
Call Trace:
  [<ffffffff8035db5d>] __scsi_iterate_devices+0x56/0x6f [<ffffffff803619ad>] scsi_error_handler+0x59/0x4b7 [<ffffffff80361954>] scsi_error_handler+0x0/0x4b7 [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
scsi_eh_6 S 0000000000000000 0 524 2
  ffff81012ecd3e80 0000000000000046 0000000000000000 0000000000000003 ffff81012ec9f9d8 ffff81012fd37560 ffffffff805354c0 ffff81012fd37768 0000000000000202 0000000000000000 00000000ffffffff ffffffff8035db6a
Call Trace:
  [<ffffffff8035db6a>] __scsi_iterate_devices+0x63/0x6f [<ffffffff803619ad>] scsi_error_handler+0x59/0x4b7 [<ffffffff80361954>] scsi_error_handler+0x0/0x4b7 [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
scsi_eh_7 S 0000000000000000 0 526 2
  ffff81012ede5e80 0000000000000046 0000000000000000 0000000000000003 ffff81012ede5e40 ffff81012fd36e40 ffff81012fcc2000 ffff81012fd37048 0000000300000000 0000000000000001 00000000ffffffff ffffffff8035db5d
Call Trace:
  [<ffffffff8035db5d>] __scsi_iterate_devices+0x56/0x6f [<ffffffff803619ad>] scsi_error_handler+0x59/0x4b7 [<ffffffff80361954>] scsi_error_handler+0x0/0x4b7 [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
kcryptd/0 S 0000000000000000 0 544 2
  ffff81012edfded0 0000000000000046 0000000000000000 0000000000000000 ffff81012fda2e40 ffff81012fda2e40 ffffffff805354c0 ffff81012fda3048 0000000000000000 ffffffff80448110 00000000ffffffff 0000000000000046
Call Trace:
  [<ffffffff80448110>] thread_return+0x0/0xf9 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
kcryptd/1 S 0000000000000000 0 545 2
  ffff81012ed6bed0 0000000000000046 0000000000000000 0000000000000000 ffff81012fda2720 ffff81012fda2720 ffff81012fc52e40 ffff81012fda2928 0000000100000000 ffffffff80448110 00000000ffffffff 0000000000000046
Call Trace:
  [<ffffffff80448110>] thread_return+0x0/0xf9 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
kcryptd/2 S 0000000000000000 0 546 2
  ffff81012ee59ed0 0000000000000046 0000000000000000 0000000000000000 ffff81012fda2000 ffff81012fda2000 ffff81012fc8a720 ffff81012fda2208 0000000200000000 ffffffff80448110 00000000ffffffff 0000000000000046
Call Trace:
  [<ffffffff80448110>] thread_return+0x0/0xf9 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
kcryptd/3 S 0000000000000000 0 547 2
  ffff81012ecdfed0 0000000000000046 0000000000000000 0000000000000000 ffff81012fd90000 ffff81012fd90000 ffff81012fcc2000 ffff81012fd90208 0000000300000000 ffffffff80448110 00000000ffffffff 0000000000000046
Call Trace:
  [<ffffffff80448110>] thread_return+0x0/0xf9 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
ksnapd S 0000000000000000 0 548 2
  ffff81012ed1bed0 0000000000000046 0000000000000000 0000000000000000 ffff81012fd91560 ffff81012fd91560 ffff81012fcc2000 ffff81012fd91768 0000000300000000 ffffffff80448110 00000000ffffffff 0000000000000046
Call Trace:
  [<ffffffff80448110>] thread_return+0x0/0xf9 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
scsi_eh_8 S 0000000000000003 0 1093 2
  ffff81012ebb3e80 0000000000000046 0000000000000001 ffff81012ec74768 ffff8100052c0b80 ffff81012ec74720 ffff81012e645560 ffff81012ec74928 000000032ebb3e50 ffff81012ec74720 ffff81012ec74720 ffffffffffffffff
Call Trace:
  [<ffffffff803619ad>] scsi_error_handler+0x59/0x4b7 [<ffffffff80361954>] scsi_error_handler+0x0/0x4b7 [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
scsi_wq_8 S 0000000000000003 0 1094 2
  ffff81012e67fed0 0000000000000046 0000000000000286 ffffffff8036bbf9 ffff81012eba7a00 ffff81012e645560 ffff81012e8eee40 ffff81012e645768 000000032e6c4154 ffffffff8036f154 ffff81012e6c0f30 ffff81012e993180
Call Trace:
  [<ffffffff8036bbf9>] sas_rphy_add+0x133/0x13f [<ffffffff8036f154>] sas_discover_domain+0x344/0x3fc [<ffffffff8036ee10>] sas_discover_domain+0x0/0x3fc [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
scsi_eh_9 S 0000000000000001 0 1665 2
  ffff81012e4a1e80 0000000000000046 0000000000000001 ffff81012ea46e88 ffff8100052aeb80 ffff81012ea46e40 ffff81012ea47560 ffff81012ea47048 000000012e4a1e50 ffff81012ea46e40 ffff81012ea46e40 ffffffffffffffff
Call Trace:
  [<ffffffff803619ad>] scsi_error_handler+0x59/0x4b7 [<ffffffff80361954>] scsi_error_handler+0x0/0x4b7 [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
usb-storage S 0000000000000000 0 1667 2
  ffff81012e681e20 0000000000000046 ffff81012e58fca0 ffffffff8806ea5f 0000000000000000 ffff81012ea47560 ffff81012fc45560 ffff81012ea47768 00000000000002d5 0000000000000001 0000000000000001 ffff81012e681e20
Call Trace:
  [<ffffffff8806ea5f>] :usb_storage:usb_stor_msg_common+0x110/0x13a [<ffffffff804499e5>] __down_interruptible+0xcb/0x137 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff804497e7>] __down_failed_interruptible+0x35/0x3a [<ffffffff8806fca5>] :usb_storage:usb_stor_control_thread+0x27/0x1e2 [<ffffffff8806fc7e>] :usb_storage:usb_stor_control_thread+0x0/0x1e2 [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
md0_raid1 S 0000000000000000 0 1836 2
  ffff81012eab1e80 0000000000000046 0000000000000000 ffff81012e8ef5a8 ffff8100052aeb80 ffff81012e8ef560 ffff81012fcc2000 ffff81012e8ef768 000000032eab1e50 ffff81012fcc2000 00000000ffffffff ffffffffffffffff
Call Trace:
  [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff802414fd>] prepare_to_wait+0x15/0x5f [<ffffffff803c02fb>] md_thread+0xbb/0xf1 [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff803c0240>] md_thread+0x0/0xf1 [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
md1_raid1 S 0000000000000000 0 1854 2
  ffff81012e5bde80 0000000000000046 0000000000000000 ffff81012e8ee048 ffff8100052aeb80 ffff81012e8ee000 ffff81012fc8a720 ffff81012e8ee208 000000022e5bde50 ffffffff805354c0 00000000ffffffff ffffffffffffffff
Call Trace:
  [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff802414fd>] prepare_to_wait+0x15/0x5f [<ffffffff803c02fb>] md_thread+0xbb/0xf1 [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff803c0240>] md_thread+0x0/0xf1 [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
md2_raid5 S 0000000000000000 0 1872 2
  ffff81012e479e80 0000000000000046 0000000000000000 0000000000000046 ffff81012eaa1780 ffff81012e8ee720 ffff81012fc52e40 ffff81012e8ee928 000000012eb1c800 ffff81012fc52e40 00000000ffffffff ffff81012e453400
Call Trace:
  [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff802414fd>] prepare_to_wait+0x15/0x5f [<ffffffff803c02fb>] md_thread+0xbb/0xf1 [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff803c0240>] md_thread+0x0/0xf1 [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
kjournald S 0000000000000000 0 1980 2
  ffff81012d503eb0 0000000000000046 0000000000000000 00000000802413e5 00000fcc00000000 ffff81012e48c720 ffffffff805354c0 ffff81012e48c928 0000000000000000 0000000000000001 00000000ffffffff 0000000000000003
Call Trace:
  [<ffffffff802d0936>] kjournald+0x165/0x1e6 [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff802d07d1>] kjournald+0x0/0x1e6 [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff8020cf7c>] call_softirq+0x1c/0x28 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
udevd S 0000000000000000 0 2189 1
  ffff81012e7f7a18 0000000000000086 0000000000000000 ffffffff80264e33 80000001143fa065 ffff81012ecbee40 ffff81012fc52e40 ffff81012ecbf048 0000000100000870 ffff81012fc52e40 00000000ffffffff ffff81012ecbee40
Call Trace:
  [<ffffffff80264e33>] handle_mm_fault+0x6f3/0x772 [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff802415e1>] add_wait_queue+0x15/0x44 [<ffffffff8029dde8>] inotify_poll+0x4f/0x56 [<ffffffff80284031>] do_select+0x3fc/0x45f [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe
message repeated 3 times
  [<ffffffff802eebcf>] __up_read+0x13/0x8a [<ffffffff802ef67d>] number+0x119/0x204 [<ffffffff8025c800>] get_page_from_freelist+0x278/0x35e [<ffffffff80287198>] dput+0x1c/0x10b [<ffffffff8027eab8>] __follow_mount+0x26/0x7b [<ffffffff8027ec7d>] do_lookup+0x63/0x1ae [<ffffffff8028bba8>] mntput_no_expire+0x1c/0x80 [<ffffffff8028098f>] __link_path_walk+0x695/0xd0c [<ffffffff80284250>] core_sys_select+0x1bc/0x265 [<ffffffff80287d9d>] __d_lookup+0xb0/0x100 [<ffffffff802910e3>] simple_empty+0x10/0x58 [<ffffffff80241559>] remove_wait_queue+0x12/0x44 [<ffffffff802846bc>] sys_select+0xc1/0x183 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff80278ad7>] sys_read+0x45/0x6e [<ffffffff8020bdee>] system_call+0x7e/0x83
kpsmoused S 0000000000000001 0 2826 2
  ffff81012d0f3ed0 0000000000000046 ffff81012d0f3e60 0000000000000000 ffff81012e599560 ffff81012e599560 ffff81012ecbee40 ffff81012e599768 000000012fd2c768 ffffffff80448110 ffff81012d0f3f20 0000000000000046
Call Trace:
  [<ffffffff80448110>] thread_return+0x0/0xf9 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
kjournald S 0000000000000000 0 3333 2
  ffff81012c8b1eb0 0000000000000046 0000000000000000 0000000000000c31 ffff81012dbd9a98 ffff81012fcf4e40 ffffffff805354c0 ffff81012fcf5048 0000000000000000 0000000000000001 00000000ffffffff 0000000000000003
Call Trace:
  [<ffffffff802d0936>] kjournald+0x165/0x1e6 [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff802d07d1>] kjournald+0x0/0x1e6 [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
kjournald S 0000000000000000 0 3334 2
  ffff81012d0b1eb0 0000000000000046 ffff81012e09b898 00000000000003f0 ffff81012e09b898 ffff81012e8eee40 ffff81012fcf5560 ffff81012e8ef048 0000000000000000 0000000000000001 0000000000000282 0000000000000003
Call Trace:
  [<ffffffff80228f46>] __wake_up+0x38/0x4e [<ffffffff802d0936>] kjournald+0x165/0x1e6 [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff802d07d1>] kjournald+0x0/0x1e6 [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
kjournald S 0000000000000000 0 3335 2
  ffff81012e3edeb0 0000000000000046 0000000000000000 0000000000000c31 ffff81012e09be98 ffff81012e598000 ffffffff805354c0 ffff81012e598208 0000000000000000 0000000000000001 00000000ffffffff 0000000000000003
Call Trace:
  [<ffffffff802d0936>] kjournald+0x165/0x1e6 [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff802d07d1>] kjournald+0x0/0x1e6 [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
portmap S 0000000000000000 0 3766 1
  ffff81012f703b28 0000000000000086 0000000000000000 ffffffff802eebcf ffff81012f0ab888 ffff81012d4cc720 ffff81012fc52e40 ffff81012d4cc928 000000012dfa4600 ffff81012fc52e40 00000000ffffffff ffffffff00030002
Call Trace:
  [<ffffffff802eebcf>] __up_read+0x13/0x8a [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff802415e1>] add_wait_queue+0x15/0x44 [<ffffffff8040caaf>] tcp_poll+0x25/0x138 [<ffffffff80283b12>] do_sys_poll+0x278/0x360 [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff8025885f>] file_read_actor+0xa0/0x118 [<ffffffff8025897e>] find_get_page+0x21/0x50 [<ffffffff80259207>] do_generic_mapping_read+0x3f4/0x406 [<ffffffff802587bf>] file_read_actor+0x0/0x118 [<ffffffff8025a61e>] generic_file_aio_read+0x11d/0x160 [<ffffffff8025c2bf>] __pagevec_free+0x21/0x2e [<ffffffff8025e950>] release_pages+0x14a/0x157 [<ffffffff80263edc>] unmap_vmas+0x3ec/0x710 [<ffffffff8026d4d5>] free_pages_and_swap_cache+0x73/0x8f [<ffffffff802679e6>] unmap_region+0x114/0x12a [<ffffffff80267a40>] remove_vma+0x44/0x4b [<ffffffff80268687>] do_munmap+0x254/0x276 [<ffffffff80283c2c>] sys_poll+0x32/0x3b [<ffffffff8020bdee>] system_call+0x7e/0x83
rpc.statd S 0000000000000000 0 3783 1
  ffff81012c5bda18 0000000000000086 0000000000000000 ffffffff80229af5 ffff81012c5bdad8 ffff81012e373560 ffff81012fcc2000 ffff81012e373768 0000000300000003 0000000000000000 00000000ffffffff 0000000000000000
Call Trace:
  [<ffffffff80229af5>] find_busiest_group+0x254/0x6d6 [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff802415e1>] add_wait_queue+0x15/0x44 [<ffffffff8040caaf>] tcp_poll+0x25/0x138 [<ffffffff80284031>] do_select+0x3fc/0x45f [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe
message repeated 2 times
  [<ffffffff8042298d>] udp_recvmsg+0x193/0x1f3 [<ffffffff803dd31e>] sock_common_recvmsg+0x30/0x45 [<ffffffff803dbe0b>] sock_recvmsg+0xd5/0xed [<ffffffff80449c4f>] _spin_lock_bh+0x9/0x19 [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff80284250>] core_sys_select+0x1bc/0x265 [<ffffffff803ddac9>] lock_sock_nested+0xa2/0xad [<ffffffff80449c4f>] _spin_lock_bh+0x9/0x19 [<ffffffff803dd99f>] release_sock+0x13/0x9b [<ffffffff80241321>] bit_waitqueue+0x16/0x82 [<ffffffff802413cb>] wake_up_bit+0x11/0x22 [<ffffffff802846bc>] sys_select+0xc1/0x183 [<ffffffff80276816>] filp_close+0x5d/0x65 [<ffffffff8020bdee>] system_call+0x7e/0x83
rpciod/0 S 0000000000000000 0 3790 2
  ffff81012c693ed0 0000000000000046 0000000000000000 ffffffff80228f46 ffff810128534308 ffff81012d97b560 ffffffff805354c0 ffff81012d97b768 00000000805dd920 0000000000000000 00000000ffffffff 0000000000000286
Call Trace:
  [<ffffffff80228f46>] __wake_up+0x38/0x4e [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
rpciod/1 S 0000000000000000 0 3791 2
  ffff81012c695ed0 0000000000000046 0000000000000000 ffff81012cea4000 ffff81012c846a10 ffff81012d97ae40 ffff81012fc52e40 ffff81012d97b048 0000000100000282 ffffffff802413b5 00000000ffffffff ffff810100000004
Call Trace:
  [<ffffffff802413b5>] __wake_up_bit+0x28/0x2d [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
rpciod/2 S 0000000000000000 0 3792 2
  ffff81012c699ed0 0000000000000046 0000000000000000 ffffffff80228f46 ffff810128534c08 ffff81012d97a000 ffff81012fc8a720 ffff81012d97a208 00000002805dd920 0000000000000000 00000000ffffffff 0000000000000286
Call Trace:
  [<ffffffff80228f46>] __wake_up+0x38/0x4e [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
rpciod/3 S 0000000000000000 0 3793 2
  ffff81012c69bed0 0000000000000046 0000000000000000 ffff81012cb29d00 ffff81012cf8cd00 ffff81012e644000 ffff81012fcc2000 ffff81012e644208 000000032ca80c00 0000000000000287 00000000ffffffff ffffffff881aad70
Call Trace:
  [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12
rpc.idmapd S 0000000000000000 0 3817 1
  ffff81012c165e98 0000000000000082 0000000000000000 ffff81012c165ec8 7474697765680101 ffff81012fe14720 ffff81012fc8a720 ffff81012fe14928 0000000200000000 ffff81012c165ea8 00000000ffffffff 000000010030ffb3
Call Trace:
  [<ffffffff80448a20>] schedule_timeout+0x8a/0xad [<ffffffff80237ba4>] process_timeout+0x0/0x5 [<ffffffff8029e84e>] sys_epoll_wait+0x17f/0x421 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff8023ab09>] sys_rt_sigprocmask+0x50/0xce [<ffffffff8020bdee>] system_call+0x7e/0x83
nfsv4-svc S 0000000000000000 0 3823 2
  ffff81012c585e20 0000000000000046 0000000000000000 000005a800000001 ffff81012c41acc8 ffff81012ea46000 ffffffff805354c0 ffff81012ea46208 000000002c41ac00 ffffffff803dd99f 00000000ffffffff 0000000000000282
Call Trace:
  [<ffffffff803dd99f>] release_sock+0x13/0x9b [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff802415e1>] add_wait_queue+0x15/0x44 [<ffffffff8814d308>] :sunrpc:svc_recv+0x28e/0x40f [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff881a714f>] :nfs:nfs_callback_svc+0xa9/0x149 [<ffffffff80232f85>] do_exit+0x7bc/0x7c0 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff881a70a6>] :nfs:nfs_callback_svc+0x0/0x149 [<ffffffff8020cbfe>] child_rip+0x0/0x12
syslogd R running task 0 4037 1
klogd S 0000000000000000 0 4043 1
  ffff81012c6e1bf8 0000000000000082 0000000000000000 ffffffff80227c57 ffff810005260480 ffff81012c047560 ffff81012fc52e40 ffff81012c047768 00000001805a3b40 ffffffff803ddc95 00000000ffffffff 0000000000000286
Call Trace:
  [<ffffffff80227c57>] enqueue_task+0x13/0x21 [<ffffffff803ddc95>] sock_alloc_send_skb+0x77/0x1d2 [<ffffffff803ddc95>] sock_alloc_send_skb+0x77/0x1d2 [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff803e17b9>] __alloc_skb+0x76/0x121 [<ffffffff8024149f>] prepare_to_wait_exclusive+0x15/0x5e [<ffffffff8043b9b4>] unix_wait_for_peer+0x90/0xac [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff803e08cc>] skb_queue_tail+0x17/0x3e [<ffffffff803df401>] sock_def_readable+0x10/0x5f [<ffffffff8043bf67>] unix_dgram_sendmsg+0x3fb/0x491 [<ffffffff803db31f>] sock_aio_write+0xd1/0xe0 [<ffffffff80277ec8>] do_sync_write+0xc9/0x10c [<ffffffff802eebcf>] __up_read+0x13/0x8a [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff802b4818>] kmsg_read+0x3a/0x44 [<ffffffff8027861b>] vfs_write+0xc0/0x136 [<ffffffff80278b45>] sys_write+0x45/0x6e [<ffffffff8020bdee>] system_call+0x7e/0x83
ypbind S 0000000000000000 0 4067 1
  ffff81012c3ddb28 0000000000000082 0000000000000000 ffffffff804081d6 ffff81012c47e300 ffff81012e598720 ffff81012fc52e40 ffff81012e598928 000000012c47e300 ffffffff804055e5 00000000ffffffff ffffffff80406d2f
Call Trace:
  [<ffffffff804081d6>] ip_output+0x2bb/0x301 [<ffffffff804055e5>] ip_push_pending_frames+0x3bd/0x425 [<ffffffff80406d2f>] ip_generic_getfrag+0x0/0x8b [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff802415e1>] add_wait_queue+0x15/0x44 [<ffffffff8040caaf>] tcp_poll+0x25/0x138 [<ffffffff80283b12>] do_sys_poll+0x278/0x360 [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff803dd31e>] sock_common_recvmsg+0x30/0x45 [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff803dbe0b>] sock_recvmsg+0xd5/0xed [<ffffffff80258ab2>] find_lock_page+0x26/0xa1 [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff803db1f1>] move_addr_to_kernel+0x25/0x36 [<ffffffff803e2b8e>] verify_iovec+0x46/0x84 [<ffffffff803dc16a>] sys_sendmsg+0x264/0x287 [<ffffffff803dc88e>] move_addr_to_user+0x3a/0x4f [<ffffffff803dcc1a>] sys_recvfrom+0x121/0x136 [<ffffffff80228561>] update_curr+0xdf/0xfe [<ffffffff80448110>] thread_return+0x0/0xf9 [<ffffffff80283c2c>] sys_poll+0x32/0x3b [<ffffffff8020bdee>] system_call+0x7e/0x83
ypbind S 0000000000000002 0 4068 1
  ffff81012c3ffe58 0000000000000082 000000000050a000 ffff81012d799040 ffff810100000766 ffff81012ea46720 ffff81012e598720 ffff81012ea46928 0000000200000004 00000000ffffffda 0000000000000004 0000000000000002
Call Trace:
  [<ffffffff802473d3>] do_futex+0x30b/0x9e0 [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff8023838b>] recalc_sigpending+0xe/0x25 [<ffffffff80239f96>] dequeue_signal+0x8d/0x115 [<ffffffff8023ae14>] sys_rt_sigtimedwait+0x17a/0x255 [<ffffffff80247b8b>] sys_futex+0xe3/0x101 [<ffffffff8027867c>] vfs_write+0x121/0x136 [<ffffffff80278b60>] sys_write+0x60/0x6e [<ffffffff8020bdee>] system_call+0x7e/0x83
ypbind S 0000000000000000 0 4069 1
  ffff81012bc01eb8 0000000000000082 0000000000000000 ffffffff80243b12 ffff81012bc01ee8 ffff81012e644e40 ffff81012fcc2000 ffff81012e645048 0000000300000000 ffffffff802433ae 00000000ffffffff ffffffff80243925
Call Trace:
  [<ffffffff80243b12>] ktime_get_ts+0x1a/0x4e [<ffffffff802433ae>] enqueue_hrtimer+0x5c/0x63 [<ffffffff80243925>] hrtimer_start+0xf2/0x104 [<ffffffff80448e56>] do_nanosleep+0x46/0x77 [<ffffffff8024398f>] hrtimer_nanosleep+0x58/0x11e [<ffffffff802eb8c9>] _atomic_dec_and_lock+0x39/0x58 [<ffffffff80243553>] hrtimer_wakeup+0x0/0x22 [<ffffffff80243aa1>] sys_nanosleep+0x4c/0x62 [<ffffffff8020bdee>] system_call+0x7e/0x83
acpid S 0000000000000000 0 4143 1
  ffff81012bd0bb28 0000000000000086 0000000000000000 ffffffff80228f46 ffffffff80466320 ffff81012c046000 ffff81012fc8a720 ffff81012c046208 0000000200000000 ffff81012f3ee540 00000000ffffffff ffffffff802cafba
Call Trace:
  [<ffffffff80228f46>] __wake_up+0x38/0x4e [<ffffffff802cafba>] journal_stop+0x1e2/0x1ee [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff802ada2d>] pde_users_dec+0x10/0x3f [<ffffffff802415e1>] add_wait_queue+0x15/0x44 [<ffffffff80283b12>] do_sys_poll+0x278/0x360 [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff80233df8>] current_fs_time+0x1e/0x24 [<ffffffff8025885f>] file_read_actor+0xa0/0x118 [<ffffffff8025897e>] find_get_page+0x21/0x50 [<ffffffff8025a1f4>] __generic_file_aio_write_nolock+0x33e/0x3a8 [<ffffffff80258ab2>] find_lock_page+0x26/0xa1 [<ffffffff8025a843>] filemap_fault+0x1e2/0x366 [<ffffffff802631f2>] __do_fault+0x370/0x3aa [<ffffffff80264b64>] handle_mm_fault+0x424/0x772 [<ffffffff802eebcf>] __up_read+0x13/0x8a [<ffffffff8021f1d8>] do_page_fault+0x425/0x779 [<ffffffff8026d4d5>] free_pages_and_swap_cache+0x73/0x8f [<ffffffff8023838b>] recalc_sigpending+0xe/0x25 [<ffffffff80283c2c>] sys_poll+0x32/0x3b [<ffffffff8020bdee>] system_call+0x7e/0x83
courierlogger S 0000000000000000 0 4147 1
  ffff81012bd63ce8 0000000000000082 0000000000000000 ffff810100000000 0000000000000000 ffff81012bd64e40 ffffffff805354c0 ffff81012bd65048 00000000051f7bb8 00000000fffffff7 00000000ffffffff 0000000000004000
Call Trace:
  [<ffffffff8027db22>] pipe_wait+0x66/0x8d [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8027e39f>] pipe_read+0x318/0x39a [<ffffffff80277fd4>] do_sync_read+0xc9/0x10c [<ffffffff8027ada4>] cp_new_stat+0xe7/0xff [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8027873b>] vfs_read+0xaa/0x132 [<ffffffff80278ad7>] sys_read+0x45/0x6e [<ffffffff8020bdee>] system_call+0x7e/0x83
authdaemond S 0000000000000001 0 4148 4147
  ffff81012bc91a18 0000000000000086 ffff81012bc919e0 ffff81012bc91ab8 0000000000000000 ffff81012bd65560 ffff81012bd64000 ffff81012bd65768 000000012ebef800 ffff81012bc91a28 ffff81012c436500 0000000100314abf
Call Trace:
  [<ffffffff80237ddc>] __mod_timer+0xb6/0xc4 [<ffffffff80448a20>] schedule_timeout+0x8a/0xad [<ffffffff80237ba4>] process_timeout+0x0/0x5 [<ffffffff80284031>] do_select+0x3fc/0x45f [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff802c808a>] __ext3_journal_dirty_metadata+0x1e/0x46 [<ffffffff80241321>] bit_waitqueue+0x16/0x82 [<ffffffff80287d9d>] __d_lookup+0xb0/0x100 [<ffffffff8027ec7d>] do_lookup+0x63/0x1ae [<ffffffff8025bb9c>] __rmqueue+0x79/0xe6 [<ffffffff8025c84b>] get_page_from_freelist+0x2c3/0x35e [<ffffffff8025c93f>] __alloc_pages+0x59/0x2ae [<ffffffff80264706>] __pte_alloc+0x78/0xb2 [<ffffffff80284250>] core_sys_select+0x1bc/0x265 [<ffffffff802eebcf>] __up_read+0x13/0x8a [<ffffffff8021f1d8>] do_page_fault+0x425/0x779 [<ffffffff80228561>] update_curr+0xdf/0xfe [<ffffffff80284755>] sys_select+0x15a/0x183 [<ffffffff8020bdee>] system_call+0x7e/0x83
authdaemond S 0000000000000000 0 4156 4148
  ffff81012c199a18 0000000000000082 0000000000000000 0000000000000000 0000000000000000 ffff81012bd64000 ffff81012fc52e40 ffff81012bd64208 0000000100000000 ffff81012c199a28 00000000ffffffff 0000000100314abf
Call Trace:
  [<ffffffff80448a20>] schedule_timeout+0x8a/0xad [<ffffffff80237ba4>] process_timeout+0x0/0x5 [<ffffffff80284031>] do_select+0x3fc/0x45f [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff8025c93f>] __alloc_pages+0x59/0x2ae [<ffffffff802638ab>] do_wp_page+0x4d4/0x545 [<ffffffff80258ab2>] find_lock_page+0x26/0xa1 [<ffffffff8025a843>] filemap_fault+0x1e2/0x366 [<ffffffff80284250>] core_sys_select+0x1bc/0x265 [<ffffffff802eebcf>] __up_read+0x13/0x8a [<ffffffff8021f1d8>] do_page_fault+0x425/0x779 [<ffffffff80284755>] sys_select+0x15a/0x183 [<ffffffff8020bdee>] system_call+0x7e/0x83
authdaemond S 0000000000000000 0 4157 4148
  ffff81012c6b5a18 0000000000000082 ffff81012c6b59e0 00000000000041ed ffff81012c6b5ce8 ffff81012bd64720 ffffffff805354c0 ffff81012bd64928 000000002e0a4ca8 ffff81012c6b5a28 00000000ffffffff 0000000100314abf
Call Trace:
  [<ffffffff80448a20>] schedule_timeout+0x8a/0xad [<ffffffff80237ba4>] process_timeout+0x0/0x5 [<ffffffff80284031>] do_select+0x3fc/0x45f [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff802810d4>] link_path_walk+0xce/0xe0 [<ffffffff8025bb9c>] __rmqueue+0x79/0xe6 [<ffffffff8025c93f>] __alloc_pages+0x59/0x2ae [<ffffffff80258ab2>] find_lock_page+0x26/0xa1 [<ffffffff8025a843>] filemap_fault+0x1e2/0x366 [<ffffffff80284250>] core_sys_select+0x1bc/0x265 [<ffffffff802eebcf>] __up_read+0x13/0x8a [<ffffffff8021f1d8>] do_page_fault+0x425/0x779 [<ffffffff80284755>] sys_select+0x15a/0x183 [<ffffffff8020bdee>] system_call+0x7e/0x83
authdaemond S 0000000000000002 0 4158 4148
  ffff81012daf5a18 0000000000000086 0000000000000000 0000000000000000 0000000000000000 ffff81012e48d560 ffff81012e372000 ffff81012e48d768 0000000200000000 ffff81012daf5a28 ffff81012f628e00 0000000100314abf
Call Trace:
  [<ffffffff80237ddc>] __mod_timer+0xb6/0xc4 [<ffffffff80448a20>] schedule_timeout+0x8a/0xad [<ffffffff80237ba4>] process_timeout+0x0/0x5 [<ffffffff80284031>] do_select+0x3fc/0x45f [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff8025c93f>] __alloc_pages+0x59/0x2ae [<ffffffff8025bb9c>] __rmqueue+0x79/0xe6 [<ffffffff80258ab2>] find_lock_page+0x26/0xa1 [<ffffffff8025a843>] filemap_fault+0x1e2/0x366 [<ffffffff80284250>] core_sys_select+0x1bc/0x265 [<ffffffff802eebcf>] __up_read+0x13/0x8a [<ffffffff8021f1d8>] do_page_fault+0x425/0x779 [<ffffffff80284755>] sys_select+0x15a/0x183 [<ffffffff8020bdee>] system_call+0x7e/0x83
authdaemond S 0000000000000000 0 4159 4148
  ffff81012bdada18 0000000000000086 0000000000000000 0000000000000000 0000000000000000 ffff81012e372e40 ffff81012fcc2000 ffff81012e373048 0000000300000000 ffff81012bdada28 00000000ffffffff 0000000100314abf
Call Trace:
  [<ffffffff80448a20>] schedule_timeout+0x8a/0xad [<ffffffff80237ba4>] process_timeout+0x0/0x5 [<ffffffff80284031>] do_select+0x3fc/0x45f [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff8025c93f>] __alloc_pages+0x59/0x2ae [<ffffffff8025bb9c>] __rmqueue+0x79/0xe6 [<ffffffff80258ab2>] find_lock_page+0x26/0xa1 [<ffffffff8025a843>] filemap_fault+0x1e2/0x366 [<ffffffff80284250>] core_sys_select+0x1bc/0x265 [<ffffffff802eebcf>] __up_read+0x13/0x8a [<ffffffff8021f1d8>] do_page_fault+0x425/0x779 [<ffffffff80284755>] sys_select+0x15a/0x183 [<ffffffff8020bdee>] system_call+0x7e/0x83
authdaemond S 0000000000000000 0 4160 4148
  ffff81012bdcda18 0000000000000086 0000000000000000 0000000000000000 0000000000000000 ffff81012e372000 ffff81012fc8a720 ffff81012e372208 0000000200000000 ffff81012bdcda28 00000000ffffffff 0000000100314abf
Call Trace:
  [<ffffffff80448a20>] schedule_timeout+0x8a/0xad [<ffffffff80237ba4>] process_timeout+0x0/0x5 [<ffffffff80284031>] do_select+0x3fc/0x45f [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff802295f6>] default_wake_function+0x0/0xe 
[<ffffffff8025c93f>] __alloc_pages+0x59/0x2ae [<ffffffff80258ab2>] find_lock_page+0x26/0xa1 [<ffffffff8025a843>] filemap_fault+0x1e2/0x366 [<ffffffff80284250>] core_sys_select+0x1bc/0x265 [<ffffffff802eebcf>] __up_read+0x13/0x8a [<ffffffff8021f1d8>] do_page_fault+0x425/0x779 [<ffffffff80284755>] sys_select+0x15a/0x183 [<ffffffff8020bdee>] system_call+0x7e/0x83 dbus-daemon S 0000000000000001 0 4162 1 ffff81012be09b28 0000000000000082 ffff81012be09ad8 ffffffff80228561 0000000000001e81 ffff81012c398e40 ffff81012e372720 ffff81012c399048 000000012e372768 ffff8100052aec08 ffff8100052aeb80 ffffffff802287a3 Call Trace: [<ffffffff80228561>] update_curr+0xdf/0xfe [<ffffffff802287a3>] enqueue_entity+0x17c/0x1a2 [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff802415e1>] add_wait_queue+0x15/0x44 [<ffffffff80283b12>] do_sys_poll+0x278/0x360 [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe sage repeated 3 times [<ffffffff803db40b>] sock_aio_read+0xdd/0xec [<ffffffff80277fd4>] do_sync_read+0xc9/0x10c [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff802784c5>] do_readv_writev+0x176/0x18b [<ffffffff80283c2c>] sys_poll+0x32/0x3b [<ffffffff8020bdee>] system_call+0x7e/0x83 hald S 0000000000000000 0 4170 1 ffff81012bf17b28 0000000000000082 0000000000000000 0000000000000000 00000000000000d0 ffff81012e372720 ffff81012fc52e40 ffff81012e372928 000000012bf17f68 ffff81012c398e40 00000000ffffffff ffff810003b931a0 Call Trace: [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff80284572>] __pollwait+0x58/0xe1 [<ffffffff802415e1>] add_wait_queue+0x15/0x44 [<ffffffff802af526>] mounts_poll+0x39/0x56 [<ffffffff80283b12>] do_sys_poll+0x278/0x360 [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe sage repeated 9 times [<ffffffff802784c5>] do_readv_writev+0x176/0x18b [<ffffffff802679e6>] unmap_region+0x114/0x12a [<ffffffff80283c2c>] sys_poll+0x32/0x3b [<ffffffff8020bdee>] 
system_call+0x7e/0x83 hald-runner S 0000000000000000 0 4171 4170 ffff81012bf39b28 0000000000000082 0000000000000000 ffffffff802287a3 00000010000204d0 ffff81012e48c000 ffff81012fc52e40 ffff81012e48c208 000000012bf39ae8 ffffffff80227c57 00000000ffffffff ffff8100052c0b80 Call Trace: [<ffffffff802287a3>] enqueue_entity+0x17c/0x1a2 [<ffffffff80227c57>] enqueue_task+0x13/0x21 [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff802415e1>] add_wait_queue+0x15/0x44 [<ffffffff80283b12>] do_sys_poll+0x278/0x360 [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff803db31f>] sock_aio_write+0xd1/0xe0 [<ffffffff803db24e>] sock_aio_write+0x0/0xe0 [<ffffffff80277db8>] do_sync_readv_writev+0xc0/0x107 [<ffffffff802635b0>] do_wp_page+0x1d9/0x545 [<ffffffff80264e33>] handle_mm_fault+0x6f3/0x772 [<ffffffff802eebcf>] __up_read+0x13/0x8a [<ffffffff8021f1d8>] do_page_fault+0x425/0x779 [<ffffffff802784c5>] do_readv_writev+0x176/0x18b [<ffffffff80241559>] remove_wait_queue+0x12/0x44 [<ffffffff802326a5>] do_wait+0xa1e/0xace [<ffffffff8027d80d>] pipe_release+0x80/0x8b [<ffffffff80283c2c>] sys_poll+0x32/0x3b [<ffffffff8020bdee>] system_call+0x7e/0x83 hald-addon-ke S 0000000000000000 0 4179 4171 ffff81012b5fdea8 0000000000000086 0000000000000000 00000002b25b6228 ffff81012dd1a9c0 ffff81012e644720 ffff81012fcc2000 ffff81012e644928 0000000300000000 00002b25b62fc8d0 00000000ffffffff 0000000000000000 Call Trace: [<ffffffff88099c94>] :evdev:evdev_read+0xff/0x211 [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff80269820>] do_mmap_pgoff+0x27b/0x2db [<ffffffff8027873b>] vfs_read+0xaa/0x132 [<ffffffff80278ad7>] sys_read+0x45/0x6e [<ffffffff8020bdee>] system_call+0x7e/0x83 hald-addon-hi S 0000000000000001 0 4186 4171 ffff81012b68da18 0000000000000086 ffff8100052aeb80 0000000000000002 ffff81012b68da08 ffff81012c398720 ffff81012e372720 ffff81012c398928 000000012b68d9f0 ffff81012b68da5c ffff81012b68d9d8 ffff81012b68da5c Call Trace: 
[<ffffffff8022ccd5>] load_balance_start_fair+0x0/0x2b [<ffffffff80228918>] load_balance_next_fair+0x0/0x2b [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff802415e1>] add_wait_queue+0x15/0x44 [<ffffffff80284031>] do_select+0x3fc/0x45f [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff80227f17>] __wake_up_common+0x3e/0x68 [<ffffffff80228f46>] __wake_up+0x38/0x4e [<ffffffff8043da8e>] unix_write_space+0x45/0x70 [<ffffffff803e09d0>] skb_dequeue+0x48/0x50 [<ffffffff8043c995>] unix_stream_recvmsg+0x439/0x4ea [<ffffffff803db40b>] sock_aio_read+0xdd/0xec [<ffffffff80284250>] core_sys_select+0x1bc/0x265 [<ffffffff802eebcf>] __up_read+0x13/0x8a [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff802784c5>] do_readv_writev+0x176/0x18b [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff802846bc>] sys_select+0xc1/0x183 [<ffffffff80278ad7>] sys_read+0x45/0x6e [<ffffffff8020bdee>] system_call+0x7e/0x83 hald-addon-st S 0000000000000000 0 4195 4171 ffff81012b773eb8 0000000000000086 0000000000000000 ffffffff80243b12 ffff81012b773ee8 ffff81012d4cd560 ffff81012fcc2000 ffff81012d4cd768 0000000300000000 ffffffff802433ae 00000000ffffffff ffffffff80243925 Call Trace: [<ffffffff80243b12>] ktime_get_ts+0x1a/0x4e [<ffffffff802433ae>] enqueue_hrtimer+0x5c/0x63 [<ffffffff80243925>] hrtimer_start+0xf2/0x104 [<ffffffff80448e56>] do_nanosleep+0x46/0x77 [<ffffffff8024398f>] hrtimer_nanosleep+0x58/0x11e [<ffffffff80243553>] hrtimer_wakeup+0x0/0x22 [<ffffffff80243aa1>] sys_nanosleep+0x4c/0x62 [<ffffffff8020bdee>] system_call+0x7e/0x83 hald-addon-st S 0000000000000000 0 4209 4171 ffff81012b733eb8 0000000000000086 0000000000000000 ffffffff80243b12 ffff81012b733ee8 ffff81012c399560 ffff81012fcc2000 ffff81012c399768 0000000300000000 ffffffff802433ae 00000000ffffffff ffffffff80243925 Call Trace: [<ffffffff80243b12>] ktime_get_ts+0x1a/0x4e [<ffffffff802433ae>] enqueue_hrtimer+0x5c/0x63 [<ffffffff80243925>] 
hrtimer_start+0xf2/0x104 [<ffffffff80448e56>] do_nanosleep+0x46/0x77 [<ffffffff8024398f>] hrtimer_nanosleep+0x58/0x11e [<ffffffff8029a2c4>] __blkdev_put+0x136/0x142 [<ffffffff80243553>] hrtimer_wakeup+0x0/0x22 [<ffffffff80243aa1>] sys_nanosleep+0x4c/0x62 [<ffffffff8020bdee>] system_call+0x7e/0x83 avahi-daemon S 0000000000000000 0 4226 1 ffff81012b769b28 0000000000000086 0000000000000000 ffff810129cff300 ffff81012cfb2c00 ffff81012ec74000 ffff81012fc8a720 ffff81012ec74208 000000022cfb2c00 ffff81012b769b38 00000000ffffffff 00000001003118ad Call Trace: [<ffffffff80448a20>] schedule_timeout+0x8a/0xad [<ffffffff80237ba4>] process_timeout+0x0/0x5 [<ffffffff80283b12>] do_sys_poll+0x278/0x360 [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe sage repeated 6 times [<ffffffff80277fd4>] do_sync_read+0xc9/0x10c [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff80228561>] update_curr+0xdf/0xfe [<ffffffff80422729>] udp_ioctl+0x75/0x7a [<ffffffff80283c2c>] sys_poll+0x32/0x3b [<ffffffff8020bdee>] system_call+0x7e/0x83 avahi-daemon S 0000000000000000 0 4227 4226 ffff81012b7d9c18 0000000000000086 0000000000000000 ffffffff8043d3ce 0000000000000001 ffff81012d97a720 ffff81012fcc2000 ffff81012d97a928 000000032e668dc0 ffff81012d953200 00000000ffffffff ffffffff0000006a Call Trace: [<ffffffff8043d3ce>] unix_stream_sendmsg+0x262/0x329 [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff803dbeee>] sock_sendmsg+0xcb/0xe3 [<ffffffff802414fd>] prepare_to_wait+0x15/0x5f [<ffffffff8043c7a1>] unix_stream_recvmsg+0x245/0x4ea [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff803db40b>] sock_aio_read+0xdd/0xec [<ffffffff803dc11a>] sys_sendmsg+0x214/0x287 [<ffffffff80277fd4>] do_sync_read+0xc9/0x10c [<ffffffff802eebcf>] __up_read+0x13/0x8a [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff802eb8c9>] _atomic_dec_and_lock+0x39/0x58 [<ffffffff8027874e>] vfs_read+0xbd/0x132 [<ffffffff80278ad7>] sys_read+0x45/0x6e 
[<ffffffff8020bdee>] system_call+0x7e/0x83 exim4 S 0000000000000000 0 4275 1 ffff81012b021a18 0000000000000082 0000000000000000 ffff81012b810888 ffff81012c28a208 ffff81012b6b0720 ffff81012fc8a720 ffff81012b6b0928 0000000200001000 ffff81012fc8a720 00000000ffffffff ffff81012d520f78 Call Trace: [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff802415e1>] add_wait_queue+0x15/0x44 [<ffffffff8040caaf>] tcp_poll+0x25/0x138 [<ffffffff80284031>] do_select+0x3fc/0x45f [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff80259e5a>] generic_file_buffered_write+0x639/0x695 [<ffffffff8025bb9c>] __rmqueue+0x79/0xe6 [<ffffffff802ef67d>] number+0x119/0x204 [<ffffffff8025c800>] get_page_from_freelist+0x278/0x35e [<ffffffff8025bb9c>] __rmqueue+0x79/0xe6 [<ffffffff802f038b>] vsnprintf+0x561/0x5a5 [<ffffffff80284250>] core_sys_select+0x1bc/0x265 [<ffffffff80287e0b>] d_lookup+0x1e/0x42 [<ffffffff802af809>] proc_flush_task+0x4e/0x1f6 [<ffffffff80241559>] remove_wait_queue+0x12/0x44 [<ffffffff802846bc>] sys_select+0xc1/0x183 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff8020bdee>] system_call+0x7e/0x83 hddtemp S 0000000000000000 0 4425 1 ffff81012b3efa18 0000000000000082 0000000000000000 00000000ffffffff ffff81012be2a720 ffff81012be2a720 ffff81012fc8a720 ffff81012be2a928 0000000200000000 ffff81012b3efa28 00000000ffffffff 0000000100310655 Call Trace: [<ffffffff80448a20>] schedule_timeout+0x8a/0xad [<ffffffff80237ba4>] process_timeout+0x0/0x5 [<ffffffff80284031>] do_select+0x3fc/0x45f [<ffffffff80227c57>] enqueue_task+0x13/0x21 [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff80274a24>] __slab_alloc+0xba/0x53a [<ffffffff803e1774>] __alloc_skb+0x31/0x121 [<ffffffff803e17b9>] __alloc_skb+0x76/0x121 [<ffffffff803ddc95>] sock_alloc_send_skb+0x77/0x1d2 [<ffffffff80228f46>] __wake_up+0x38/0x4e [<ffffffff803e2909>] memcpy_fromiovec+0x36/0x66 [<ffffffff803e08cc>] skb_queue_tail+0x17/0x3e [<ffffffff803df401>] 
sock_def_readable+0x10/0x5f [<ffffffff8043bf67>] unix_dgram_sendmsg+0x3fb/0x491 [<ffffffff803dbeee>] sock_sendmsg+0xcb/0xe3 [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff80284250>] core_sys_select+0x1bc/0x265 [<ffffffff8028140e>] do_path_lookup+0x1a0/0x1c2 [<ffffffff802eb8c9>] _atomic_dec_and_lock+0x39/0x58 [<ffffffff8028bba8>] mntput_no_expire+0x1c/0x80 [<ffffffff8027ada4>] cp_new_stat+0xe7/0xff [<ffffffff803dc2b5>] sys_sendto+0x128/0x151 [<ffffffff80284755>] sys_select+0x15a/0x183 [<ffffffff8020bdee>] system_call+0x7e/0x83 lpd S 0000000000000000 0 4435 1 ffff81012ac19a18 0000000000000082 0000000000000000 0000000000000000 0000000000000001 ffff81012b18a720 ffff81012fc8a720 ffff81012b18a928 0000000200000001 0000000000000000 00000000ffffffff ffff81012b18a720 Call Trace: [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff8025acf6>] mempool_alloc+0x24/0xda [<ffffffff802415e1>] add_wait_queue+0x15/0x44 [<ffffffff80284031>] do_select+0x3fc/0x45f [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff8044817e>] thread_return+0x6e/0xf9 [<ffffffff80227f17>] __wake_up_common+0x3e/0x68 [<ffffffff80228f46>] __wake_up+0x38/0x4e [<ffffffff80287198>] dput+0x1c/0x10b [<ffffffff8027eab8>] __follow_mount+0x26/0x7b [<ffffffff8027ec7d>] do_lookup+0x63/0x1ae [<ffffffff80287d9d>] __d_lookup+0xb0/0x100 [<ffffffff8027ec7d>] do_lookup+0x63/0x1ae [<ffffffff80287d9d>] __d_lookup+0xb0/0x100 [<ffffffff8027ec7d>] do_lookup+0x63/0x1ae [<ffffffff80287198>] dput+0x1c/0x10b [<ffffffff80280eb1>] __link_path_walk+0xbb7/0xd0c [<ffffffff8028bba8>] mntput_no_expire+0x1c/0x80 [<ffffffff80284250>] core_sys_select+0x1bc/0x265 [<ffffffff8028aaee>] notify_change+0x287/0x2ad [<ffffffff80276e56>] chown_common+0xa8/0xb3 [<ffffffff8028bba8>] mntput_no_expire+0x1c/0x80 [<ffffffff802846bc>] sys_select+0xc1/0x183 [<ffffffff8043c3db>] unix_listen+0x49/0xd8 [<ffffffff8020bdee>] system_call+0x7e/0x83 lockd S 0000000000000001 0 4464 2 
ffff81012ad83e10 0000000000000046 ffff81012b73b400 ffffffff80449c4f ffff81012bf2fc00 ffff81012d4cce40 ffff81012b01d560 ffff81012d4cd048 000000012cee0780 ffff81012b01d560 ffff81012c6a0028 0000000000000000 Call Trace: [<ffffffff80449c4f>] _spin_lock_bh+0x9/0x19 [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff8814c9c0>] :sunrpc:svc_sock_release+0xf0/0x170 [<ffffffff802415e1>] add_wait_queue+0x15/0x44 [<ffffffff8814d308>] :sunrpc:svc_recv+0x28e/0x40f [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff88174686>] :lockd:lockd+0x134/0x262 [<ffffffff80232f85>] do_exit+0x7bc/0x7c0 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff88174552>] :lockd:lockd+0x0/0x262 [<ffffffff8020cbfe>] child_rip+0x0/0x12 nfsd4 S 0000000000000000 0 4467 2 ffff81012ae79ed0 0000000000000046 0000000000000000 ffffffff8023e5fe 000000004730b4d2 ffff81012d4cc000 ffff81012fcc2000 ffff81012d4cc208 000000034730e772 ffffffff881f852f 00000000ffffffff ffff81012ae79e80 Call Trace: [<ffffffff8023e5fe>] queue_delayed_work_on+0xae/0xbe [<ffffffff881f852f>] :nfsd:laundromat_main+0x20b/0x216 [<ffffffff881f8324>] :nfsd:laundromat_main+0x0/0x216 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 nfsd S 0000000000000000 0 4468 2 ffff81012aecde30 0000000000000046 0000000000000000 ffffffff80449c4f ffff81012fe21900 ffff81012be2b560 ffff81012fc8a720 ffff81012be2b768 000000022cee0a00 ffff81012aecde40 00000000ffffffff 000000010035e62b Call Trace: [<ffffffff80449c4f>] _spin_lock_bh+0x9/0x19 [<ffffffff80448a20>] schedule_timeout+0x8a/0xad [<ffffffff80237ba4>] process_timeout+0x0/0x5 [<ffffffff8814d308>] :sunrpc:svc_recv+0x28e/0x40f [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff881e17a6>] :nfsd:nfsd+0xdb/0x2ad [<ffffffff8020cc08>] 
child_rip+0xa/0x12 [<ffffffff881e16cb>] :nfsd:nfsd+0x0/0x2ad [<ffffffff8020cbfe>] child_rip+0x0/0x12 nfsd S 0000000000000003 0 4469 2 ffff81012af4fe30 0000000000000046 ffff81012b73b400 ffffffff80276816 ffff81012c7ef8c0 ffff81012be2a000 ffff81012be2ae40 ffff81012be2a208 000000032eb6a740 ffff81012af4fe40 ffff81012aece000 000000010035e62b Call Trace: [<ffffffff80276816>] filp_close+0x5d/0x65 [<ffffffff80237ddc>] __mod_timer+0xb6/0xc4 [<ffffffff80448a20>] schedule_timeout+0x8a/0xad [<ffffffff80237ba4>] process_timeout+0x0/0x5 [<ffffffff8814d308>] :sunrpc:svc_recv+0x28e/0x40f [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff881e17a6>] :nfsd:nfsd+0xdb/0x2ad [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff881e16cb>] :nfsd:nfsd+0x0/0x2ad [<ffffffff8020cbfe>] child_rip+0x0/0x12 nfsd S 0000000000000003 0 4470 2 ffff81012afd1e30 0000000000000046 ffff81012b73b400 ffffffff80276816 ffff81012c7eedc0 ffff81012be2ae40 ffff81012b6b1560 ffff81012be2b048 000000032eb6a980 ffff81012afd1e40 ffff81012af50000 000000010035e62b Call Trace: [<ffffffff80276816>] filp_close+0x5d/0x65 [<ffffffff80237ddc>] __mod_timer+0xb6/0xc4 [<ffffffff80448a20>] schedule_timeout+0x8a/0xad [<ffffffff80237ba4>] process_timeout+0x0/0x5 [<ffffffff8814d308>] :sunrpc:svc_recv+0x28e/0x40f [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff881e17a6>] :nfsd:nfsd+0xdb/0x2ad [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff881e16cb>] :nfsd:nfsd+0x0/0x2ad [<ffffffff8020cbfe>] child_rip+0x0/0x12 nfsd S 0000000000000002 0 4471 2 ffff81012a871e30 0000000000000046 ffff81012b73b400 ffffffff80276816 ffff81012cfe8840 ffff81012fe15560 ffff81012be2b560 ffff81012fe15768 000000022eb6a440 ffff81012a871e40 ffff81012afd2000 000000010035e62b Call Trace: [<ffffffff80276816>] filp_close+0x5d/0x65 [<ffffffff80237ddc>] __mod_timer+0xb6/0xc4 [<ffffffff80448a20>] schedule_timeout+0x8a/0xad [<ffffffff80237ba4>] process_timeout+0x0/0x5 [<ffffffff8814d308>] :sunrpc:svc_recv+0x28e/0x40f [<ffffffff802295f6>] 
default_wake_function+0x0/0xe [<ffffffff881e17a6>] :nfsd:nfsd+0xdb/0x2ad [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff881e16cb>] :nfsd:nfsd+0x0/0x2ad [<ffffffff8020cbfe>] child_rip+0x0/0x12 nfsd S 0000000000000002 0 4472 2 ffff81012a8f3e30 0000000000000046 ffff81012b73b400 ffffffff80276816 ffff81012a8f4000 ffff81012e598e40 ffff81012fe15560 ffff81012e599048 000000022eb6aec0 ffff81012a8f3e40 ffff81012a872000 000000010035e62b Call Trace: [<ffffffff80276816>] filp_close+0x5d/0x65 [<ffffffff80237ddc>] __mod_timer+0xb6/0xc4 [<ffffffff80448a20>] schedule_timeout+0x8a/0xad [<ffffffff80237ba4>] process_timeout+0x0/0x5 [<ffffffff8814d308>] :sunrpc:svc_recv+0x28e/0x40f [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff881e17a6>] :nfsd:nfsd+0xdb/0x2ad [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff881e16cb>] :nfsd:nfsd+0x0/0x2ad [<ffffffff8020cbfe>] child_rip+0x0/0x12 nfsd S 0000000000000000 0 4473 2 ffff81012a979e30 0000000000000046 0000000000000000 ffffffff80276816 ffff81012a8f42c0 ffff81012c398000 ffff81012fcc2000 ffff81012c398208 000000032f0763c4 ffff81012a979e40 00000000ffffffff 000000010035e634 Call Trace: [<ffffffff80276816>] filp_close+0x5d/0x65 [<ffffffff80448a20>] schedule_timeout+0x8a/0xad [<ffffffff80237ba4>] process_timeout+0x0/0x5 [<ffffffff8814d308>] :sunrpc:svc_recv+0x28e/0x40f [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff881e17a6>] :nfsd:nfsd+0xdb/0x2ad [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff881e16cb>] :nfsd:nfsd+0x0/0x2ad [<ffffffff8020cbfe>] child_rip+0x0/0x12 nfsd S 0000000000000000 0 4474 2 ffff81012a9fbe30 0000000000000046 0000000000000000 0000000000000080 0000000000000000 ffff81012b6b0e40 ffff81012fc8a720 ffff81012b6b1048 0000000200008000 ffff81012a9fbe40 00000000ffffffff 000000010035e634 Call Trace: [<ffffffff80448a20>] schedule_timeout+0x8a/0xad [<ffffffff80237ba4>] process_timeout+0x0/0x5 [<ffffffff8814d308>] :sunrpc:svc_recv+0x28e/0x40f [<ffffffff802295f6>] default_wake_function+0x0/0xe 
[<ffffffff804496f0>] __down_read+0x12/0x9a [<ffffffff881e17a6>] :nfsd:nfsd+0xdb/0x2ad [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff881e16cb>] :nfsd:nfsd+0x0/0x2ad [<ffffffff8020cbfe>] child_rip+0x0/0x12 nfsd S 0000000000000000 0 4475 2 ffff81012aa7de30 0000000000000046 0000000000000000 ffffffff80276816 ffff81012a8f4840 ffff81012b6b1560 ffff81012fcc2000 ffff81012b6b1768 000000032eb6aa00 ffff81012aa7de40 00000000ffffffff 000000010035e62b Call Trace: [<ffffffff80276816>] filp_close+0x5d/0x65 [<ffffffff80448a20>] schedule_timeout+0x8a/0xad [<ffffffff80237ba4>] process_timeout+0x0/0x5 [<ffffffff8814d308>] :sunrpc:svc_recv+0x28e/0x40f [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff881e17a6>] :nfsd:nfsd+0xdb/0x2ad [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff881e16cb>] :nfsd:nfsd+0x0/0x2ad [<ffffffff8020cbfe>] child_rip+0x0/0x12 rpc.mountd S 0000000000000003 0 4479 1 ffff81012aaf9a18 0000000000000086 0000000000000000 0000000000000000 0000000000000000 ffff81012fcf5560 ffff81012b6b0000 ffff81012fcf5768 0000000300000000 0000000000000000 0000000000000000 0000000000000000 Call Trace: [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff802415e1>] add_wait_queue+0x15/0x44 [<ffffffff8040caaf>] tcp_poll+0x25/0x138 [<ffffffff80284031>] do_select+0x3fc/0x45f [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe sage repeated 4 times [<ffffffff80287198>] dput+0x1c/0x10b [<ffffffff80280eb1>] __link_path_walk+0xbb7/0xd0c [<ffffffff80258ab2>] find_lock_page+0x26/0xa1 [<ffffffff8025a843>] filemap_fault+0x1e2/0x366 [<ffffffff80284250>] core_sys_select+0x1bc/0x265 [<ffffffff8027a970>] chrdev_open+0x167/0x196 [<ffffffff802eebcf>] __up_read+0x13/0x8a [<ffffffff8021f1d8>] do_page_fault+0x425/0x779 [<ffffffff80276ba4>] do_filp_open+0x2d/0x3d [<ffffffff802f1117>] strncpy_from_user+0x36/0x4b [<ffffffff802846bc>] sys_select+0xc1/0x183 [<ffffffff8020bdee>] system_call+0x7e/0x83 nscd S 0000000000000000 0 4490 1 ffff81012aac1e98 
0000000000000086 0000000000000000 000000000000118a 000055555566c524 ffff81012ecbe000 ffffffff805354c0 ffff81012ecbe208 0000000000000000 ffff81012aac1ea8 00000000ffffffff 0000000100310ab5 Call Trace: [<ffffffff80448a20>] schedule_timeout+0x8a/0xad [<ffffffff80237ba4>] process_timeout+0x0/0x5 [<ffffffff8029e84e>] sys_epoll_wait+0x17f/0x421 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff8020bdee>] system_call+0x7e/0x83 nscd S 0000000000000000 0 4496 1 ffff81012acb1cd8 0000000000000086 0000000000000000 ffffffff802ee66c ffff81012acb1d78 ffff81012b18b560 ffff81012fc8a720 ffff81012b18b768 00000002ffffffff ffffffff802433ae 00000000ffffffff ffffffff80243925 Call Trace: [<ffffffff802ee66c>] rb_insert_color+0xb2/0xda [<ffffffff802433ae>] enqueue_hrtimer+0x5c/0x63 [<ffffffff80243925>] hrtimer_start+0xf2/0x104 [<ffffffff80246c9a>] futex_wait+0x23f/0x304 [<ffffffff80243553>] hrtimer_wakeup+0x0/0x22 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff8024713c>] do_futex+0x74/0x9e0 [<ffffffff80244b2b>] getnstimeofday+0x32/0x8b [<ffffffff80243b12>] ktime_get_ts+0x1a/0x4e [<ffffffff80247b8b>] sys_futex+0xe3/0x101 [<ffffffff80244b2b>] getnstimeofday+0x32/0x8b [<ffffffff8020bdee>] system_call+0x7e/0x83 nscd S 0000000000000000 0 4497 1 ffff81012ab8dcd8 0000000000000086 0000000000000000 ffff81012f0741ed ffff81012ab8dd78 ffff81012b18a000 ffff81012fc52e40 ffff81012b18a208 00000001ffffffff ffffffff802433ae 00000000ffffffff ffffffff80243925 Call Trace: [<ffffffff802433ae>] enqueue_hrtimer+0x5c/0x63 [<ffffffff80243925>] hrtimer_start+0xf2/0x104 [<ffffffff80246c9a>] futex_wait+0x23f/0x304 [<ffffffff80243553>] hrtimer_wakeup+0x0/0x22 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff8024713c>] do_futex+0x74/0x9e0 [<ffffffff80244b2b>] getnstimeofday+0x32/0x8b [<ffffffff80243b12>] ktime_get_ts+0x1a/0x4e [<ffffffff80247b8b>] sys_futex+0xe3/0x101 [<ffffffff80244b2b>] getnstimeofday+0x32/0x8b [<ffffffff8020bdee>] system_call+0x7e/0x83 nscd S 0000000000000000 0 4498 
1 ffff81012ab8fcd8 0000000000000086 0000000000000000 ffff81012f0741ed ffff81012ab8fd78 ffff81012ecbe720 ffffffff805354c0 ffff81012ecbe928 00000000ffffffff ffffffff802433ae 00000000ffffffff ffffffff80243925 Call Trace: [<ffffffff802433ae>] enqueue_hrtimer+0x5c/0x63 [<ffffffff80243925>] hrtimer_start+0xf2/0x104 [<ffffffff80246c9a>] futex_wait+0x23f/0x304 [<ffffffff80243553>] hrtimer_wakeup+0x0/0x22 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff8024713c>] do_futex+0x74/0x9e0 [<ffffffff80244b2b>] getnstimeofday+0x32/0x8b [<ffffffff80243b12>] ktime_get_ts+0x1a/0x4e [<ffffffff80247b8b>] sys_futex+0xe3/0x101 [<ffffffff80244b2b>] getnstimeofday+0x32/0x8b [<ffffffff8020bdee>] system_call+0x7e/0x83 nscd S 0000000000000003 0 4499 1 ffff81012ab91cd8 0000000000000086 0000000000000000 ffffffff00000001 0000000000000000 ffff81012c046720 ffff810129a82e40 ffff81012c046928 0000000300000000 ffffffff80267d50 000055555566c000 ffff81012ab91d40 Call Trace: [<ffffffff80267d50>] find_extend_vma+0x16/0x59 [<ffffffff80246010>] get_futex_key+0x82/0x14e [<ffffffff80246c4b>] futex_wait+0x1f0/0x304 [<ffffffff802290dc>] task_rq_lock+0x3d/0x6f [<ffffffff802295e5>] try_to_wake_up+0x2c3/0x2d4 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff8024713c>] do_futex+0x74/0x9e0 [<ffffffff80228f46>] __wake_up+0x38/0x4e [<ffffffff8028bba8>] mntput_no_expire+0x1c/0x80 [<ffffffff80241321>] bit_waitqueue+0x16/0x82 [<ffffffff80247b8b>] sys_futex+0xe3/0x101 [<ffffffff80276816>] filp_close+0x5d/0x65 [<ffffffff8020bdee>] system_call+0x7e/0x83 nscd S 0000000000000000 0 4500 1 ffff81012abb3cd8 0000000000000086 0000000000000000 ffffffff80228f46 ffff81012ecb5700 ffff81012c046e40 ffff81012fc52e40 ffff81012c047048 0000000100000040 ffffffff80267d50 00000000ffffffff ffff81012abb3d40 Call Trace: [<ffffffff80228f46>] __wake_up+0x38/0x4e [<ffffffff80267d50>] find_extend_vma+0x16/0x59 [<ffffffff80246c4b>] futex_wait+0x1f0/0x304 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff8024713c>] 
do_futex+0x74/0x9e0 [<ffffffff80228f46>] __wake_up+0x38/0x4e [<ffffffff8028bba8>] mntput_no_expire+0x1c/0x80 [<ffffffff80241321>] bit_waitqueue+0x16/0x82 [<ffffffff80247b8b>] sys_futex+0xe3/0x101 [<ffffffff80276816>] filp_close+0x5d/0x65 [<ffffffff8020bdee>] system_call+0x7e/0x83 nscd S 0000000000000000 0 4501 1 ffff81012abb5cd8 0000000000000086 0000000000000000 ffffffff00000001 0000000000000000 ffff81012b6b0000 ffff81012fc8a720 ffff81012b6b0208 0000000200000000 ffffffff80267d50 00000000ffffffff ffff81012abb5d40 Call Trace: [<ffffffff80267d50>] find_extend_vma+0x16/0x59 [<ffffffff80246c4b>] futex_wait+0x1f0/0x304 [<ffffffff803e2b8e>] verify_iovec+0x46/0x84 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff8024713c>] do_futex+0x74/0x9e0 [<ffffffff80228f46>] __wake_up+0x38/0x4e [<ffffffff8028bba8>] mntput_no_expire+0x1c/0x80 [<ffffffff80241321>] bit_waitqueue+0x16/0x82 [<ffffffff80247b8b>] sys_futex+0xe3/0x101 [<ffffffff80276816>] filp_close+0x5d/0x65 [<ffffffff8020bdee>] system_call+0x7e/0x83 inetd S 0000000000000000 0 4502 1 ffff81012ab89a18 0000000000000082 0000000000000000 000a80d200000000 ffffffff80545cd8 ffff81012ec74e40 ffff81012fcc2000 ffff81012ec75048 00000003000a80d2 ffff81012fcc2000 00000000ffffffff ffff81012ec74e40 Call Trace: [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff802415e1>] add_wait_queue+0x15/0x44 [<ffffffff8040caaf>] tcp_poll+0x25/0x138 [<ffffffff80284031>] do_select+0x3fc/0x45f [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff80324d32>] extract_buf+0xe9/0xf9 [<ffffffff803248a9>] __add_entropy_words+0x5d/0x184 [<ffffffff802ef67d>] number+0x119/0x204 [<ffffffff8025c800>] get_page_from_freelist+0x278/0x35e [<ffffffff802f038b>] vsnprintf+0x561/0x5a5 [<ffffffff80284250>] core_sys_select+0x1bc/0x265 [<ffffffff80287167>] d_kill+0x44/0x59 [<ffffffff80287198>] dput+0x1c/0x10b [<ffffffff802af874>] proc_flush_task+0xb9/0x1f6 [<ffffffff80241559>] remove_wait_queue+0x12/0x44 
[<ffffffff802846bc>] sys_select+0xc1/0x183 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff8023ab09>] sys_rt_sigprocmask+0x50/0xce [<ffffffff8020bdee>] system_call+0x7e/0x83 nmbd S 0000000000000000 0 4511 1 ffff81012a5c5a18 0000000000000086 0000000000000000 ffff81012ea8ba00 ffff8100a4868300 ffff81012b01c720 ffffffff805354c0 ffff81012b01c928 0000000000000000 ffff81012a5c5a28 00000000ffffffff 000000010030feb1 Call Trace: [<ffffffff80448a20>] schedule_timeout+0x8a/0xad [<ffffffff80237ba4>] process_timeout+0x0/0x5 [<ffffffff80284031>] do_select+0x3fc/0x45f [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe sage repeated 6 times [<ffffffff803dbe0b>] sock_recvmsg+0xd5/0xed [<ffffffff8025d9cf>] balance_dirty_pages_ratelimited_nr+0x1e5/0x1f4 [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff80284250>] core_sys_select+0x1bc/0x265 [<ffffffff803e7f63>] netdev_run_todo+0x220/0x229 [<ffffffff803e542d>] __dev_get_by_name+0x72/0x85 [<ffffffff803ddac9>] lock_sock_nested+0xa2/0xad [<ffffffff80449c4f>] _spin_lock_bh+0x9/0x19 [<ffffffff803dd99f>] release_sock+0x13/0x9b [<ffffffff80241321>] bit_waitqueue+0x16/0x82 [<ffffffff802413cb>] wake_up_bit+0x11/0x22 [<ffffffff80284755>] sys_select+0x15a/0x183 [<ffffffff8020bdee>] system_call+0x7e/0x83 smbd S 0000000000000000 0 4513 1 ffff81012a081a18 0000000000000086 0000000000000000 ffffffff80229af5 0000000000470842 ffff81012ac4ce40 ffff81012fc52e40 ffff81012ac4d048 0000000100000001 ffff81012fc52e40 00000000ffffffff 0000000000000001 Call Trace: [<ffffffff80229af5>] find_busiest_group+0x254/0x6d6 [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff802415e1>] add_wait_queue+0x15/0x44 [<ffffffff8027d2ef>] pipe_poll+0x33/0x8d [<ffffffff80284031>] do_select+0x3fc/0x45f [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe sage repeated 2 times [<ffffffff80287d9d>] __d_lookup+0xb0/0x100 [<ffffffff8027ec7d>] do_lookup+0x63/0x1ae 
[<ffffffff80287198>] dput+0x1c/0x10b [<ffffffff802ef67d>] number+0x119/0x204 [<ffffffff8025c800>] get_page_from_freelist+0x278/0x35e [<ffffffff802f038b>] vsnprintf+0x561/0x5a5 [<ffffffff80284250>] core_sys_select+0x1bc/0x265 [<ffffffff80287e0b>] d_lookup+0x1e/0x42 [<ffffffff802af809>] proc_flush_task+0x4e/0x1f6 [<ffffffff80241559>] remove_wait_queue+0x12/0x44 [<ffffffff802846bc>] sys_select+0xc1/0x183 [<ffffffff8020bdee>] system_call+0x7e/0x83 smbd S 0000000000000000 0 4533 4513 ffff81012a2bbf68 0000000000000082 0000000000000000 00002af339898f10 00000000000011b5 ffff81012b01ce40 ffff81012fc52e40 ffff81012b01d048 0000000100000007 0000000b0000000e 00000000ffffffff 0000000000000000 Call Trace: [<ffffffff8023a3c1>] sys_pause+0x19/0x22 [<ffffffff8020bdee>] system_call+0x7e/0x83 sshd S 0000000000000000 0 4534 1 ffff81012a319a18 0000000000000086 0000000000000000 0000000000000096 000000000042b364 ffff81012a413560 ffff81012fc8a720 ffff81012a413768 0000000200001000 ffff81012fc8a720 00000000ffffffff ffff81012d520cd8 Call Trace: [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff802415e1>] add_wait_queue+0x15/0x44 [<ffffffff8040caaf>] tcp_poll+0x25/0x138 [<ffffffff80284031>] do_select+0x3fc/0x45f [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff80274a24>] __slab_alloc+0xba/0x53a [<ffffffff803ddc95>] sock_alloc_send_skb+0x77/0x1d2 [<ffffffff803ddc95>] sock_alloc_send_skb+0x77/0x1d2 [<ffffffff803e17b9>] __alloc_skb+0x76/0x121 [<ffffffff803ddc95>] sock_alloc_send_skb+0x77/0x1d2 [<ffffffff80233df8>] current_fs_time+0x1e/0x24 [<ffffffff803e2909>] memcpy_fromiovec+0x36/0x66 [<ffffffff803e08cc>] skb_queue_tail+0x17/0x3e [<ffffffff803df401>] sock_def_readable+0x10/0x5f [<ffffffff8043d3ce>] unix_stream_sendmsg+0x262/0x329 [<ffffffff8025c93f>] __alloc_pages+0x59/0x2ae [<ffffffff803db31f>] sock_aio_write+0xd1/0xe0 [<ffffffff80284250>] core_sys_select+0x1bc/0x265 
[<ffffffff802eebcf>] __up_read+0x13/0x8a [<ffffffff80273327>] add_partial+0x12/0x3f [<ffffffff80274306>] __slab_free+0x6d/0x2c4 [<ffffffff80241321>] bit_waitqueue+0x16/0x82 [<ffffffff802413cb>] wake_up_bit+0x11/0x22 [<ffffffff802846bc>] sys_select+0xc1/0x183 [<ffffffff80276816>] filp_close+0x5d/0x65 [<ffffffff8020bdee>] system_call+0x7e/0x83 famd S 0000000000000000 0 4595 1 ffff810129cf1a18 0000000000000086 0000000000000000 0000000000000000 0000000000000000 ffff81012b01c000 ffff81012fc8a720 ffff81012b01c208 0000000200000003 ffff81012d52d078 00000000ffffffff ffff81012d52d000 Call Trace: [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff802415e1>] add_wait_queue+0x15/0x44 [<ffffffff8040caaf>] tcp_poll+0x25/0x138 [<ffffffff80284031>] do_select+0x3fc/0x45f [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff80283bce>] do_sys_poll+0x334/0x360 [<ffffffff803e09d0>] skb_dequeue+0x48/0x50 [<ffffffff803e2b18>] memcpy_toiovec+0x36/0x66 [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff803e2f52>] skb_copy_datagram_iovec+0x49/0x1e8 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff8042298d>] udp_recvmsg+0x193/0x1f3 [<ffffffff803dd31e>] sock_common_recvmsg+0x30/0x45 [<ffffffff803dbe0b>] sock_recvmsg+0xd5/0xed [<ffffffff8025ec50>] activate_page+0xa2/0xc9 [<ffffffff8025ee53>] mark_page_accessed+0x1b/0x2f [<ffffffff8025a843>] filemap_fault+0x1e2/0x366 [<ffffffff80284250>] core_sys_select+0x1bc/0x265 [<ffffffff802eebcf>] __up_read+0x13/0x8a [<ffffffff8021f1d8>] do_page_fault+0x425/0x779 [<ffffffff803dd99f>] release_sock+0x13/0x9b [<ffffffff80241321>] bit_waitqueue+0x16/0x82 [<ffffffff802413cb>] wake_up_bit+0x11/0x22 [<ffffffff802846bc>] sys_select+0xc1/0x183 [<ffffffff8020bdee>] system_call+0x7e/0x83 ntpd S 0000000000000000 0 4632 1 ffff810129e6fa18 0000000000000082 0000000000000000 ffff81012e19c0d0 ffff81012ed903a8 ffff810129d45560 ffff81012fc8a720 ffff810129d45768 0000000200001000 ffff81012fc8a720 
00000000ffffffff ffff81012c92b9c0 Call Trace: [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff803e326f>] datagram_poll+0x21/0xd9 [<ffffffff80420d77>] udp_poll+0x13/0xfb [<ffffffff80284031>] do_select+0x3fc/0x45f [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe (last message repeated 7 times) [<ffffffff802ee66c>] rb_insert_color+0xb2/0xda [<ffffffff802433ae>] enqueue_hrtimer+0x5c/0x63 [<ffffffff80243925>] hrtimer_start+0xf2/0x104 [<ffffffff80284250>] core_sys_select+0x1bc/0x265 [<ffffffff8023838b>] recalc_sigpending+0xe/0x25 [<ffffffff8020b92c>] do_notify_resume+0x653/0x725 [<ffffffff80211d59>] init_fpu+0x6b/0x87 [<ffffffff8020dd37>] math_state_restore+0x1a/0x49 [<ffffffff80449e2d>] error_exit+0x0/0x84 [<ffffffff8020bc2d>] sys_rt_sigreturn+0x21b/0x2be [<ffffffff802846bc>] sys_select+0xc1/0x183 [<ffffffff8020bdee>] system_call+0x7e/0x83 mdadm S 0000000000000000 0 4643 1 ffff810129db9a18 0000000000000082 0000000000000000 000a80d200000000 ffffffff80545cd8 ffff81012ac4c000 ffff81012fc8a720 ffff81012ac4c208 00000002000a80d2 ffff810129db9a28 00000000ffffffff 00000001003122a6 Call Trace: [<ffffffff80448a20>] schedule_timeout+0x8a/0xad [<ffffffff80237ba4>] process_timeout+0x0/0x5 [<ffffffff80284031>] do_select+0x3fc/0x45f [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff80227c57>] enqueue_task+0x13/0x21 [<ffffffff80228435>] inc_nr_running+0x19/0x32 [<ffffffff802295e5>] try_to_wake_up+0x2c3/0x2d4 [<ffffffff80287d9d>] __d_lookup+0xb0/0x100 [<ffffffff8027ec7d>] do_lookup+0x63/0x1ae [<ffffffff80228f46>] __wake_up+0x38/0x4e [<ffffffff803c01c8>] md_ioctl+0x1206/0x127e [<ffffffff802413e5>] autoremove_wake_function+0x9/0x2e [<ffffffff80227f17>] __wake_up_common+0x3e/0x68 [<ffffffff80284250>] core_sys_select+0x1bc/0x265 [<ffffffff8029a631>] do_open+0x229/0x2c0 [<ffffffff8027fc34>] may_open+0x5b/0x22c [<ffffffff8029a887>] blkdev_open+0x0/0x5d [<ffffffff8029a8b5>] blkdev_open+0x2e/0x5d 
[<ffffffff80276a44>] __dentry_open+0x101/0x1aa [<ffffffff80284755>] sys_select+0x15a/0x183 [<ffffffff8020bdee>] system_call+0x7e/0x83 apcupsd D 0000000000000000 0 4655 1 ffff810129f13d48 0000000000000086 0000000000000000 0000002000000400 ffff810129f13d28 ffff810129d44000 ffff81012fc8a720 ffff810129d44208 00000002052aeb80 ffff810129f13d58 00000000ffffffff 00000001003107cc Call Trace: [<ffffffff80448a20>] schedule_timeout+0x8a/0xad [<ffffffff80237ba4>] process_timeout+0x0/0x5 [<ffffffff803d69e7>] usbhid_wait_io+0x9a/0xfd [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff803d8caa>] hiddev_ioctl+0x370/0x91e [<ffffffff80448110>] thread_return+0x0/0xf9 [<ffffffff80243b12>] ktime_get_ts+0x1a/0x4e [<ffffffff8027ada4>] cp_new_stat+0xe7/0xff [<ffffffff803d87f2>] hiddev_read+0x19a/0x1f7 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff802830b5>] do_ioctl+0x55/0x6b [<ffffffff80283318>] vfs_ioctl+0x24d/0x266 [<ffffffff802787af>] vfs_read+0x11e/0x132 [<ffffffff8028336d>] sys_ioctl+0x3c/0x5f [<ffffffff8020bdee>] system_call+0x7e/0x83 apcupsd S 0000000000000000 0 4739 1 ffff810129b17db8 0000000000000086 ffff810129b38000 ffffffff802413dc ffff810129b17d38 ffff810129b38000 ffff810129d44000 ffff810129b38208 0000000029b17dc8 ffff81012d420244 ffff810129b17dc8 0000000000000002 Call Trace: [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8028140e>] do_path_lookup+0x1a0/0x1c2 [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff80449c4f>] _spin_lock_bh+0x9/0x19 [<ffffffff803dd99f>] release_sock+0x13/0x9b [<ffffffff8040b34a>] inet_csk_accept+0xad/0x234 [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8029d8e6>] inotify_d_instantiate+0x3e/0x68 [<ffffffff80428336>] inet_accept+0x25/0xb5 [<ffffffff803dce80>] sys_accept+0xff/0x1d1 [<ffffffff803db9e4>] sys_connect+0x86/0x9c [<ffffffff80287fef>] d_instantiate+0x52/0x61 [<ffffffff8020bdee>] system_call+0x7e/0x83 atd S 0000000000000000 0 4661 1 ffff810129c31eb8 0000000000000082 
0000000000000000 ffffffff80243b12 ffff810129c31ee8 ffff810129d44720 ffff81012fc52e40 ffff810129d44928 0000000100000000 ffffffff802433ae 00000000ffffffff ffffffff80243925 Call Trace: [<ffffffff80243b12>] ktime_get_ts+0x1a/0x4e [<ffffffff802433ae>] enqueue_hrtimer+0x5c/0x63 [<ffffffff80243925>] hrtimer_start+0xf2/0x104 [<ffffffff80448e56>] do_nanosleep+0x46/0x77 [<ffffffff8024398f>] hrtimer_nanosleep+0x58/0x11e [<ffffffff80243553>] hrtimer_wakeup+0x0/0x22 [<ffffffff80243aa1>] sys_nanosleep+0x4c/0x62 [<ffffffff8020bdee>] system_call+0x7e/0x83 cron S 0000000000000000 0 4668 1 ffff810129ff1eb8 0000000000000086 0000000000000000 ffffffff80243b12 ffff810129ff1ee8 ffff81012b18ae40 ffff81012fcc2000 ffff81012b18b048 0000000300000000 ffffffff802433ae 00000000ffffffff ffffffff80243925 Call Trace: [<ffffffff80243b12>] ktime_get_ts+0x1a/0x4e [<ffffffff802433ae>] enqueue_hrtimer+0x5c/0x63 [<ffffffff80243925>] hrtimer_start+0xf2/0x104 [<ffffffff80448e56>] do_nanosleep+0x46/0x77 [<ffffffff8024398f>] hrtimer_nanosleep+0x58/0x11e [<ffffffff80243553>] hrtimer_wakeup+0x0/0x22 [<ffffffff80243aa1>] sys_nanosleep+0x4c/0x62 [<ffffffff8020bdee>] system_call+0x7e/0x83 portsentry S ffff8100052a15a0 0 4689 1 ffff810129f59a18 0000000000000082 ffff810129f599e0 0000000000000000 ffffffff80545ca0 ffff810129a82000 ffff81012a412720 ffff810129a82208 00000000000200d0 ffff810005114968 00000000ffffffff ffff810129a82000 Call Trace: [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff802415e1>] add_wait_queue+0x15/0x44 [<ffffffff8040caaf>] tcp_poll+0x25/0x138 [<ffffffff80284031>] do_select+0x3fc/0x45f [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe (last message repeated 9 times) [<ffffffff80284250>] core_sys_select+0x1bc/0x265 [<ffffffff8028bba8>] mntput_no_expire+0x1c/0x80 [<ffffffff802eebcf>] __up_read+0x13/0x8a [<ffffffff8021f1d8>] do_page_fault+0x425/0x779 [<ffffffff80241321>] bit_waitqueue+0x16/0x82 [<ffffffff802413cb>] wake_up_bit+0x11/0x22 [<ffffffff802846bc>]
sys_select+0xc1/0x183 [<ffffffff8020bdee>] system_call+0x7e/0x83 portsentry S 0000000000000000 0 4693 1 ffff810129f55a18 0000000000000086 0000000000000000 ffffffff8025c93f 00000010000200d0 ffff8101299f8720 ffff81012fc8a720 ffff8101299f8928 00000002000a80d2 0000000000000000 00000000ffffffff ffff810129a0f600 Call Trace: [<ffffffff8025c93f>] __alloc_pages+0x59/0x2ae [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff803e326f>] datagram_poll+0x21/0xd9 [<ffffffff80420d77>] udp_poll+0x13/0xfb [<ffffffff80284031>] do_select+0x3fc/0x45f [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe (last message repeated 9 times) [<ffffffff80284250>] core_sys_select+0x1bc/0x265 [<ffffffff8028bba8>] mntput_no_expire+0x1c/0x80 [<ffffffff802eebcf>] __up_read+0x13/0x8a [<ffffffff8021f1d8>] do_page_fault+0x425/0x779 [<ffffffff80241321>] bit_waitqueue+0x16/0x82 [<ffffffff802413cb>] wake_up_bit+0x11/0x22 [<ffffffff802846bc>] sys_select+0xc1/0x183 [<ffffffff8020bdee>] system_call+0x7e/0x83 getty S 0000000000000002 0 4715 1 ffff810129f4ddb8 0000000000000086 ffff8100052cb380 0000000000000020 0000000000000020 ffff810129b2e720 ffff810129b2ee40 ffff810129b2e928 00000002052292f0 00000008802631f2 ffff81012c601000 ffff81012fbe4000 Call Trace: [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff80264b64>] handle_mm_fault+0x424/0x772 [<ffffffff802415e1>] add_wait_queue+0x15/0x44 [<ffffffff8032ba06>] read_chan+0x3b8/0x66e [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff80326fa3>] tty_ldisc_ref_wait+0xe/0x9b [<ffffffff8032899b>] tty_read+0x72/0xc8 [<ffffffff8027873b>] vfs_read+0xaa/0x132 [<ffffffff80278ad7>] sys_read+0x45/0x6e [<ffffffff8020bdee>] system_call+0x7e/0x83 getty S 0000000000000000 0 4717 1 ffff810129baddb8 0000000000000086 0000000000000000 0000000000000020 0000000000000020 ffff810129b2ee40 ffff81012fc8a720 ffff810129b2f048 00000002052292f0 00000008802631f2 00000000ffffffff ffff81012d122000 Call Trace: [<ffffffff804489b4>] 
schedule_timeout+0x1e/0xad [<ffffffff80264b64>] handle_mm_fault+0x424/0x772 [<ffffffff802415e1>] add_wait_queue+0x15/0x44 [<ffffffff8032ba06>] read_chan+0x3b8/0x66e [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff80326fa3>] tty_ldisc_ref_wait+0xe/0x9b [<ffffffff8032899b>] tty_read+0x72/0xc8 [<ffffffff8027873b>] vfs_read+0xaa/0x132 [<ffffffff80278ad7>] sys_read+0x45/0x6e [<ffffffff8020bdee>] system_call+0x7e/0x83 getty S 0000000000000000 0 4718 1 ffff810129a59db8 0000000000000086 0000000000000000 0000000000000020 0000000000000020 ffff810129b2f560 ffff81012fc52e40 ffff810129b2f768 00000001052292f0 00000008802631f2 00000000ffffffff ffff81012b188800 Call Trace: [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff80264b64>] handle_mm_fault+0x424/0x772 [<ffffffff802415e1>] add_wait_queue+0x15/0x44 [<ffffffff8032ba06>] read_chan+0x3b8/0x66e [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff80326fa3>] tty_ldisc_ref_wait+0xe/0x9b [<ffffffff8032899b>] tty_read+0x72/0xc8 [<ffffffff8027873b>] vfs_read+0xaa/0x132 [<ffffffff80278ad7>] sys_read+0x45/0x6e [<ffffffff8020bdee>] system_call+0x7e/0x83 getty S 0000000000000000 0 4719 1 ffff810129b45db8 0000000000000086 0000000000000000 0000000000000020 0000000000000020 ffff810129b30000 ffff81012fcc2000 ffff810129b30208 00000003052292f0 00000008802631f2 00000000ffffffff ffff81012d199800 Call Trace: [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff80264b64>] handle_mm_fault+0x424/0x772 [<ffffffff802415e1>] add_wait_queue+0x15/0x44 [<ffffffff8032ba06>] read_chan+0x3b8/0x66e [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff80326fa3>] tty_ldisc_ref_wait+0xe/0x9b [<ffffffff8032899b>] tty_read+0x72/0xc8 [<ffffffff8027873b>] vfs_read+0xaa/0x132 [<ffffffff80278ad7>] sys_read+0x45/0x6e [<ffffffff8020bdee>] system_call+0x7e/0x83 getty S 0000000000000000 0 4721 1 ffff810129babdb8 0000000000000082 0000000000000000 0000000000000020 0000000000000020 ffff81012a412e40 ffffffff805354c0 ffff81012a413048 
00000000052292f0 00000008802631f2 00000000ffffffff ffff81012e879000 Call Trace: [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff80264b64>] handle_mm_fault+0x424/0x772 [<ffffffff802415e1>] add_wait_queue+0x15/0x44 [<ffffffff8032ba06>] read_chan+0x3b8/0x66e [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff80326fa3>] tty_ldisc_ref_wait+0xe/0x9b [<ffffffff8032899b>] tty_read+0x72/0xc8 [<ffffffff8027873b>] vfs_read+0xaa/0x132 [<ffffffff80278ad7>] sys_read+0x45/0x6e [<ffffffff8020bdee>] system_call+0x7e/0x83 getty S ffff8100052aa5a0 0 4722 1 ffff810129f2fdb8 0000000000000086 ffff810129f2fd80 0000000000000020 0000000000000020 ffff810129b30720 ffff810129b2f560 ffff810129b30928 00000001052292f0 00000008802631f2 00000000ffffffff ffff81012d1ac800 Call Trace: [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff80264b64>] handle_mm_fault+0x424/0x772 [<ffffffff802415e1>] add_wait_queue+0x15/0x44 [<ffffffff8032ba06>] read_chan+0x3b8/0x66e [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff80326fa3>] tty_ldisc_ref_wait+0xe/0x9b [<ffffffff8032899b>] tty_read+0x72/0xc8 [<ffffffff8027873b>] vfs_read+0xaa/0x132 [<ffffffff80278ad7>] sys_read+0x45/0x6e [<ffffffff8020bdee>] system_call+0x7e/0x83 sshd S 0000000000000000 0 4740 4534 ffff810129f37c18 0000000000000086 0000000000000000 ffffffff8043d3ce 0000000000000001 ffff810129825560 ffffffff805354c0 ffff810129825768 0000000029b66580 ffffffff805354c0 00000000ffffffff ffffffff80287d9d Call Trace: [<ffffffff8043d3ce>] unix_stream_sendmsg+0x262/0x329 [<ffffffff80287d9d>] __d_lookup+0xb0/0x100 [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff802414fd>] prepare_to_wait+0x15/0x5f [<ffffffff8043c7a1>] unix_stream_recvmsg+0x245/0x4ea [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff803db40b>] sock_aio_read+0xdd/0xec [<ffffffff80277fd4>] do_sync_read+0xc9/0x10c [<ffffffff802ec753>] kobject_get+0x12/0x17 [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff80276ba4>] 
do_filp_open+0x2d/0x3d [<ffffffff8027874e>] vfs_read+0xbd/0x132 [<ffffffff80278ad7>] sys_read+0x45/0x6e [<ffffffff8020bdee>] system_call+0x7e/0x83 sshd S 0000000000000003 0 4743 4740 ffff810129ae1a18 0000000000000082 0000000000000000 ffff810128605900 ffff810129ae19cc ffff810129b38e40 ffff810129a83560 ffff810129b39048 0000000300000000 0000000000000001 0000000000000096 0000000000000003 Call Trace: [<ffffffff80228f46>] __wake_up+0x38/0x4e [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff8032713c>] tty_poll+0x5f/0x6d [<ffffffff80284031>] do_select+0x3fc/0x45f [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe (last message repeated 5 times) [<ffffffff80449c4f>] _spin_lock_bh+0x9/0x19 [<ffffffff803dd99f>] release_sock+0x13/0x9b [<ffffffff8040df5a>] tcp_sendmsg+0x9af/0xab3 [<ffffffff80228cb6>] __check_preempt_curr_fair+0x56/0x78 [<ffffffff803db31f>] sock_aio_write+0xd1/0xe0 [<ffffffff80284250>] core_sys_select+0x1bc/0x265 [<ffffffff80241559>] remove_wait_queue+0x12/0x44 [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff80233df8>] current_fs_time+0x1e/0x24 [<ffffffff80325fc3>] tty_ldisc_deref+0x62/0x75 [<ffffffff802846bc>] sys_select+0xc1/0x183 [<ffffffff80278b45>] sys_write+0x45/0x6e [<ffffffff8020bdee>] system_call+0x7e/0x83 tcsh S 0000000000000000 0 4744 4743 ffff810129a4ff28 0000000000000082 0000000000000000 ffff810129824e40 0000000000000003 ffff810129824e40 ffff81012fcc2000 ffff810129825048 0000000380448110 ffff810129a4ff70 00000000ffffffff ffff810129a4ff58 Call Trace: [<ffffffff8020bdee>] system_call+0x7e/0x83 [<ffffffff8023a486>] sys_rt_sigsuspend+0xbc/0xdd [<ffffffff8023ab6e>] sys_rt_sigprocmask+0xb5/0xce [<ffffffff8020c107>] ptregscall_common+0x67/0xb0 smbd S 0000000000000000 0 4792 4513 ffff810128d7ba18 0000000000000082 0000000000000000 ffff81012fca9c00 ffff810128d7b9cc ffff810129b38720 ffff81012fcc2000 ffff810129b38928 0000000328d7ba18 ffff810128d7ba28 00000000ffffffff 0000000100310e04 Call Trace: 
[<ffffffff80448a20>] schedule_timeout+0x8a/0xad [<ffffffff80237ba4>] process_timeout+0x0/0x5 [<ffffffff80284031>] do_select+0x3fc/0x45f [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff80287d9d>] __d_lookup+0xb0/0x100 [<ffffffff80287d9d>] __d_lookup+0xb0/0x100 [<ffffffff8027ec7d>] do_lookup+0x63/0x1ae [<ffffffff80287198>] dput+0x1c/0x10b [<ffffffff80280eb1>] __link_path_walk+0xbb7/0xd0c [<ffffffff8028bba8>] mntput_no_expire+0x1c/0x80 [<ffffffff802810d4>] link_path_walk+0xce/0xe0 [<ffffffff80276a44>] __dentry_open+0x101/0x1aa [<ffffffff80284250>] core_sys_select+0x1bc/0x265 [<ffffffff802eb8c9>] _atomic_dec_and_lock+0x39/0x58 [<ffffffff8028bba8>] mntput_no_expire+0x1c/0x80 [<ffffffff8027ada4>] cp_new_stat+0xe7/0xff [<ffffffff80284755>] sys_select+0x15a/0x183 [<ffffffff8020bdee>] system_call+0x7e/0x83 screen S 0000000000000000 0 4890 4744 ffff810128523f68 0000000000000086 0000000000000000 ffffffff80233813 00000d4c0000131a ffff8101282ec720 ffff81012fcc2000 ffff8101282ec928 000000034e5a42d0 0000000000000000 00000000ffffffff ffffffff80233aa5 Call Trace: [<ffffffff80233813>] do_setitimer+0x15e/0x329 [<ffffffff80233aa5>] alarm_setitimer+0x35/0x65 [<ffffffff8023a3c1>] sys_pause+0x19/0x22 [<ffffffff8020bdee>] system_call+0x7e/0x83 screen S 0000000000000000 0 4891 4890 ffff8101286ada18 0000000000000086 0000000000000000 0000000000000286 0000000000000053 ffff810129a83560 ffff81012fcc2000 ffff810129a83768 0000000300000000 0000000000000001 00000000ffffffff 0000000000000003 Call Trace: [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff8032713c>] tty_poll+0x5f/0x6d [<ffffffff80284031>] do_select+0x3fc/0x45f [<ffffffff8028451a>] __pollwait+0x0/0xe1 [<ffffffff802295f6>] default_wake_function+0x0/0xe (last message repeated 6 times) [<ffffffff80227f17>] __wake_up_common+0x3e/0x68 [<ffffffff80228f46>] __wake_up+0x38/0x4e [<ffffffff80284250>] core_sys_select+0x1bc/0x265 
[<ffffffff80241559>] remove_wait_queue+0x12/0x44 [<ffffffff80228f46>] __wake_up+0x38/0x4e [<ffffffff80325fc3>] tty_ldisc_deref+0x62/0x75 [<ffffffff802846bc>] sys_select+0xc1/0x183 [<ffffffff80278b45>] sys_write+0x45/0x6e [<ffffffff8020bdee>] system_call+0x7e/0x83 tcsh S 0000000000000000 0 4893 4891 ffff8101282abdb8 0000000000000086 0000000000000000 0000000000000000 ffff81012868bc00 ffff81012ac4c720 ffffffff805354c0 ffff81012ac4c928 000000002c930000 ffff81012840f410 00000000ffffffff 0000000000000000 Call Trace: [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff802415e1>] add_wait_queue+0x15/0x44 [<ffffffff8032ba06>] read_chan+0x3b8/0x66e [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff80326fa3>] tty_ldisc_ref_wait+0xe/0x9b [<ffffffff8032899b>] tty_read+0x72/0xc8 [<ffffffff8027873b>] vfs_read+0xaa/0x132 [<ffffffff80278ad7>] sys_read+0x45/0x6e [<ffffffff8020bdee>] system_call+0x7e/0x83 md3_raid5 S 0000000000000000 0 4923 2 ffff8101284f1e80 0000000000000046 0000000000000000 0000000000000046 ffff81012ea9e5e0 ffff81012ec75560 ffff81012fcc2000 ffff81012ec75768 000000032ca9ae00 ffffffff803b03e2 00000000ffffffff ffff81012cb33800 Call Trace: [<ffffffff803b03e2>] unplug_slaves+0x5f/0x9a [<ffffffff804489b4>] schedule_timeout+0x1e/0xad [<ffffffff802414fd>] prepare_to_wait+0x15/0x5f [<ffffffff803c02fb>] md_thread+0xbb/0xf1 [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff803c0240>] md_thread+0x0/0xf1 [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 xfslogd/0 S 0000000000000000 0 4952 2 ffff81010bc07ed0 0000000000000046 0000000000000000 0000000000000287 ffff810129fa2840 ffff810129b30e40 ffff81012ec75560 ffff810129b31048 000000002e2a5840 0000000000000046 0000000000000287 ffff8100a62e8780 Call Trace: [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] 
worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 xfslogd/1 S 0000000000000000 0 4953 2 ffff810107d5bed0 0000000000000046 0000000000000000 ffffffff8827f63f ffff8101267f7300 ffff810129b31560 ffff81012fc52e40 ffff810129b31768 000000012e2a5b40 ffff81012ecc8000 00000000ffffffff 0000000000000002 Call Trace: [<ffffffff8827f63f>] :xfs:xfs_buf_rele+0x32/0xa0 [<ffffffff8827f8e4>] :xfs:xfs_buf_iodone_work+0x0/0x41 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 xfslogd/2 S 0000000000000000 0 4954 2 ffff8101267efed0 0000000000000046 0000000000000000 0000000000000001 0000000000000282 ffff81012fe14000 ffff81012fc8a720 ffff81012fe14208 000000022ad3be40 00000000ffffffff 00000000ffffffff ffff81012aaac700 Call Trace: [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 xfslogd/3 S 0000000000000003 0 4955 2 ffff8101175d9ed0 0000000000000046 0000000000000000 ffffffff80273327 ffff810108b12000 ffff81012b01d560 ffff81012ec75560 ffff81012b01d768 000000032e2a5f00 0000000000000287 ffff81012ed0e900 ffff81012fdf4a00 Call Trace: [<ffffffff80273327>] add_partial+0x12/0x3f [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 
xfsdatad/0 S 0000000000000000 0 4956 2 ffff81010f493ed0 0000000000000046 0000000000000000 0000000000000003 ffff81010f493e90 ffff810129b2e000 ffffffff805354c0 ffff810129b2e208 000000000a405c38 ffffffffffffffff 00000000ffffffff 0000000000000000 Call Trace: [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 xfsdatad/1 S 0000000000000001 0 4957 2 ffff8101267bfed0 0000000000000046 ffff8100ca52a9c0 ffffffff80297e45 0000000000000286 ffff8101282ece40 ffff81012ec75560 ffff8101282ed048 00000001ffffffff 0000000000000000 ffff81012b6b6b40 ffff81010a405c38 Call Trace: [<ffffffff80297e45>] end_buffer_async_write+0xe5/0x106 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 xfsdatad/2 S 0000000000000000 0 4958 2 ffff810100e79ed0 0000000000000046 0000000000000000 ffffffff80297e45 0000000000000286 ffff8101282ed560 ffff81012fc8a720 ffff8101282ed768 00000002ffffffff 0000000000000287 00000000ffffffff ffffffff805a2de8 Call Trace: [<ffffffff80297e45>] end_buffer_async_write+0xe5/0x106 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 xfsdatad/3 S 0000000000000003 0 4959 2 ffff81011aae5ed0 0000000000000046 ffff8100cb399c30 ffffffff80297e45 0000000000000286 ffff8101282ec000 ffff81012ec75560 ffff8101282ec208 00000003ffffffff 0000000000000287 
ffff81012b6b6c60 ffffffff805a2de8 Call Trace: [<ffffffff80297e45>] end_buffer_async_write+0xe5/0x106 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 xfs_mru_cache S ffff8100052a1640 0 4961 2 ffff81010f421ed0 0000000000000046 ffff81010f421e98 0000000000000000 ffff810117894000 ffff810117894000 ffff810129d44e40 ffff810117894208 0000000000000000 ffffffff80448110 00000000ffffffff 0000000000000046 Call Trace: [<ffffffff80448110>] thread_return+0x0/0xf9 [<ffffffff8023e6c7>] worker_thread+0x74/0x9b [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff8023e653>] worker_thread+0x0/0x9b [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 xfsbufd D 0000000000000000 0 4966 2 ffff810102e29cc0 0000000000000046 0000000000000000 0000000000000001 0000000000000086 ffff810117894720 ffff81012fc8a720 ffff810117894928 000000022cb33800 0000000000000046 00000000ffffffff ffffffff802e27e0 Call Trace: [<ffffffff802e27e0>] __generic_unplug_device+0x13/0x24 [<ffffffff803b12cf>] get_active_stripe+0x22f/0x4ca [<ffffffff80228f46>] __wake_up+0x38/0x4e [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff803b6b3b>] make_request+0x3f3/0x577 [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff802e1728>] generic_make_request+0x1be/0x1f5 [<ffffffff802e3caf>] submit_bio+0xb4/0xbb [<ffffffff88280c06>] :xfs:_xfs_buf_ioapply+0x276/0x2a1 [<ffffffff88280c6a>] :xfs:xfs_buf_iorequest+0x39/0x66 [<ffffffff88284280>] :xfs:xfs_bdstrat_cb+0x37/0x3b [<ffffffff882810df>] :xfs:xfsbufd+0x8a/0xe3 [<ffffffff88281055>] :xfs:xfsbufd+0x0/0xe3 [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] 
kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 xfssyncd S 0000000000000000 0 4973 2 ffff81010d8fde90 0000000000000046 0000000000000000 ffffffff8826e9d7 0000000000000000 ffff810117894e40 ffffffff805354c0 ffff810117895048 0000000000000000 ffff81010d8fdea0 00000000ffffffff 0000000100310522 Call Trace: [<ffffffff8826e9d7>] :xfs:xfs_icsb_count+0xf5/0x105 [<ffffffff80448a20>] schedule_timeout+0x8a/0xad [<ffffffff80237ba4>] process_timeout+0x0/0x5 [<ffffffff88285ca5>] :xfs:xfssyncd+0x33/0x13f [<ffffffff88285c72>] :xfs:xfssyncd+0x0/0x13f [<ffffffff802412cc>] kthread+0x47/0x73 [<ffffffff8020cc08>] child_rip+0xa/0x12 [<ffffffff80241285>] kthread+0x0/0x73 [<ffffffff8020cbfe>] child_rip+0x0/0x12 bonnie++ D 0000000000000003 0 4981 4893 ffff81011598b578 0000000000000082 0000000000000000 0000000000000001 0000000000000096 ffff810117895560 ffff81012ec75560 ffff810117895768 000000031f434070 0000000000000046 ffff81012ea9e5e0 ffffffff802e27e0 Call Trace: [<ffffffff802e27e0>] __generic_unplug_device+0x13/0x24 [<ffffffff802e35f4>] generic_unplug_device+0x18/0x28 [<ffffffff803b12cf>] get_active_stripe+0x22f/0x4ca [<ffffffff802295f6>] default_wake_function+0x0/0xe [<ffffffff803b6b3b>] make_request+0x3f3/0x577 [<ffffffff8025acf6>] mempool_alloc+0x24/0xda [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff802e1728>] generic_make_request+0x1be/0x1f5 [<ffffffff802e3caf>] submit_bio+0xb4/0xbb [<ffffffff80298e2f>] __bio_add_page+0x109/0x1b9 [<ffffffff8827dcb4>] :xfs:xfs_submit_ioend_bio+0x1e/0x27 [<ffffffff8827e0ce>] :xfs:xfs_submit_ioend+0x88/0xc6 [<ffffffff8827ef51>] :xfs:xfs_page_state_convert+0x51e/0x56d [<ffffffff8827f0f2>] :xfs:xfs_vm_writepage+0xa7/0xe1 [<ffffffff8025d022>] __writepage+0xa/0x23 [<ffffffff8025d552>] write_cache_pages+0x176/0x2a3 [<ffffffff8025d018>] __writepage+0x0/0x23 [<ffffffff8025d6bb>] do_writepages+0x20/0x2d [<ffffffff80291ec8>] __writeback_single_inode+0xcd/0x3a7 [<ffffffff8827e194>] :xfs:__xfs_get_blocks+0x61/0x19b [<ffffffff802924fa>] 
sync_sb_inodes+0x1cb/0x2af [<ffffffff80292961>] writeback_inodes+0x7d/0xd3 [<ffffffff8025d8fd>] balance_dirty_pages_ratelimited_nr+0x113/0x1f4 [<ffffffff80259d91>] generic_file_buffered_write+0x570/0x695 [<ffffffff80233df8>] current_fs_time+0x1e/0x24 [<ffffffff802eec67>] __up_write+0x21/0x10e [<ffffffff882853b3>] :xfs:xfs_write+0x6dd/0xa82 [<ffffffff80227f17>] __wake_up_common+0x3e/0x68 [<ffffffff88281932>] :xfs:xfs_file_aio_write+0x5a/0x5d [<ffffffff80277ec8>] do_sync_write+0xc9/0x10c [<ffffffff802eebcf>] __up_read+0x13/0x8a [<ffffffff802413dc>] autoremove_wake_function+0x0/0x2e [<ffffffff80228561>] update_curr+0xdf/0xfe [<ffffffff80278608>] vfs_write+0xad/0x136 [<ffffffff80278b45>] sys_write+0x45/0x6e [<ffffffff8020bdee>] system_call+0x7e/0x83 tcsh S 0000000000000000 0 4983 4891 ffff8100a536bf28 0000000000000086 0000000000000000 ffff81010fea0000 0000000000000000 ffff81010fea0000 ffffffff805354c0 ffff81010fea0208 0000000080448110 ffff8100a536bf70 00000000ffffffff ffff8100a536bf58 Call Trace: [<ffffffff8020bdee>] system_call+0x7e/0x83 [<ffffffff8023a486>] sys_rt_sigsuspend+0xbc/0xdd [<ffffffff8023ab6e>] sys_rt_sigprocmask+0xb5/0xce [<ffffffff8020c107>] ptregscall_common+0x67/0xb0 tcsh R running task 0 5038 4983 ^ permalink raw reply [flat|nested] 35+ messages in thread
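[Editorial note: the dump above was captured with `ps auxww | grep D` and SysRq-T. The same set of stuck tasks can be pulled straight from /proc, which is handy when ps itself wedges on a dead filesystem. A minimal sketch, not from the original thread; it assumes Linux's /proc/PID/stat layout, where the state letter is field 3 and comm in field 2 is parenthesized and may contain spaces:]

```shell
# Walk /proc and report every task in uninterruptible sleep (state D).
dstate_pids=""
for stat in /proc/[0-9]*/stat; do
    [ -r "$stat" ] || continue
    # Strip "PID (comm) " first, so spaces inside comm cannot shift the
    # fields, then keep only the single state character.
    state=$(sed -e 's/^[0-9]* (.*) //' -e 's/ .*$//' "$stat" 2>/dev/null)
    if [ "$state" = "D" ]; then
        pid=${stat#/proc/}
        pid=${pid%/stat}
        dstate_pids="$dstate_pids $pid"
        printf 'pid %s is in D state\n' "$pid"
    fi
done
dstate_count=$(echo $dstate_pids | wc -w)
printf '%s task(s) in D state\n' "$dstate_count"
```

[On the hung machine above this would have listed pdflush, xfsbufd, apcupsd and bonnie++ among others; on a healthy box it normally finds none.]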
* Re: 2.6.23.1: mdadm/raid5 hung/d-state
  2007-11-04 21:49 ` Neil Brown
  2007-11-04 21:51   ` Justin Piszcz
@ 2007-11-05  8:36 ` BERTRAND Joël
  2007-11-07 16:39   ` Chuck Ebbert
  1 sibling, 1 reply; 35+ messages in thread
From: BERTRAND Joël @ 2007-11-05 8:36 UTC (permalink / raw)
To: Neil Brown; +Cc: Justin Piszcz, linux-kernel, linux-raid

Neil Brown wrote:
> On Sunday November 4, jpiszcz@lucidpixels.com wrote:
>> # ps auxww | grep D
>> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
>> root 273 0.0 0.0 0 0 ? D Oct21 14:40 [pdflush]
>> root 274 0.0 0.0 0 0 ? D Oct21 13:00 [pdflush]
>>
>> After several days/weeks, this is the second time this has happened, while
>> doing regular file I/O (decompressing a file), everything on the device
>> went into D-state.
>
> At a guess (I haven't looked closely) I'd say it is the bug that was
> meant to be fixed by
>
> commit 4ae3f847e49e3787eca91bced31f8fd328d50496
>
> except that patch applied badly and needed to be fixed with
> the following patch (not in git yet).
> These have been sent to stable@ and should be in the queue for 2.6.23.2

	My linux-2.6.23/drivers/md/raid5.c has contained your patch for a long
time:

...
	spin_lock(&sh->lock);
	clear_bit(STRIPE_HANDLE, &sh->state);
	clear_bit(STRIPE_DELAYED, &sh->state);

	s.syncing = test_bit(STRIPE_SYNCING, &sh->state);
	s.expanding = test_bit(STRIPE_EXPAND_SOURCE, &sh->state);
	s.expanded = test_bit(STRIPE_EXPAND_READY, &sh->state);
	/* Now to look around and see what can be done */

	/* clean-up completed biofill operations */
	if (test_bit(STRIPE_OP_BIOFILL, &sh->ops.complete)) {
		clear_bit(STRIPE_OP_BIOFILL, &sh->ops.pending);
		clear_bit(STRIPE_OP_BIOFILL, &sh->ops.ack);
		clear_bit(STRIPE_OP_BIOFILL, &sh->ops.complete);
	}

	rcu_read_lock();
	for (i=disks; i--; ) {
		mdk_rdev_t *rdev;
		struct r5dev *dev = &sh->dev[i];
...

	but it doesn't fix this bug.

	Regards,

	JKB

^ permalink raw reply	[flat|nested] 35+ messages in thread
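[Editorial note: whether a tree really carries the fix can be checked mechanically rather than by eye. A sketch only: the commit id is the one Neil quotes above, the marker string comes from the hunk JKB pasted, and `check_fix` is an invented helper, not anything from the thread:]

```shell
# check_fix TREE: report whether a kernel tree appears to carry the
# biofill clean-up fix. Works on a git checkout or a plain tarball tree.
check_fix() {
    tree=$1
    # In a git tree, ask for the commit object directly. Note this only
    # proves the object exists; it may still not be merged into HEAD.
    if git -C "$tree" rev-parse --verify -q \
        '4ae3f847e49e3787eca91bced31f8fd328d50496^{commit}' >/dev/null 2>&1
    then
        echo "commit object is present in $tree"
    fi
    # Tarball fallback: grep for a line the patch introduced.
    grep -n 'clean-up completed biofill operations' \
        "$tree/drivers/md/raid5.c" 2>/dev/null \
        || echo "biofill clean-up hunk not found in $tree"
}
# e.g.: check_fix /usr/src/linux-2.6.23
```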
* Re: 2.6.23.1: mdadm/raid5 hung/d-state 2007-11-05 8:36 ` BERTRAND Joël @ 2007-11-07 16:39 ` Chuck Ebbert 2007-11-07 16:48 ` BERTRAND Joël 0 siblings, 1 reply; 35+ messages in thread From: Chuck Ebbert @ 2007-11-07 16:39 UTC (permalink / raw) To: BERTRAND Joël; +Cc: Neil Brown, Justin Piszcz, linux-kernel, linux-raid On 11/05/2007 03:36 AM, BERTRAND Joël wrote: > Neil Brown wrote: >> On Sunday November 4, jpiszcz@lucidpixels.com wrote: >>> # ps auxww | grep D >>> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND >>> root 273 0.0 0.0 0 0 ? D Oct21 14:40 >>> [pdflush] >>> root 274 0.0 0.0 0 0 ? D Oct21 13:00 >>> [pdflush] >>> >>> After several days/weeks, this is the second time this has happened, >>> while doing regular file I/O (decompressing a file), everything on >>> the device went into D-state. >> >> At a guess (I haven't looked closely) I'd say it is the bug that was >> meant to be fixed by >> >> commit 4ae3f847e49e3787eca91bced31f8fd328d50496 >> >> except that patch applied badly and needed to be fixed with >> the following patch (not in git yet). >> These have been sent to stable@ and should be in the queue for 2.6.23.2 > > My linux-2.6.23/drivers/md/raid5.c contains your patch for a long > time : > > ... > spin_lock(&sh->lock); > clear_bit(STRIPE_HANDLE, &sh->state); > clear_bit(STRIPE_DELAYED, &sh->state); > > s.syncing = test_bit(STRIPE_SYNCING, &sh->state); > s.expanding = test_bit(STRIPE_EXPAND_SOURCE, &sh->state); > s.expanded = test_bit(STRIPE_EXPAND_READY, &sh->state); > /* Now to look around and see what can be done */ > > /* clean-up completed biofill operations */ > if (test_bit(STRIPE_OP_BIOFILL, &sh->ops.complete)) { > clear_bit(STRIPE_OP_BIOFILL, &sh->ops.pending); > clear_bit(STRIPE_OP_BIOFILL, &sh->ops.ack); > clear_bit(STRIPE_OP_BIOFILL, &sh->ops.complete); > } > > rcu_read_lock(); > for (i=disks; i--; ) { > mdk_rdev_t *rdev; > struct r5dev *dev = &sh->dev[i]; > ... > > but it doesn't fix this bug. 
>
> but it doesn't fix this bug.

Did that chunk starting with "clean-up completed biofill operations" end
up where it belongs? The patch with the big context moves it to a different
place from where the original one puts it when applied to 2.6.23...

Lately I've seen several problems where the context isn't enough to make
a patch apply properly when some offsets have changed. In some cases a
patch won't apply at all because two nearly-identical areas are being
changed and the first chunk gets applied where the second one should,
leaving nowhere for the second chunk to apply.

-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
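[Editorial note: the failure mode Chuck describes is easy to reproduce in miniature. The sketch below is purely illustrative (file names and contents are invented); it builds a file containing two identical three-line blocks and a one-hunk patch whose context matches both. Because the hunk header points at the first block, `patch` quietly changes the first block even if the author meant the second — exactly the kind of silent misplacement being discussed.]

```shell
set -e
dir=$(mktemp -d)
cd "$dir"

# A file with two identical blocks: lines 1-3 and lines 4-6.
printf 'alpha\nbeta\ngamma\nalpha\nbeta\ngamma\n' > target.txt

# A hunk intended for the SECOND block, but carrying a stale offset
# (@@ -1,3 @@) from an older version of the file.
cat > fix.patch <<'EOF'
--- target.txt
+++ target.txt
@@ -1,3 +1,3 @@
 alpha
-beta
+BETA
 gamma
EOF

# The context matches perfectly at line 1, so patch applies it there
# without any warning: line 2 becomes BETA, line 5 stays beta.
patch -s target.txt < fix.patch
sed -n '2p;5p' target.txt
```

With only one nearly-identical region the worst case is a silent misplacement like this; with two, the second hunk finds its target already consumed and the patch is rejected outright, which is the "nowhere for the second chunk to apply" case.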
* Re: 2.6.23.1: mdadm/raid5 hung/d-state
  2007-11-07 16:39 ` Chuck Ebbert
@ 2007-11-07 16:48   ` BERTRAND Joël
  2007-11-08 11:42     ` BERTRAND Joël
  0 siblings, 1 reply; 35+ messages in thread
From: BERTRAND Joël @ 2007-11-07 16:48 UTC (permalink / raw)
To: Chuck Ebbert; +Cc: Neil Brown, Justin Piszcz, linux-kernel, linux-raid

Chuck Ebbert wrote:
> On 11/05/2007 03:36 AM, BERTRAND Joël wrote:
>> Neil Brown wrote:
>>> On Sunday November 4, jpiszcz@lucidpixels.com wrote:
>>>> # ps auxww | grep D
>>>> USER       PID %CPU %MEM   VSZ  RSS TTY   STAT START  TIME COMMAND
>>>> root       273  0.0  0.0     0    0 ?    D    Oct21  14:40
>>>> [pdflush]
>>>> root       274  0.0  0.0     0    0 ?    D    Oct21  13:00
>>>> [pdflush]
>>>>
>>>> After several days/weeks, this is the second time this has happened,
>>>> while doing regular file I/O (decompressing a file), everything on
>>>> the device went into D-state.
>>> At a guess (I haven't looked closely) I'd say it is the bug that was
>>> meant to be fixed by
>>>
>>> commit 4ae3f847e49e3787eca91bced31f8fd328d50496
>>>
>>> except that patch applied badly and needed to be fixed with
>>> the following patch (not in git yet).
>>> These have been sent to stable@ and should be in the queue for 2.6.23.2
>> My linux-2.6.23/drivers/md/raid5.c contains your patch for a long
>> time :
>>
>> ...
>>         spin_lock(&sh->lock);
>>         clear_bit(STRIPE_HANDLE, &sh->state);
>>         clear_bit(STRIPE_DELAYED, &sh->state);
>>
>>         s.syncing = test_bit(STRIPE_SYNCING, &sh->state);
>>         s.expanding = test_bit(STRIPE_EXPAND_SOURCE, &sh->state);
>>         s.expanded = test_bit(STRIPE_EXPAND_READY, &sh->state);
>>         /* Now to look around and see what can be done */
>>
>>         /* clean-up completed biofill operations */
>>         if (test_bit(STRIPE_OP_BIOFILL, &sh->ops.complete)) {
>>                 clear_bit(STRIPE_OP_BIOFILL, &sh->ops.pending);
>>                 clear_bit(STRIPE_OP_BIOFILL, &sh->ops.ack);
>>                 clear_bit(STRIPE_OP_BIOFILL, &sh->ops.complete);
>>         }
>>
>>         rcu_read_lock();
>>         for (i=disks; i--; ) {
>>                 mdk_rdev_t *rdev;
>>                 struct r5dev *dev = &sh->dev[i];
>> ...
>>
>> but it doesn't fix this bug.
>>
>
> Did that chunk starting with "clean-up completed biofill operations" end
> up where it belongs? The patch with the big context moves it to a
> different place from where the original one puts it when applied to
> 2.6.23...
>
> Lately I've seen several problems where the context isn't enough to make
> a patch apply properly when some offsets have changed. In some cases a
> patch won't apply at all because two nearly-identical areas are being
> changed and the first chunk gets applied where the second one should,
> leaving nowhere for the second chunk to apply.

I always apply this kind of patch by hand, never with the patch
command. The last patch sent here seems to fix this bug:

gershwin:[/usr/scripts] > cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md7 : active raid1 sdi1[2] md_d0p1[0]
      1464725632 blocks [2/1] [U_]
      [=====>...............]  recovery = 27.1% (396992504/1464725632)
      finish=1040.3min speed=17104K/sec

Regards,

JKB
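[Editorial note: the finish estimate in that /proc/mdstat output can be cross-checked by hand. The block counts are 1 KiB units and the reported speed is K/sec, so remaining blocks divided by speed gives the remaining time. A quick sketch using the numbers quoted above:]

```shell
# Figures taken from the /proc/mdstat output quoted in this message.
total=1464725632       # array size in 1 KiB blocks
done_blocks=396992504  # blocks already resynced
speed=17104            # K/sec as reported by md
awk -v t="$total" -v d="$done_blocks" -v s="$speed" \
    'BEGIN { printf "%.1f min\n", (t - d) / s / 60 }'
# prints 1040.4 min
```

This agrees with md's own finish=1040.3min to within rounding; md averages the speed over a sliding window, so the two figures can drift apart slightly as the resync rate fluctuates.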
* Re: 2.6.23.1: mdadm/raid5 hung/d-state
  2007-11-07 16:48 ` BERTRAND Joël
@ 2007-11-08 11:42   ` BERTRAND Joël
  2007-11-08 12:44     ` Justin Piszcz
  0 siblings, 1 reply; 35+ messages in thread
From: BERTRAND Joël @ 2007-11-08 11:42 UTC (permalink / raw)
To: BERTRAND Joël
Cc: Chuck Ebbert, Neil Brown, Justin Piszcz, linux-kernel, linux-raid

BERTRAND Joël wrote:
> Chuck Ebbert wrote:
>> On 11/05/2007 03:36 AM, BERTRAND Joël wrote:
>>> Neil Brown wrote:
>>>> On Sunday November 4, jpiszcz@lucidpixels.com wrote:
>>>>> # ps auxww | grep D
>>>>> USER       PID %CPU %MEM   VSZ  RSS TTY   STAT START  TIME
>>>>> COMMAND
>>>>> root       273  0.0  0.0     0    0 ?    D    Oct21  14:40
>>>>> [pdflush]
>>>>> root       274  0.0  0.0     0    0 ?    D    Oct21  13:00
>>>>> [pdflush]
>>>>>
>>>>> After several days/weeks, this is the second time this has happened,
>>>>> while doing regular file I/O (decompressing a file), everything on
>>>>> the device went into D-state.
>>>> At a guess (I haven't looked closely) I'd say it is the bug that was
>>>> meant to be fixed by
>>>>
>>>> commit 4ae3f847e49e3787eca91bced31f8fd328d50496
>>>>
>>>> except that patch applied badly and needed to be fixed with
>>>> the following patch (not in git yet).
>>>> These have been sent to stable@ and should be in the queue for 2.6.23.2
>>> My linux-2.6.23/drivers/md/raid5.c contains your patch for a long
>>> time :
>>>
>>> ...
>>>         spin_lock(&sh->lock);
>>>         clear_bit(STRIPE_HANDLE, &sh->state);
>>>         clear_bit(STRIPE_DELAYED, &sh->state);
>>>
>>>         s.syncing = test_bit(STRIPE_SYNCING, &sh->state);
>>>         s.expanding = test_bit(STRIPE_EXPAND_SOURCE, &sh->state);
>>>         s.expanded = test_bit(STRIPE_EXPAND_READY, &sh->state);
>>>         /* Now to look around and see what can be done */
>>>
>>>         /* clean-up completed biofill operations */
>>>         if (test_bit(STRIPE_OP_BIOFILL, &sh->ops.complete)) {
>>>                 clear_bit(STRIPE_OP_BIOFILL, &sh->ops.pending);
>>>                 clear_bit(STRIPE_OP_BIOFILL, &sh->ops.ack);
>>>                 clear_bit(STRIPE_OP_BIOFILL, &sh->ops.complete);
>>>         }
>>>
>>>         rcu_read_lock();
>>>         for (i=disks; i--; ) {
>>>                 mdk_rdev_t *rdev;
>>>                 struct r5dev *dev = &sh->dev[i];
>>> ...
>>>
>>> but it doesn't fix this bug.
>>>
>>
>> Did that chunk starting with "clean-up completed biofill operations" end
>> up where it belongs? The patch with the big context moves it to a
>> different place from where the original one puts it when applied to
>> 2.6.23...
>>
>> Lately I've seen several problems where the context isn't enough to make
>> a patch apply properly when some offsets have changed. In some cases a
>> patch won't apply at all because two nearly-identical areas are being
>> changed and the first chunk gets applied where the second one should,
>> leaving nowhere for the second chunk to apply.
>
> I always apply this kind of patches by hands, and no by patch
> command. Last patch sent here seems to fix this bug :
>
> gershwin:[/usr/scripts] > cat /proc/mdstat
> Personalities : [raid1] [raid6] [raid5] [raid4]
> md7 : active raid1 sdi1[2] md_d0p1[0]
>       1464725632 blocks [2/1] [U_]
>       [=====>...............]  recovery = 27.1% (396992504/1464725632)
>       finish=1040.3min speed=17104K/sec

Resync done. The patch fixes this bug.
Regards,

JKB
* Re: 2.6.23.1: mdadm/raid5 hung/d-state
  2007-11-08 11:42 ` BERTRAND Joël
@ 2007-11-08 12:44   ` Justin Piszcz
  0 siblings, 0 replies; 35+ messages in thread
From: Justin Piszcz @ 2007-11-08 12:44 UTC (permalink / raw)
To: BERTRAND Joël; +Cc: Chuck Ebbert, Neil Brown, linux-kernel, linux-raid

[-- Attachment #1: Type: TEXT/PLAIN, Size: 3488 bytes --]

On Thu, 8 Nov 2007, BERTRAND Joël wrote:

> BERTRAND Joël wrote:
>> Chuck Ebbert wrote:
>>> On 11/05/2007 03:36 AM, BERTRAND Joël wrote:
>>>> Neil Brown wrote:
>>>>> On Sunday November 4, jpiszcz@lucidpixels.com wrote:
>>>>>> # ps auxww | grep D
>>>>>> USER       PID %CPU %MEM   VSZ  RSS TTY   STAT START  TIME
>>>>>> COMMAND
>>>>>> root       273  0.0  0.0     0    0 ?    D    Oct21  14:40
>>>>>> [pdflush]
>>>>>> root       274  0.0  0.0     0    0 ?    D    Oct21  13:00
>>>>>> [pdflush]
>>>>>>
>>>>>> After several days/weeks, this is the second time this has happened,
>>>>>> while doing regular file I/O (decompressing a file), everything on
>>>>>> the device went into D-state.
>>>>> At a guess (I haven't looked closely) I'd say it is the bug that was
>>>>> meant to be fixed by
>>>>>
>>>>> commit 4ae3f847e49e3787eca91bced31f8fd328d50496
>>>>>
>>>>> except that patch applied badly and needed to be fixed with
>>>>> the following patch (not in git yet).
>>>>> These have been sent to stable@ and should be in the queue for 2.6.23.2
>>>> My linux-2.6.23/drivers/md/raid5.c contains your patch for a long
>>>> time :
>>>>
>>>> ...
>>>>         spin_lock(&sh->lock);
>>>>         clear_bit(STRIPE_HANDLE, &sh->state);
>>>>         clear_bit(STRIPE_DELAYED, &sh->state);
>>>>
>>>>         s.syncing = test_bit(STRIPE_SYNCING, &sh->state);
>>>>         s.expanding = test_bit(STRIPE_EXPAND_SOURCE, &sh->state);
>>>>         s.expanded = test_bit(STRIPE_EXPAND_READY, &sh->state);
>>>>         /* Now to look around and see what can be done */
>>>>
>>>>         /* clean-up completed biofill operations */
>>>>         if (test_bit(STRIPE_OP_BIOFILL, &sh->ops.complete)) {
>>>>                 clear_bit(STRIPE_OP_BIOFILL, &sh->ops.pending);
>>>>                 clear_bit(STRIPE_OP_BIOFILL, &sh->ops.ack);
>>>>                 clear_bit(STRIPE_OP_BIOFILL, &sh->ops.complete);
>>>>         }
>>>>
>>>>         rcu_read_lock();
>>>>         for (i=disks; i--; ) {
>>>>                 mdk_rdev_t *rdev;
>>>>                 struct r5dev *dev = &sh->dev[i];
>>>> ...
>>>>
>>>> but it doesn't fix this bug.
>>>>
>>>
>>> Did that chunk starting with "clean-up completed biofill operations" end
>>> up where it belongs? The patch with the big context moves it to a
>>> different place from where the original one puts it when applied to
>>> 2.6.23...
>>>
>>> Lately I've seen several problems where the context isn't enough to make
>>> a patch apply properly when some offsets have changed. In some cases a
>>> patch won't apply at all because two nearly-identical areas are being
>>> changed and the first chunk gets applied where the second one should,
>>> leaving nowhere for the second chunk to apply.
>>
>> I always apply this kind of patches by hands, and no by patch command.
>> Last patch sent here seems to fix this bug :
>>
>> gershwin:[/usr/scripts] > cat /proc/mdstat
>> Personalities : [raid1] [raid6] [raid5] [raid4]
>> md7 : active raid1 sdi1[2] md_d0p1[0]
>>       1464725632 blocks [2/1] [U_]
>>       [=====>...............]  recovery = 27.1% (396992504/1464725632)
>>       finish=1040.3min speed=17104K/sec
>
> Resync done. Patch fix this bug.
>
> Regards,
>
> JKB
>

Excellent!
I cannot easily reproduce the bug on my system, so I will wait for the
next stable patch set to include the fix and let everyone know if it
happens again. Thanks.
end of thread, other threads: [~2007-11-09 20:36 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed
-- links below jump to the message on this page --)
2007-11-04 12:03 2.6.23.1: mdadm/raid5 hung/d-state Justin Piszcz
2007-11-04 12:39 ` 2.6.23.1: mdadm/raid5 hung/d-state (md3_raid5 stuck in endless loop?) Justin Piszcz
2007-11-04 12:48 ` 2.6.23.1: mdadm/raid5 hung/d-state Michael Tokarev
2007-11-04 12:52   ` Justin Piszcz
2007-11-04 14:55     ` Michael Tokarev
2007-11-04 14:59       ` Justin Piszcz
2007-11-04 18:17       ` BERTRAND Joël
2007-11-04 21:40         ` David Greaves
2007-11-04 13:40 ` BERTRAND Joël
2007-11-04 13:42   ` Justin Piszcz
2007-11-04 21:49     ` Neil Brown
2007-11-04 21:51       ` Justin Piszcz
2007-11-05 18:35         ` Dan Williams
2007-11-05 18:35           ` Justin Piszcz
2007-11-06  0:19             ` Dan Williams
2007-11-06 10:19               ` BERTRAND Joël
2007-11-06 11:29                 ` Justin Piszcz
2007-11-06 11:39                   ` BERTRAND Joël
2007-11-06 11:42                     ` Justin Piszcz
2007-11-06 12:20                       ` BERTRAND Joël
2007-11-07  1:25                         ` Dan Williams
2007-11-07  5:00                           ` Jeff Lessem
2007-11-08 17:45                             ` Bill Davidsen
2007-11-08 18:02                               ` Dan Williams
2007-11-09 20:36                                 ` Jeff Lessem
2007-11-08 21:40                               ` Carlos Carvalho
2007-11-09  9:14                                 ` Justin Piszcz
2007-11-09 14:09                                   ` Fabiano Silva
2007-11-07 11:20                         ` BERTRAND Joël
2007-11-06 23:18                 ` Jeff Lessem
2007-11-05  8:36       ` BERTRAND Joël
2007-11-07 16:39         ` Chuck Ebbert
2007-11-07 16:48           ` BERTRAND Joël
2007-11-08 11:42             ` BERTRAND Joël
2007-11-08 12:44               ` Justin Piszcz