public inbox for linux-kernel@vger.kernel.org
* 2.6.0-t3: vfs/ext3 do_lookup bug?!
@ 2003-08-20 15:14 Martin Zwickel
  2003-08-20 18:36 ` Andrew Morton
  0 siblings, 1 reply; 8+ messages in thread
From: Martin Zwickel @ 2003-08-20 15:14 UTC (permalink / raw)
  To: LKML

[-- Attachment #1: Type: text/plain, Size: 2076 bytes --]

Hi there!

Today I wanted to check out some src-files from cvs.
But my mistake was that I ran cvs twice at the same time.

So two "cvs upd -d -A" processes are now in 'D' state.

I think they got stuck because both tried to access the same file.


#ps lax
4     0 11833 11832  15   0  4136 2664 down   D    pts/18     0:07 cvs upd -d -A
4     0 11933 11932  22   0  2672 1180 down   D    pts/19     0:00 cvs upd -d -A

#sysrq-t + dmesg:
cvs           D 00000196 294197680 11833  11832                     (NOTLB)
dc84de5c 00000082 d2314a0c 00000196 d23148fc dc84de58 cafba080 cccee770 
       00000282 cafba080 cccee778 c0107f8e 00000001 cafba080 c0119687 d49d3e94 
       cccee778 dc84de7c 00000000 d919a9f0 00000000 cccee770 cccee708 d919a980 
Call Trace:
 [<c0107f8e>] __down+0x7c/0xc7
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c0108118>] __down_failed+0x8/0xc
 [<c0159df3>] .text.lock.namei+0x5/0x16a
 [<c0156c98>] do_lookup+0x96/0xa1
 [<c0157077>] link_path_walk+0x3d4/0x762
 [<c0157c57>] open_namei+0x8e/0x3f3
 [<c014a265>] filp_open+0x43/0x69
 [<c014a64c>] sys_open+0x5b/0x8b
 [<c0109063>] syscall_call+0x7/0xb

cvs           D C8E86424 297794096 11933  11932                     (NOTLB)
d49d3e80 00000086 c03a561b c8e86424 00000001 d092d408 c2dd2080 cccee770 
       00000286 c2dd2080 cccee778 c0107f8e 00000001 c2dd2080 c0119687 cccee778 
       dc84de70 d49d3f38 dffe46c0 d49d3ee4 00000000 cccee770 cccee708 d919a980 
Call Trace:
 [<c0107f8e>] __down+0x7c/0xc7
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c0108118>] __down_failed+0x8/0xc
 [<c0159df3>] .text.lock.namei+0x5/0x16a
 [<c0156c98>] do_lookup+0x96/0xa1
 [<c0157077>] link_path_walk+0x3d4/0x762
 [<c015783c>] __user_walk+0x49/0x5e
 [<c0149be1>] sys_access+0x93/0x150
 [<c015393d>] sys_stat64+0x37/0x39
 [<c0109063>] syscall_call+0x7/0xb


So is this a kernel bug? The down() in do_lookup shouldn't block the process
forever...

Regards,
Martin

-- 
MyExcuse:
The salesman drove over the CPU board.

Martin Zwickel <martin.zwickel@technotrend.de>
Research & Development

TechnoTrend AG <http://www.technotrend.de>

[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: 2.6.0-t3: vfs/ext3 do_lookup bug?!
  2003-08-20 15:14 2.6.0-t3: vfs/ext3 do_lookup bug?! Martin Zwickel
@ 2003-08-20 18:36 ` Andrew Morton
  2003-08-20 19:06   ` Linus Torvalds
  0 siblings, 1 reply; 8+ messages in thread
From: Andrew Morton @ 2003-08-20 18:36 UTC (permalink / raw)
  To: Martin Zwickel; +Cc: linux-kernel

Martin Zwickel <martin.zwickel@technotrend.de> wrote:
>
> Today I wanted to check out some src-files from cvs.
>  But my mistake was that I ran cvs twice at the same time.
> 
>  So two "cvs upd -d -A" processes are now in 'D' state.
> 
>  I think they got stuck because both tried to access the same file.

How odd.  Were they the only processes which were in D state?


* Re: 2.6.0-t3: vfs/ext3 do_lookup bug?!
  2003-08-20 18:36 ` Andrew Morton
@ 2003-08-20 19:06   ` Linus Torvalds
  2003-08-21  6:33     ` Martin Zwickel
  0 siblings, 1 reply; 8+ messages in thread
From: Linus Torvalds @ 2003-08-20 19:06 UTC (permalink / raw)
  To: linux-kernel

Andrew Morton wrote:

> Martin Zwickel <martin.zwickel@technotrend.de> wrote:
>>
>> Today I wanted to check out some src-files from cvs.
>>  But my mistake was that I ran cvs twice at the same time.
>> 
>>  So two "cvs upd -d -A" processes are now in 'D' state.
>> 
>>  I think they got stuck because both tried to access the same file.
> 
> How odd.  Were they the only processes which were in D state?

They are probably hung on the same semaphore, ie "dir->i_sem". They almost
certainly are not deadlocked on each other: something else has either left
the directory semaphore locked, or is in turn waiting for something else.

Martin - do you have the _full_ list of processes available?

                Linus


* Re: 2.6.0-t3: vfs/ext3 do_lookup bug?!
  2003-08-20 19:06   ` Linus Torvalds
@ 2003-08-21  6:33     ` Martin Zwickel
  2003-08-21  6:41       ` Andrew Morton
  0 siblings, 1 reply; 8+ messages in thread
From: Martin Zwickel @ 2003-08-21  6:33 UTC (permalink / raw)
  To: torvalds; +Cc: linux-kernel, Andrew Morton

[-- Attachment #1: Type: text/plain, Size: 1371 bytes --]

On Wed, 20 Aug 2003 12:06:34 -0700
Linus Torvalds <torvalds@osdl.org> bubbled:

> Andrew Morton wrote:
> 
> > Martin Zwickel <martin.zwickel@technotrend.de> wrote:
> >>
> >> Today I wanted to check out some src-files from cvs.
> >>  But my mistake was that I ran cvs twice at the same time.
> >> 
> >>  So two "cvs upd -d -A" processes are now in 'D' state.
> >> 
> >>  I think they got stuck because both tried to access the same file.
> > 
> > How odd.  Were they the only processes which were in D state?
> 
> They are probably hung on the same semaphore, ie "dir->i_sem". They almost
> certainly are not deadlocked on each other: something else has either left
> the directory semaphore locked, or is in turn waiting for something else.
> 
> Martin - do you have the _full_ list of processes available?
> 
>                 Linus

Hmm, still not rebooted ;)
And I noticed that updatedb has now also got stuck in the same down().
(somewhere @
/work/BuildEnv/sandbox/components/base/kernel/source/linux-2.4.21-rc1/include/config/x86/f00f/CVS/)
But where could the semaphore/mutex have been locked and never unlocked?
The ps lax and lsof output is attached.
I'll reboot and fsck the fs...

And thanks for the response.

Regards,
Martin

-- 
MyExcuse:
CPU needs bearings repacked

Martin Zwickel <martin.zwickel@technotrend.de>
Research & Development

TechnoTrend AG <http://www.technotrend.de>

[-- Attachment #2: pslax.list --]
[-- Type: application/octet-stream, Size: 16682 bytes --]

F   UID   PID  PPID PRI  NI   VSZ  RSS WCHAN  STAT TTY        TIME COMMAND
4     0     1     0  15   0  1428  452 schedu S    ?          0:08 init
1     0     2     1  34  19     0    0 ksofti SWN  ?          0:00 [ksoftirqd/0]
1     0     3     1   5 -10     0    0 worker SW<  ?          0:01 [events/0]
1     0     4     1   5 -10     0    0 worker SW<  ?          0:23 [kblockd/0]
1     0     5     1  15   0     0    0 hub_th SW   ?          0:00 [khubd]
1     0     8     1  15   0     0    0 kswapd SW   ?          0:40 [kswapd0]
1     0     9     1  10 -10     0    0 worker SW<  ?          0:00 [aio/0]
1     0    10     1  24   0     0    0 serio_ SW   ?          0:00 [kseriod]
1     0    11     1  15   0     0    0 kjourn SW   ?          0:05 [kjournald]
5     0   117     1  25   0  1824  792 devfsd S    ?          0:00 /sbin/devfsd /dev
1     0  2392     1  15   0     0    0 kjourn SW   ?          0:00 [kjournald]
1     0  2393     1  15   0     0    0 kjourn SW   ?          0:00 [kjournald]
1     0  2394     1  15   0     0    0 kjourn SW   ?          0:01 [kjournald]
1     0  2395     1  15   0     0    0 kjourn SW   ?          0:00 [kjournald]
5     0  2976     1  25   0  1448  464 schedu S    ?          0:00 /usr/sbin/acpid -c /etc/acpi/events
1  1009  3040     1  15   0  2344  864 clock_ S    ?          0:00 /usr/bin/freshclam -d -c 2 -l /var/log/clam-update.log
1     4  3158     1  15   0  2764  872 schedu S    ?          0:00 lpd Waiting  
5     1  3293     1  15   0  1480  304 schedu S    ?          0:00 /sbin/portmap
5     0  3326     1  25   0  1620  580 schedu S    ?          0:00 /sbin/rpc.statd
5     0  3342     1  15   0     0    0 schedu SW   ?          0:00 [nfsd]
1     0  3343     1  15   0     0    0 schedu SW   ?          0:00 [nfsd]
1     0  3344     1  15   0     0    0 schedu SW   ?          0:00 [nfsd]
1     0  3345     1  15   0     0    0 schedu SW   ?          0:00 [nfsd]
1     0  3350     1  21   0     0    0 schedu SW   ?          0:00 [lockd]
1     0  3351     1  15   0     0    0 rpciod SW   ?          0:01 [rpciod]
1     0  3346     1  15   0     0    0 schedu SW   ?          0:00 [nfsd]
1     0  3347     1  15   0     0    0 schedu SW   ?          0:00 [nfsd]
1     0  3348     1  15   0     0    0 schedu SW   ?          0:00 [nfsd]
1     0  3349     1  15   0     0    0 schedu SW   ?          0:00 [nfsd]
1     0  3355     1  23   0  1760  572 schedu S    ?          0:00 /usr/sbin/rpc.mountd
5     0  3445     1  15   0  2060  968 schedu S    ?          0:02 /usr/sbin/syslog-ng
5     0  3454     1  15   0  2120 2116 schedu SL   ?          0:00 /usr/bin/ntpd -p /var/run/ntpd.pid
4     0  3527     1  15   0  3976  828 schedu S    ?          0:00 /usr/lib/postfix/master
4   207  3551  3527  15   0  4024 1064 schedu S    ?          0:00 qmgr -l -t fifo -u
5     0  3693     1  15   0  1700  712 clock_ S    ?          0:00 /usr/sbin/smartd
5     0  3724     1  17   0 20744 4536 schedu S    ?          0:05 /usr/bin/perl /usr/bin/spamd -d -r /var/run/spamd.pid -a -c -L -s stderr
5     0  3755     1  16   0  2952  984 schedu S    ?          0:00 /usr/sbin/sshd
5     0  3778     1  15   0  1588  624 clock_ S    ?          0:00 /usr/sbin/cron
1     0  3837     1  25   0  1416  228 pause  S    ?          0:00 /opt/vmware/bin/vmnet-bridge -d /var/run/vmnet-bridge-0.pid /dev/vmnet0 eth0
5     0  4195     1  25   0  2208  588 schedu S    ?          0:00 /usr/sbin/xinetd -pidfile /var/run/xinetd.pid -stayalive -reuse
1     0  4198     1  25   0  1416  228 pause  S    ?          0:00 /opt/vmware/bin/vmnet-netifup -d /var/run/vmnet-netifup-vmnet1.pid /dev/vmnet1 vmnet1
1     0  4201     1  25   0  1416  228 pause  S    ?          0:00 /opt/vmware/bin/vmnet-netifup -d /var/run/vmnet-netifup-vmnet8.pid /dev/vmnet8 vmnet8
5     0  4218     1  15   0  1652  520 schedu S    ?          0:00 /opt/vmware/bin/vmnet-dhcpd -cf /etc/vmware/vmnet1/dhcpd/dhcpd.conf -lf /etc/vmware/vmnet1/dhcpd/dhcpd.leases -pf /var/run/vmnet-dhcpd-vmnet1.pid vmnet1
1     0  4219     1  25   0  1648  488 schedu S    ?          0:00 /opt/vmware/bin/vmnet-dhcpd -cf /etc/vmware/vmnet8/dhcpd/dhcpd.conf -lf /etc/vmware/vmnet8/dhcpd/dhcpd.leases -pf /var/run/vmnet-dhcpd-vmnet8.pid vmnet8
5     0  4221     1  15   0  2040  748 schedu S    ?          0:02 /opt/vmware/bin/vmware-nmbd -D -l /dev/null -s /etc/vmware/vmnet1/smb/smb.conf -f /var/run/vmware-nmbd-vmnet1.pid
5     0  4223     1  25   0  3576  572 schedu S    ?          0:00 /opt/vmware/bin/vmware-smbd -D -l /dev/null -s /etc/vmware/vmnet1/smb/smb.conf -f /var/run/vmware-smbd-vmnet1.pid
4     0  4238     1  22   0  1404  412 schedu S    tty1       0:00 /sbin/agetty 38400 tty1 linux
4     0  4239     1  22   0  1404  412 schedu S    tty2       0:00 /sbin/agetty 38400 tty2 linux
4     0  4240     1  22   0  1404  412 schedu S    tty3       0:00 /sbin/agetty 38400 tty3 linux
4     0  4241     1  22   0  1404  412 schedu S    tty4       0:00 /sbin/agetty 38400 tty4 linux
4     0  4242     1  22   0  1404  412 schedu S    tty5       0:00 /sbin/agetty 38400 tty5 linux
4     0  4243     1  22   0  1404  412 schedu S    tty6       0:00 /sbin/agetty 38400 tty6 linux
5     0  4255     1  15   0 12772 1500 schedu S    ?          0:00 /usr/bin/gdm
4     0  5590     1  21   0  2132  836 wait4  S    ?          0:00 /bin/sh /root/startgame.sh ooffice
4     0  5604  5590  15   0 144708 1308 rt_sig S   ?          1:04 /work2/opt/OpenOffice.org1.1_beta/program/soffice.bin
5     0  5641  5604  15   0 144708 1308 schedu S   ?          0:00 /work2/opt/OpenOffice.org1.1_beta/program/soffice.bin
1     0  5642  5641  24   0 144708 1308 schedu S   ?          0:00 /work2/opt/OpenOffice.org1.1_beta/program/soffice.bin
5     0  5644  5641  15   0 144708 1308 rt_sig S   ?          0:00 /work2/opt/OpenOffice.org1.1_beta/program/soffice.bin
1     0  5721  5641  15   0 144708 1308 rt_sig S   ?          0:01 /work2/opt/OpenOffice.org1.1_beta/program/soffice.bin
5     0 32386  4255  17   0 13600 1892 wait4  S    ?          0:00 /usr/bin/gdm
4     0 32607 32386  24   0  2136  848 wait4  S    ?          0:00 /bin/sh /etc/X11/gdm/Sessions//enlightenment
4     0 32625 32607  15   0  2468  788 clock_ S    ?          0:01 xautolock -locker xlock -mode blank -icongeometry 256x256 -nice 20 -timeout 10 -info 'Time to waste, waste to time!' -timeelapsed -echokeys -dpmssuspend 60 -dpmsoff 120 -time 5
4     0 32627 32607  15   0  8304 4096 schedu S    ?         57:34 /usr/bin/enlightenment
5     0 32628 32627  15   0  2504  772 schedu S    ?          0:00 ssh-agent -- bash -c ssh-add; exec /usr/bin/enlightenment
4     0 32642 32627  15   0  3672 1092 schedu S    ?          0:00 /usr/bin/E-Exec.epplet
4     0 32644 32627  15   0  3744 1332 schedu S    ?          0:01 E-Clock.epplet
4     0 32695 32627  15   0 10212 2348 schedu S    ?          0:00 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0 32699 32627  15   0 10220 2684 schedu S    ?          0:00 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0 32701 32627  15   0  8648 1760 schedu S    ?          0:00 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0 32702 32699  17   0  2868 1540 schedu S    pts/0      0:00 -bash
4     0 32705 32695  15   0  2868 1528 schedu S    pts/1      0:00 -bash
4     0 32708 32701  19   0  2868  996 schedu S    pts/2      0:00 -bash
4     0   301 32627  15   0  9288 1832 schedu S    ?          0:00 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0   302   301  15   0  2868  996 wait4  S    pts/3      0:00 -bash
4     0   963 32627  15   0  8584 1712 schedu S    ?          0:00 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0   964   963  15   0  2868  996 schedu S    pts/4      0:00 -bash
4     0  1147 32627  15   0  8596 1768 schedu S    ?          0:00 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0  1148  1147  24   0  2868 1000 wait4  S    pts/5      0:00 -bash
4     0  1280   302  15   0  6464 1300 schedu S    pts/3      0:00 vi transponders.c
4     0  1796  1148  15   0  6396  628 schedu S    pts/5      0:00 vi /work/budget/test/scan/liste
4     0  1957 32627  15   0  8592 1728 schedu S    ?          0:00 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0  1958  1957  15   0  2868  996 schedu S    pts/7      0:00 -bash
4     0  2429 32627  15   0  8584 2292 schedu S    ?          0:00 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0  2430  2429  15   0  2868 1376 wait4  S    pts/8      0:00 -bash
4     0  5576 32627  15   0  8580 3020 schedu S    ?          0:02 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0  5578 32627  15   0  8572 2252 schedu S    ?          0:00 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0  5580 32627  15   0  8596 3056 schedu S    ?          0:00 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0  5582 32627  15   0  8572 2432 schedu S    ?          0:00 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0  5583  5576  16   0  2868 1492 schedu S    pts/10     0:00 -bash
4     0  5584  5578  16   0  2868 1420 schedu S    pts/9      0:00 -bash
4     0  5585  5580  15   0  2868 1332 schedu S    pts/11     0:00 -bash
4     0  5586  5582  15   0  2868 1320 schedu S    pts/12     0:00 -bash
4     0  5675 32627  15   0  8572 1984 schedu S    ?          0:01 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0  5676  5675  15   0  2868 1036 schedu S    pts/13     0:00 -bash
4     0 26899 32627  15   0  8580 2580 schedu S    ?          0:02 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0 26900 26899  17   0  2868 1408 schedu S    pts/15     0:00 -bash
4     0 27148 32627  15   0  8572 2072 schedu S    ?          0:00 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0 27149 27148  15   0  2868 1508 schedu S    pts/6      0:00 -bash
4     0 27200  2430  15   0  2816 1424 schedu S    pts/8      0:00 ssh -p 48116 xeno@192.168.0.1
4     0 27261 32627  15   0  8588 3256 schedu S    ?          0:01 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0 27263 32627  15   0  8580 2724 schedu S    ?          0:00 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0 27264 27263  15   0  2868 1888 wait4  S    pts/16     0:00 -bash
4     0 27265 27261  15   0  2868 1888 schedu S    pts/17     0:00 -bash
4     0 27701 32627  15   0  8588 3092 schedu S    ?          0:00 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0 27702 27701  15   0  2868 1888 wait4  S    pts/18     0:00 -bash
4     0 27745 27264  15   0  6456 2484 schedu S    pts/16     0:00 vi Config.in.templ
4     0 28068 32627  15   0  9212 6488 schedu S    ?          0:06 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0 28069 28068  15   0  2868 1892 wait4  S    pts/19     0:00 -bash
4     0 28132 32627  15   0  9548 6820 schedu S    ?          0:03 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0 28133 28132  15   0  2868 1904 schedu S    pts/20     0:00 -bash
4     0 30309 32627  15   0  8588 3040 schedu S    ?          0:00 xterm -title SSH@Router -ls -sb -sl 10000 -r -e bash --login -c ssh -p 48116 xeno@192.168.0.1
4     0 30310 30309  15   0  2816 1424 schedu S    pts/22     0:00 ssh -p 48116 xeno@192.168.0.1
4     0 30332 32627  15   0  8572 2692 schedu S    ?          0:00 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0 30333 30332  15   0  2868 1856 schedu S    pts/23     0:00 -bash
4     0 30380 32627  15   0  8588 3740 schedu S    ?          0:01 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0 30381 30380  15   0  2868 1884 wait4  S    pts/24     0:00 -bash
4     0 30926 32627  15   0  8592 5652 schedu S    ?          0:05 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0 30927 30926  17   0  2868 1412 schedu S    pts/25     0:00 -bash
4     0 31590 32627  21   0 48900 15656 rt_sig S   ?          0:00 /usr/local/bin/licq
5     0 31591 31590  15   0 48900 15656 schedu S   ?          0:00 /usr/local/bin/licq
1     0 31592 31591  15   0 48900 15656 schedu S   ?          0:00 /usr/local/bin/licq
1     0 31593 31591  15   0 48900 15656 schedu S   ?          0:00 /usr/local/bin/licq
5     0 31594 31591  15   0 48900 15656 schedu S   ?          0:03 /usr/local/bin/licq
4     0 31910 32627  15   0  8580 3012 schedu S    ?          0:00 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0 31911 31910  15   0  2868 1884 schedu S    pts/26     0:00 -bash
4     0  7879 32627  15   0  8604 5544 schedu S    ?          0:09 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0  7880  7879  15   0  2868 1892 schedu S    pts/28     0:00 -bash
4     0  8207 32627  15   0 67256 45668 schedu S   ?          7:55 /usr/lib/mozilla/mozilla-bin
5     0  8324  8207  15   0 67256 45668 schedu S   ?          0:00 /usr/lib/mozilla/mozilla-bin
5     0  8325  8324  15   0 67256 45668 schedu S   ?          0:00 /usr/lib/mozilla/mozilla-bin
1     0  8326  8324  20   0 67256 45668 rt_sig S   ?          0:00 /usr/lib/mozilla/mozilla-bin
5     0  8412  8324  15   0 67256 45668 clock_ S   ?          0:01 /usr/lib/mozilla/mozilla-bin
4     0 11690     1  15   0  2804 1072 schedu S    ?          0:00 /usr/bin/esd -terminate -nobeeps -as 2 -spawnfd 25
1     0 14096     1  15   0     0    0 pdflus SW   ?          0:00 [pdflush]
1     0  7526     1  15   0     0    0 pdflus SW   ?          0:00 [pdflush]
4     0 15687 32627  15   0 43748 27548 schedu SL  ?          1:58 sylpheed
4     0 19661 30381  15   0  2948 1552 schedu S    pts/24     0:00 ssh root@dhcp240
4     0  6696 32627  15   0 16492 10644 schedu S   ?         11:29 gkrellm2
4     0  6762 32627  15   0  8572 2652 schedu S    ?          0:00 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0  6763  6762  18   0  2868 1860 schedu S    pts/14     0:00 -bash
4     0 32593 32386  35  19 395604 109660 -   RNL  ?        291:37 /etc/X11/X :0 -nolisten tcp -auth /var/lib/gdm/:0.Xauth vt7
4     0  7556 32627  15   0  8572 2984 schedu S    ?          0:00 xterm -title SSH@Server -ls -sb -sl 10000 -r -e bash --login -c ssh xeno@192.168.0.2
4     0  7557  7556  15   0  2892 1464 schedu S    pts/27     0:00 ssh xeno@192.168.0.2
5     0 11219     1  39  19  2856 1560 fork   DN   ?          0:00 ./foldingathome
1     0 11225 11219  39  19  2856 1560 rt_sig SN   ?          0:00 ./foldingathome
4     0 11608 32627  15   0  8588 2916 schedu S    ?          0:00 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0 11609 11608  15   0  2872 1896 schedu S    pts/29     0:00 -bash
4     0 11832 27702  15   0  2136  932 wait4  S    pts/18     0:00 /bin/sh /usr/bin/cvsupd
4     0 11833 11832  15   0  4136 2656 down   D    pts/18     0:07 cvs upd -d -A
4     0 11932 28069  15   0  2136  932 wait4  S    pts/19     0:00 /bin/sh /usr/bin/cvsupd
4     0 11933 11932  22   0  2672 1180 down   D    pts/19     0:00 cvs upd -d -A
4     0 15862 32627  15   0  8580 5816 schedu S    ?          0:00 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0 15863 15862  15   0  2872 1896 schedu S    pts/30     0:00 -bash
4     0 16083 32627  15   0  8580 2744 schedu S    ?          0:00 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0 16084 16083  15   0  2872 1900 schedu S    pts/31     0:00 -bash
4     0 16112 16084  15   0 40428 13720 schedu S   pts/31     0:19 xmms -e Ayak - Sail Away.mp3
5     0 16113 16112  15   0 40428 13720 schedu S   pts/31     0:00 xmms -e Ayak - Sail Away.mp3
5     0 16114 16113  15   0 40428 13720 schedu S   pts/31     5:41 xmms -e Ayak - Sail Away.mp3
5     0 16115 16113  15   0 40428 13720 clock_ S   pts/31     0:00 xmms -e Ayak - Sail Away.mp3
1     0 16443  8324  15   0 67256 45668 rt_sig S   ?          0:00 /usr/lib/mozilla/mozilla-bin
4     0 20370 32627  15   0  9932 7192 schedu S    ?          0:01 xterm -title Xeno's Terminal -ls -sb -sl 10000 -vb -r
4     0 20371 20370  15   0  2872 1868 wait4  S    pts/32     0:00 -bash
5     0 22082  3778  21   0  1596  676 pipe_w S    ?          0:00 /USR/SBIN/CRON
4     0 22083 22082  15   0  2120  924 wait4  S    ?          0:00 /bin/sh -c test -x /usr/sbin/run-crons && /usr/sbin/run-crons 
4     0 22084 22083  25   0  2132  976 wait4  S    ?          0:00 /bin/sh /usr/sbin/run-crons
4     0 22099 22084  25   0  2120  916 wait4  S    ?          0:00 /bin/sh /etc/cron.daily/slocate.cron
4     0 22100 22099  15   0  1688  648 down   D    ?          0:07 /usr/bin/updatedb
4   207 23295  3527  15   0  3868 1172 schedu S    ?          0:00 pickup -l -t fifo -u
4     0 27496 20371  21   0  2600  664 -      R    pts/32     0:00 ps lax

[-- Attachment #3: cutted-dmesg.txt --]
[-- Type: text/plain, Size: 15349 bytes --]

n+0x0/0x2e
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c0109063>] syscall_call+0x7/0xb

cvs           D C8E86424 297794096 11933  11932                     (NOTLB)
d49d3e80 00000086 c03a561b c8e86424 00000001 d092d408 c2dd2080 cccee770 
       00000286 c2dd2080 cccee778 c0107f8e 00000001 c2dd2080 c0119687 c7ec1f3c 
       dc84de70 d49d3f38 dffe46c0 d49d3ee4 00000000 cccee770 cccee708 d919a980 
Call Trace:
 [<c0107f8e>] __down+0x7c/0xc7
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c0108118>] __down_failed+0x8/0xc
 [<c0159df3>] .text.lock.namei+0x5/0x16a
 [<c0156c98>] do_lookup+0x96/0xa1
 [<c0157077>] link_path_walk+0x3d4/0x762
 [<c015783c>] __user_walk+0x49/0x5e
 [<c0149be1>] sys_access+0x93/0x150
 [<c015393d>] sys_stat64+0x37/0x39
 [<c0109063>] syscall_call+0x7/0xb

xterm         S 00000000 344429488 15862  32627 15863   16083 11608 (NOTLB)
dab51e8c 00000086 000000d0 00000000 000000d0 d3d55100 c62d6680 00000000 
       7fffffff 00000005 00000000 c01237d6 d7f51000 d7f51920 dab51f20 d7f51000 
       c9883d80 00000004 00000004 c0281033 d7f51000 c9883d80 dab51f20 00000000 
Call Trace:
 [<c01237d6>] schedule_timeout+0xb1/0xb3
 [<c0281033>] tty_poll+0x7b/0x9f
 [<c015b73d>] do_select+0x17b/0x2b6
 [<c015b41d>] __pollwait+0x0/0xc4
 [<c015bba5>] sys_select+0x308/0x549
 [<c03371a4>] sock_ioctl+0xaf/0x23d
 [<c0109063>] syscall_call+0x7/0xb

bash          S C0282EEA 3980833712 15863  15862                     (NOTLB)
c4037e78 00000082 0000000b c0282eea d2b19000 00000000 d6bca680 00000008 
       7fffffff d2b19928 00000001 c01237d6 d2b19000 dedd9640 00000092 c4036000 
       c02822b3 0000001c dfb5f3e0 db811bc8 c0123f8d c0131d4d 00000246 00000008 
Call Trace:
 [<c0282eea>] opost_block+0x116/0x1e0
 [<c01237d6>] schedule_timeout+0xb1/0xb3
 [<c02822b3>] tty_default_put_char+0x33/0x37
 [<c0123f8d>] __dequeue_signal+0xe8/0x184
 [<c0131d4d>] unlock_page+0x15/0x53
 [<c028489d>] read_chan+0x29f/0x83a
 [<c0284fb9>] write_chan+0x181/0x244
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c027fc85>] tty_read+0xe5/0x10e
 [<c014ae09>] vfs_read+0xbc/0x127
 [<c014b094>] sys_read+0x42/0x63
 [<c0109063>] syscall_call+0x7/0xb

xterm         S 00000000 112604080 16083  32627 16084   20370 15862 (NOTLB)
ce933e8c 00000082 000000d0 00000000 000000d0 cfe92980 c7dce680 00000000 
       7fffffff 00000005 00000000 c01237d6 c7ba6000 c7ba6920 ce933f20 c7ba6000 
       d6f06780 00000004 00000004 c0281033 c7ba6000 d6f06780 ce933f20 00000000 
Call Trace:
 [<c01237d6>] schedule_timeout+0xb1/0xb3
 [<c0281033>] tty_poll+0x7b/0x9f
 [<c015b73d>] do_select+0x17b/0x2b6
 [<c015b41d>] __pollwait+0x0/0xc4
 [<c015bba5>] sys_select+0x308/0x549
 [<c03371a4>] sock_ioctl+0xaf/0x23d
 [<c0109063>] syscall_call+0x7/0xb

bash          S C0282EEA 3833189296 16084  16083 16112               (NOTLB)
c2f23e78 00000086 0000000a c0282eea d4ebf000 00000000 de784680 00000008 
       7fffffff d4ebf928 00000001 c01237d6 d4ebf000 c57304c0 0000000a c2f22000 
       c02822b3 d4ebf000 00000000 c2f23ea7 00000001 c0131d4d 00000246 00000008 
Call Trace:
 [<c0282eea>] opost_block+0x116/0x1e0
 [<c01237d6>] schedule_timeout+0xb1/0xb3
 [<c02822b3>] tty_default_put_char+0x33/0x37
 [<c0131d4d>] unlock_page+0x15/0x53
 [<c028489d>] read_chan+0x29f/0x83a
 [<c0284fb9>] write_chan+0x181/0x244
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c027fc85>] tty_read+0xe5/0x10e
 [<c014ae09>] vfs_read+0xbc/0x127
 [<c014b094>] sys_read+0x42/0x63
 [<c0109063>] syscall_call+0x7/0xb

xmms          S C015B45E 4269691824 16112  16084 16113               (NOTLB)
d0bcff04 00000086 00000000 c015b45e 00000246 d0bcff18 d23e8680 2db9a07c 
       d0bcff18 d0bcff60 0000000a c0123784 d0bcff18 c96b1f00 c619c940 c0412668 
       c0412668 2db9a07c 4b87ad6e c012371c d23e8680 c0412280 d0bcff5c 00000000 
Call Trace:
 [<c015b45e>] __pollwait+0x41/0xc4
 [<c0123784>] schedule_timeout+0x5f/0xb3
 [<c012371c>] process_timeout+0x0/0x9
 [<c015bf1c>] do_poll+0xa6/0xc5
 [<c015c0d8>] sys_poll+0x19d/0x28d
 [<c015b41d>] __pollwait+0x0/0xc4
 [<c0109063>] syscall_call+0x7/0xb

xmms          S CB07DFA0 75725104 16113  16112 16114               (NOTLB)
cb07df04 00000086 c96b1480 cb07dfa0 00000246 cb07df18 c6844100 2db9a77e 
       cb07df18 cb07df60 000007d1 c0123784 cb07df18 c96b1480 c57301c0 cc201f18 
       dfd6128c 2db9a77e 4b87ad6e c012371c c6844100 c0412280 cb07df5c 00000000 
Call Trace:
 [<c0123784>] schedule_timeout+0x5f/0xb3
 [<c012371c>] process_timeout+0x0/0x9
 [<c015bf1c>] do_poll+0xa6/0xc5
 [<c015c0d8>] sys_poll+0x19d/0x28d
 [<c015b41d>] __pollwait+0x0/0xc4
 [<c0109063>] syscall_call+0x7/0xb

xmms          S 00000000 227344688 16114  16113         16115       (NOTLB)
d4117e8c 00000086 000000d0 00000000 00000246 d4117ea0 c6845900 2db9a0ca 
       d4117ea0 00000005 00000000 c0123784 d4117ea0 dd7ca200 00000010 c04128d8 
       c04128d8 2db9a0ca 4b87ad6e c012371c c6845900 c0412280 c9edb980 00000000 
Call Trace:
 [<c0123784>] schedule_timeout+0x5f/0xb3
 [<c012371c>] process_timeout+0x0/0x9
 [<c015b73d>] do_select+0x17b/0x2b6
 [<c015b41d>] __pollwait+0x0/0xc4
 [<c015bba5>] sys_select+0x308/0x549
 [<c015f229>] dput+0xd5/0x161
 [<c014bdac>] __fput+0x9f/0xf5
 [<c0109063>] syscall_call+0x7/0xb

xmms          S 00000000 340643120 16115  16113               16114 (NOTLB)
dad23ef8 00000086 00000000 00000000 00000246 dad23f68 c6844d00 dad22000 
       dad23f68 00000000 00000000 c012c5a0 dad23f68 dad23f18 00000000 dad23f20 
       c04d0fa0 dad2201c 00000000 1dcd6500 2db9a0cd 00000001 00000000 c6844d00 
Call Trace:
 [<c012c5a0>] do_clock_nanosleep+0x1b6/0x303
 [<c0119687>] default_wake_function+0x0/0x2e
 [<e0a39cdd>] __nvsym00795+0x31/0x50 [nvidia]
 [<c0119687>] default_wake_function+0x0/0x2e
 [<e0a22947>] nv_kern_isr+0x25/0x5c [nvidia]
 [<c012c1a2>] nanosleep_wake_up+0x0/0x9
 [<c012c23c>] sys_nanosleep+0x80/0x105
 [<c0109063>] syscall_call+0x7/0xb

mozilla-bin   S 00000001 4077705648 16443   8324                8412 (NOTLB)
d2975f8c 00000086 00000000 00000001 c014b000 c3868280 df8a6080 d2974000 
       d2975fc4 d2975fa8 00000000 c010827c d2975fa8 429b48ac 00000008 80000000 
       00000000 00000000 00000000 429b48ac 400f1078 00000000 d2974000 c0109063 
Call Trace:
 [<c014b000>] vfs_write+0xd5/0x127
 [<c010827c>] sys_rt_sigsuspend+0xc7/0xfb
 [<c0109063>] syscall_call+0x7/0xb

xterm         S 00000000 229380916 20370  32627 20371   27545 16083 (NOTLB)
d4307e8c 00000082 000000d0 00000000 000000d0 ca507f80 c6844700 00000000 
       7fffffff 00000005 00000000 c01237d6 d728b000 d728b920 d4307f20 d728b000 
       ca507300 00000004 00000004 c0281033 d728b000 ca507300 00000000 00000000 
Call Trace:
 [<c01237d6>] schedule_timeout+0xb1/0xb3
 [<c0281033>] tty_poll+0x7b/0x9f
 [<c015b73d>] do_select+0x17b/0x2b6
 [<c015b41d>] __pollwait+0x0/0xc4
 [<c015bba5>] sys_select+0x308/0x549
 [<c03371a4>] sock_ioctl+0xaf/0x23d
 [<c0109063>] syscall_call+0x7/0xb

bash          S C0282EEA 106789296 20371  20370                     (NOTLB)
cc8afe78 00000082 00000007 c0282eea c3a5b000 00000000 c62d6080 00000008 
       7fffffff c3a5b928 00000001 c01237d6 c3a5b000 df6a91c0 0000000a cc8ae000 
       c02822b3 c3a5b000 00000000 cc8afea7 00000001 0a28b000 00000246 00000008 
Call Trace:
 [<c0282eea>] opost_block+0x116/0x1e0
 [<c01237d6>] schedule_timeout+0xb1/0xb3
 [<c02822b3>] tty_default_put_char+0x33/0x37
 [<c028489d>] read_chan+0x29f/0x83a
 [<c0284fb9>] write_chan+0x181/0x244
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c027fc85>] tty_read+0xe5/0x10e
 [<c014ae09>] vfs_read+0xbc/0x127
 [<c014b094>] sys_read+0x42/0x63
 [<c0109063>] syscall_call+0x7/0xb

cron          S 00000001 10410416 22082   3778 22083               (NOTLB)
c87bdedc 00000086 cb0cc058 00000001 c013e2a8 de711a80 c7dce080 df9ab568 
       df9ab500 c87bdf04 00001000 c01557bb 00000000 c7dce080 c011a810 c87bdf10 
       c87bdf10 df9ab500 df9ab53c df9ab500 00000000 c7dce080 c011a810 d6882480 
Call Trace:
 [<c013e2a8>] do_anonymous_page+0x10b/0x1e5
 [<c01557bb>] pipe_wait+0x97/0xb6
 [<c011a810>] autoremove_wake_function+0x0/0x4f
 [<c011a810>] autoremove_wake_function+0x0/0x4f
 [<c015591e>] pipe_read+0x144/0x242
 [<c014ae09>] vfs_read+0xbc/0x127
 [<c014b094>] sys_read+0x42/0x63
 [<c0109063>] syscall_call+0x7/0xb

sh            S DBE74E24 4093152536 22083  22082 22084               (NOTLB)
c2b25f50 00000082 00030002 dbe74e24 00000000 c2b25f58 ceb9ad80 ceb9ae1c 
       fffffe00 ceb9ad80 ceb9ae1c c011e99e ffffffff 00000000 dbe74d80 bffff910 
       c2b24000 00000001 00000000 ceb9ad80 c0119687 00000000 00000000 fffffff2 
Call Trace:
 [<c011e99e>] sys_wait4+0x1c0/0x267
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c0109063>] syscall_call+0x7/0xb

run-crons     S CAFBB924 20630704 22084  22083 22099               (NOTLB)
dd223f50 00000086 00030002 cafbb924 00000000 dd223f58 dbe74d80 dbe74e1c 
       fffffe00 dbe74d80 dbe74e1c c011e99e ffffffff 00000000 cafbb880 bffff670 
       dd222000 00000001 00000000 dbe74d80 c0119687 00000000 00000000 fffffff2 
Call Trace:
 [<c011e99e>] sys_wait4+0x1c0/0x267
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c0109063>] syscall_call+0x7/0xb

slocate.cron  S DF8A7324 4255745384 22099  22084 22100               (NOTLB)
c8a55f50 00000082 00030002 df8a7324 00000000 c8a55f58 cafbb880 cafbb91c 
       fffffe00 cafbb880 cafbb91c c011e99e ffffffff 00000000 df8a7280 bffff880 
       c8a54000 00000001 00000000 cafbb880 c0119687 00000000 00000000 fffffff2 
Call Trace:
 [<c011e99e>] sys_wait4+0x1c0/0x267
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c0109063>] syscall_call+0x7/0xb

updatedb      D 001AC5CA 3898705852 22100  22099                     (NOTLB)
c7ec1f28 00000082 00000000 001ac5ca 000041ed 00000002 df8a7280 cccee770 
       00000286 df8a7280 cccee778 c0107f8e 00000001 df8a7280 c0119687 cccee778 
       d49d3e94 00000000 3f4320b9 00000000 cccee770 cccee708 c7246f00 fffffff7 
Call Trace:
 [<c0107f8e>] __down+0x7c/0xc7
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c0108118>] __down_failed+0x8/0xc
 [<c017a1e3>] ext3_readdir+0x0/0x4f1
 [<c015b3b4>] .text.lock.readdir+0x5/0x15
 [<c015b376>] sys_getdents64+0x98/0xd1
 [<c015b1a4>] filldir64+0x0/0x13a
 [<c0109063>] syscall_call+0x7/0xb

pickup        S 00000000 4209842608 23295   3527                3551 (NOTLB)
d1a9de8c 00000086 000000d0 00000000 00000246 d1a9dea0 d6bca080 2dbb2318 
       d1a9dea0 00000007 00000000 c0123784 d1a9dea0 df2efe00 00000020 c0412de8 
       c0412de8 2dbb2318 4b87ad6e c012371c d6bca080 c0412280 de866680 00000000 
Call Trace:
 [<c0123784>] schedule_timeout+0x5f/0xb3
 [<c012371c>] process_timeout+0x0/0x9
 [<c015b73d>] do_select+0x17b/0x2b6
 [<c015b41d>] __pollwait+0x0/0xc4
 [<c015bba5>] sys_select+0x308/0x549
 [<c011ee62>] do_setitimer+0x1af/0x1d9
 [<c0109063>] syscall_call+0x7/0xb

xterm         S 00000000 4263759024 27545  32627 27546   27584 20370 (NOTLB)
c5ef5e8c 00000082 000000d0 00000000 000000d0 d7fdcd80 c7cb6d80 00000000 
       7fffffff 00000005 00000000 c01237d6 c56bd000 c56bd920 c5ef5f20 c56bd000 
       d7d50f00 00000004 00000004 c0281033 c56bd000 d7d50f00 c5ef5f20 00000000 
Call Trace:
 [<c01237d6>] schedule_timeout+0xb1/0xb3
 [<c0281033>] tty_poll+0x7b/0x9f
 [<c015b73d>] do_select+0x17b/0x2b6
 [<c015b41d>] __pollwait+0x0/0xc4
 [<c015bba5>] sys_select+0x308/0x549
 [<c03371a4>] sock_ioctl+0xaf/0x23d
 [<c0109063>] syscall_call+0x7/0xb

bash          S C0282EEA 88078512 27546  27545                     (NOTLB)
cd0b7e78 00000082 00000007 c0282eea dd0fb000 00000000 c7cb6180 00000008 
       7fffffff dd0fb928 00000001 c01237d6 dd0fb000 d9e6a440 0000000a cd0b6000 
       c02822b3 dd0fb000 00000000 cd0b7ea7 00000001 c0131d4d 00000246 00000008 
Call Trace:
 [<c0282eea>] opost_block+0x116/0x1e0
 [<c01237d6>] schedule_timeout+0xb1/0xb3
 [<c02822b3>] tty_default_put_char+0x33/0x37
 [<c0131d4d>] unlock_page+0x15/0x53
 [<c028489d>] read_chan+0x29f/0x83a
 [<c0284fb9>] write_chan+0x181/0x244
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c027fc85>] tty_read+0xe5/0x10e
 [<c014ae09>] vfs_read+0xbc/0x127
 [<c014b094>] sys_read+0x42/0x63
 [<c0109063>] syscall_call+0x7/0xb

xterm         S 00000000 4096252336 27584  32627 27585   27654 27545 (NOTLB)
c6667e8c 00000086 000000d0 00000000 000000d0 da6c4e00 d23e8080 00000000 
       7fffffff 00000005 00000000 c01237d6 dcc43000 dcc43920 c6667f20 dcc43000 
       cbdb3300 00000004 00000004 c0281033 dcc43000 cbdb3300 c6667f20 00000000 
Call Trace:
 [<c01237d6>] schedule_timeout+0xb1/0xb3
 [<c0281033>] tty_poll+0x7b/0x9f
 [<c015b73d>] do_select+0x17b/0x2b6
 [<c015b41d>] __pollwait+0x0/0xc4
 [<c015bba5>] sys_select+0x308/0x549
 [<c012a33d>] rcu_check_quiescent_state+0x6b/0x6d
 [<c012a40a>] rcu_process_callbacks+0xcb/0xe6
 [<c03371a4>] sock_ioctl+0xaf/0x23d
 [<c0109063>] syscall_call+0x7/0xb

bash          S C0282EEA 104777520 27585  27584                     (NOTLB)
dba3be78 00000086 0000000a c0282eea cc0a5000 00000000 d564d300 00000008 
       7fffffff cc0a5928 00000001 c01237d6 cc0a5000 c2271800 0000000a dba3a000 
       c02822b3 cc0a5000 00000000 dba3bea7 00000001 c0131d4d 00000246 00000008 
Call Trace:
 [<c0282eea>] opost_block+0x116/0x1e0
 [<c01237d6>] schedule_timeout+0xb1/0xb3
 [<c02822b3>] tty_default_put_char+0x33/0x37
 [<c0131d4d>] unlock_page+0x15/0x53
 [<c028489d>] read_chan+0x29f/0x83a
 [<c0284fb9>] write_chan+0x181/0x244
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c027fc85>] tty_read+0xe5/0x10e
 [<c014ae09>] vfs_read+0xbc/0x127
 [<c014b094>] sys_read+0x42/0x63
 [<c0109063>] syscall_call+0x7/0xb

xterm         S 00000000 79599920 27654  32627 27655         27584 (NOTLB)
d51e7e8c 00000082 000000d0 00000000 000000d0 da6c4700 d05fc100 00000000 
       7fffffff 00000005 00000000 c01237d6 c2e9e000 c2e9e920 d51e7f20 c2e9e000 
       cd3c1c00 00000004 00000004 c0281033 c2e9e000 cd3c1c00 d51e7f20 00000000 
Call Trace:
 [<c01237d6>] schedule_timeout+0xb1/0xb3
 [<c0281033>] tty_poll+0x7b/0x9f
 [<c015b73d>] do_select+0x17b/0x2b6
 [<c015b41d>] __pollwait+0x0/0xc4
 [<c015bba5>] sys_select+0x308/0x549
 [<c03371a4>] sock_ioctl+0xaf/0x23d
 [<c0109063>] syscall_call+0x7/0xb

bash          S C0282EEA 78911792 27655  27654                     (NOTLB)
da18fe78 00000086 0000000a c0282eea cb33e000 00000000 d564c100 00000008 
       7fffffff cb33e928 00000001 c01237d6 cb33e000 d9e6a280 0000000a da18e000 
       c02822b3 cb33e000 00000000 da18fea7 00000001 0ae9e000 00000246 00000008 
Call Trace:
 [<c0282eea>] opost_block+0x116/0x1e0
 [<c01237d6>] schedule_timeout+0xb1/0xb3
 [<c02822b3>] tty_default_put_char+0x33/0x37
 [<c028489d>] read_chan+0x29f/0x83a
 [<c0284fb9>] write_chan+0x181/0x244
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c0119687>] default_wake_function+0x0/0x2e
 [<c027fc85>] tty_read+0xe5/0x10e
 [<c014ae09>] vfs_read+0xbc/0x127
 [<c014b094>] sys_read+0x42/0x63
 [<c0109063>] syscall_call+0x7/0xb


[-- Attachment #4: lsof.updatedb --]
[-- Type: application/octet-stream, Size: 1224 bytes --]

updatedb  22100    root  cwd    DIR        3,5     4096    1754570 /work/BuildEnv/sandbox/components/base/kernel/source/linux-2.4.21-rc1/include/config/x86/f00f/CVS
updatedb  22100    root  rtd    DIR        3,3     4096          2 /
updatedb  22100    root  txt    REG        3,3    26760     310118 /usr/bin/slocate
updatedb  22100    root  mem    REG        3,3    90374     785552 /lib/ld-2.3.2.so
updatedb  22100    root  mem    REG        3,3  1545216     752397 /lib/libc-2.3.2.so
updatedb  22100    root  mem    REG        3,3    50346     785573 /lib/libnss_compat-2.3.2.so
updatedb  22100    root  mem    REG        3,3    89820     785569 /lib/libnsl-2.3.2.so
updatedb  22100    root    0r  FIFO        0,7           265684282 pipe
updatedb  22100    root    1w  FIFO        0,7           265684283 pipe
updatedb  22100    root    2w  FIFO        0,7           265684283 pipe
updatedb  22100    root    3w   REG        3,3  4444160     295599 /var/lib/slocate/slocate.db.tmp
updatedb  22100    root    4r   DIR        3,3     8192     928670 /root
updatedb  22100    root    5r   DIR        3,5     4096    1754570 /work/BuildEnv/sandbox/components/base/kernel/source/linux-2.4.21-rc1/include/config/x86/f00f/CVS

[-- Attachment #5: lsof.cvs --]
[-- Type: application/octet-stream, Size: 3434 bytes --]

cvs       11833    root  cwd    DIR        3,5     4096    1754568 /work/BuildEnv/sandbox/components/base/kernel/source/linux-2.4.21-rc1/include/config/x86/f00f
cvs       11833    root  rtd    DIR        3,3     4096          2 /
cvs       11833    root  txt    REG        3,3   671472     311972 /usr/bin/cvs
cvs       11833    root  mem    REG        3,3    90374     785552 /lib/ld-2.3.2.so
cvs       11833    root  mem    REG        3,3    22773     785558 /lib/libcrypt-2.3.2.so
cvs       11833    root  mem    REG        3,3    89820     785569 /lib/libnsl-2.3.2.so
cvs       11833    root  mem    REG        3,3  1545216     752397 /lib/libc-2.3.2.so
cvs       11833    root  mem    REG        3,3    43556     785563 /lib/libnss_files-2.3.2.so
cvs       11833    root    0u   CHR     136,18                  20 /dev/pts/18
cvs       11833    root    1u   CHR     136,18                  20 /dev/pts/18
cvs       11833    root    2w   CHR        1,3                   7 /dev/null
cvs       11833    root    3u  IPv4  246250216                 TCP phoebee:40389->server.augsburg:cvspserver (ESTABLISHED)
cvs       11833    root    4u  IPv4  246250216                 TCP phoebee:40389->server.augsburg:cvspserver (ESTABLISHED)
cvs       11933    root  cwd    DIR        3,5     4096    1754549 /work/BuildEnv/sandbox/components/base/kernel/source/linux-2.4.21-rc1/include/config/x86
cvs       11933    root  rtd    DIR        3,3     4096          2 /
cvs       11933    root  txt    REG        3,3   671472     311972 /usr/bin/cvs
cvs       11933    root  mem    REG        3,3    90374     785552 /lib/ld-2.3.2.so
cvs       11933    root  mem    REG        3,3    22773     785558 /lib/libcrypt-2.3.2.so
cvs       11933    root  mem    REG        3,3    89820     785569 /lib/libnsl-2.3.2.so
cvs       11933    root  mem    REG        3,3  1545216     752397 /lib/libc-2.3.2.so
cvs       11933    root  mem    REG        3,3    43556     785563 /lib/libnss_files-2.3.2.so
cvs       11933    root    0u   CHR     136,19                  21 /dev/pts/19
cvs       11933    root    1u   CHR     136,19                  21 /dev/pts/19
cvs       11933    root    2w   CHR        1,3                   7 /dev/null
cvs       11933    root    3u  IPv4  246342564                 TCP phoebee:40401->server.augsburg:cvspserver (CLOSE_WAIT)
cvs       11933    root    4u  IPv4  246342564                 TCP phoebee:40401->server.augsburg:cvspserver (CLOSE_WAIT)
cvs       11933    root    5r   DIR        3,5     4096     672684 /work/BuildEnv/sandbox/components
cvs       11933    root    6r   DIR        3,5     4096     672684 /work/BuildEnv/sandbox/components
cvs       11933    root    7r   DIR        3,5     4096    1621100 /work/BuildEnv/sandbox/components/base
cvs       11933    root    8r   DIR        3,5     4096    1671025 /work/BuildEnv/sandbox/components/base/kernel
cvs       11933    root    9r   DIR        3,5     4096    1671061 /work/BuildEnv/sandbox/components/base/kernel/source
cvs       11933    root   10r   DIR        3,5     4096    1671066 /work/BuildEnv/sandbox/components/base/kernel/source/linux-2.4.21-rc1
cvs       11933    root   11r   DIR        3,5     4096    1737284 /work/BuildEnv/sandbox/components/base/kernel/source/linux-2.4.21-rc1/include
cvs       11933    root   12r   DIR        3,5    12288    1740140 /work/BuildEnv/sandbox/components/base/kernel/source/linux-2.4.21-rc1/include/config

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: 2.6.0-t3: vfs/ext3 do_lookup bug?!
  2003-08-21  6:33     ` Martin Zwickel
@ 2003-08-21  6:41       ` Andrew Morton
  2003-08-21  7:25         ` Martin Zwickel
  0 siblings, 1 reply; 8+ messages in thread
From: Andrew Morton @ 2003-08-21  6:41 UTC (permalink / raw)
  To: Martin Zwickel; +Cc: torvalds, linux-kernel

Martin Zwickel <martin.zwickel@technotrend.de> wrote:
>
> cutted-dmesg.txt  text/plain (15496 bytes)

Try `dmesg -s 1000000'.   The silly thing has too small a buffer.



* Re: 2.6.0-t3: vfs/ext3 do_lookup bug?!
  2003-08-21  6:41       ` Andrew Morton
@ 2003-08-21  7:25         ` Martin Zwickel
  2003-08-21  7:40           ` Andrew Morton
  0 siblings, 1 reply; 8+ messages in thread
From: Martin Zwickel @ 2003-08-21  7:25 UTC (permalink / raw)
  To: Andrew Morton; +Cc: torvalds, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 1578 bytes --]

On Wed, 20 Aug 2003 23:41:19 -0700
Andrew Morton <akpm@osdl.org> bubbled:

> Martin Zwickel <martin.zwickel@technotrend.de> wrote:
> >
> > cutted-dmesg.txt  text/plain (15496 bytes)
> 
> Try `dmesg -s 1000000'.   The silly thing has too small a buffer.

Too late.. :(
Rebooted and fscked.
On reboot, my console hung while unmounting the filesystems and I got tons of
strange errors about the fs where the processes got stuck. I can't
remember the output; there was too much of it and it scrolled by too fast.
Only a sysrq-b helped.

On another fs I got some "Deleted inode ###### has zero dtime.  Fix<y>?" errors.
(On other boxes I sometimes get them too when I manually check an ext3
fs.)
Shouldn't ext3 prevent those errors, since it has a journal and should
recover from them?

On the fs where the processes got stuck, fsck found some unattached inodes:
Unattached inode 1035466
Connect to /lost+found<y>? yes

Inode 1035466 ref count is 2, should be 1.  Fix<y>? yes

Unattached inode 1053163
Connect to /lost+found<y>? yes

Inode 1053163 ref count is 2, should be 1.  Fix<y>? yes

Inode 1053382 ref count is 1, should be 2.  Fix<y>? yes

Pass 5: Checking group summary information

Is it normal to have to e2fsck ext3 filesystems every once in a while?
I thought ext3 handled those errors itself.

Well, after the reboot and fsck I can access my files again.

PS: 2.6.0-t3 scheduler performance is not that good...

Regards,
Martin

-- 
MyExcuse:
Zombie processes detected, machine is haunted.

Martin Zwickel <martin.zwickel@technotrend.de>
Research & Development

TechnoTrend AG <http://www.technotrend.de>

[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]


* Re: 2.6.0-t3: vfs/ext3 do_lookup bug?!
  2003-08-21  7:25         ` Martin Zwickel
@ 2003-08-21  7:40           ` Andrew Morton
  2003-08-21  8:32             ` Martin Zwickel
  0 siblings, 1 reply; 8+ messages in thread
From: Andrew Morton @ 2003-08-21  7:40 UTC (permalink / raw)
  To: Martin Zwickel; +Cc: torvalds, linux-kernel

Martin Zwickel <martin.zwickel@technotrend.de> wrote:
>
> On Wed, 20 Aug 2003 23:41:19 -0700
> Andrew Morton <akpm@osdl.org> bubbled:
> 
> > Martin Zwickel <martin.zwickel@technotrend.de> wrote:
> > >
> > > cutted-dmesg.txt  text/plain (15496 bytes)
> > 
> > Try `dmesg -s 1000000'.   The silly thing has too small a buffer.
> 
> Too late.. :(
> Rebooted and fscked.
> On reboot, my console hung while unmounting the filesystems and I got tons of
> strange errors about the fs where the processes got stuck. I can't
> remember the output; there was too much of it and it scrolled by too fast.
> Only a sysrq-b helped.
> 
> On another fs I got some "Deleted inode ###### has zero dtime.  Fix<y>?" errors.

Sigh.  Well the filesystem obviously shat itself, so the fsck errors aren't
that surprising.

My guess would be that something oopsed while holding a directory semaphore
and you missed the oops.  Maybe you were in X at the time?

If it happens again, please remember that dmesg needs the `-s 1000000'
option to prevent it from truncating output.
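For the archives, a capture sequence along these lines works next time, assuming a root shell and a kernel built with SysRq support (the output filename is arbitrary):

```shell
# Ask the kernel for a task-state dump, equivalent to SysRq-T on the
# console; the backtraces land in the kernel ring buffer.
echo t > /proc/sysrq-trigger

# Read the ring buffer with an explicitly large user-space buffer so
# dmesg does not silently truncate a long dump at its small default size.
dmesg -s 1000000 > sysrq-t.txt
```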

> PS: 2.6.0-t3 scheduler performance is not that good...

It's pretty bad.  I'm running 2.6.0-test3-mm1 here which has Ingo/Con
goodies and it is significantly improved.

Once that code is working sufficiently well for everyone and there is
consensus that the general direction is correct and the possible
regressions with mixed database workloads are sorted out, we'll fix it up. 
So don't panic yet.



* Re: 2.6.0-t3: vfs/ext3 do_lookup bug?!
  2003-08-21  7:40           ` Andrew Morton
@ 2003-08-21  8:32             ` Martin Zwickel
  0 siblings, 0 replies; 8+ messages in thread
From: Martin Zwickel @ 2003-08-21  8:32 UTC (permalink / raw)
  To: Andrew Morton; +Cc: torvalds, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 1381 bytes --]

On Thu, 21 Aug 2003 00:40:18 -0700
Andrew Morton <akpm@osdl.org> bubbled:

> Sigh.  Well the filesystem obviously shat itself, so the fsck errors aren't
> that surprising.
> 
> My guess would be that something oopsed while holding a directory semaphore
> and you missed the oops.  Maybe you were in X at the time?

Yes, but I can't find an oops in my /var/log/messages or its .?.gz rotations; the
last oops was a month ago.
Well, something really strange happened, and if it happens again I'll try to
figure out what went wrong.

> 
> If it happens again, please remember that dmesg needs the `-s 1000000'
> option to prevent it from truncating output.

Yep, will do that next time.

> 
> > PS: 2.6.0-t3 scheduler performance is not that good...
> 
> It's pretty bad.  I'm running 2.6.0-test3-mm1 here which has Ingo/Con
> goodies and it is significantly improved.
> 
> Once that code is working sufficiently well for everyone and there is
> consensus that the general direction is correct and the possible
> regressions with mixed database workloads are sorted out, we'll fix it up. 
> So don't panic yet.

I tried many patches for test2, and some of them didn't work that badly.
OK, I'll give -mm1 a try.

And thanks for the help.

Regards,
Martin

-- 
MyExcuse:
temporary routing anomaly

Martin Zwickel <martin.zwickel@technotrend.de>
Research & Development

TechnoTrend AG <http://www.technotrend.de>

[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]


end of thread, other threads:[~2003-08-21  8:32 UTC | newest]

Thread overview: 8+ messages
2003-08-20 15:14 2.6.0-t3: vfs/ext3 do_lookup bug?! Martin Zwickel
2003-08-20 18:36 ` Andrew Morton
2003-08-20 19:06   ` Linus Torvalds
2003-08-21  6:33     ` Martin Zwickel
2003-08-21  6:41       ` Andrew Morton
2003-08-21  7:25         ` Martin Zwickel
2003-08-21  7:40           ` Andrew Morton
2003-08-21  8:32             ` Martin Zwickel
