public inbox for linux-kernel@vger.kernel.org
From: Thorsten Knabe <linux@thorsten-knabe.de>
To: linux-kernel@vger.kernel.org
Subject: [BUG] Linux 2.6.25.4 task_struct leak
Date: Thu, 29 May 2008 17:05:08 +0200	[thread overview]
Message-ID: <483EC624.90503@thorsten-knabe.de> (raw)

[-- Attachment #1: Type: text/plain, Size: 6223 bytes --]

[1.] One line summary of the problem:
Linux 2.6.25.4 x86_64 task_struct leak on host when running UML
2.6.23.16 compiled for i386

[2.] Full description of the problem/report:
I'm seeing a massive task_struct leak on a vanilla Linux 2.6.25.4 x86_64
(64-bit) host running User Mode Linux 2.6.23.16 kernels compiled for i386
(32-bit). A task_struct appears to leak on the HOST for every process
that has been created and destroyed inside a UML guest. The task_structs
remain allocated on the host even after the UML guests have been shut
down completely.
Other 32-bit applications do NOT leak task_structs, and there is no
task_struct leak when running the same 32-bit UML guests on a Linux
2.6.25.4 i386 (32-bit) host kernel.

[3.] Keywords (i.e., modules, networking, kernel):
task_struct leak, UML

[4.] Kernel information
[4.1.] Kernel version (from /proc/version):
Linux version 2.6.25.4 (tek@tek01) (gcc version 3.3.6 (Debian
1:3.3.6-15)) #2 SMP PREEMPT Sat May 24 21:46:37 CEST 2008

[4.2.] Kernel .config file:
Config attached!

[5.] Most recent kernel version which did not have the bug:
2.6.23.17 does not leak task_structs.
2.6.24 through 2.6.25.3 have not been tested.
Running 2.6.23.16 i386 UML on a 2.6.25.4 i386 host does NOT leak
task_structs.
Running 64-bit UML guests has not been tested.

[6.] Output of Oops.. message (if applicable) with symbolic information
     resolved (see Documentation/oops-tracing.txt)
No Oops.

[7.] A small shell script or example program which triggers the
     problem (if possible)
Start a 32-bit UML guest on a 64-bit host, then shut it down again, and
compare the active task_struct count in /proc/slabinfo with the number
of running tasks/threads.
Before running any UML guests, the number of active task_structs in
/proc/slabinfo matches the number of running tasks/threads on the host.
After running and shutting down a UML guest, the two numbers differ: I
had >500000 active task_structs in /proc/slabinfo with fewer than 500
tasks/threads running on the host.
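The comparison above can be scripted roughly as follows (a minimal
sketch; reading /proc/slabinfo usually requires root, and the exact
slabinfo column layout can differ between kernel versions):

```shell
#!/bin/sh
# Count processes on the host by listing the numeric entries under /proc.
# (This counts thread group leaders only; enumerate /proc/*/task/* as
# well if per-thread granularity is needed.)
count_tasks() {
    ls /proc | grep -c '^[0-9][0-9]*$'
}

# Extract the number of active task_struct objects from /proc/slabinfo.
# On 2.6 kernels, column 2 of the task_struct line is <active_objs>.
count_task_structs() {
    awk '$1 == "task_struct" { print $2 }' /proc/slabinfo 2>/dev/null
}

echo "running tasks:       $(count_tasks)"
echo "active task_structs: $(count_task_structs)"
```

Running this before starting the UML guest and again after the guest has
shut down should show the two figures staying close together on an
unaffected kernel, and the task_struct count growing without bound on an
affected one.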

[8.] Environment
[8.1.] Software (add the output of the ver_linux script here)
Linux tek01 2.6.25.4 #2 SMP PREEMPT Sat May 24 21:46:37 CEST 2008 x86_64
GNU/Linux

Gnu C                  4.1.2
Gnu make               3.81
binutils               2.17
util-linux             2.12r
mount                  2.12r
module-init-tools      3.3-pre2
e2fsprogs              1.40-WIP
reiserfsprogs          3.6.19
xfsprogs               2.8.11
Linux C Library        2.3.6
Dynamic linker (ldd)   2.3.6
Procps                 3.2.7
Net-tools              1.60
Console-tools          0.2.3
oprofile               0.9.2
Sh-utils               5.97
udev                   105
Modules Loaded         nvidia rfcomm l2cap bluetooth nfs lockd nfs_acl
sunrpc af_packet ppdev lp nf_conntrack_ipv6 ip6t_REJECT ip6t_LOG
ip6table_filter ip6_tables ipt_MASQUERADE xt_tcpudp xt_state ipt_REJECT
ipt_LOG iptable_raw iptable_mangle iptable_nat iptable_filter dm_crypt
crypto_blkcipher dm_snapshot dm_mirror dm_mod deadline_iosched fuse
cryptoloop loop tun zaphfc zaptel crc_ccitt powernow_k8 freq_table
nf_nat_ftp nf_nat nf_conntrack_ftp nf_conntrack_ipv4 nf_conntrack
ip_tables x_tables w83627hf hwmon_vid eeprom i2c_viapro sg sr_mod sbp2
cx88_blackbird cx2341x usbhid snd_via82xx wm8775 gameport
snd_mpu401_uart tuner tea5767 tda8290 tda18271 tda827x tuner_xc2028
xc5000 snd_via82xx_modem snd_ac97_codec ac97_bus firmware_class
snd_seq_oss tda9887 tuner_simple mt20xx tea5761 snd_seq_midi cx88_alsa
snd_pcm_oss snd_mixer_oss snd_rawmidi snd_seq_midi_event snd_pcm snd_seq
firewire_ohci firewire_core cx8802 cx8800 cx88xx ir_common
compat_ioctl32 videodev v4l1_compat hisax snd_timer k8temp
snd_seq_device i2c_algo_bit tveeprom crc_itu_t isdn snd_page_alloc
v4l2_common i2c_core psmouse rtc snd hwmon ehci_hcd parport_pc parport
serio_raw pcspkr uhci_hcd soundcore btcx_risc videobuf_dma_sg
videobuf_core usbcore slhc skge ohci1394 ieee1394 thermal button
processor evdev

[8.2.] Processor information (from /proc/cpuinfo):
processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 15
model           : 35
model name      : AMD Athlon(tm) 64 X2 Dual Core Processor 4400+
stepping        : 2
cpu MHz         : 2200.000
cache size      : 1024 KB
physical id     : 0
siblings        : 2
core id         : 0
cpu cores       : 2
fpu             : yes
fpu_exception   : yes
cpuid level     : 1
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext
fxsr_opt lm 3dnowext 3dnow rep_good pni lahf_lm cmp_legacy
bogomips        : 4409.04
TLB size        : 1024 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp

processor       : 1
vendor_id       : AuthenticAMD
cpu family      : 15
model           : 35
model name      : AMD Athlon(tm) 64 X2 Dual Core Processor 4400+
stepping        : 2
cpu MHz         : 2200.000
cache size      : 1024 KB
physical id     : 0
siblings        : 2
core id         : 1
cpu cores       : 2
fpu             : yes
fpu_exception   : yes
cpuid level     : 1
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext
fxsr_opt lm 3dnowext 3dnow rep_good pni lahf_lm cmp_legacy
bogomips        : 4405.71
TLB size        : 1024 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp


[8.3.] Module information (from /proc/modules):
Not relevant!

[8.4.] Loaded driver and hardware information (/proc/ioports, /proc/iomem)
Not relevant!

[8.5.] PCI information ('lspci -vvv' as root)
Not relevant!

[8.6.] SCSI information (from /proc/scsi/scsi)
Not relevant!

[8.7.] Other information that might be relevant to the problem
       (please look in /proc and include all information that you
       think to be relevant):
[X.] Other notes, patches, fixes, workarounds:

Probably some kind of bug in the 64-bit kernel's handling of 32-bit
user-space applications that is only triggered by running UML.

Regards,
Thorsten

-- 
___
 |        | /                 E-Mail: linux@thorsten-knabe.de
 |horsten |/\nabe                WWW: http://linux.thorsten-knabe.de

[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 17044 bytes --]

             reply	other threads:[~2008-05-29 15:30 UTC|newest]

Thread overview: 11+ messages
2008-05-29 15:05 Thorsten Knabe [this message]
2008-06-01 21:31 ` [BUG] Linux 2.6.25.4 task_struct leak Chris Wright
2008-06-02  1:05   ` Jeff Dike
2008-06-04 22:40     ` Thorsten Knabe
2008-06-05  0:49       ` Jeff Dike
2008-06-05  1:06         ` Chris Wright
2008-06-08 11:39         ` Thorsten Knabe
2008-06-08 14:34           ` WANG Cong
2008-06-12 18:58             ` Roland McGrath
2008-06-12 19:01             ` [PATCH stable-2.6.25] x86_64 ptrace: fix sys32_ptrace " Roland McGrath
2008-06-30  6:44               ` Ingo Molnar
