qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [Bug 648356] [NEW] VirtFS possible memory leak in 9p virtio mapped
@ 2010-09-26 20:09 Moshroum
  2010-09-27 22:32 ` [Qemu-devel] [Bug 648356] " Moshroum
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Moshroum @ 2010-09-26 20:09 UTC (permalink / raw)
  To: qemu-devel

Public bug reported:

As guest (client) I use Debian squeeze i386 with a custom kernel:
Linux (none) 2.6.35.5 #3 Thu Sep 23 18:36:02 UTC 2010 i686 GNU/Linux

As host I use Debian squeeze amd64:
Linux asd 2.6.32-5-amd64 #1 SMP Fri Sep 17 21:50:19 UTC 2010 x86_64 GNU/Linux

The kvm version is:
kvm-88-5908-gdd67374

I started the guest with:
sudo /usr/local/kvm/bin/qemu-system-x86_64 -m 1024 -kernel linux-2.6.35.5.qemu -drive file=root.img,if=virtio -net nic,macaddr=02:ca:ff:ee:ba:be,model=virtio,vlan=1 -net tap,ifname=tap1,vlan=1,script=no -virtfs local,path=/host,security_model=mapped,mount_tag=host -nographic


I did the following inside the guest:

$  mount -t 9p -o trans=virtio host /mnt
$ rm -f /mnt/test
$ touch /mnt/test
$ ls -l /mnt/test
$ while true ;do ls -l /host/test > /dev/null; done

Now I can see on the host that the QEMU process's memory consumption starts at
90 MB and rises to 130 MB after a minute.  The growth stops when I stop the
while loop.  The same loop against a local (non-9p) path:

$ while true ;do ls -l /tmp > /dev/null; done

does not show this behaviour.
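
The host-side growth can be watched with something like the following (a
sketch; the pgrep pattern is an assumption, adjust it to match the running
QEMU):

$ pid=$(pgrep -f qemu-system-x86_64)
$ while sleep 5; do ps -o rss= -p "$pid"; done   # RSS in kB, sampled every 5 s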

** Affects: qemu
     Importance: Undecided
         Status: New

-- 
VirtFS possible memory leak in 9p virtio mapped
https://bugs.launchpad.net/bugs/648356
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.

* [Qemu-devel] [Bug 648356] Re: VirtFS possible memory leak in 9p virtio mapped
  2010-09-26 20:09 [Qemu-devel] [Bug 648356] [NEW] VirtFS possible memory leak in 9p virtio mapped Moshroum
@ 2010-09-27 22:32 ` Moshroum
  2010-09-27 22:45 ` Moshroum
  2012-01-19  9:59 ` Deepak Shetty
  2 siblings, 0 replies; 4+ messages in thread
From: Moshroum @ 2010-09-27 22:32 UTC (permalink / raw)
  To: qemu-devel

** Description changed:

  I use as client Debian squeeze i386 with a custom kernel:
  Linux (none) 2.6.35.5 #3 Thu Sep 23 18:36:02 UTC 2010 i686 GNU/Linux
  
  And as host Debian squeeze amd64
  Linux asd 2.6.32-5-amd64 #1 SMP Fri Sep 17 21:50:19 UTC 2010 x86_64 GNU/Linux
  
  kvm version is:
  kvm-88-5908-gdd67374
  
  Started the client using:
  sudo /usr/local/kvm/bin/qemu-system-x86_64 -m 1024 -kernel linux-2.6.35.5.qemu -drive file=root.img,if=virtio -net nic,macaddr=02:ca:ff:ee:ba:be,model=virtio,vlan=1 -net tap,ifname=tap1,vlan=1,script=no -virtfs local,path=/host,security_model=mapped,mount_tag=host -nographic
  
- 
  I've done following inside the guest:
  
  $  mount -t 9p -o trans=virtio host /mnt
  $ rm -f /mnt/test
  $ touch /mnt/test
  $ ls -l /mnt/test
- $ while true ;do ls -l /host/test > /dev/null; done
+ $ while true ;do ls -l /mnt/test > /dev/null; done
  
  Now I can see on my host system that the memory consumption starts at
  90MB and after a minute it raises to 130MB.  The extra memory
  consumption stops when I stop the while-loop.
  
  $ while true ;do ls -l /tmp > /dev/null; done
  
  Doesn't show that behaviour.

* [Qemu-devel] [Bug 648356] Re: VirtFS possible memory leak in 9p virtio mapped
  2010-09-26 20:09 [Qemu-devel] [Bug 648356] [NEW] VirtFS possible memory leak in 9p virtio mapped Moshroum
  2010-09-27 22:32 ` [Qemu-devel] [Bug 648356] " Moshroum
@ 2010-09-27 22:45 ` Moshroum
  2012-01-19  9:59 ` Deepak Shetty
  2 siblings, 0 replies; 4+ messages in thread
From: Moshroum @ 2010-09-27 22:45 UTC (permalink / raw)
  To: qemu-devel

I updated to v2.6.36.6 with https://patchwork.kernel.org/patch/127401/ and
the problem is still there. Memory usage does not grow as fast, but it still
grows quite a lot.

I also tried mounting with -o version=9p2000.L.
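
For reference, roughly how such a report is obtained (a sketch; assumes the
kernel was built with kmemleak and that debugfs is mounted at
/sys/kernel/debug):

$ umount /mnt
$ echo scan > /sys/kernel/debug/kmemleak   # trigger an immediate scan
$ cat /sys/kernel/debug/kmemleak           # dump the unreferenced objects found so far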

On unmount, /sys/kernel/debug/kmemleak reports:


unreferenced object 0xf791a870 (size 192):
  comm "swapper", pid 1, jiffies 4294892433 (age 784.692s)
  hex dump (first 32 bytes):
    00 00 00 e0 00 00 00 00 ff ff bf fe 00 00 00 00  ................
    00 b9 9d f7 00 02 00 00 6b 6b 6b 6b 6b 6b 6b 6b  ........kkkkkkkk
  backtrace:
    [<c1616140>] kmemleak_alloc+0x70/0x160
    [<c1174ad4>] __kmalloc+0x254/0x440
    [<c16210cf>] pci_acpi_scan_root+0x260/0x3b3
    [<c161c98b>] acpi_pci_root_add+0x295/0x4b7
    [<c1343dd7>] acpi_device_probe+0x72/0x277
    [<c1404db5>] driver_probe_device+0x135/0x410
    [<c14051af>] __driver_attach+0x11f/0x130
    [<c14037a2>] bus_for_each_dev+0x92/0x110
    [<c14049b7>] driver_attach+0x27/0x40
    [<c1403ced>] bus_add_driver+0x1bd/0x4e0
    [<c14056bb>] driver_register+0xcb/0x290
    [<c1345129>] acpi_bus_register_driver+0x55/0x65
    [<c196c96e>] acpi_pci_root_init+0x47/0x72
    [<c100105e>] do_one_initcall+0x3e/0x2c0
    [<c1940546>] kernel_init+0x1a6/0x305
    [<c10049c2>] kernel_thread_helper+0x6/0x14
unreferenced object 0xf79db900 (size 16):
  comm "swapper", pid 1, jiffies 4294892433 (age 784.692s)
  hex dump (first 16 bytes):
    50 43 49 20 42 75 73 20 30 30 30 30 3a 30 30 00  PCI Bus 0000:00.
  backtrace:
    [<c1616140>] kmemleak_alloc+0x70/0x160
    [<c1174ad4>] __kmalloc+0x254/0x440
    [<c12efbb8>] kvasprintf+0x68/0xd0
    [<c12efc4d>] kasprintf+0x2d/0x50
    [<c1621115>] pci_acpi_scan_root+0x2a6/0x3b3
    [<c161c98b>] acpi_pci_root_add+0x295/0x4b7
    [<c1343dd7>] acpi_device_probe+0x72/0x277
    [<c1404db5>] driver_probe_device+0x135/0x410
    [<c14051af>] __driver_attach+0x11f/0x130
    [<c14037a2>] bus_for_each_dev+0x92/0x110
    [<c14049b7>] driver_attach+0x27/0x40
    [<c1403ced>] bus_add_driver+0x1bd/0x4e0
    [<c14056bb>] driver_register+0xcb/0x290
    [<c1345129>] acpi_bus_register_driver+0x55/0x65
    [<c196c96e>] acpi_pci_root_init+0x47/0x72
    [<c100105e>] do_one_initcall+0x3e/0x2c0
unreferenced object 0xf6a12c60 (size 96):
  comm "mount", pid 1191, jiffies 4294893979 (age 778.536s)
  hex dump (first 32 bytes):
    01 00 00 00 ad 4e ad de ff ff ff ff ff ff ff ff  .....N..........
    28 65 11 c2 80 e8 b8 c1 8d 2c 78 c1 00 00 00 00  (e.......,x.....
  backtrace:
    [<c1616140>] kmemleak_alloc+0x70/0x160
    [<c117380b>] kmem_cache_alloc+0x21b/0x340
    [<c160c4a9>] p9_idpool_create+0x59/0xf0
    [<c160b4b2>] p9_client_create+0xe2/0x650
    [<c129b1df>] v9fs_session_init+0x35f/0x890
    [<c1295a51>] v9fs_get_sb+0xb1/0x440
    [<c118c60a>] vfs_kern_mount+0xaa/0x240
    [<c118c833>] do_kern_mount+0x53/0x1c0
    [<c11bb41a>] do_mount+0x88a/0x10f0
    [<c11bbd62>] sys_mount+0xe2/0x170
    [<c10043df>] sysenter_do_call+0x12/0x38
    [<ffffffff>] 0xffffffff
unreferenced object 0xf6bb6cc0 (size 148):
  comm "mount", pid 1191, jiffies 4294893979 (age 778.536s)
  hex dump (first 32 bytes):
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<c1616140>] kmemleak_alloc+0x70/0x160
    [<c117380b>] kmem_cache_alloc+0x21b/0x340
    [<c12dc2af>] idr_pre_get+0x8f/0xd0
    [<c160c345>] p9_idpool_get+0x35/0xf0
    [<c160b4d6>] p9_client_create+0x106/0x650
    [<c129b1df>] v9fs_session_init+0x35f/0x890
    [<c1295a51>] v9fs_get_sb+0xb1/0x440
    [<c118c60a>] vfs_kern_mount+0xaa/0x240
    [<c118c833>] do_kern_mount+0x53/0x1c0
    [<c11bb41a>] do_mount+0x88a/0x10f0
    [<c11bbd62>] sys_mount+0xe2/0x170
    [<c10043df>] sysenter_do_call+0x12/0x38
    [<ffffffff>] 0xffffffff
unreferenced object 0xf6bb6d80 (size 148):
  comm "mount", pid 1191, jiffies 4294893979 (age 778.616s)
  hex dump (first 32 bytes):
    00 00 00 00 c0 6c bb f6 00 00 00 00 00 00 00 00  .....l..........
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<c1616140>] kmemleak_alloc+0x70/0x160
    [<c117380b>] kmem_cache_alloc+0x21b/0x340
    [<c12dc2af>] idr_pre_get+0x8f/0xd0
    [<c160c345>] p9_idpool_get+0x35/0xf0
    [<c160b4d6>] p9_client_create+0x106/0x650
    [<c129b1df>] v9fs_session_init+0x35f/0x890
    [<c1295a51>] v9fs_get_sb+0xb1/0x440
    [<c118c60a>] vfs_kern_mount+0xaa/0x240
    [<c118c833>] do_kern_mount+0x53/0x1c0
    [<c11bb41a>] do_mount+0x88a/0x10f0
    [<c11bbd62>] sys_mount+0xe2/0x170
    [<c10043df>] sysenter_do_call+0x12/0x38
    [<ffffffff>] 0xffffffff
unreferenced object 0xf6bb6b40 (size 148):
  comm "mount", pid 1191, jiffies 4294893979 (age 778.616s)
  hex dump (first 32 bytes):
    00 00 00 00 80 6d bb f6 00 00 00 00 00 00 00 00  .....m..........
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<c1616140>] kmemleak_alloc+0x70/0x160
    [<c117380b>] kmem_cache_alloc+0x21b/0x340
    [<c12dc2af>] idr_pre_get+0x8f/0xd0
    [<c160c345>] p9_idpool_get+0x35/0xf0
    [<c160b4d6>] p9_client_create+0x106/0x650
    [<c129b1df>] v9fs_session_init+0x35f/0x890
    [<c1295a51>] v9fs_get_sb+0xb1/0x440
    [<c118c60a>] vfs_kern_mount+0xaa/0x240
    [<c118c833>] do_kern_mount+0x53/0x1c0
    [<c11bb41a>] do_mount+0x88a/0x10f0
    [<c11bbd62>] sys_mount+0xe2/0x170
    [<c10043df>] sysenter_do_call+0x12/0x38
    [<ffffffff>] 0xffffffff
unreferenced object 0xf6bb6e40 (size 148):
  comm "mount", pid 1191, jiffies 4294893979 (age 778.616s)
  hex dump (first 32 bytes):
    00 00 00 00 40 6b bb f6 00 00 00 00 00 00 00 00  ....@k..........
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<c1616140>] kmemleak_alloc+0x70/0x160
    [<c117380b>] kmem_cache_alloc+0x21b/0x340
    [<c12dc2af>] idr_pre_get+0x8f/0xd0
    [<c160c345>] p9_idpool_get+0x35/0xf0
    [<c160b4d6>] p9_client_create+0x106/0x650
    [<c129b1df>] v9fs_session_init+0x35f/0x890
    [<c1295a51>] v9fs_get_sb+0xb1/0x440
    [<c118c60a>] vfs_kern_mount+0xaa/0x240
    [<c118c833>] do_kern_mount+0x53/0x1c0
    [<c11bb41a>] do_mount+0x88a/0x10f0
    [<c11bbd62>] sys_mount+0xe2/0x170
    [<c10043df>] sysenter_do_call+0x12/0x38
    [<ffffffff>] 0xffffffff
unreferenced object 0xf6bb6f00 (size 148):
  comm "mount", pid 1191, jiffies 4294893979 (age 778.616s)
  hex dump (first 32 bytes):
    00 00 00 00 40 6e bb f6 00 00 00 00 00 00 00 00  ....@n..........
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<c1616140>] kmemleak_alloc+0x70/0x160
    [<c117380b>] kmem_cache_alloc+0x21b/0x340
    [<c12dc2af>] idr_pre_get+0x8f/0xd0
    [<c160c345>] p9_idpool_get+0x35/0xf0
    [<c160b4d6>] p9_client_create+0x106/0x650
    [<c129b1df>] v9fs_session_init+0x35f/0x890
    [<c1295a51>] v9fs_get_sb+0xb1/0x440
    [<c118c60a>] vfs_kern_mount+0xaa/0x240
    [<c118c833>] do_kern_mount+0x53/0x1c0
    [<c11bb41a>] do_mount+0x88a/0x10f0
    [<c11bbd62>] sys_mount+0xe2/0x170
    [<c10043df>] sysenter_do_call+0x12/0x38
    [<ffffffff>] 0xffffffff
unreferenced object 0xf6501000 (size 148):
  comm "mount", pid 1191, jiffies 4294893979 (age 778.688s)
  hex dump (first 32 bytes):
    00 00 00 00 00 6f bb f6 00 00 00 00 00 00 00 00  .....o..........
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<c1616140>] kmemleak_alloc+0x70/0x160
    [<c117380b>] kmem_cache_alloc+0x21b/0x340
    [<c12dc2af>] idr_pre_get+0x8f/0xd0
    [<c160c345>] p9_idpool_get+0x35/0xf0
    [<c160b4d6>] p9_client_create+0x106/0x650
    [<c129b1df>] v9fs_session_init+0x35f/0x890
    [<c1295a51>] v9fs_get_sb+0xb1/0x440
    [<c118c60a>] vfs_kern_mount+0xaa/0x240
    [<c118c833>] do_kern_mount+0x53/0x1c0
    [<c11bb41a>] do_mount+0x88a/0x10f0
    [<c11bbd62>] sys_mount+0xe2/0x170
    [<c10043df>] sysenter_do_call+0x12/0x38
    [<ffffffff>] 0xffffffff
unreferenced object 0xf65010c0 (size 148):
  comm "mount", pid 1191, jiffies 4294893979 (age 778.688s)
  hex dump (first 32 bytes):
    00 00 00 00 00 10 50 f6 00 00 00 00 00 00 00 00  ......P.........
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<c1616140>] kmemleak_alloc+0x70/0x160
    [<c117380b>] kmem_cache_alloc+0x21b/0x340
    [<c12dc2af>] idr_pre_get+0x8f/0xd0
    [<c160c345>] p9_idpool_get+0x35/0xf0
    [<c160b4d6>] p9_client_create+0x106/0x650
    [<c129b1df>] v9fs_session_init+0x35f/0x890
    [<c1295a51>] v9fs_get_sb+0xb1/0x440
    [<c118c60a>] vfs_kern_mount+0xaa/0x240
    [<c118c833>] do_kern_mount+0x53/0x1c0
    [<c11bb41a>] do_mount+0x88a/0x10f0
    [<c11bbd62>] sys_mount+0xe2/0x170
    [<c10043df>] sysenter_do_call+0x12/0x38
    [<ffffffff>] 0xffffffff
unreferenced object 0xf6501180 (size 148):
  comm "mount", pid 1191, jiffies 4294893979 (age 778.688s)
  hex dump (first 32 bytes):
    00 00 00 00 c0 10 50 f6 00 00 00 00 00 00 00 00  ......P.........
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<c1616140>] kmemleak_alloc+0x70/0x160
    [<c117380b>] kmem_cache_alloc+0x21b/0x340
    [<c12dc2af>] idr_pre_get+0x8f/0xd0
    [<c160c345>] p9_idpool_get+0x35/0xf0
    [<c160b4d6>] p9_client_create+0x106/0x650
    [<c129b1df>] v9fs_session_init+0x35f/0x890
    [<c1295a51>] v9fs_get_sb+0xb1/0x440
    [<c118c60a>] vfs_kern_mount+0xaa/0x240
    [<c118c833>] do_kern_mount+0x53/0x1c0
    [<c11bb41a>] do_mount+0x88a/0x10f0
    [<c11bbd62>] sys_mount+0xe2/0x170
    [<c10043df>] sysenter_do_call+0x12/0x38
    [<ffffffff>] 0xffffffff
unreferenced object 0xf6501240 (size 148):
  comm "mount", pid 1191, jiffies 4294893979 (age 778.688s)
  hex dump (first 32 bytes):
    00 00 00 00 80 11 50 f6 00 00 00 00 00 00 00 00  ......P.........
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<c1616140>] kmemleak_alloc+0x70/0x160
    [<c117380b>] kmem_cache_alloc+0x21b/0x340
    [<c12dc2af>] idr_pre_get+0x8f/0xd0
    [<c160c345>] p9_idpool_get+0x35/0xf0
    [<c160b4d6>] p9_client_create+0x106/0x650
    [<c129b1df>] v9fs_session_init+0x35f/0x890
    [<c1295a51>] v9fs_get_sb+0xb1/0x440
    [<c118c60a>] vfs_kern_mount+0xaa/0x240
    [<c118c833>] do_kern_mount+0x53/0x1c0
    [<c11bb41a>] do_mount+0x88a/0x10f0
    [<c11bbd62>] sys_mount+0xe2/0x170
    [<c10043df>] sysenter_do_call+0x12/0x38
    [<ffffffff>] 0xffffffff
unreferenced object 0xf6501300 (size 148):
  comm "mount", pid 1191, jiffies 4294893979 (age 778.760s)
  hex dump (first 32 bytes):
    00 00 00 00 40 12 50 f6 00 00 00 00 00 00 00 00  ....@.P.........
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<c1616140>] kmemleak_alloc+0x70/0x160
    [<c117380b>] kmem_cache_alloc+0x21b/0x340
    [<c12dc2af>] idr_pre_get+0x8f/0xd0
    [<c160c345>] p9_idpool_get+0x35/0xf0
    [<c160b4d6>] p9_client_create+0x106/0x650
    [<c129b1df>] v9fs_session_init+0x35f/0x890
    [<c1295a51>] v9fs_get_sb+0xb1/0x440
    [<c118c60a>] vfs_kern_mount+0xaa/0x240
    [<c118c833>] do_kern_mount+0x53/0x1c0
    [<c11bb41a>] do_mount+0x88a/0x10f0
    [<c11bbd62>] sys_mount+0xe2/0x170
    [<c10043df>] sysenter_do_call+0x12/0x38
    [<ffffffff>] 0xffffffff
unreferenced object 0xf65013c0 (size 148):
  comm "mount", pid 1191, jiffies 4294893979 (age 778.760s)
  hex dump (first 32 bytes):
    00 00 00 00 00 13 50 f6 00 00 00 00 00 00 00 00  ......P.........
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<c1616140>] kmemleak_alloc+0x70/0x160
    [<c117380b>] kmem_cache_alloc+0x21b/0x340
    [<c12dc2af>] idr_pre_get+0x8f/0xd0
    [<c160c345>] p9_idpool_get+0x35/0xf0
    [<c160b4d6>] p9_client_create+0x106/0x650
    [<c129b1df>] v9fs_session_init+0x35f/0x890
    [<c1295a51>] v9fs_get_sb+0xb1/0x440
    [<c118c60a>] vfs_kern_mount+0xaa/0x240
    [<c118c833>] do_kern_mount+0x53/0x1c0
    [<c11bb41a>] do_mount+0x88a/0x10f0
    [<c11bbd62>] sys_mount+0xe2/0x170
    [<c10043df>] sysenter_do_call+0x12/0x38
    [<ffffffff>] 0xffffffff
unreferenced object 0xf6501480 (size 148):
  comm "mount", pid 1191, jiffies 4294893979 (age 778.760s)
  hex dump (first 32 bytes):
    00 00 00 00 c0 13 50 f6 00 00 00 00 00 00 00 00  ......P.........
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<c1616140>] kmemleak_alloc+0x70/0x160
    [<c117380b>] kmem_cache_alloc+0x21b/0x340
    [<c12dc2af>] idr_pre_get+0x8f/0xd0
    [<c160c345>] p9_idpool_get+0x35/0xf0
    [<c160b4d6>] p9_client_create+0x106/0x650
    [<c129b1df>] v9fs_session_init+0x35f/0x890
    [<c1295a51>] v9fs_get_sb+0xb1/0x440
    [<c118c60a>] vfs_kern_mount+0xaa/0x240
    [<c118c833>] do_kern_mount+0x53/0x1c0
    [<c11bb41a>] do_mount+0x88a/0x10f0
    [<c11bbd62>] sys_mount+0xe2/0x170
    [<c10043df>] sysenter_do_call+0x12/0x38
    [<ffffffff>] 0xffffffff
unreferenced object 0xf6501540 (size 148):
  comm "mount", pid 1191, jiffies 4294893979 (age 778.760s)
  hex dump (first 32 bytes):
    00 00 00 00 80 14 50 f6 00 00 00 00 00 00 00 00  ......P.........
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<c1616140>] kmemleak_alloc+0x70/0x160
    [<c117380b>] kmem_cache_alloc+0x21b/0x340
    [<c12dc2af>] idr_pre_get+0x8f/0xd0
    [<c160c345>] p9_idpool_get+0x35/0xf0
    [<c160b4d6>] p9_client_create+0x106/0x650
    [<c129b1df>] v9fs_session_init+0x35f/0x890
    [<c1295a51>] v9fs_get_sb+0xb1/0x440
    [<c118c60a>] vfs_kern_mount+0xaa/0x240
    [<c118c833>] do_kern_mount+0x53/0x1c0
    [<c11bb41a>] do_mount+0x88a/0x10f0
    [<c11bbd62>] sys_mount+0xe2/0x170
    [<c10043df>] sysenter_do_call+0x12/0x38
    [<ffffffff>] 0xffffffff
unreferenced object 0xf6501600 (size 148):
  comm "mount", pid 1191, jiffies 4294893979 (age 778.828s)
  hex dump (first 32 bytes):
    01 00 00 00 60 2c a1 f6 00 00 00 00 00 00 00 00  ....`,..........
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<c1616140>] kmemleak_alloc+0x70/0x160
    [<c117380b>] kmem_cache_alloc+0x21b/0x340
    [<c12dc2af>] idr_pre_get+0x8f/0xd0
    [<c160c345>] p9_idpool_get+0x35/0xf0
    [<c160b4d6>] p9_client_create+0x106/0x650
    [<c129b1df>] v9fs_session_init+0x35f/0x890
    [<c1295a51>] v9fs_get_sb+0xb1/0x440
    [<c118c60a>] vfs_kern_mount+0xaa/0x240
    [<c118c833>] do_kern_mount+0x53/0x1c0
    [<c11bb41a>] do_mount+0x88a/0x10f0
    [<c11bbd62>] sys_mount+0xe2/0x170
    [<c10043df>] sysenter_do_call+0x12/0x38
    [<ffffffff>] 0xffffffff

* [Qemu-devel] [Bug 648356] Re: VirtFS possible memory leak in 9p virtio mapped
  2010-09-26 20:09 [Qemu-devel] [Bug 648356] [NEW] VirtFS possible memory leak in 9p virtio mapped Moshroum
  2010-09-27 22:32 ` [Qemu-devel] [Bug 648356] " Moshroum
  2010-09-27 22:45 ` Moshroum
@ 2012-01-19  9:59 ` Deepak Shetty
  2 siblings, 0 replies; 4+ messages in thread
From: Deepak Shetty @ 2012-01-19  9:59 UTC (permalink / raw)
  To: qemu-devel

1) Host memory consumption is not the right measure for concluding that a VM
leaks memory, especially because QEMU mmaps the VM memory: as pages are
touched inside the guest, the host allocates them, and this shows up as
growth in the QEMU RSS. As long as we don't hit an OOM, it should not be
considered a memory leak.
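
One way to check this is to see which mapping the growth lands in: if it is
the guest-RAM anonymous mapping rather than the heap, it is demand paging,
not a leak. A minimal sketch (the pgrep pattern is an assumption; reading
another process's smaps may need root):

$ pid=$(pgrep -f qemu-system-x86_64)
$ awk '/^[0-9a-f]+-[0-9a-f]+ /{map=$0} /^Rss:/{print $2, "kB", map}' /proc/$pid/smaps | sort -rn | head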

2) Between the time this bug was reported and now, many memory-leak fixes
have gone into the kernel's 9p client code (fs/9p) and QEMU's 9p server code.

3) With a while-true style script, a Ctrl-C can potentially cut off the
umount operation partway through, causing kmemleak to show false positives.
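
A bounded run with a clean unmount before reading kmemleak avoids that; a
minimal sketch (the iteration count and paths are placeholders):

$ mount -t 9p -o trans=virtio host /mnt
$ for i in $(seq 1 100000); do ls -l /mnt/test > /dev/null; done
$ umount /mnt
$ echo scan > /sys/kernel/debug/kmemleak   # then cat /sys/kernel/debug/kmemleak for the report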

4) We ran a modified version of the submitted script (a fixed number of
iterations instead of while true), then invoked kmemleak, and did not find
anything reported that relates to 9p. See the log below...

     [<ffffffff810fbada>] __kmalloc+0xf7/0x122
     [<ffffffff81668680>] pci_acpi_scan_root+0x10f/0x2c6
     [<ffffffff81658f35>] acpi_pci_root_add+0x1d5/0x41f
     [<ffffffff813cd649>] acpi_device_probe+0x49/0x117
     [<ffffffff8147d378>] driver_probe_device+0xa5/0x135
     [<ffffffff8147d461>] __driver_attach+0x59/0x7c
     [<ffffffff8147be21>] bus_for_each_dev+0x57/0x83
     [<ffffffff8147d070>] driver_attach+0x19/0x1b
     [<ffffffff8147ccf2>] bus_add_driver+0xab/0x201
     [<ffffffff8147d8db>] driver_register+0x93/0x100
     [<ffffffff813cdd81>] acpi_bus_register_driver+0x3e/0x40
     [<ffffffff81cde2f8>] acpi_pci_root_init+0x20/0x28
     [<ffffffff8100020a>] do_one_initcall+0x7a/0x130
     [<ffffffff81cbab44>] kernel_init+0x9a/0x114
     [<ffffffff81680b64>] kernel_thread_helper+0x4/0x10
 unreferenced object 0xffff88001faf06e0 (size 16):
   comm "swapper/0", pid 1, jiffies 4294667775 (age 135.226s)
   hex dump (first 16 bytes):
     50 43 49 20 42 75 73 20 30 30 30 30 3a 30 30 00  PCI Bus 0000:00.
   backtrace:
     [<ffffffff8165460a>] kmemleak_alloc+0x21/0x3e
     [<ffffffff810fbada>] __kmalloc+0xf7/0x122
     [<ffffffff8138fd5d>] kvasprintf+0x45/0x6e
     [<ffffffff8138fdbe>] kasprintf+0x38/0x3a
     [<ffffffff816686a6>] pci_acpi_scan_root+0x135/0x2c6
     [<ffffffff81658f35>] acpi_pci_root_add+0x1d5/0x41f
     [<ffffffff813cd649>] acpi_device_probe+0x49/0x117
     [<ffffffff8147d378>] driver_probe_device+0xa5/0x135
     [<ffffffff8147d461>] __driver_attach+0x59/0x7c
     [<ffffffff8147be21>] bus_for_each_dev+0x57/0x83
     [<ffffffff8147d070>] driver_attach+0x19/0x1b
     [<ffffffff8147ccf2>] bus_add_driver+0xab/0x201
     [<ffffffff8147d8db>] dri

5) Lastly, we also ran the exact same script (while true ...) for a long
duration (a few days) on a 9p-exported path and did not see any OOM.

This makes me believe that there are no memory leaks in the 9p virtio mapped flow.
This bug can be closed.

** Changed in: qemu
       Status: New => Invalid
