public inbox for kvm@vger.kernel.org
* Qemu-kvm is leaking my memory ???
@ 2008-03-14 22:14 Zdenek Kabelac
  2008-03-16 13:46 ` Avi Kivity
  0 siblings, 1 reply; 13+ messages in thread
From: Zdenek Kabelac @ 2008-03-14 22:14 UTC (permalink / raw)
  To: kvm-devel

Hello

Recently I've been using qemu-kvm on a fedora-rawhide box with my own kernels
(with many debug options), and I've noticed that over time my memory
seems to disappear somewhere.

Here is my memory trace after boot and some time of work - thus memory
should be populated.

MemTotal:      2007460 kB
MemFree:        618772 kB
Buffers:         46044 kB
Cached:         733156 kB
SwapCached:          0 kB
Active:         613384 kB
Inactive:       541844 kB
SwapTotal:           0 kB
SwapFree:            0 kB
Dirty:             148 kB
Writeback:           0 kB
AnonPages:      376152 kB
Mapped:          67184 kB
Slab:            80340 kB
SReclaimable:    50284 kB
SUnreclaim:      30056 kB
PageTables:      27976 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:   1003728 kB
Committed_AS:   810968 kB
VmallocTotal: 34359738367 kB
VmallocUsed:     71244 kB
VmallocChunk: 34359666419 kB
618772 + 46044 + 733156 + 148 + 376152 + 67184 + 80340 + 50284 + 30056
+ 27976 = 2030112

2GB  (though I could be wrong and adding something improperly)
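A quick sketch of that sum, using the numbers above - note that Slab already includes SReclaimable and SUnreclaim, and Mapped pages largely overlap Cached/AnonPages, so a sum like this can legitimately come out above MemTotal:

```python
# Re-adding the /proc/meminfo fields from above (all values in kB).
# Beware of overlaps: Slab = SReclaimable + SUnreclaim, and Mapped is
# already counted inside Cached/AnonPages, so this double-counts a bit.
fields = {
    "MemFree": 618772, "Buffers": 46044, "Cached": 733156, "Dirty": 148,
    "AnonPages": 376152, "Mapped": 67184, "Slab": 80340,
    "SReclaimable": 50284, "SUnreclaim": 30056, "PageTables": 27976,
}
total_kb = sum(fields.values())
print(total_kb)  # 2030112 kB, slightly above MemTotal (2007460 kB)
```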

And this memory listing is from after working during the day with
qemu-kvm, doing something like 30-50 qemu restarts.  Then before I
rebooted the machine I killed nearly all running tasks (i.e. no
Xserver, most services turned off)

MemTotal:      2007416 kB
MemFree:        652412 kB
Buffers:         70000 kB
Cached:         607144 kB
SwapCached:          0 kB
Active:         571464 kB
Inactive:       709796 kB
SwapTotal:           0 kB
SwapFree:            0 kB
Dirty:               0 kB
Writeback:           0 kB
AnonPages:        6408 kB
Mapped:           4844 kB
Slab:            52620 kB
SReclaimable:    32752 kB
SUnreclaim:      19868 kB
PageTables:       1468 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:   1003708 kB
Committed_AS:    33988 kB
VmallocTotal: 34359738367 kB
VmallocUsed:     68152 kB
VmallocChunk: 34359668731 kB

I would have expected much more free memory here, and I definitely do
not see how this could add up to 2GB of my memory:

652412 + 70000 + 607144 + 6408 + 4844 + 52620 + 32752 + 19868 + 1468 =
1447516

1.4GB
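The gap itself is straightforward to compute from the numbers above:

```python
# Shortfall between MemTotal and the summed fields in the second
# snapshot (all values in kB, copied from the listing above).
mem_total = 2007416
summed = 652412 + 70000 + 607144 + 6408 + 4844 + 52620 + 32752 + 19868 + 1468
missing_kb = mem_total - summed
print(missing_kb, round(missing_kb / 1024))  # 559900 kB, ~547 MB unaccounted
```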

So where is my 600MB piece of memory hiding?

Zdenek

-------------------------------------------------------------------------
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse0120000070mrt/direct/01/


* Re: Qemu-kvm is leaking my memory ???
  2008-03-14 22:14 Qemu-kvm is leaking my memory ??? Zdenek Kabelac
@ 2008-03-16 13:46 ` Avi Kivity
  2008-03-19 15:05   ` Zdenek Kabelac
  0 siblings, 1 reply; 13+ messages in thread
From: Avi Kivity @ 2008-03-16 13:46 UTC (permalink / raw)
  To: Zdenek Kabelac; +Cc: kvm-devel

Zdenek Kabelac wrote:
> Hello
>
> Recently I'm using qemu-kvm on fedora-rawhide box with my own kernels
> (with many debug options) I've noticed that over the time my memory
> seems to disappear somewhere.
>
> Here is my memory trace after boot and some time of work - thus memory
> should be populated.
>   

No idea how these should add up.  What does 'free' say?

-- 
error compiling committee.c: too many arguments to function




* Re: Qemu-kvm is leaking my memory ???
  2008-03-16 13:46 ` Avi Kivity
@ 2008-03-19 15:05   ` Zdenek Kabelac
  2008-03-19 15:56     ` Avi Kivity
  0 siblings, 1 reply; 13+ messages in thread
From: Zdenek Kabelac @ 2008-03-19 15:05 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel

2008/3/16, Avi Kivity <avi@qumranet.com>:
> Zdenek Kabelac wrote:
>  > Hello
>  >
>  > Recently I'm using qemu-kvm on fedora-rawhide box with my own kernels
>  > (with many debug options) I've noticed that over the time my memory
>  > seems to disappear somewhere.
>  >
>  > Here is my memory trace after boot and some time of work - thus memory
>  > should be populated.
>  >
>
>
> No idea how these should add up.  What does 'free' say?

Ok - here goes my free log (I'm logging free prior to each start of my
qemu-kvm), so here is the log for this afternoon:
(I'm running the same apps all the time - except that during kernel
compilation I'm reading some www pages and working with gnome-terminal
- so slightly more memory could have been eaten by them, but not in
the range of hundreds of MB)


Wed Mar 19 12:54:38 CET 2008
             total       used       free     shared    buffers     cached
Mem:       2007460    1525240     482220          0      18060     469812
-/+ buffers/cache:    1037368     970092
Swap:            0          0          0
Wed Mar 19 13:27:51 CET 2008
             total       used       free     shared    buffers     cached
Mem:       2007460    1491672     515788          0      13024     404220
-/+ buffers/cache:    1074428     933032
Swap:            0          0          0
Wed Mar 19 13:51:38 CET 2008
             total       used       free     shared    buffers     cached
Mem:       2007460    1513000     494460          0      12676     366708
-/+ buffers/cache:    1133616     873844
Swap:            0          0          0
Wed Mar 19 14:05:30 CET 2008
             total       used       free     shared    buffers     cached
Mem:       2007460    1976592      30868          0      12220     785672
-/+ buffers/cache:    1178700     828760
Swap:            0          0          0
Wed Mar 19 14:13:52 CET 2008
             total       used       free     shared    buffers     cached
Mem:       2007460    1865500     141960          0      14592     633136
-/+ buffers/cache:    1217772     789688
Swap:            0          0          0
Wed Mar 19 14:16:04 CET 2008
             total       used       free     shared    buffers     cached
Mem:       2007460    1533432     474028          0       5852     304736
-/+ buffers/cache:    1222844     784616
Swap:            0          0          0
Wed Mar 19 15:05:33 CET 2008
             total       used       free     shared    buffers     cached
Mem:       2007460    1545796     461664          0       4100     276756
-/+ buffers/cache:    1264940     742520
Swap:            0          0          0
Wed Mar 19 15:14:07 CET 2008
             total       used       free     shared    buffers     cached
Mem:       2007460    1748680     258780          0       8324     427172
-/+ buffers/cache:    1313184     694276
Swap:            0          0          0


-now it's:
             total       used       free     shared    buffers     cached
Mem:       2007460    1784952     222508          0      20644     335360
-/+ buffers/cache:    1428948     578512
Swap:            0          0          0
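For what it's worth, the "-/+ buffers/cache" row in the free output above is plain arithmetic on the "Mem:" row; a sketch using the last snapshot's values:

```python
# How free(1) derives its "-/+ buffers/cache" row from the Mem: row.
# Values (kB) are from the last snapshot above.
total, used, free_kb = 2007460, 1784952, 222508
buffers, cached = 20644, 335360

used_minus_bc = used - buffers - cached   # memory actually held by processes/kernel
free_plus_bc = free_kb + buffers + cached  # memory reclaimable on demand
print(used_minus_bc, free_plus_bc)  # 1428948 578512, matching the row above
```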


and top-twenty memory list of currently running processes:

top - 15:52:29 up 19:07, 12 users,  load average: 0.33, 0.30, 0.60
Tasks: 298 total,   1 running, 296 sleeping,   1 stopped,   0 zombie
Cpu(s):  1.6%us,  3.3%sy,  0.0%ni, 95.1%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   2007460k total,  1770748k used,   236712k free,    20304k buffers
Swap:        0k total,        0k used,        0k free,   335036k cached

  PID  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
15974  20   0  655m 207m  28m S  0.0 10.6   3:31.31 firefox
 3980  20   0  378m  63m  10m S  1.3  3.2   1:00.53 gnome-terminal
 2657  20   0  481m  58m 9928 S  2.3  3.0  19:16.03 Xorg
12492  20   0  494m  34m  17m S  0.0  1.8   1:20.52 pidgin
 3535  20   0  336m  22m  12m S  0.0  1.2   0:15.41 gnome-panel
 3571  20   0  265m  16m  10m S  0.0  0.9   0:06.25 nm-applet
 3638  20   0  298m  16m 9296 S  0.0  0.8   0:36.79 wnck-applet
 3546  20   0  458m  16m  10m S  0.0  0.8   1:21.65 gnome-power-man
 3579  20   0  261m  16m 8252 S  0.0  0.8   0:02.65 python
 3532  20   0  200m  15m 8144 S  0.3  0.8   1:14.34 metacity
 3754  20   0  325m  14m 9856 S  0.0  0.7   0:00.42 mixer_applet2
 3909  20   0  243m  14m 7988 S  0.0  0.7   0:06.13 notification-da
 3706  20   0  330m  14m 9764 S  0.0  0.7   0:01.40 clock-applet
 3534  20   0  449m  13m 9884 S  0.0  0.7   0:00.92 nautilus
 3540  20   0  250m  12m 8616 S  0.3  0.6   0:07.30 pk-update-icon
 3708  20   0  300m  12m 7940 S  0.0  0.6   0:03.15 gnome-keyboard-
 3752  20   0  290m  11m 8028 S  0.0  0.6   0:00.27 gnome-brightnes
 3553  20   0  286m  11m 8144 S  0.0  0.6   0:04.29 krb5-auth-dialo
 3761  20   0  270m  11m 7968 S  0.0  0.6   0:23.02 cpufreq-applet
 2898  20   0  328m  10m 8240 S  0.0  0.5   0:07.95 gnome-settings-
 3702  20   0  282m 9436 7460 S  0.0  0.5   0:00.25 drivemount_appl
 3749  20   0  288m 8848 6924 S  0.0  0.4   0:00.11 gnome-inhibit-a
 3756  20   0  274m 8704 6604 S  0.7  0.4   4:32.99 multiload-apple
 3759  20   0  273m 7964 6416 S  0.0  0.4   0:00.33 notification-ar
 2827  20   0  267m 6608 5304 S  0.0  0.3   0:00.18 gnome-session
31581  20   0  108m 5764 2000 S  0.0  0.3   0:00.82 gconfd-2
 7934  20   0  110m 4960 2372 S  0.0  0.2   0:13.31 fte
 3742  20   0  173m 4896 4004 S  0.0  0.2   0:00.40 pam-panel-icon
 3531  20   0  215m 4824 3484 S  0.3  0.2   0:24.68 gnome-screensav
 2421  20   0 34012 4628 3620 S  0.3  0.2   0:23.85 hald
29575  20   0  109m 4156 2272 S  0.0  0.2   0:00.21 fte
 2904   9 -11  186m 3800 2964 S  0.3  0.2   1:37.88 pulseaudio
 2752  20   0  176m 3688 2716 S  0.0  0.2   0:00.04 gdm-session-wor
 3720  20   0  162m 3552 2932 S  0.0  0.2   0:00.13 gnome-vfs-daemo
 2134  20   0  110m 3528  992 S  0.0  0.2   0:00.51 rsyslogd



$ cat /proc/meminfo
MemTotal:      2007460 kB
MemFree:        222332 kB
Buffers:         20780 kB
Cached:         335400 kB
SwapCached:          0 kB
Active:         981836 kB
Inactive:       532248 kB
SwapTotal:           0 kB
SwapFree:            0 kB
Dirty:             156 kB
Writeback:           0 kB
AnonPages:      493208 kB
Mapped:          73264 kB
Slab:            98552 kB
SReclaimable:    57796 kB
SUnreclaim:      40756 kB
PageTables:      43180 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:   1003728 kB
Committed_AS:  2054396 kB
VmallocTotal: 34359738367 kB
VmallocUsed:     68536 kB
VmallocChunk: 34359668731 kB


$ cat /proc/vmstat
nr_free_pages 55583
nr_inactive 133077
nr_active 245476
nr_anon_pages 123307
nr_mapped 18318
nr_file_pages 89063
nr_dirty 39
nr_writeback 0
nr_slab_reclaimable 14452
nr_slab_unreclaimable 10190
nr_page_table_pages 10785
nr_unstable 0
nr_bounce 0
nr_vmscan_write 57271
pgpgin 26968249
pgpgout 40773242
pswpin 0
pswpout 0
pgalloc_dma 307982
pgalloc_dma32 46469184
pgalloc_normal 0
pgalloc_movable 0
pgfree 46832935
pgactivate 6847846
pgdeactivate 6616407
pgfault 37817316
pgmajfault 27426
pgrefill_dma 212390
pgrefill_dma32 18516702
pgrefill_normal 0
pgrefill_movable 0
pgsteal_dma 192152
pgsteal_dma32 13101346
pgsteal_normal 0
pgsteal_movable 0
pgscan_kswapd_dma 217294
pgscan_kswapd_dma32 17699659
pgscan_kswapd_normal 0
pgscan_kswapd_movable 0
pgscan_direct_dma 544
pgscan_direct_dma32 52677
pgscan_direct_normal 0
pgscan_direct_movable 0
pginodesteal 282
slabs_scanned 1252864
kswapd_steal 13267510
kswapd_inodesteal 341476
pageoutrun 211214
allocstall 359
pgrotated 14866
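For cross-checking the two dumps: /proc/vmstat counts 4 KiB pages while /proc/meminfo reports kB, so multiplying by 4 lets them be compared (a quick sketch with the values above; the dumps were taken moments apart, so anon pages match only approximately):

```python
PAGE_KB = 4  # x86-64 base page size in kB

# nr_free_pages from /proc/vmstat vs MemFree from /proc/meminfo above
free_kb = 55583 * PAGE_KB
print(free_kb)        # 222332 kB == MemFree

# nr_anon_pages vs AnonPages (493208 kB above, nearly equal)
anon_kb = 123307 * PAGE_KB
print(anon_kb)        # 493228 kB
```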

$ slabtop -sc

 Active / Total Objects (% used)    : 163039 / 210169 (77.6%)
 Active / Total Slabs (% used)      : 17172 / 17172 (100.0%)
 Active / Total Caches (% used)     : 102 / 124 (82.3%)
 Active / Total Size (% used)       : 82070.16K / 90868.73K (90.3%)
 Minimum / Average / Maximum Object : 0.08K / 0.43K / 9.66K

  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
 21810  21809  99%    1.51K   4362        5     34896K ext3_inode_cache
 68310  24144  35%    0.17K   2970       23     11880K buffer_head
 14298  14267  99%    0.61K   2383        6      9532K radix_tree_node
 24804  24791  99%    0.32K   2067       12      8268K dentry
   391    391 100%    9.66K    391        1      6256K task_struct
 24106  23844  98%    0.23K   1418       17      5672K vm_area_struct
  5750   5277  91%    0.38K    575       10      2300K filp
 13494  13478  99%    0.15K    519       26      2076K sysfs_dir_cache
   406    349  85%    4.07K     58        7      1856K kmalloc-4096
  1125   1120  99%    1.37K    225        5      1800K shmem_inode_cache
   780    733  93%    2.07K     52       15      1664K kmalloc-2048
   990    983  99%    1.15K    165        6      1320K proc_inode_cache
   129    129 100%    4.12K    129        1      1032K names_cache
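A rough sanity check on the slabtop figures above: CACHE SIZE should equal SLABS times pages-per-slab times 4 KiB. The pages-per-slab values below are inferred from the dump (slabtop -sc doesn't print them directly):

```python
# (name, slabs, inferred pages_per_slab, reported CACHE SIZE in kB)
rows = [
    ("ext3_inode_cache", 4362, 2, 34896),
    ("buffer_head",      2970, 1, 11880),
    ("dentry",           2067, 1, 8268),
]
for name, slabs, pages, reported_kb in rows:
    computed_kb = slabs * pages * 4
    print(name, computed_kb, computed_kb == reported_kb)
```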


I would say I'm already missing quite a few hundred megabytes...

What else would be interesting to see?

Zdenek



* Re: Qemu-kvm is leaking my memory ???
  2008-03-19 15:05   ` Zdenek Kabelac
@ 2008-03-19 15:56     ` Avi Kivity
  2008-03-19 17:31       ` Zdenek Kabelac
  0 siblings, 1 reply; 13+ messages in thread
From: Avi Kivity @ 2008-03-19 15:56 UTC (permalink / raw)
  To: Zdenek Kabelac; +Cc: kvm-devel

Zdenek Kabelac wrote:
> 2008/3/16, Avi Kivity <avi@qumranet.com>:
>   
>> Zdenek Kabelac wrote:
>>  > Hello
>>  >
>>  > Recently I'm using qemu-kvm on fedora-rawhide box with my own kernels
>>  > (with many debug options) I've noticed that over the time my memory
>>  > seems to disappear somewhere.
>>  >
>>  > Here is my memory trace after boot and some time of work - thus memory
>>  > should be populated.
>>  >
>>
>>
>> No idea how these should add up.  What does 'free' say?
>>     
>
> Ok - here goes my free log (I'm loggin free prior each start of my qemu-kvm
> so here is the log for this afternoon:
> (I'm running same apps all the time - except during kernel compilation
> I'm reading some www pages - and working with gnome-terminal - so some
> slightly more memory could have been eaten by them - but not in the
> range of hundreds of MB)
>
>   

Can you make sure that it isn't other processes?  Go to runlevel 3 and 
start the VM using vnc or X-over-network?

What host kernel and kvm version are you using?

-- 
error compiling committee.c: too many arguments to function




* Re: Qemu-kvm is leaking my memory ???
  2008-03-19 15:56     ` Avi Kivity
@ 2008-03-19 17:31       ` Zdenek Kabelac
  2008-03-19 17:40         ` Avi Kivity
  0 siblings, 1 reply; 13+ messages in thread
From: Zdenek Kabelac @ 2008-03-19 17:31 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel

2008/3/19, Avi Kivity <avi@qumranet.com>:
> Zdenek Kabelac wrote:
>  > 2008/3/16, Avi Kivity <avi@qumranet.com>:
>  >
>  >> Zdenek Kabelac wrote:
>  >>  > Hello
>  >>  >
>  >>  > Recently I'm using qemu-kvm on fedora-rawhide box with my own kernels
>  >>  > (with many debug options) I've noticed that over the time my memory
>  >>  > seems to disappear somewhere.
>  >>  >
>  >>  > Here is my memory trace after boot and some time of work - thus memory
>  >>  > should be populated.
>  >>  >
>  >>
>  >>
>  >> No idea how these should add up.  What does 'free' say?
>  >>
>  >
>  > Ok - here goes my free log (I'm loggin free prior each start of my qemu-kvm
>  > so here is the log for this afternoon:
>  > (I'm running same apps all the time - except during kernel compilation
>  > I'm reading some www pages - and working with gnome-terminal - so some
>  > slightly more memory could have been eaten by them - but not in the
>  > range of hundreds of MB)
>  >
>  >
>
>
> Can you make sure that it isn't other processes?  Go to runlevel 3 and
>  start the VM using vnc or X-over-network?

Hmmm, I'm not really sure what you mean by external VNC - I can grab
this info once I finish some work today and kill all the apps running
in the system, so most of the memory should be released - I'll go to
single mode for this - is this what you want?

>
>  What host kernel and kvm version are you using?

I'm usually running a quite up-to-date Linus git tree kernel -
both host and guest are running 2.6.25-rc6 kernels,
compiled with gcc-4.3.

kvm itself is fedora rawhide package:
kvm-63-2.fc9.x86_64

(somehow I have trouble compiling the kvm-userspace git tree, as libkvm
mismatches my kernel version - which probably means I would have to
use the kvm linux kernel to use kvm-userspace ??)
(actually, why is gcc-3.x preferred when that compiler is IMHO far
more broken than 4.3?)

I think I've already posted my configuration several times; if it's
needed I'll repost it again - I have many debugging features enabled
in my kernels
(yet I have no idea how to use them to detect my lost memory :))

Zdenek



* Re: Qemu-kvm is leaking my memory ???
  2008-03-19 17:31       ` Zdenek Kabelac
@ 2008-03-19 17:40         ` Avi Kivity
  2008-03-23  9:26           ` Zdenek Kabelac
  0 siblings, 1 reply; 13+ messages in thread
From: Avi Kivity @ 2008-03-19 17:40 UTC (permalink / raw)
  To: Zdenek Kabelac; +Cc: kvm-devel

Zdenek Kabelac wrote:
> 2008/3/19, Avi Kivity <avi@qumranet.com>:
>   
>> Zdenek Kabelac wrote:
>>  > 2008/3/16, Avi Kivity <avi@qumranet.com>:
>>  >
>>  >> Zdenek Kabelac wrote:
>>  >>  > Hello
>>  >>  >
>>  >>  > Recently I'm using qemu-kvm on fedora-rawhide box with my own kernels
>>  >>  > (with many debug options) I've noticed that over the time my memory
>>  >>  > seems to disappear somewhere.
>>  >>  >
>>  >>  > Here is my memory trace after boot and some time of work - thus memory
>>  >>  > should be populated.
>>  >>  >
>>  >>
>>  >>
>>  >> No idea how these should add up.  What does 'free' say?
>>  >>
>>  >
>>  > Ok - here goes my free log (I'm loggin free prior each start of my qemu-kvm
>>  > so here is the log for this afternoon:
>>  > (I'm running same apps all the time - except during kernel compilation
>>  > I'm reading some www pages - and working with gnome-terminal - so some
>>  > slightly more memory could have been eaten by them - but not in the
>>  > range of hundreds of MB)
>>  >
>>  >
>>
>>
>> Can you make sure that it isn't other processes?  Go to runlevel 3 and
>>  start the VM using vnc or X-over-network?
>>     
>
> Hmmm not really sure what do you mean by external VNC - I could grab
> this info once I'll finish some work today and kill all the apps
> running in the system - so most of the memory should be released -
> will go to  single mode for this - is this what do you want ?
>
>   

The -vnc switch, so there's no local X server.  A remote X server should 
be fine as well.  Use runlevel 3, which means network but no local X server.

>>  What host kernel and kvm version are you using?
>>     
>
> Usually running quite up-to-date Linus git tree kernel -
> Both host/guest are running 2.6.25-rc6 kernels
> For compiling using gcc-4.3
>
> kvm itself is fedora rawhide package:
> kvm-63-2.fc9.x86_64
>
> (somehow I've troubles to compile the kvm-userspace git tree as libkvm
> mismatches my kernel version - which probably means I would have to
> use kvm linux kernel to use kvm-userspace ??)
>   

If running kvm.git, do ./configure --with-patched-kernel.  Please report 
kvm compiler errors.

> (actually why the gcc-3.x is preferred when this compiler is IMHO far
> more broken then 4.3 ?)
>
>   

qemu requires gcc 3.  The kernel may be compiled with any gcc that it 
supports.

-- 
error compiling committee.c: too many arguments to function




* Re: Qemu-kvm is leaking my memory ???
  2008-03-19 17:40         ` Avi Kivity
@ 2008-03-23  9:26           ` Zdenek Kabelac
  2008-03-23 10:22             ` Avi Kivity
  0 siblings, 1 reply; 13+ messages in thread
From: Zdenek Kabelac @ 2008-03-23  9:26 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel

[-- Attachment #1: Type: text/plain, Size: 1363 bytes --]

2008/3/19, Avi Kivity <avi@qumranet.com>:
> Zdenek Kabelac wrote:
>  > 2008/3/19, Avi Kivity <avi@qumranet.com>:
>  >
>  >> Zdenek Kabelac wrote:
>  >>  > 2008/3/16, Avi Kivity <avi@qumranet.com>:
>
> The -vnc switch, so there's no local X server.  A remote X server should
>  be fine as well.  Use runlevel 3, which means network but no local X server.

Ok, I've finally got some time to make comparable measurements of memory -

I'm attaching an 'empty' trace log, taken from the state where most
processes were killed (as you can see in the 'ps' trace)

Then there are attachments from after using qemu 7 times (a log of
free before each execution is also attached)

Both logs are taken after  echo 3 > /proc/sys/vm/drop_caches

So how else could I help with tracking down my problem?
(I should also probably mention my 'busy loop' with dmsetup - which
seems to be caused by some problematic page_fault routine - I've been
testing this issue during this qemu run, and from gdb it looks like
the code loops in the page_fault call)

>
> If running kvm.git, do ./configure --with-patched-kernel.  Please report
>  kvm compiler errors.

I'll check

>  > (actually why the gcc-3.x is preferred when this compiler is IMHO far
>  > more broken then 4.3 ?)
>
> qemu requires gcc 3.  The kernel may be compiled with any gcc that it
>  supports.

Though I only have gcc-4.3 on my system

Zdenek

[-- Attachment #2: empty --]
[-- Type: application/octet-stream, Size: 29217 bytes --]

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.0   4076   844 ?        Ss   11:58   0:00 /sbin/init
root         2  0.0  0.0      0     0 ?        S<   11:58   0:00 [kthreadd]
root         3  0.0  0.0      0     0 ?        S<   11:58   0:00 [migration/0]
root         4  0.0  0.0      0     0 ?        S<   11:58   0:00 [ksoftirqd/0]
root         5  0.0  0.0      0     0 ?        S<   11:58   0:00 [watchdog/0]
root         6  0.0  0.0      0     0 ?        S<   11:58   0:00 [migration/1]
root         7  0.0  0.0      0     0 ?        S<   11:58   0:00 [ksoftirqd/1]
root         8  0.0  0.0      0     0 ?        S<   11:58   0:00 [watchdog/1]
root         9  0.0  0.0      0     0 ?        S<   11:58   0:00 [events/0]
root        10  0.0  0.0      0     0 ?        S<   11:58   0:00 [events/1]
root        11  0.0  0.0      0     0 ?        S<   11:58   0:00 [khelper]
root        56  0.0  0.0      0     0 ?        S<   11:58   0:00 [kblockd/0]
root        57  0.0  0.0      0     0 ?        S<   11:58   0:00 [kblockd/1]
root        59  0.0  0.0      0     0 ?        S<   11:58   0:00 [kacpid]
root        60  0.0  0.0      0     0 ?        S<   11:58   0:00 [kacpi_notify]
root       156  0.0  0.0      0     0 ?        S<   11:58   0:00 [ata/0]
root       157  0.0  0.0      0     0 ?        S<   11:58   0:00 [ata/1]
root       158  0.0  0.0      0     0 ?        S<   11:58   0:00 [ata_aux]
root       160  0.0  0.0      0     0 ?        S<   11:58   0:00 [kseriod]
root       180  0.0  0.0      0     0 ?        S<   11:58   0:00 [kondemand/0]
root       181  0.0  0.0      0     0 ?        S<   11:58   0:00 [kondemand/1]
root       207  0.0  0.0      0     0 ?        S    11:58   0:00 [pdflush]
root       208  0.0  0.0      0     0 ?        S    11:58   0:00 [pdflush]
root       209  0.0  0.0      0     0 ?        S<   11:58   0:00 [kswapd0]
root       293  0.0  0.0      0     0 ?        S<   11:58   0:00 [aio/0]
root       294  0.0  0.0      0     0 ?        S<   11:58   0:00 [aio/1]
root       444  0.0  0.0      0     0 ?        S<   11:58   0:00 [scsi_eh_0]
root       446  0.0  0.0      0     0 ?        S<   11:58   0:00 [scsi_eh_1]
root       448  0.0  0.0      0     0 ?        S<   11:58   0:00 [scsi_eh_2]
root       464  0.0  0.0      0     0 ?        S<   11:58   0:01 [scsi_eh_3]
root       466  0.0  0.0      0     0 ?        S<   11:58   0:00 [scsi_eh_4]
root       523  0.0  0.0      0     0 ?        S<   11:58   0:00 [ksuspend_usbd]
root       524  0.0  0.0      0     0 ?        S<   11:58   0:00 [khubd]
root       533  0.0  0.0      0     0 ?        S<   11:58   0:00 [kjournald]
root       572  0.0  0.0  12900   960 ?        S<s  11:58   0:00 /sbin/udevd -d
root      1140  0.0  0.0      0     0 ?        S<   11:58   0:00 [kauditd]
root      1274  0.0  0.0      0     0 ?        S<   11:58   0:00 [iwl3945/0]
root      1275  0.0  0.0      0     0 ?        S<   11:58   0:00 [iwl3945/1]
root      1276  0.0  0.0      0     0 ?        S<   11:58   0:00 [iwl3945]
root      1287  0.0  0.0      0     0 ?        S<   11:58   0:00 [kmmcd]
root      1331  0.0  0.0      0     0 ?        S<   11:58   0:00 [kpsmoused]
root      1410  0.0  0.0      0     0 ?        S<   11:58   0:00 [mmcqd]
root      1489  0.0  0.0      0     0 ?        S<   11:58   0:00 [kstriped]
root      1505  0.0  0.0      0     0 ?        S<   11:58   0:00 [kmpathd/0]
root      1506  0.0  0.0      0     0 ?        S<   11:58   0:00 [kmpathd/1]
root      1535  0.0  0.0      0     0 ?        S<   11:59   0:00 [kdmflush]
root      1554  0.0  0.0      0     0 ?        S<   11:59   0:00 [kdmflush]
root      1589  0.0  0.0      0     0 ?        S<   11:59   0:00 [kjournald]
root      1590  0.0  0.0      0     0 ?        S<   11:59   0:00 [kjournald]
rpc       2087  0.0  0.0  18804   768 ?        Ss   11:59   0:00 rpcbind
rpcuser   2107  0.0  0.0  10280   816 ?        Ss   11:59   0:00 rpc.statd
root      2131  0.0  0.0      0     0 ?        S<   11:59   0:00 [rpciod/0]
root      2134  0.0  0.0      0     0 ?        S<   11:59   0:00 [rpciod/1]
root      2141  0.0  0.0  53192   800 ?        Ss   11:59   0:00 rpc.idmapd
root      2170  0.0  0.1 111188  3380 ?        Sl   11:59   0:00 rsyslogd -c 3
root      2235  0.0  0.0  19484   900 ?        Ss   11:59   0:00 xinetd -stayalive -pidfile /var/run/xinetd.pid
root      2284  0.0  0.0  86452   268 ?        Ss   11:59   0:00 rpc.rquotad
root      2287  0.0  0.0      0     0 ?        S    11:59   0:00 [lockd]
root      2288  0.0  0.0      0     0 ?        S<   11:59   0:00 [nfsd4]
root      2289  0.0  0.0      0     0 ?        S    11:59   0:00 [nfsd]
root      2290  0.0  0.0      0     0 ?        S    11:59   0:00 [nfsd]
root      2291  0.0  0.0      0     0 ?        S    11:59   0:00 [nfsd]
root      2292  0.0  0.0      0     0 ?        S    11:59   0:00 [nfsd]
root      2293  0.0  0.0      0     0 ?        S    11:59   0:00 [nfsd]
root      2294  0.0  0.0      0     0 ?        S    11:59   0:00 [nfsd]
root      2295  0.0  0.0      0     0 ?        S    11:59   0:00 [nfsd]
root      2296  0.0  0.0      0     0 ?        S    11:59   0:00 [nfsd]
root      2299  0.0  0.0  23148   376 ?        Ss   11:59   0:00 rpc.mountd
root      2316  0.0  0.0  93660  1140 ?        Ss   11:59   0:00 crond
root      2322  0.0  0.0  16540   328 ?        Ss   11:59   0:00 /usr/sbin/atd
root      2594  0.0  0.0   3916   500 tty4     Ss+  11:59   0:00 /sbin/mingetty tty4
root      2595  0.0  0.0   3916   496 tty5     Ss+  11:59   0:00 /sbin/mingetty tty5
root      2597  0.0  0.0   3916   496 tty2     Ss+  11:59   0:00 /sbin/mingetty tty2
root      2598  0.0  0.0   3916   500 tty3     Ss+  11:59   0:00 /sbin/mingetty tty3
root      2599  0.0  0.1 109440  2800 ?        Ss   11:59   0:00 login -- root     
root      2600  0.0  0.0   3916   500 tty6     Ss+  11:59   0:00 /sbin/mingetty tty6
root      3783  0.0  0.0      0     0 ?        S<   12:00   0:00 [kjournald]
root      3824  0.0  0.0      0     0 ?        S<   12:00   0:00 [kjournald]
root      9718  0.0  0.0   6084   556 ?        SN   13:28   0:00 /usr/bin/inotifywait --quiet --monitor --event create,open,close --exclude (config-|initrd|System.map|Basenames|Conflictname|__db.001|__db.002|__db.003|Dirnames|Filemd5s|Group|Installtid|Name|Packages|Providename|Provideversion|Pubkeys|Requirename|Requireversion|Sha1header|Sigmd5|Triggername) /boot/ /var/lib/rpm/ /usr/src/akmods/
root      9719  0.0  0.0  87596   780 ?        SN   13:28   0:00 /bin/bash - /usr/sbin/akmodsd --daemon
root      9753  0.1  0.0  87516  1780 tty1     Ss   13:28   0:01 -bash
root     10073  4.0  0.0  87208  1008 tty1     R+   13:37   0:00 ps aux
MemTotal:      2007448 kB
MemFree:       1957080 kB
Buffers:           692 kB
Cached:           4748 kB
SwapCached:          0 kB
Active:          11984 kB
Inactive:         1088 kB
SwapTotal:           0 kB
SwapFree:            0 kB
Dirty:              28 kB
Writeback:           0 kB
AnonPages:        7700 kB
Mapped:           3780 kB
Slab:            16908 kB
SReclaimable:     2708 kB
SUnreclaim:      14200 kB
PageTables:       1244 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:   1003724 kB
Committed_AS:    53888 kB
VmallocTotal: 34359738367 kB
VmallocUsed:     67076 kB
VmallocChunk: 34359670267 kB
nr_free_pages 489270
nr_inactive 290
nr_active 2991
nr_anon_pages 1920
nr_mapped 945
nr_file_pages 1370
nr_dirty 7
nr_writeback 0
nr_slab_reclaimable 677
nr_slab_unreclaimable 3549
nr_page_table_pages 309
nr_unstable 0
nr_bounce 0
nr_vmscan_write 0
pgpgin 994743
pgpgout 394231
pswpin 0
pswpout 0
pgalloc_dma 4482
pgalloc_dma32 2819700
pgalloc_normal 0
pgalloc_movable 0
pgfree 3313696
pgactivate 116843
pgdeactivate 26646
pgfault 3984749
pgmajfault 4324
pgrefill_dma 699
pgrefill_dma32 51018
pgrefill_normal 0
pgrefill_movable 0
pgsteal_dma 769
pgsteal_dma32 43783
pgsteal_normal 0
pgsteal_movable 0
pgscan_kswapd_dma 769
pgscan_kswapd_dma32 52129
pgscan_kswapd_normal 0
pgscan_kswapd_movable 0
pgscan_direct_dma 0
pgscan_direct_dma32 0
pgscan_direct_normal 0
pgscan_direct_movable 0
pginodesteal 0
slabs_scanned 508160
kswapd_steal 44552
kswapd_inodesteal 0
pageoutrun 807
allocstall 0
pgrotated 0
slabinfo - version: 2.1
# name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
fat_inode_cache        6      6   1272    6    2 : tunables    0    0    0 : slabdata      1      1      0
fat_cache              0      0    104   39    1 : tunables    0    0    0 : slabdata      0      0      0
nf_conntrack_expect      0      0    312   13    1 : tunables    0    0    0 : slabdata      0      0      0
nf_conntrack          22     22    368   11    1 : tunables    0    0    0 : slabdata      2      2      0
bridge_fdb_cache       0      0    192   21    1 : tunables    0    0    0 : slabdata      0      0      0
nfsd4_delegations      0      0    344   11    1 : tunables    0    0    0 : slabdata      0      0      0
nfsd4_stateids         0      0    200   20    1 : tunables    0    0    0 : slabdata      0      0      0
nfsd4_files            0      0    144   28    1 : tunables    0    0    0 : slabdata      0      0      0
nfsd4_stateowners      0      0    496    8    1 : tunables    0    0    0 : slabdata      0      0      0
rpc_buffers            9      9   2176    3    2 : tunables    0    0    0 : slabdata      3      3      0
rpc_tasks             17     18    448    9    1 : tunables    0    0    0 : slabdata      2      2      0
rpc_inode_cache       10     10   1472    5    2 : tunables    0    0    0 : slabdata      2      2      0
dm_mpath_io            0      0    112   36    1 : tunables    0    0    0 : slabdata      0      0      0
dm_uevent              0      0   2680    3    2 : tunables    0    0    0 : slabdata      0      0      0
dm_target_io         546    546     96   42    1 : tunables    0    0    0 : slabdata     13     13      0
dm_io                540    540    112   36    1 : tunables    0    0    0 : slabdata     15     15      0
kmalloc_dma-512        7      7    584    7    1 : tunables    0    0    0 : slabdata      1      1      0
uhci_urb_priv         64     64    128   32    1 : tunables    0    0    0 : slabdata      2      2      0
UNIX                  20     35   1472    5    2 : tunables    0    0    0 : slabdata      7      7      0
flow_cache             0      0    168   24    1 : tunables    0    0    0 : slabdata      0      0      0
scsi_sense_cache      42     42    192   21    1 : tunables    0    0    0 : slabdata      2      2      0
scsi_cmd_cache        18     18    448    9    1 : tunables    0    0    0 : slabdata      2      2      0
cfq_io_context        49    108    224   18    1 : tunables    0    0    0 : slabdata      6      6      0
cfq_queue             49     95    208   19    1 : tunables    0    0    0 : slabdata      5      5      0
mqueue_inode_cache      5      5   1536    5    2 : tunables    0    0    0 : slabdata      1      1      0
ext2_inode_cache       0      0   1584    5    2 : tunables    0    0    0 : slabdata      0      0      0
ext2_xattr             0      0    160   25    1 : tunables    0    0    0 : slabdata      0      0      0
journal_handle        64     64    128   32    1 : tunables    0    0    0 : slabdata      2      2      0
journal_head          68    144    168   24    1 : tunables    0    0    0 : slabdata      6      6      0
revoke_table          92     92     88   46    1 : tunables    0    0    0 : slabdata      2      2      0
revoke_record         64     64    128   32    1 : tunables    0    0    0 : slabdata      2      2      0
ext3_inode_cache     199    420   1544    5    2 : tunables    0    0    0 : slabdata     84     84      0
ext3_xattr            50     50    160   25    1 : tunables    0    0    0 : slabdata      2      2      0
dnotify_cache         36     36    112   36    1 : tunables    0    0    0 : slabdata      1      1      0
inotify_event_cache     72     72    112   36    1 : tunables    0    0    0 : slabdata      2      2      0
inotify_watch_cache     59    112    144   28    1 : tunables    0    0    0 : slabdata      4      4      0
kioctx                 0      0    640    6    1 : tunables    0    0    0 : slabdata      0      0      0
kiocb                  0      0    320   12    1 : tunables    0    0    0 : slabdata      0      0      0
fasync_cache          84     84     96   42    1 : tunables    0    0    0 : slabdata      2      2      0
shmem_inode_cache   1039   1050   1400    5    2 : tunables    0    0    0 : slabdata    210    210      0
nsproxy                0      0    128   32    1 : tunables    0    0    0 : slabdata      0      0      0
posix_timers_cache      0      0    320   12    1 : tunables    0    0    0 : slabdata      0      0      0
uid_cache             24     24    320   12    1 : tunables    0    0    0 : slabdata      2      2      0
ip_mrt_cache           0      0    192   21    1 : tunables    0    0    0 : slabdata      0      0      0
UDP-Lite               0      0   1344    6    2 : tunables    0    0    0 : slabdata      0      0      0
tcp_bind_bucket       70    128    128   32    1 : tunables    0    0    0 : slabdata      4      4      0
inet_peer_cache        0      0    192   21    1 : tunables    0    0    0 : slabdata      0      0      0
secpath_cache          0      0    128   32    1 : tunables    0    0    0 : slabdata      0      0      0
xfrm_dst_cache         0      0    448    9    1 : tunables    0    0    0 : slabdata      0      0      0
ip_fib_alias           0      0    104   39    1 : tunables    0    0    0 : slabdata      0      0      0
ip_fib_hash           56     56    144   28    1 : tunables    0    0    0 : slabdata      2      2      0
ip_dst_cache          20     20    384   10    1 : tunables    0    0    0 : slabdata      2      2      0
arp_cache             18     18    448    9    1 : tunables    0    0    0 : slabdata      2      2      0
RAW                   12     12   1280    6    2 : tunables    0    0    0 : slabdata      2      2      0
UDP                   20     30   1344    6    2 : tunables    0    0    0 : slabdata      5      5      0
tw_sock_TCP           32     32    256   16    1 : tunables    0    0    0 : slabdata      2      2      0
request_sock_TCP      21     21    192   21    1 : tunables    0    0    0 : slabdata      1      1      0
TCP                   12     15   2304    3    2 : tunables    0    0    0 : slabdata      5      5      0
eventpoll_pwq         56     56    144   28    1 : tunables    0    0    0 : slabdata      2      2      0
eventpoll_epi         32     32    256   16    1 : tunables    0    0    0 : slabdata      2      2      0
sgpool-128             4      4   5248    1    2 : tunables    0    0    0 : slabdata      4      4      0
sgpool-64              8      9   2688    3    2 : tunables    0    0    0 : slabdata      3      3      0
sgpool-32             10     10   1408    5    2 : tunables    0    0    0 : slabdata      2      2      0
sgpool-16             10     10    768    5    1 : tunables    0    0    0 : slabdata      2      2      0
sgpool-8              18     18    448    9    1 : tunables    0    0    0 : slabdata      2      2      0
scsi_bidi_sdb          0      0     96   42    1 : tunables    0    0    0 : slabdata      0      0      0
scsi_io_context        0      0    184   22    1 : tunables    0    0    0 : slabdata      0      0      0
blkdev_queue          21     21   2256    3    2 : tunables    0    0    0 : slabdata      7      7      0
blkdev_requests       30     33    368   11    1 : tunables    0    0    0 : slabdata      3      3      0
blkdev_ioc            52    120    200   20    1 : tunables    0    0    0 : slabdata      6      6      0
biovec-256            34     34   4224    1    2 : tunables    0    0    0 : slabdata     34     34      0
biovec-128            36     36   2176    3    2 : tunables    0    0    0 : slabdata     12     12      0
biovec-64             48     49   1152    7    2 : tunables    0    0    0 : slabdata      7      7      0
biovec-16             52     60    384   10    1 : tunables    0    0    0 : slabdata      6      6      0
biovec-4              74     84    192   21    1 : tunables    0    0    0 : slabdata      4      4      0
biovec-1             118    126     96   42    1 : tunables    0    0    0 : slabdata      3      3      0
bio                   74     84    192   21    1 : tunables    0    0    0 : slabdata      4      4      0
sock_inode_cache      45     60   1280    6    2 : tunables    0    0    0 : slabdata     10     10      0
skbuff_fclone_cache     14     14    576    7    1 : tunables    0    0    0 : slabdata      2      2      0
skbuff_head_cache    333    516    320   12    1 : tunables    0    0    0 : slabdata     43     43      0
file_lock_cache       26     26    304   13    1 : tunables    0    0    0 : slabdata      2      2      0
Acpi-Operand        3142   3180    136   30    1 : tunables    0    0    0 : slabdata    106    106      0
Acpi-ParseExt         60     60    136   30    1 : tunables    0    0    0 : slabdata      2      2      0
Acpi-Parse            72     72    112   36    1 : tunables    0    0    0 : slabdata      2      2      0
Acpi-State            52     52    152   26    1 : tunables    0    0    0 : slabdata      2      2      0
Acpi-Namespace      1950   1950    104   39    1 : tunables    0    0    0 : slabdata     50     50      0
proc_inode_cache     440    444   1176    6    2 : tunables    0    0    0 : slabdata     74     74      0
sigqueue              34     34    232   17    1 : tunables    0    0    0 : slabdata      2      2      0
radix_tree_node      473    816    624    6    1 : tunables    0    0    0 : slabdata    136    136      0
bdev_cache            35     35   1536    5    2 : tunables    0    0    0 : slabdata      7      7      0
sysfs_dir_cache    12799  12870    152   26    1 : tunables    0    0    0 : slabdata    495    495      0
mnt_cache             48     48    320   12    1 : tunables    0    0    0 : slabdata      4      4      0
inode_cache           94    154   1144    7    2 : tunables    0    0    0 : slabdata     22     22      0
dentry              1784   2904    328   12    1 : tunables    0    0    0 : slabdata    242    242      0
filp                 281    550    384   10    1 : tunables    0    0    0 : slabdata     55     55      0
names_cache            2      2   4224    1    2 : tunables    0    0    0 : slabdata      2      2      0
idr_layer_cache      249    258    600    6    1 : tunables    0    0    0 : slabdata     43     43      0
buffer_head          242    483    176   23    1 : tunables    0    0    0 : slabdata     21     21      0
mm_struct             31     72   1280    6    2 : tunables    0    0    0 : slabdata     12     12      0
vm_area_struct       825   1360    240   17    1 : tunables    0    0    0 : slabdata     80     80      0
fs_cache              67    126    192   21    1 : tunables    0    0    0 : slabdata      6      6      0
files_cache           28     44    896    4    1 : tunables    0    0    0 : slabdata     11     11      0
signal_cache          90    116    960    4    1 : tunables    0    0    0 : slabdata     29     29      0
sighand_cache         88    105   2304    3    2 : tunables    0    0    0 : slabdata     35     35      0
task_struct           86     86   9888    1    4 : tunables    0    0    0 : slabdata     86     86      0
anon_vma             364    812    144   28    1 : tunables    0    0    0 : slabdata     29     29      0
pid                  126    231    192   21    1 : tunables    0    0    0 : slabdata     11     11      0
kmalloc-4096         382    392   4168    7    8 : tunables    0    0    0 : slabdata     56     56      0
kmalloc-2048         381    510   2120   15    8 : tunables    0    0    0 : slabdata     34     34      0
kmalloc-1024         380    435   1096   29    8 : tunables    0    0    0 : slabdata     15     15      0
kmalloc-512          420    455    584    7    1 : tunables    0    0    0 : slabdata     65     65      0
kmalloc-256          545    588    328   12    1 : tunables    0    0    0 : slabdata     49     49      0
kmalloc-128          355    380    200   20    1 : tunables    0    0    0 : slabdata     19     19      0
kmalloc-64          1788   2340    136   30    1 : tunables    0    0    0 : slabdata     78     78      0
kmalloc-32           683    780    104   39    1 : tunables    0    0    0 : slabdata     20     20      0
kmalloc-16          2471   2530     88   46    1 : tunables    0    0    0 : slabdata     55     55      0
kmalloc-8           3125   3213     80   51    1 : tunables    0    0    0 : slabdata     63     63      0
kmalloc-192          257    300    264   15    1 : tunables    0    0    0 : slabdata     20     20      0
kmalloc-96          1710   1824    168   24    1 : tunables    0    0    0 : slabdata     76     76      0
Name                   Objects Objsize    Space Slabs/Part/Cpu  O/S O %Fr %Ef Flg
Acpi-Namespace            1950      32   204.8K         50/0/0   39 0   0  30 PZFU
Acpi-Operand              3142      64   434.1K       106/10/0   30 0   9  46 PZFU
Acpi-Parse                  72      40     8.1K          2/0/0   36 0   0  35 PZFU
Acpi-ParseExt               60      64     8.1K          2/0/0   30 0   0  46 PZFU
Acpi-State                  52      80     8.1K          2/0/0   26 0   0  50 PZFU
anon_vma                   364      72   118.7K        29/25/0   28 0  86  22 PZFU
arp_cache                   18     348     8.1K          2/0/0    9 0   0  76 APZFU
bdev_cache                  35    1432    57.3K          7/0/0    5 1   0  87 APaZFU
bio                         74     104    16.3K          4/1/0   21 0  25  46 APZFU
biovec-1                   118      16    12.2K          3/1/0   42 0  33  15 APZFU
biovec-128                  36    2048    98.3K         12/0/0    3 1   0  75 APZFU
biovec-16                   52     256    24.5K          6/1/0   10 0  16  54 APZFU
biovec-256                  34    4096   278.5K         34/0/0    1 1   0  50 APZFU
biovec-4                    74      64    16.3K          4/1/0   21 0  25  28 APZFU
biovec-64                   48    1024    57.3K          7/1/0    7 1  14  85 APZFU
blkdev_ioc                  52     128    24.5K          6/4/0   20 0  66  27 PZFU
blkdev_queue                21    2184    57.3K          7/0/0    3 1   0  79 PZFU
blkdev_requests             30     296    12.2K          3/1/0   11 0  33  72 PZFU
buffer_head                257     104    86.0K        21/12/0   23 0  57  31 PaZFU
cfq_io_context              49     152    24.5K          6/4/0   18 0  66  30 PZFU
cfq_queue                   49     136    20.4K          5/3/0   19 0  60  32 PZFU
dentry                    6228     256     2.1M        519/0/0   12 0   0  75 PaZFU
dm_io                      540      40    61.4K         15/0/0   36 0   0  35 PZFU
dm_target_io               546      24    53.2K         13/0/0   42 0   0  24 PZFU
dnotify_cache               36      40     4.0K          1/0/0   36 0   0  35 PZFU
eventpoll_epi               32     128     8.1K          2/0/0   16 0   0  50 APZFU
eventpoll_pwq               56      72     8.1K          2/0/0   28 0   0  49 PZFU
ext3_inode_cache           203    1472   688.1K        84/66/0    5 1  78  43 PaZFU
ext3_xattr                  50      88     8.1K          2/0/0   25 0   0  53 PaZFU
fasync_cache                84      24     8.1K          2/0/0   42 0   0  24 PZFU
fat_inode_cache              6    1200     8.1K          1/0/0    6 1   0  87 PaZFU
file_lock_cache             26     232     8.1K          2/0/0   13 0   0  73 PZFU
files_cache                 28     768    45.0K         11/7/0    4 0  63  47 APZFU
filp                       620     288   258.0K         63/0/0   10 0   0  69 APZFU
fs_cache                    67     120    24.5K          6/4/0   21 0  66  32 APZFU
idr_layer_cache            249     528   176.1K         43/5/0    6 0  11  74 PZFU
inode_cache               4494    1072     5.2M        642/0/0    7 1   0  91 PaZFU
inotify_event_cache         72      40     8.1K          2/0/0   36 0   0  35 PZFU
inotify_watch_cache         59      72    16.3K          4/2/0   28 0  50  25 PZFU
ip_dst_cache                20     312     8.1K          2/0/0   10 0   0  76 APZFU
ip_fib_hash                 56      72     8.1K          2/0/0   28 0   0  49 PZFU
journal_handle              64      56     8.1K          2/0/0   32 0   0  43 PaZFU
journal_head                68      96    24.5K          6/4/0   24 0  66  26 PaZFU
kmalloc-1024               380    1024   491.5K         15/8/0   29 3  53  79 PZFU
kmalloc-128                355     128    77.8K         19/5/0   20 0  26  58 PZFU
kmalloc-16                2471      16   225.2K         55/5/0   46 0   9  17 PZFU
kmalloc-192                257     192    81.9K         20/5/0   15 0  25  60 PZFU
kmalloc-2048               381    2048     1.1M        34/13/0   15 3  38  70 PZFU
kmalloc-256                545     256   200.7K         49/7/0   12 0  14  69 PZFU
kmalloc-32                 683      32    81.9K         20/6/0   39 0  30  26 PZFU
kmalloc-4096               382    4096     1.8M         56/3/0    7 3   5  85 PZFU
kmalloc-512                420     512   266.2K        65/12/0    7 0  18  80 PZFU
kmalloc-64                1788      64   319.4K        78/30/0   30 0  38  35 PZFU
kmalloc-8                 3125       8   258.0K         63/8/0   51 0  12   9 PZFU
kmalloc-96                1722      96   311.2K         76/9/0   24 0  11  53 PZFU
kmalloc_dma-512              7     512     4.0K          1/0/0    7 0   0  87 dPZFU
mm_struct                   31    1176    98.3K        12/10/0    6 1  83  37 APZFU
mnt_cache                   48     208    16.3K          4/0/0   12 0   0  60 APZFU
mqueue_inode_cache           5    1456     8.1K          1/0/0    5 1   0  88 APZFU
names_cache                  2    4096    16.3K          2/0/0    1 1   0  50 APZFU
nf_conntrack                22     296     8.1K          2/0/0   11 0   0  79 PZFU
pid                        126      88    45.0K         11/9/0   21 0  81  24 APZFU
proc_inode_cache           462    1104   630.7K         77/0/0    6 1   0  80 PaZFU
radix_tree_node            483     552   557.0K       136/96/0    6 0  70  47 PZFU
RAW                         12    1200    16.3K          2/0/0    6 1   0  87 APZFU
request_sock_TCP            21      88     4.0K          1/0/0   21 0   0  45 APZFU
revoke_record               64      32     8.1K          2/0/0   32 0   0  25 APaZFU
revoke_table                92      16     8.1K          2/0/0   46 0   0  17 PaZFU
rpc_buffers                  9    2048    24.5K          3/0/0    3 1   0  75 APZFU
rpc_inode_cache             10    1376    16.3K          2/0/0    5 1   0  83 APaZFU
rpc_tasks                   17     352     8.1K          2/1/0    9 0  50  73 APZFU
scsi_cmd_cache              18     320     8.1K          2/0/0    9 0   0  70 APZFU
scsi_sense_cache            42      96     8.1K          2/0/0   21 0   0  49 APZFU
sgpool-128                   4    5120    32.7K          4/0/0    1 1   0  62 APZFU
sgpool-16                   10     640     8.1K          2/0/0    5 0   0  78 APZFU
sgpool-32                   10    1280    16.3K          2/0/0    5 1   0  78 APZFU
sgpool-64                    8    2560    24.5K          3/1/0    3 1  33  83 APZFU
sgpool-8                    18     320     8.1K          2/0/0    9 0   0  70 APZFU
shmem_inode_cache         1039    1328     1.7M        210/4/0    5 1   1  80 PZFU
sighand_cache               88    2184   286.7K        35/11/0    3 1  31  67 APZFU
signal_cache                90     880   118.7K        29/15/0    4 0  51  66 APZFU
sigqueue                    34     160     8.1K          2/0/0   17 0   0  66 PZFU
skbuff_fclone_cache         14     452     8.1K          2/0/0    7 0   0  77 APZFU
skbuff_head_cache          333     224   176.1K        43/41/0   12 0  95  42 APZFU
sock_inode_cache            45    1200    81.9K         10/6/0    6 1  60  65 APaZFU
sysfs_dir_cache          12799      80     2.0M       495/11/0   26 0   2  50 PZFU
task_struct                 86    9808     1.4M         86/0/0    1 2   0  59 PZFU
TCP                         12    2200    40.9K          5/2/0    3 1  40  64 APZFU
tcp_bind_bucket             70      40    16.3K          4/2/0   32 0  50  17 APZFU
tw_sock_TCP                 32     144     8.1K          2/0/0   16 0   0  56 APZFU
UDP                         20    1224    40.9K          5/3/0    6 1  60  59 APZFU
uhci_urb_priv               64      56     8.1K          2/0/0   32 0   0  43 PZFU
uid_cache                   24     208     8.1K          2/0/0   12 0   0  60 APZFU
UNIX                        20    1376    57.3K          7/4/0    5 1  57  47 APZFU
vm_area_struct             814     168   327.6K        80/59/0   17 0  73  41 PZFU

[-- Attachment #3: leaked.runlog --]
[-- Type: application/octet-stream, Size: 1820 bytes --]

Sat Mar 22 21:09:48 CET 2008

             total       used       free     shared    buffers     cached
Mem:       2007448    1669756     337692          0      34028     988040
-/+ buffers/cache:     647688    1359760
Swap:            0          0          0
Sat Mar 22 22:12:34 CET 2008

             total       used       free     shared    buffers     cached
Mem:       2007448    1483300     524148          0      25660     809800
-/+ buffers/cache:     647840    1359608
Swap:            0          0          0
Sat Mar 22 22:23:35 CET 2008

             total       used       free     shared    buffers     cached
Mem:       2007448    1604092     403356          0      26472     906352
-/+ buffers/cache:     671268    1336180
Swap:            0          0          0
Sat Mar 22 23:30:09 CET 2008

             total       used       free     shared    buffers     cached
Mem:       2007448    1676232     331216          0      27920     930184
-/+ buffers/cache:     718128    1289320
Swap:            0          0          0
Sat Mar 22 23:38:06 CET 2008

             total       used       free     shared    buffers     cached
Mem:       2007448    1788520     218928          0      28380    1011280
-/+ buffers/cache:     748860    1258588
Swap:            0          0          0
Sat Mar 22 23:51:14 CET 2008

             total       used       free     shared    buffers     cached
Mem:       2007448    1980800      26648          0      49824    1098728
-/+ buffers/cache:     832248    1175200
Swap:            0          0          0
Sun Mar 23 01:05:20 CET 2008

             total       used       free     shared    buffers     cached
Mem:       2007448    1745896     261552          0      52804     804832
-/+ buffers/cache:     888260    1119188
Swap:            0          0          0
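
The runlog above is just timestamped `free` output appended to a file. A minimal Python sketch of the kind of sampling loop that could produce such a log (the count, interval, and filename here are assumptions, not details from the thread):

```python
import time

def log_memory(logfile, read_stats, count=3, interval=0):
    """Append 'timestamp + stats' blocks in the same shape as the runlog above."""
    with open(logfile, "a") as log:
        for _ in range(count):
            log.write(time.strftime("%a %b %d %H:%M:%S %Z %Y") + "\n\n")
            log.write(read_stats() + "\n")
            if interval:
                time.sleep(interval)

def read_meminfo():
    # On Linux, /proc/meminfo carries the same totals that `free` prints.
    with open("/proc/meminfo") as f:
        return f.read()

# Example with a stub reader so the sketch is self-contained:
log_memory("leaked.runlog.example", lambda: "Mem: total used free", count=2)
```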

[-- Attachment #4: leaked.meminfo --]
[-- Type: application/octet-stream, Size: 630 bytes --]

MemTotal:      2007448 kB
MemFree:       1552484 kB
Buffers:           848 kB
Cached:           7984 kB
SwapCached:          0 kB
Active:         130932 kB
Inactive:       284432 kB
SwapTotal:           0 kB
SwapFree:            0 kB
Dirty:              24 kB
Writeback:           0 kB
AnonPages:        8844 kB
Mapped:           4708 kB
Slab:            18656 kB
SReclaimable:     3412 kB
SUnreclaim:      15244 kB
PageTables:       1480 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:   1003724 kB
Committed_AS:    55792 kB
VmallocTotal: 34359738367 kB
VmallocUsed:     68152 kB
VmallocChunk: 34359668731 kB
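
The per-field arithmetic from the first message can be automated. One caveat: Slab already includes SReclaimable and SUnreclaim, so summing all three (as in the opening message) double-counts slab memory; likewise Mapped overlaps Cached/AnonPages and is left out here. A minimal sketch (the chosen field subset is an assumption):

```python
def parse_meminfo(text):
    """Return {field: kB} from /proc/meminfo-style text."""
    info = {}
    for line in text.splitlines():
        name, _, rest = line.partition(":")
        parts = rest.split()
        if parts:
            info[name.strip()] = int(parts[0])
    return info

def unaccounted_kb(info):
    # Non-overlapping fields only: Slab covers SReclaimable + SUnreclaim,
    # and Mapped pages are already counted under Cached / AnonPages.
    fields = ["MemFree", "Buffers", "Cached", "Dirty",
              "AnonPages", "Slab", "PageTables"]
    return info["MemTotal"] - sum(info.get(f, 0) for f in fields)

# Abridged copy of the 'leaked.meminfo' attachment above:
sample = """\
MemTotal:      2007448 kB
MemFree:       1552484 kB
Buffers:           848 kB
Cached:           7984 kB
Dirty:              24 kB
AnonPages:        8844 kB
Slab:            18656 kB
PageTables:       1480 kB
"""
info = parse_meminfo(sample)
print(unaccounted_kb(info), "kB unaccounted")  # → 417128 kB unaccounted
```

On the "leaked" snapshot this leaves roughly 400 MB unaccounted, which is close to the Active + Inactive totals above.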

[-- Attachment #5: leaked.free --]
[-- Type: application/octet-stream, Size: 230 bytes --]

             total       used       free     shared    buffers     cached
Mem:       2007448     454992    1552456          0        964       8020
-/+ buffers/cache:     446008    1561440
Swap:            0          0          0

[-- Attachment #6: leaked.slabinfo --]
[-- Type: application/octet-stream, Size: 8607 bytes --]

Name                   Objects Objsize    Space Slabs/Part/Cpu  O/S O %Fr %Ef Flg
inode_cache               4676    1072     5.4M        668/0/0    7 1   0  91 PaZFU
dentry                    6696     256     2.2M        558/0/0   12 0   0  75 PaZFU
sysfs_dir_cache          13180      80     2.0M        509/9/0   26 0   1  50 PZFU
kmalloc-4096               389    4096     1.8M         57/4/0    7 3   7  85 PZFU
shmem_inode_cache         1058    1328     1.7M        215/9/0    5 1   4  79 PZFU
task_struct                 90    9808     1.4M         90/0/0    1 2   0  59 PZFU
radix_tree_node            729     552     1.0M      260/219/0    6 0  84  37 PZFU
kmalloc-2048               390    2048     1.0M        31/12/0   15 3  38  78 PZFU
ext3_inode_cache           306    1472   966.6K       118/88/0    5 1  74  46 PaZFU
proc_inode_cache           551    1104   761.8K         93/2/0    6 1   2  79 PaZFU
kmalloc-1024               387    1024   557.0K         17/7/0   29 3  41  71 PZFU
Acpi-Operand              3194      64   438.2K        107/6/0   30 0   5  46 PZFU
kmalloc-64                1942      64   421.8K       103/54/0   30 0  52  29 PZFU
vm_area_struct            1009     168   397.3K        97/71/0   17 0  73  42 PZFU
kmalloc-96                1733      96   315.3K        77/10/0   24 0  12  52 PZFU
sighand_cache               90    2184   311.2K        38/17/0    3 1  44  63 APZFU
filp                       551     288   286.7K        70/28/0   10 0  40  55 APZFU
kmalloc-512                435     512   278.5K        68/12/0    7 0  17  79 PZFU
biovec-256                  34    4096   278.5K         34/0/0    1 1   0  50 APZFU
kmalloc-8                 3304       8   270.3K         66/8/0   51 0  12   9 PZFU
kmalloc-16                2493      16   225.2K         55/5/0   46 0   9  17 PZFU
idr_layer_cache            275     528   208.8K        51/10/0    6 0  19  69 PZFU
Acpi-Namespace            1950      32   204.8K         50/0/0   39 0   0  30 PZFU
kmalloc-256                548     256   200.7K         49/7/0   12 0  14  69 PZFU
buffer_head                371     104   139.2K        34/24/0   23 0  70  27 PaZFU
skbuff_head_cache          336     224   131.0K        32/10/0   12 0  31  57 APZFU
signal_cache                93     880   122.8K        30/13/0    4 0  43  66 APZFU
anon_vma                   434      72   110.5K        27/22/0   28 0  81  28 PZFU
biovec-128                  38    2048   106.4K         13/1/0    3 1   7  73 APZFU
sock_inode_cache            53    1200   106.4K         13/7/0    6 1  53  59 APaZFU
blkdev_queue                33    2184    90.1K         11/0/0    3 1   0  79 PZFU
kmalloc-192                260     192    86.0K         21/7/0   15 0  33  58 PZFU
kmalloc-32                 716      32    81.9K         20/5/0   39 0  25  27 PZFU
kmalloc-128                379     128    81.9K         20/4/0   20 0  20  59 PZFU
mm_struct                   34    1176    81.9K         10/8/0    6 1  80  48 APZFU
UNIX                        27    1376    65.5K          8/6/0    5 1  75  56 APZFU
dm_io                      540      40    61.4K         15/0/0   36 0   0  35 PZFU
bdev_cache                  35    1432    57.3K          7/0/0    5 1   0  87 APaZFU
biovec-64                   48    1024    57.3K          7/1/0    7 1  14  85 APZFU
pid                        134      88    53.2K        13/11/0   21 0  84  22 APZFU
files_cache                 31     768    53.2K        13/10/0    4 0  76  44 APZFU
dm_target_io               546      24    53.2K         13/0/0   42 0   0  24 PZFU
TCP                         13    2200    40.9K          5/2/0    3 1  40  69 APZFU
UDP                         20    1224    32.7K          4/2/0    6 1  50  74 APZFU
sgpool-128                   4    5120    32.7K          4/0/0    1 1   0  62 APZFU
cfq_io_context              59     152    32.7K          8/6/0   18 0  75  27 PZFU
blkdev_ioc                  62     128    28.6K          7/5/0   20 0  71  27 PZFU
cfq_queue                   58     136    28.6K          7/5/0   19 0  71  27 PZFU
biovec-16                   54     256    24.5K          6/1/0   10 0  16  56 APZFU
sgpool-64                    8    2560    24.5K          3/1/0    3 1  33  83 APZFU
rpc_buffers                  9    2048    24.5K          3/0/0    3 1   0  75 APZFU
fs_cache                    70     120    20.4K          5/3/0   21 0  60  41 APZFU
journal_head                57      96    20.4K          5/3/0   24 0  60  26 PaZFU
sgpool-32                   10    1280    16.3K          2/0/0    5 1   0  78 APZFU
inotify_watch_cache         60      72    16.3K          4/2/0   28 0  50  26 PZFU
kvm_vcpu                     2    5488    16.3K          2/0/0    1 1   0  66 PZFU
names_cache                  2    4096    16.3K          2/0/0    1 1   0  50 APZFU
bio                         76     104    16.3K          4/1/0   21 0  25  48 APZFU
rpc_inode_cache             10    1376    16.3K          2/0/0    5 1   0  83 APaZFU
RAW                         12    1200    16.3K          2/0/0    6 1   0  87 APZFU
nf_conntrack                24     296    16.3K          4/2/0   11 0  50  43 PZFU
fat_inode_cache              7    1200    16.3K          2/1/0    6 1  50  51 PaZFU
biovec-1                   118      16    12.2K          3/1/0   42 0  33  15 APZFU
mnt_cache                   34     208    12.2K          3/1/0   12 0  33  57 APZFU
blkdev_requests             30     296    12.2K          3/1/0   11 0  33  72 PZFU
tcp_bind_bucket             69      40    12.2K          3/1/0   32 0  33  22 APZFU
biovec-4                    63      64    12.2K          3/0/0   21 0   0  32 APZFU
file_lock_cache             28     232    12.2K          3/1/0   13 0  33  52 PZFU
Acpi-State                  52      80     8.1K          2/0/0   26 0   0  50 PZFU
ip_fib_hash                 56      72     8.1K          2/0/0   28 0   0  49 PZFU
eventpoll_pwq               56      72     8.1K          2/0/0   28 0   0  49 PZFU
fasync_cache                84      24     8.1K          2/0/0   42 0   0  24 PZFU
sigqueue                    34     160     8.1K          2/0/0   17 0   0  66 PZFU
inotify_event_cache         72      40     8.1K          2/0/0   36 0   0  35 PZFU
ext3_xattr                  50      88     8.1K          2/0/0   25 0   0  53 PaZFU
eventpoll_epi               32     128     8.1K          2/0/0   16 0   0  50 APZFU
revoke_record               64      32     8.1K          2/0/0   32 0   0  25 APaZFU
revoke_table                92      16     8.1K          2/0/0   46 0   0  17 PaZFU
ip_dst_cache                20     312     8.1K          2/0/0   10 0   0  76 APZFU
journal_handle              64      56     8.1K          2/0/0   32 0   0  43 PaZFU
mqueue_inode_cache           5    1456     8.1K          1/0/0    5 1   0  88 APZFU
arp_cache                   18     348     8.1K          2/0/0    9 0   0  76 APZFU
uid_cache                   24     208     8.1K          2/0/0   12 0   0  60 APZFU
scsi_cmd_cache              18     320     8.1K          2/0/0    9 0   0  70 APZFU
scsi_sense_cache            42      96     8.1K          2/0/0   21 0   0  49 APZFU
sgpool-16                   10     640     8.1K          2/0/0    5 0   0  78 APZFU
uhci_urb_priv               64      56     8.1K          2/0/0   32 0   0  43 PZFU
kvm_pte_chain               64      56     8.1K          2/0/0   32 0   0  43 PZFU
kvm_rmap_desc               72      40     8.1K          2/0/0   36 0   0  35 PZFU
kvm_mmu_page_header         50      88     8.1K          2/0/0   25 0   0  53 PZFU
sgpool-8                    18     320     8.1K          2/0/0    9 0   0  70 APZFU
skbuff_fclone_cache         14     452     8.1K          2/0/0    7 0   0  77 APZFU
posix_timers_cache          24     248     8.1K          2/0/0   12 0   0  72 PZFU
tw_sock_TCP                 32     144     8.1K          2/0/0   16 0   0  56 APZFU
rpc_tasks                   17     352     8.1K          2/1/0    9 0  50  73 APZFU
request_sock_TCP            42      88     8.1K          2/0/0   21 0   0  45 APZFU
Acpi-ParseExt               60      64     8.1K          2/0/0   30 0   0  46 PZFU
Acpi-Parse                  72      40     8.1K          2/0/0   36 0   0  35 PZFU
kmalloc_dma-512              7     512     4.0K          1/0/0    7 0   0  87 dPZFU
ip_fib_alias                39      32     4.0K          1/0/0   39 0   0  30 PZFU
dnotify_cache               36      40     4.0K          1/0/0   36 0   0  35 PZFU
inet_peer_cache             21      64     4.0K          1/0/0   21 0   0  32 APZFU
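
To spot which caches grew between two SLUB summary tables like the ones above, the Name and Space columns can be diffed programmatically. A sketch, using illustrative excerpts from the two tables (the K/M suffix handling matches the format slabinfo prints):

```python
def parse_slub(text):
    """Return {cache_name: space_in_bytes} from a 'Name ... Space ...' table."""
    sizes = {}
    for line in text.splitlines()[1:]:          # skip the header row
        cols = line.split()
        if len(cols) < 4:
            continue
        name, space = cols[0], cols[3]
        mult = {"K": 1024, "M": 1024 ** 2}.get(space[-1], 1)
        num = space[:-1] if space[-1] in "KM" else space
        sizes[name] = int(float(num) * mult)
    return sizes

def grown(before, after):
    """Caches in 'after' that grew (or newly appeared), largest delta first."""
    deltas = {n: after[n] - before.get(n, 0) for n in after}
    return sorted(((d, n) for n, d in deltas.items() if d > 0), reverse=True)

# Abridged rows from the two attachments above:
before = """Name Objects Objsize Space
inode_cache 4494 1072 5.2M
radix_tree_node 483 552 557.0K"""
after = """Name Objects Objsize Space
inode_cache 4676 1072 5.4M
radix_tree_node 729 552 1.0M
kvm_vcpu 2 5488 16.3K"""
for delta, name in grown(parse_slub(before), parse_slub(after)):
    print(name, delta)
```

On the full tables this would also surface the kvm_* caches that only exist while a guest has run.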

[-- Attachment #7: leaked.vmstat --]
[-- Type: application/octet-stream, Size: 944 bytes --]

nr_free_pages 388135
nr_inactive 71118
nr_active 32735
nr_anon_pages 2209
nr_mapped 1177
nr_file_pages 2220
nr_dirty 8
nr_writeback 0
nr_slab_reclaimable 853
nr_slab_unreclaimable 3807
nr_page_table_pages 368
nr_unstable 0
nr_bounce 0
nr_vmscan_write 1546
pgpgin 4478462
pgpgout 5879067
pswpin 0
pswpout 0
pgalloc_dma 123316
pgalloc_dma32 70502689
pgalloc_normal 0
pgalloc_movable 0
pgfree 71014329
pgactivate 649329
pgdeactivate 504519
pgfault 95794801
pgmajfault 7341
pgrefill_dma 13592
pgrefill_dma32 1215553
pgrefill_normal 0
pgrefill_movable 0
pgsteal_dma 19534
pgsteal_dma32 1303074
pgsteal_normal 0
pgsteal_movable 0
pgscan_kswapd_dma 21066
pgscan_kswapd_dma32 1394109
pgscan_kswapd_normal 0
pgscan_kswapd_movable 0
pgscan_direct_dma 32
pgscan_direct_dma32 10244
pgscan_direct_normal 0
pgscan_direct_movable 0
pginodesteal 0
slabs_scanned 473856
kswapd_steal 1312332
kswapd_inodesteal 88626
pageoutrun 22494
allocstall 160
pgrotated 430

[-- Attachment #8: leaked.ps --]
[-- Type: application/postscript, Size: 7026 bytes --]

[-- Attachment #9: Type: text/plain, Size: 228 bytes --]

-------------------------------------------------------------------------
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse0120000070mrt/direct/01/

[-- Attachment #10: Type: text/plain, Size: 158 bytes --]

_______________________________________________
kvm-devel mailing list
kvm-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/kvm-devel

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Qemu-kvm is leaking my memory ???
  2008-03-23  9:26           ` Zdenek Kabelac
@ 2008-03-23 10:22             ` Avi Kivity
  2008-03-23 12:20               ` Avi Kivity
  0 siblings, 1 reply; 13+ messages in thread
From: Avi Kivity @ 2008-03-23 10:22 UTC (permalink / raw)
  To: Zdenek Kabelac; +Cc: kvm-devel

Zdenek Kabelac wrote:
> 2008/3/19, Avi Kivity <avi@qumranet.com>:
>   
>> Zdenek Kabelac wrote:
>>  > 2008/3/19, Avi Kivity <avi@qumranet.com>:
>>  >
>>  >> Zdenek Kabelac wrote:
>>  >>  > 2008/3/16, Avi Kivity <avi@qumranet.com>:
>>
>> The -vnc switch, so there's no local X server.  A remote X server should
>>  be fine as well.  Use runlevel 3, which means network but no local X server.
>>     
>
> OK, I've finally got some time to make comparable memory measurements -
>
> I'm attaching an 'empty' trace log, taken from the state where most
> processes were killed (as you can see in the 'ps' trace)
>
> Then there are attachments taken after running qemu 7 times (a log of
> free before execution is also attached)
>
> Both logs are after  echo 3 > /proc/sys/vm/drop_caches
>
>   

I see the same issue too now, and am investigating.

-- 
error compiling committee.c: too many arguments to function



^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Qemu-kvm is leaking my memory ???
  2008-03-23 10:22             ` Avi Kivity
@ 2008-03-23 12:20               ` Avi Kivity
  2008-03-23 23:06                 ` Zdenek Kabelac
  0 siblings, 1 reply; 13+ messages in thread
From: Avi Kivity @ 2008-03-23 12:20 UTC (permalink / raw)
  To: Zdenek Kabelac; +Cc: kvm-devel

[-- Attachment #1: Type: text/plain, Size: 276 bytes --]

Avi Kivity wrote:
>
> I see the same issue too now, and am investigating.
>

The attached patch should fix the issue.  The bug is present in 2.6.25-rc6 
only, and not in kvm.git, which is why few people noticed it.

-- 
error compiling committee.c: too many arguments to function


[-- Attachment #2: fix-kvm-2.6.25-rc6-leak.patch --]
[-- Type: text/x-patch, Size: 463 bytes --]

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 4ba85d9..e55af12 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1412,7 +1412,7 @@ static void mmu_guess_page_from_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 	up_read(&current->mm->mmap_sem);
 
 	vcpu->arch.update_pte.gfn = gfn;
-	vcpu->arch.update_pte.page = gfn_to_page(vcpu->kvm, gfn);
+	vcpu->arch.update_pte.page = page;
 }
 
 void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: Qemu-kvm is leaking my memory ???
  2008-03-23 12:20               ` Avi Kivity
@ 2008-03-23 23:06                 ` Zdenek Kabelac
  2008-03-24 10:09                   ` Avi Kivity
  0 siblings, 1 reply; 13+ messages in thread
From: Zdenek Kabelac @ 2008-03-23 23:06 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel

2008/3/23, Avi Kivity <avi@qumranet.com>:
> Avi Kivity wrote:
>  >
>  > I see the same issue too now, and am investigating.
>  >
>
>
> The attached patch should fix the issue.  It is present in 2.6.25-rc6
>  only, and not in kvm.git, which is why few people noticed it.
>

Hi

Tested - and I'm actually seeing no difference in my memory-leak case.
It still looks like over 30M is lost per execution of qemu.
(tested with a fresh 2.6.25-rc6 plus your patch)

Also - I'd have said that before this, my dmsetup status loop test case
was not causing big problems; it was enough to run another dmsetup to
unblock the loop. Now it usually leads to some weird end of qemu
itself - will explore more....

So it's probably fixing one bug - and exposing another.

As I said before - in my debugger it was looping in the page_fault
handler - i.e. memory should be paged in - but as soon as the handler
returns to the code to continue the memcpy, a new page_fault is
invoked and the pointer & counters are not changed.

Zdenek


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Qemu-kvm is leaking my memory ???
  2008-03-23 23:06                 ` Zdenek Kabelac
@ 2008-03-24 10:09                   ` Avi Kivity
  2008-03-24 16:18                     ` Avi Kivity
  0 siblings, 1 reply; 13+ messages in thread
From: Avi Kivity @ 2008-03-24 10:09 UTC (permalink / raw)
  To: Zdenek Kabelac; +Cc: kvm-devel

Zdenek Kabelac wrote:
> 2008/3/23, Avi Kivity <avi@qumranet.com>:
>   
>> Avi Kivity wrote:
>>  >
>>  > I see the same issue too now, and am investigating.
>>  >
>>
>>
>> The attached patch should fix the issue.  It is present in 2.6.25-rc6
>>  only, and not in kvm.git, which is why few people noticed it.
>>
>>     
>
> Hi
>
> Tested - and I'm actually seeing no difference in my memory-leak case.
> It still looks like over 30M is lost per execution of qemu.
> (tested with a fresh 2.6.25-rc6 plus your patch)
>
>   

Can you double-check? 2.6.25-rc6 definitely leaks without the patch, and 
here it doesn't leak with it.

> Also - I'd have said that before this, my dmsetup status loop test case
> was not causing big problems; it was enough to run another dmsetup to
> unblock the loop. Now it usually leads to some weird end of qemu
> itself - will explore more....
>
> So it's probably fixing one bug - and exposing another.
>
> As I said before - in my debugger it was looping in the page_fault
> handler - i.e. memory should be paged in - but as soon as the handler
> returns to the code to continue the memcpy, a new page_fault is
> invoked and the pointer & counters are not changed.

I'll add some code to make it possible to enable the mmu tracer at runtime.

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to panic.



^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Qemu-kvm is leaking my memory ???
  2008-03-24 10:09                   ` Avi Kivity
@ 2008-03-24 16:18                     ` Avi Kivity
  2008-03-24 21:42                       ` Zdenek Kabelac
  0 siblings, 1 reply; 13+ messages in thread
From: Avi Kivity @ 2008-03-24 16:18 UTC (permalink / raw)
  To: Zdenek Kabelac; +Cc: kvm-devel

[-- Attachment #1: Type: text/plain, Size: 523 bytes --]

Avi Kivity wrote:
>>
>>
>> Tested - and I'm actually seeing no difference in my memory-leak case.
>> It still looks like over 30M is lost per execution of qemu.
>> (tested with a fresh 2.6.25-rc6 plus your patch)
>>
>>   
>
> Can you double check? 2.6.25-rc6 definitely leaks without, and here it 
> doesn't with the patch.
>

btw, there's an additional patch I have queued up that might have an 
effect.  please test the attached (which is my 2.6.25 queue).


-- 
error compiling committee.c: too many arguments to function


[-- Attachment #2: kvm-2.6.25-rollup.patch --]
[-- Type: text/x-patch, Size: 2551 bytes --]

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d8172aa..e55af12 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -222,8 +222,7 @@ static int is_io_pte(unsigned long pte)
 
 static int is_rmap_pte(u64 pte)
 {
-	return pte != shadow_trap_nonpresent_pte
-		&& pte != shadow_notrap_nonpresent_pte;
+	return is_shadow_present_pte(pte);
 }
 
 static gfn_t pse36_gfn_delta(u32 gpte)
@@ -893,14 +892,25 @@ static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 *shadow_pte,
 			 int *ptwrite, gfn_t gfn, struct page *page)
 {
 	u64 spte;
-	int was_rmapped = is_rmap_pte(*shadow_pte);
+	int was_rmapped = 0;
 	int was_writeble = is_writeble_pte(*shadow_pte);
+	hfn_t host_pfn = (*shadow_pte & PT64_BASE_ADDR_MASK) >> PAGE_SHIFT;
 
 	pgprintk("%s: spte %llx access %x write_fault %d"
 		 " user_fault %d gfn %lx\n",
 		 __FUNCTION__, *shadow_pte, pt_access,
 		 write_fault, user_fault, gfn);
 
+	if (is_rmap_pte(*shadow_pte)) {
+		if (host_pfn != page_to_pfn(page)) {
+			pgprintk("hfn old %lx new %lx\n",
+				 host_pfn, page_to_pfn(page));
+			rmap_remove(vcpu->kvm, shadow_pte);
+		}
+		else
+			was_rmapped = 1;
+	}
+
 	/*
 	 * We don't set the accessed bit, since we sometimes want to see
 	 * whether the guest actually used the pte (in order to detect
@@ -1402,7 +1412,7 @@ static void mmu_guess_page_from_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 	up_read(&current->mm->mmap_sem);
 
 	vcpu->arch.update_pte.gfn = gfn;
-	vcpu->arch.update_pte.page = gfn_to_page(vcpu->kvm, gfn);
+	vcpu->arch.update_pte.page = page;
 }
 
 void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 94ea724..8e14628 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -349,8 +349,6 @@ static void update_exception_bitmap(struct kvm_vcpu *vcpu)
 
 static void reload_tss(void)
 {
-#ifndef CONFIG_X86_64
-
 	/*
 	 * VT restores TR but not its size.  Useless.
 	 */
@@ -361,7 +359,6 @@ static void reload_tss(void)
 	descs = (void *)gdt.base;
 	descs[GDT_ENTRY_TSS].type = 9; /* available TSS */
 	load_TR_desc();
-#endif
 }
 
 static void load_transition_efer(struct vcpu_vmx *vmx)
@@ -1436,7 +1433,7 @@ static int init_rmode_tss(struct kvm *kvm)
 	int ret = 0;
 	int r;
 
-	down_read(&current->mm->mmap_sem);
+	down_read(&kvm->slots_lock);
 	r = kvm_clear_guest_page(kvm, fn, 0, PAGE_SIZE);
 	if (r < 0)
 		goto out;
@@ -1459,7 +1456,7 @@ static int init_rmode_tss(struct kvm *kvm)
 
 	ret = 1;
 out:
-	up_read(&current->mm->mmap_sem);
+	up_read(&kvm->slots_lock);
 	return ret;
 }
 


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: Qemu-kvm is leaking my memory ???
  2008-03-24 16:18                     ` Avi Kivity
@ 2008-03-24 21:42                       ` Zdenek Kabelac
  0 siblings, 0 replies; 13+ messages in thread
From: Zdenek Kabelac @ 2008-03-24 21:42 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel

2008/3/24, Avi Kivity <avi@qumranet.com>:
> Avi Kivity wrote:
>  >>
>  >>
>  >> Tested - and I'm actually seeing no difference in my memory-leak case.
>  >> It still looks like over 30M is lost per execution of qemu.
>  >> (tested with a fresh 2.6.25-rc6 plus your patch)
>  >>
>  >>
>  >
>  > Can you double check? 2.6.25-rc6 definitely leaks without, and here it
>  > doesn't with the patch.
>  >
>
>
> btw, there's an additional patch I have queued up that might have an
>  effect.  please test the attached (which is my 2.6.25 queue).


Yep - I've made a quick test - and it looks promising - so far I
cannot see the leak with your additional patch.

But I still get my busy-loop problem. Though now it is sometimes
back-traced on the  leaveq  instruction - maybe this instruction might
cause some problems??

Before this patch I always got the back-trace at the point of
copy_user_generic_string - now it's slightly different - and it still
holds that when I run the second dmsetup status, it unblocks the
looped one.

Call Trace:
 [<ffffffff8803558d>] :dm_mod:dm_compat_ctl_ioctl+0xd/0x20
 [<ffffffff802bd352>] compat_sys_ioctl+0x182/0x3d0
 [<ffffffff80283d20>] vfs_write+0x130/0x170
 [<ffffffff80221192>] sysenter_do_call+0x1b/0x66


Call Trace:
 [<ffffffff88032100>] ? :dm_mod:table_status+0x0/0x90
 [<ffffffff80436809>] ? error_exit+0x0/0x51
 [<ffffffff88032100>] ? :dm_mod:table_status+0x0/0x90
 [<ffffffff8032d157>] ? copy_user_generic_string+0x17/0x40
 [<ffffffff880332d7>] ? :dm_mod:copy_params+0x87/0xb0
 [<ffffffff80237b11>] ? __capable+0x11/0x30
 [<ffffffff88033469>] ? :dm_mod:ctl_ioctl+0x169/0x260
 [<ffffffff80340712>] ? tty_ldisc_deref+0x62/0x80
 [<ffffffff8034320c>] ? tty_write+0x22c/0x260
 [<ffffffff8803358d>] ? :dm_mod:dm_compat_ctl_ioctl+0xd/0x20
 [<ffffffff802bd352>] ? compat_sys_ioctl+0x182/0x3d0
 [<ffffffff80283d20>] ? vfs_write+0x130/0x170
 [<ffffffff80221192>] ? sysenter_do_call+0x1b/0x66



Here is the disassembled dm_compat_ctl_ioctl:

0000000000001fa0 <dm_compat_ctl_ioctl>:
        return (long)ctl_ioctl(command, (struct dm_ioctl __user *)u);
}

#ifdef CONFIG_COMPAT
static long dm_compat_ctl_ioctl(struct file *file, uint command, ulong u)
{
    1fa0:       55                      push   %rbp
    1fa1:       89 f7                   mov    %esi,%edi
    1fa3:       48 89 e5                mov    %rsp,%rbp
        return r;
}

static long dm_ctl_ioctl(struct file *file, uint command, ulong u)
{
        return (long)ctl_ioctl(command, (struct dm_ioctl __user *)u);
    1fa6:       89 d6                   mov    %edx,%esi
    1fa8:       e8 73 fd ff ff          callq  1d20 <ctl_ioctl>

#ifdef CONFIG_COMPAT
static long dm_compat_ctl_ioctl(struct file *file, uint command, ulong u)
{
        return (long)dm_ctl_ioctl(file, command, (ulong) compat_ptr(u));
}
    1fad:       c9                      leaveq
        return r;
}

static long dm_ctl_ioctl(struct file *file, uint command, ulong u)
{
        return (long)ctl_ioctl(command, (struct dm_ioctl __user *)u);
    1fae:       48 98                   cltq

#ifdef CONFIG_COMPAT
static long dm_compat_ctl_ioctl(struct file *file, uint command, ulong u)
{
        return (long)dm_ctl_ioctl(file, command, (ulong) compat_ptr(u));
}
    1fb0:       c3                      retq

Zdenek


^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2008-03-24 21:42 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2008-03-14 22:14 Qemu-kvm is leaking my memory ??? Zdenek Kabelac
2008-03-16 13:46 ` Avi Kivity
2008-03-19 15:05   ` Zdenek Kabelac
2008-03-19 15:56     ` Avi Kivity
2008-03-19 17:31       ` Zdenek Kabelac
2008-03-19 17:40         ` Avi Kivity
2008-03-23  9:26           ` Zdenek Kabelac
2008-03-23 10:22             ` Avi Kivity
2008-03-23 12:20               ` Avi Kivity
2008-03-23 23:06                 ` Zdenek Kabelac
2008-03-24 10:09                   ` Avi Kivity
2008-03-24 16:18                     ` Avi Kivity
2008-03-24 21:42                       ` Zdenek Kabelac
