* does anyone run guests for more than 5 minutes? (virtio-net perf anomaly)
@ 2009-03-03 20:13 Alex Williamson
2009-03-06 0:04 ` Marcelo Tosatti
From: Alex Williamson @ 2009-03-03 20:13 UTC
To: kvm-devel
It seems like something happens around the 5-minute uptime mark in the guest
that causes virtio-net throughput to plummet. Here's the scenario:
guest started as:
taskset -c 4 /usr/local/bin/qemu-system-x86_64 -hda /dev/sdb -m
2048 -vnc :1 -net nic,macaddr=02:00:10:91:73:02,model=virtio
-net tap,script=$HOME/bin/null-ifup -serial
tcp::1234,server,nowait -mem-path /hugepages/
null-ifup looks like this:
#!/bin/sh
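# qemu invokes this script with the tap interface name as $1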
/sbin/ifconfig $1 192.168.0.1
/sbin/route add -net 192.168.0.0/24 gw 192.168.0.1
The guest gets a static IP of 192.168.0.2.
netserver (part of netperf) in the host is pinned to CPU0, which shares
cache with CPU4 from the above taskset.
When the guest boots, I run:
netperf -c -C -H 192.168.0.1 -t TCP_STREAM -- -m 64k
This results in ~13.5Gbps (note you won't get close to this if you don't
get the tasksets correct).
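Putting the pieces together, the run looks roughly like the sketch below
(the guest NIC name and the exact netserver invocation are approximate,
the rest is exactly as above):
  # host: null-ifup has already configured the tap end; pin netserver to CPU0
  taskset -c 0 netserver
  # guest: static address on the virtio NIC, then the 64k TCP_STREAM run toward the host
  ifconfig eth0 192.168.0.2 up
  netperf -c -C -H 192.168.0.1 -t TCP_STREAM -- -m 64k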
Wait 5 minutes, retry. Now I get ~4Gbps. The only way I can get
13.5Gbps again is by rebooting the guest within the same qemu context,
or of course restarting it completely.
Any guesses as to what might be going on? Can anyone reproduce? I'm
hoping that I'm doing something dumb, but can't figure out what it is.
The system is running v2.6.29-rc6-121-g64e7130 in the guest,
v2.6.29-rc6-123-gbd7b3b4 on the host, kvm module kvm-84-620-g5bffffc and
userspace kvm-84-95-gea1b668. Thanks,
Alex
* Re: does anyone run guests for more than 5 minutes? (virtio-net perf anomaly)
2009-03-03 20:13 does anyone run guests for more than 5 minutes? (virtio-net perf anomaly) Alex Williamson
@ 2009-03-06 0:04 ` Marcelo Tosatti
2009-03-06 4:13 ` Alex Williamson
From: Marcelo Tosatti @ 2009-03-06 0:04 UTC
To: Alex Williamson; +Cc: kvm-devel
On Tue, Mar 03, 2009 at 01:13:41PM -0700, Alex Williamson wrote:
> It seems like something happens around the 5-minute uptime mark in the guest
> that causes virtio-net throughput to plummet. Here's the scenario:
>
> guest started as:
>
> taskset -c 4 /usr/local/bin/qemu-system-x86_64 -hda /dev/sdb -m
> 2048 -vnc :1 -net nic,macaddr=02:00:10:91:73:02,model=virtio
> -net tap,script=$HOME/bin/null-ifup -serial
> tcp::1234,server,nowait -mem-path /hugepages/
>
> null-ifup looks like this:
>
> #!/bin/sh
> /sbin/ifconfig $1 192.168.0.1
> /sbin/route add -net 192.168.0.0/24 gw 192.168.0.1
>
> The guest gets a static IP of 192.168.0.2.
>
> netserver (part of netperf) in the host is pinned to CPU0, which shares
> cache with CPU4 from the above taskset.
>
> When the guest boots, I run:
>
> netperf -c -C -H 192.168.0.1 -t TCP_STREAM -- -m 64k
>
> This results in ~13.5Gbps (note you won't get close to this if you don't
> get the tasksets correct).
>
> Wait 5 minutes, retry. Now I get ~4Gbps. The only way I can get
> 13.5Gbps again is by rebooting the guest within the same qemu context,
> or of course restarting it completely.
>
> Any guesses as to what might be going on? Can anyone reproduce? I'm
> hoping that I'm doing something dumb, but can't figure out what it is.
> The system is running v2.6.29-rc6-121-g64e7130 in the guest,
> v2.6.29-rc6-123-gbd7b3b4 on the host, kvm module kvm-84-620-g5bffffc and
> userspace kvm-84-95-gea1b668. Thanks,
Nope. Collect kvm_stat -l before/after the slowdown?
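Something like this should be enough to capture it (assuming the kvm_stat
script from the kvm tree is in your PATH; -l logs one line of per-second
deltas to stdout):
  kvm_stat -l > kvm_stat.log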
* Re: does anyone run guests for more than 5 minutes? (virtio-net perf anomaly)
2009-03-06 0:04 ` Marcelo Tosatti
@ 2009-03-06 4:13 ` Alex Williamson
2009-03-06 15:26 ` Alex Williamson
From: Alex Williamson @ 2009-03-06 4:13 UTC
To: Marcelo Tosatti; +Cc: kvm-devel
[-- Attachment #1: Type: text/plain, Size: 650 bytes --]
On Thu, 2009-03-05 at 21:04 -0300, Marcelo Tosatti wrote:
> On Tue, Mar 03, 2009 at 01:13:41PM -0700, Alex Williamson wrote:
> > Any guesses as to what might be going on? Can anyone reproduce? I'm
> > hoping that I'm doing something dumb, but can't figure out what it is.
> > The system is running v2.6.29-rc6-121-g64e7130 in the guest,
> > v2.6.29-rc6-123-gbd7b3b4 on the host, kvm module kvm-84-620-g5bffffc and
> > userspace kvm-84-95-gea1b668. Thanks,
>
> Nope. Collect kvm_stat -l before/after the slowdown?
Attached. This shows about 1 minute of data before the slowdown and a
dramatic change starting around the 80th row. Thanks,
Alex
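P.S. A quick way to pick out the post-slowdown rows in the attached log is
something like the following (field 4 is halt_exits per the kvm_stat header;
the +0 keeps the repeated header lines from matching):
  awk '$4+0 > 1000 {print NR, $4, $5}' virtio-slow.log   # row, halt_exits, halt_wakeup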
[-- Attachment #2: virtio-slow.log --]
[-- Type: text/x-log, Size: 50864 bytes --]
efer_relo exits fpu_reloa halt_exit halt_wake host_stat hypercall insn_emul insn_emul invlpg io_exits irq_exits irq_injec irq_windo kvm_reque largepage mmio_exit mmu_cache mmu_flood mmu_pde_z mmu_pte_u mmu_pte_w mmu_recyc mmu_shado mmu_unsyn mmu_unsyn nmi_injec nmi_windo pf_fixed pf_guest remote_tl request_n signal_ex tlb_flush
0 9620 0 0 0 6363 0 2790 0 0 6347 302 2434 180 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 9478 0 0 0 6219 0 2795 0 0 6216 291 2433 175 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9465 0 0 0 6216 0 2791 0 0 6211 283 2432 180 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9459 0 0 0 6228 0 2777 0 0 6218 292 2426 172 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9457 0 0 0 6216 0 2780 0 0 6212 282 2426 183 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9448 0 0 0 6215 0 2778 0 0 6211 285 2424 172 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 9466 0 0 0 6220 0 2789 0 0 6217 284 2430 175 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9467 0 0 0 6221 0 2788 0 0 6215 294 2431 171 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 18157 4112 159 146 10248 1504 8151 0 207 6204 293 2354 161 0 0 3855 73 11 90 1195 1897 0 74 0 0 0 0 368 1087 0 0 0 1532
0 9483 0 0 0 6228 0 2794 0 0 6224 283 2434 182 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9452 0 0 0 6218 0 2784 0 0 6215 278 2428 173 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 9450 0 0 0 6227 0 2777 0 0 6221 274 2427 178 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9459 0 0 0 6219 0 2785 0 0 6215 282 2428 177 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9465 0 0 0 6225 0 2784 0 0 6219 281 2430 181 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9525 0 0 0 6287 0 2784 0 0 6276 290 2430 176 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9487 0 0 0 6224 0 2793 0 0 6216 297 2434 180 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 9455 0 0 0 6221 0 2781 0 0 6216 282 2427 175 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9441 0 0 0 6214 0 2779 0 0 6211 281 2426 169 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 14365 271 150 136 6492 1502 4385 0 206 6306 285 2386 161 0 0 15 75 12 92 1190 1897 0 75 0 0 0 0 363 986 0 0 0 1520
0 9437 0 0 0 6208 0 2779 0 0 6205 282 2425 171 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
efer_relo exits fpu_reloa halt_exit halt_wake host_stat hypercall insn_emul insn_emul invlpg io_exits irq_exits irq_injec irq_windo kvm_reque largepage mmio_exit mmu_cache mmu_flood mmu_pde_z mmu_pte_u mmu_pte_w mmu_recyc mmu_shado mmu_unsyn mmu_unsyn nmi_injec nmi_windo pf_fixed pf_guest remote_tl request_n signal_ex tlb_flush
0 9466 0 0 0 6207 0 2799 0 0 6204 291 2434 170 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 9457 0 0 0 6210 0 2791 0 0 6209 286 2430 171 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9440 0 0 0 6210 0 2781 0 0 6206 277 2426 176 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9466 0 0 0 6218 0 2788 0 0 6213 283 2430 182 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9520 0 0 0 6261 0 2791 0 0 6249 306 2429 174 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9427 0 0 0 6197 0 2775 0 0 6195 277 2420 178 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 9439 0 0 0 6204 0 2780 0 0 6198 284 2424 177 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9425 0 0 0 6195 0 2779 0 0 6193 282 2422 171 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 14424 270 153 139 6511 1504 4362 0 207 6319 310 2406 175 0 0 15 75 12 92 1192 1899 0 75 0 0 0 0 360 1017 0 0 0 1523
0 9438 0 0 0 6205 0 2773 0 0 6198 285 2423 178 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9451 0 0 0 6202 0 2785 0 0 6197 288 2426 179 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 9452 0 0 0 6201 0 2793 0 0 6197 287 2430 175 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9448 0 0 0 6208 0 2783 0 0 6201 286 2426 178 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9458 0 0 0 6209 0 2785 0 0 6202 294 2426 177 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9522 0 0 0 6266 0 2789 0 0 6249 298 2430 186 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9439 0 0 0 6204 0 2780 0 0 6201 282 2424 174 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 9451 0 0 0 6205 0 2784 0 0 6200 300 2426 167 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9430 0 0 0 6198 0 2784 0 0 6197 280 2425 169 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 14409 271 157 142 6491 1495 4381 0 206 6290 308 2400 164 0 0 15 74 11 93 1190 1895 0 74 0 0 0 0 364 1026 0 0 0 1519
0 9433 0 0 0 6203 0 2780 0 0 6197 288 2422 171 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
efer_relo exits fpu_reloa halt_exit halt_wake host_stat hypercall insn_emul insn_emul invlpg io_exits irq_exits irq_injec irq_windo kvm_reque largepage mmio_exit mmu_cache mmu_flood mmu_pde_z mmu_pte_u mmu_pte_w mmu_recyc mmu_shado mmu_unsyn mmu_unsyn nmi_injec nmi_windo pf_fixed pf_guest remote_tl request_n signal_ex tlb_flush
0 9451 0 0 0 6199 0 2786 0 0 6195 287 2427 178 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 9440 0 0 0 6196 0 2781 0 0 6192 290 2424 177 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9451 0 0 0 6202 0 2789 0 0 6196 294 2426 172 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9447 0 0 0 6208 0 2789 0 0 6201 283 2430 174 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9528 0 0 0 6269 0 2788 0 0 6260 301 2431 179 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9427 0 0 0 6203 0 2774 0 0 6202 276 2420 173 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 9444 0 0 0 6210 0 2783 0 0 6203 286 2426 172 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9455 0 0 0 6209 0 2781 0 0 6204 293 2426 177 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 14336 270 150 141 6500 1505 4348 0 207 6317 294 2398 177 0 0 15 76 13 93 1191 1899 0 76 0 0 0 0 362 955 0 0 0 1524
0 9422 0 0 0 6196 0 2776 0 0 6190 282 2421 174 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9438 0 0 0 6191 0 2779 0 0 6188 295 2421 174 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 9429 0 0 0 6184 0 2782 0 0 6182 289 2421 176 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9430 0 0 0 6188 0 2779 0 0 6180 287 2421 184 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9454 0 0 0 6184 0 2797 0 0 6181 304 2428 172 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9490 0 0 0 6248 0 2783 0 0 6239 294 2424 174 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9431 0 0 0 6180 0 2788 0 0 6178 288 2424 175 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 9426 0 0 0 6181 0 2773 0 0 6177 281 2417 195 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9444 0 0 0 6185 0 2796 0 0 6178 293 2429 177 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 14399 271 162 142 6496 1502 4373 0 206 6298 302 2407 153 0 0 15 75 13 92 1191 1896 0 76 0 0 0 0 364 1021 0 0 0 1522
0 9426 0 0 0 6178 0 2776 0 0 6172 289 2418 186 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
efer_relo exits fpu_reloa halt_exit halt_wake host_stat hypercall insn_emul insn_emul invlpg io_exits irq_exits irq_injec irq_windo kvm_reque largepage mmio_exit mmu_cache mmu_flood mmu_pde_z mmu_pte_u mmu_pte_w mmu_recyc mmu_shado mmu_unsyn mmu_unsyn nmi_injec nmi_windo pf_fixed pf_guest remote_tl request_n signal_ex tlb_flush
0 9410 0 0 0 6180 0 2770 0 0 6174 286 2415 178 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 9426 0 0 0 6181 0 2772 0 0 6175 291 2415 188 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9426 0 0 0 6175 0 2776 0 0 6170 292 2417 188 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9397 0 0 0 6168 0 2763 0 0 6163 282 2410 189 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9494 0 0 0 6249 0 2780 0 0 6241 291 2422 182 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9418 0 0 0 6176 0 2775 0 0 6173 297 2417 171 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 9413 0 0 0 6167 0 2771 0 0 6163 296 2414 183 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9447 0 0 0 6174 0 2791 0 0 6171 301 2424 184 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 14346 270 162 146 6467 1499 4325 0 207 6266 289 2379 170 0 0 15 74 10 91 1192 1896 0 73 0 0 0 0 360 1047 0 0 0 1520
0 9446 0 0 0 6188 0 2782 0 0 6182 303 2422 179 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9445 0 0 0 6182 0 2788 0 0 6176 305 2424 174 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 9419 0 0 0 6187 0 2780 0 0 6181 292 2421 166 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9435 0 0 0 6183 0 2789 0 0 6177 301 2426 168 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9440 0 0 0 6190 0 2779 0 0 6186 290 2420 186 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9498 0 0 0 6237 0 2794 0 0 6229 300 2428 175 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9728 0 0 0 6460 0 2796 0 0 6441 313 2433 176 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 9423 0 0 0 6182 0 2787 0 0 6174 295 2424 166 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 9428 0 0 0 6184 0 2781 0 0 6176 299 2420 172 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 16440 270 878 855 7418 1508 5682 0 206 6192 467 2516 126 0 0 15 76 14 93 1189 1901 0 77 0 0 0 0 370 990 0 0 0 1522
0 18199 0 2706 2706 9807 0 8357 0 0 5440 1668 2739 27 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
efer_relo exits fpu_reloa halt_exit halt_wake host_stat hypercall insn_emul insn_emul invlpg io_exits irq_exits irq_injec irq_windo kvm_reque largepage mmio_exit mmu_cache mmu_flood mmu_pde_z mmu_pte_u mmu_pte_w mmu_recyc mmu_shado mmu_unsyn mmu_unsyn nmi_injec nmi_windo pf_fixed pf_guest remote_tl request_n signal_ex tlb_flush
0 15722 0 2620 2613 7858 0 7814 0 0 5232 49 2642 5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 20347 0 4101 4096 10728 0 9594 0 0 6629 22 4107 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 21449 0 4696 4692 11731 0 9660 0 0 7038 55 4712 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 22334 0 4678 4677 11696 0 10578 0 0 7014 59 4706 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 20977 0 4476 4471 11235 0 9674 0 0 6761 64 4510 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 20384 0 4425 4418 11052 0 9243 0 0 6630 80 4469 5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 20602 0 4269 4266 10666 0 9847 0 0 6398 88 4299 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 20308 0 4061 4060 10153 0 10053 0 0 6091 95 4108 6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 24583 269 4137 4110 10658 1508 10716 0 207 6507 93 4174 6 0 0 15 76 13 93 1192 1902 0 76 0 0 0 0 365 1020 0 0 0 1526
0 23651 0 5220 5219 13050 0 10568 0 0 7829 29 5227 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 22236 0 4959 4949 12379 0 9817 0 0 7422 35 4971 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 22559 0 4995 4988 12475 0 10067 0 0 7485 13 5000 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 21935 0 4742 4740 11855 0 10017 0 0 7110 59 4764 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 21538 0 4737 4736 11838 0 9628 0 0 7101 71 4762 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 19445 0 3573 3572 9496 0 9869 0 0 5920 78 3618 5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 19887 0 4459 4454 11139 0 8651 0 0 6681 89 4510 5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 21313 0 4269 4262 10661 0 10561 0 0 6395 88 4302 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 20159 0 4104 4102 10257 0 9807 0 0 6151 93 4134 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 24324 269 4118 4098 10624 1500 10466 0 206 6492 98 4166 10 0 0 15 74 12 93 1194 1899 0 75 0 0 0 0 364 1050 0 0 0 1525
0 23604 0 5212 5208 13024 0 10547 0 0 7814 30 5219 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
efer_relo exits fpu_reloa halt_exit halt_wake host_stat hypercall insn_emul insn_emul invlpg io_exits irq_exits irq_injec irq_windo kvm_reque largepage mmio_exit mmu_cache mmu_flood mmu_pde_z mmu_pte_u mmu_pte_w mmu_recyc mmu_shado mmu_unsyn mmu_unsyn nmi_injec nmi_windo pf_fixed pf_guest remote_tl request_n signal_ex tlb_flush
0 22249 0 4959 4948 12378 0 9831 0 0 7419 36 4968 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 22587 0 4993 4990 12479 0 10100 0 0 7485 8 4992 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 21859 0 4719 4718 11796 0 10008 0 0 7077 52 4735 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 21759 0 4715 4712 11781 0 9906 0 0 7069 70 4739 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 19245 0 3613 3607 9565 0 9608 0 0 5955 63 3646 6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 19969 0 4442 4436 11098 0 8775 0 0 6654 88 4504 6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 21161 0 4269 4265 10664 0 10422 0 0 6396 74 4297 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 20276 0 4071 4066 10170 0 10010 0 0 6099 91 4104 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 24423 269 4102 4074 10571 1493 10654 0 207 6458 91 4144 1 0 0 15 72 11 87 1194 1891 0 72 0 0 0 0 354 1039 0 0 0 1522
0 23590 0 5202 5200 13000 0 10559 0 0 7798 34 5212 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 22251 0 4951 4944 12368 0 9842 0 0 7418 31 4964 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 22477 0 4991 4988 12471 0 9991 0 0 7481 18 4994 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 21818 0 4711 4706 11772 0 9993 0 0 7060 49 4725 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 21907 0 4701 4700 11750 0 10084 0 0 7047 73 4735 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 18982 0 3597 3592 9535 0 9359 0 0 5937 82 3647 9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 20443 0 4437 4429 11361 0 8965 0 0 6925 95 4497 18 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 20931 0 4269 4264 10663 0 10188 0 0 6396 79 4300 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 20090 0 4039 4038 10096 0 9910 0 0 6057 81 4072 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 24332 269 4062 4040 10481 1498 10672 0 206 6404 111 4109 10 0 0 15 75 11 92 1189 1894 0 74 0 0 0 0 365 980 0 0 0 1519
0 23640 0 5223 5220 13053 0 10554 0 0 7833 30 5234 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
efer_relo exits fpu_reloa halt_exit halt_wake host_stat hypercall insn_emul insn_emul invlpg io_exits irq_exits irq_injec irq_windo kvm_reque largepage mmio_exit mmu_cache mmu_flood mmu_pde_z mmu_pte_u mmu_pte_w mmu_recyc mmu_shado mmu_unsyn mmu_unsyn nmi_injec nmi_windo pf_fixed pf_guest remote_tl request_n signal_ex tlb_flush
0 22227 0 4952 4948 12372 0 9811 0 0 7420 38 4967 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 22574 0 4993 4990 12477 0 10082 0 0 7484 13 4993 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 21872 0 4724 4718 11800 0 10019 0 0 7078 52 4741 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 21708 0 4723 4718 11803 0 9839 0 0 7079 63 4743 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 21726 0 4479 4477 11246 0 10402 0 0 6767 75 4518 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 19834 0 4453 4452 11131 0 8617 0 0 6678 80 4483 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
0 21901 0 4269 4265 10782 0 10981 0 0 6516 76 4363 62 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 20437 0 4068 4066 10224 0 10103 0 0 6156 78 4128 30 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 24655 269 4081 4060 10594 1504 10850 0 207 6496 112 4166 39 0 0 15 76 13 93 1190 1898 0 76 0 0 0 0 362 979 0 0 0 1524
* Re: does anyone run guests for more than 5 minutes? (virtio-net perf anomaly)
2009-03-06 4:13 ` Alex Williamson
@ 2009-03-06 15:26 ` Alex Williamson
2009-03-06 17:17 ` Marcelo Tosatti
From: Alex Williamson @ 2009-03-06 15:26 UTC
To: Marcelo Tosatti; +Cc: kvm-devel
On Thu, 2009-03-05 at 21:25 -0700, Alex Williamson wrote:
> On Thu, 2009-03-05 at 21:04 -0300, Marcelo Tosatti wrote:
> > On Tue, Mar 03, 2009 at 01:13:41PM -0700, Alex Williamson wrote:
> > > Any guesses as to what might be going on? Can anyone reproduce? I'm
> > > hoping that I'm doing something dumb, but can't figure out what it is.
> > > The system is running v2.6.29-rc6-121-g64e7130 in the guest,
> > > v2.6.29-rc6-123-gbd7b3b4 on the host, kvm module kvm-84-620-g5bffffc and
> > > userspace kvm-84-95-gea1b668. Thanks,
> >
> > Nope. Collect kvm_stat -l before/after the slowdown?
>
> Attached. This shows about 1 minute of data before the slowdown and a
> dramatic change starting around the 80th row. Thanks,
For a bit easier consumption, here's a google spreadsheet and chart of
what appear to be the interesting columns:
http://spreadsheets.google.com/pub?key=pdwpc4VwMbjWxyQfvw8AEYg
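If anyone wants to pull the raw numbers back out of the log for charting,
something along these lines works (field numbers from the kvm_stat header;
the exact set of columns in the chart may differ):
  awk '{print $2, $4, $5, $13}' virtio-slow.log   # exits, halt_exits, halt_wakeup, irq_injections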
Alex
* Re: does anyone run guests for more than 5 minutes? (virtio-net perf anomaly)
2009-03-06 15:26 ` Alex Williamson
@ 2009-03-06 17:17 ` Marcelo Tosatti
2009-03-06 17:25 ` Alex Williamson
From: Marcelo Tosatti @ 2009-03-06 17:17 UTC
To: Alex Williamson; +Cc: kvm-devel
On Fri, Mar 06, 2009 at 08:26:33AM -0700, Alex Williamson wrote:
> On Thu, 2009-03-05 at 21:25 -0700, Alex Williamson wrote:
> > On Thu, 2009-03-05 at 21:04 -0300, Marcelo Tosatti wrote:
> > > On Tue, Mar 03, 2009 at 01:13:41PM -0700, Alex Williamson wrote:
> > > > Any guesses as to what might be going on? Can anyone reproduce? I'm
> > > > hoping that I'm doing something dumb, but can't figure out what it is.
> > > > The system is running v2.6.29-rc6-121-g64e7130 in the guest,
> > > > v2.6.29-rc6-123-gbd7b3b4 on the host, kvm module kvm-84-620-g5bffffc and
> > > > userspace kvm-84-95-gea1b668. Thanks,
> > >
> > > Nope. Collect kvm_stat -l before/after the slowdown?
> >
> > Attached. This shows about 1 minute of data before the slowdown and a
> > dramatic change starting around the 80th row. Thanks,
>
> For a bit easier consumption, here's a google spreadsheet and chart of
> what appear to be the interesting columns:
>
> http://spreadsheets.google.com/pub?key=pdwpc4VwMbjWxyQfvw8AEYg
>
> Alex
irq_injection goes up significantly. Is it virtio_net generating more
interrupts? Check the rate of IRQs generated by hw/virtio-net.c and
compare that with the rate seen in /proc/interrupts inside the guest?
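For the guest side, a minimal check could be something like this (assuming
the virtio NIC's line in /proc/interrupts contains "virtio"):
  while sleep 1; do grep virtio /proc/interrupts; done
The per-second delta of the counter column is the rate; the hw/virtio-net.c
side presumably needs a counter added by hand where it injects the interrupt.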
* Re: does anyone run guests for more than 5 minutes? (virtio-net perf anomaly)
2009-03-06 17:17 ` Marcelo Tosatti
@ 2009-03-06 17:25 ` Alex Williamson
From: Alex Williamson @ 2009-03-06 17:25 UTC
To: Marcelo Tosatti; +Cc: kvm-devel
On Fri, 2009-03-06 at 14:17 -0300, Marcelo Tosatti wrote:
> On Fri, Mar 06, 2009 at 08:26:33AM -0700, Alex Williamson wrote:
> > On Thu, 2009-03-05 at 21:25 -0700, Alex Williamson wrote:
> > > On Thu, 2009-03-05 at 21:04 -0300, Marcelo Tosatti wrote:
> > > > On Tue, Mar 03, 2009 at 01:13:41PM -0700, Alex Williamson wrote:
> > > > > Any guesses as to what might be going on? Can anyone reproduce? I'm
> > > > > hoping that I'm doing something dumb, but can't figure out what it is.
> > > > > The system is running v2.6.29-rc6-121-g64e7130 in the guest,
> > > > > v2.6.29-rc6-123-gbd7b3b4 on the host, kvm module kvm-84-620-g5bffffc and
> > > > > userspace kvm-84-95-gea1b668. Thanks,
> > > >
> > > > Nope. Collect kvm_stat -l before/after the slowdown?
> > >
> > > Attached. This shows about 1 minute of data before the slowdown and a
> > > dramatic change starting around the 80th row. Thanks,
> >
> > For a bit easier consumption, here's a google spreadsheet and chart of
> > what appear to be the interesting columns:
> >
> > http://spreadsheets.google.com/pub?key=pdwpc4VwMbjWxyQfvw8AEYg
> >
> > Alex
>
> irq_injection goes up significantly. Is it virtio_net generating more
> interrupts? Check the rate of irq's generated by hw/virtio-net.c and
> compare that with rate seen in /proc/interrupts inside the guest?
The rate inside the guest more than doubles when things slow down, ~2k/s
-> ~4.5k/s. Also interesting: the soft interrupt time reported by top
in the guest goes from ~65% to zero. I haven't checked what
hw/virtio-net.c is generating. FWIW, it's not processor-specific; I see
the same thing on an AMD-based system, as shown in the updated
spreadsheet. Thanks,
Alex