* Re: Fwd: kvm-autotest: False PASS results
[not found] ` <a50cf5ab0905101015q5582031dy691f70588e0b073@mail.gmail.com>
@ 2009-06-01 15:03 ` Uri Lublin
2009-06-02 7:12 ` sudhir kumar
0 siblings, 1 reply; 2+ messages in thread
From: Uri Lublin @ 2009-06-01 15:03 UTC (permalink / raw)
To: sudhir kumar; +Cc: kvm-devel
On 05/10/2009 08:15 PM, sudhir kumar wrote:
> Hi Uri,
> Any comments?
>
>
> ---------- Forwarded message ----------
> From: sudhir kumar<smalikphy@gmail.com>
>
> kvm-autotest shows the following PASS results for migration,
> while the VM had crashed and the test should have failed.
>
> Here is the sequence of test commands and results grepped from
> kvm-autotest output.
>
> /root/sudhir/regression/test/kvm-autotest-phx/client/tests/kvm_runtest_2/qemu
> -name 'vm1' -monitor
> unix:/tmp/monitor-20090508-055624-QSuS,server,nowait -drive
> file=/root/sudhir/regression/test/kvm-autotest-phx/client/tests/kvm_runtest_2/images/rhel5-32.raw,if=ide,boot=on
> -net nic,vlan=0 -net user,vlan=0 -m 8192
> -smp 4 -redir tcp:5000::22 -vnc :1
>
>
> /root/sudhir/regression/test/kvm-autotest-phx/client/tests/kvm_runtest_2/qemu
> -name 'dst' -monitor
> unix:/tmp/monitor-20090508-055625-iamW,server,nowait -drive
> file=/root/sudhir/regression/test/kvm-autotest-phx/client/tests/kvm_runtest_2/images/rhel5-32.raw,if=ide,boot=on
> -net nic,vlan=0 -net user,vlan=0 -m 8192
> -smp 4 -redir tcp:5001::22 -vnc :2 -incoming tcp:0:5200
>
>
>
> 2009-05-08 05:58:43,471 Configuring logger for client level
> GOOD
> kvm_runtest_2.raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.1
> END GOOD
> kvm_runtest_2.raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.1
>
> GOOD
> kvm_runtest_2.raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2
> kvm_runtest_2.raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2
> timestamp=1241762371
> localtime=May 08 05:59:31 completed successfully
> Persistent state variable __group_level now set to 1
> END GOOD
> kvm_runtest_2.raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2
> kvm_runtest_2.raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2
> timestamp=1241762371
> localtime=May 08 05:59:31
>
> From the test output it looks like the test successfully
> logged into the guest after migration:
>
> 20090508-055926 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: Migration
> finished successfully
> 20090508-055926 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
> send_monitor_cmd: Sending monitor command: screendump
> /root/sudhir/regression/test/kvm-autotest-phx/client/results/default/kvm_runtest_2.raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2/debug/migration_post.ppm
> 20090508-055926 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
> send_monitor_cmd: Sending monitor command: screendump
> /root/sudhir/regression/test/kvm-autotest-phx/client/results/default/kvm_runtest_2.raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2/debug/migration_pre.ppm
> 20090508-055926 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
> send_monitor_cmd: Sending monitor command: quit
> 20090508-055926 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
> is_sshd_running: Timeout
> 20090508-055926 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: Logging into
> guest after migration...
> 20090508-055926 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
> remote_login: Trying to login...
> 20090508-055927 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
> remote_login: Got 'Are you sure...'; sending 'yes'
> 20090508-055927 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
> remote_login: Got password prompt; sending '123456'
> 20090508-055928 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
> remote_login: Got shell prompt -- logged in
> 20090508-055928 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: Logged in
> after migration
> 20090508-055928 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
> get_command_status_output: Sending command: help
> 20090508-055930 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
> postprocess_vm: Postprocessing VM 'vm1'...
> 20090508-055930 raw.8gb_mem.smp4.RHEL.5.3.i386.migrate.2: DEBUG:
> postprocess_vm: VM object found in environment
>
> When I connected over VNC, the final migrated VM had crashed with a
> call trace as shown in the attachment.
> It is quite unlikely that the call trace appeared only after the test
> finished, since migration with more than 4GB of memory is already
> broken [BUG 52527]. This looks like a false PASS to me. Any idea how
> we can handle such false positive results? Shall we wait for some time
> after migration, log into the VM, do some work or run a known-good
> test, get the output, and report whether the VM is alive?
>
I don't think it's a false PASS.
It seems the test was able to ssh into the guest and run a command on the guest.
Currently we only run migration once (round-trip). I think we should run
migration more than once (using iterations). If the guest crashes due to
migration, it would fail the following rounds of migration.
Sorry for the late reply,
Uri.
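Uri's suggestion above -- repeat the migration round-trip several times so that a guest crashed by migration fails a later round -- could be sketched roughly as follows. This is a hypothetical, self-contained illustration of the control flow only: `FakeVM` and its methods are stand-ins for the real kvm_runtest_2 VM helpers, not the actual kvm-autotest API.

```python
# Sketch of iterated migration: re-check guest health after every round,
# so a crash caused by migration is caught by a subsequent round.
# FakeVM is a toy stand-in for the kvm_runtest_2 VM object (assumption).

class FakeVM:
    """Toy guest that survives the first migration but crashes on the second."""
    def __init__(self):
        self.migrations = 0
        self.alive = True

    def migrate(self):
        self.migrations += 1
        if self.migrations >= 2:      # simulate a migration-induced crash
            self.alive = False

    def remote_login_ok(self):
        return self.alive             # ssh login succeeds only if guest is up


def run_migration_test(vm, iterations):
    """Return (passed, rounds_completed); fail as soon as login fails."""
    for i in range(1, iterations + 1):
        vm.migrate()
        if not vm.remote_login_ok():
            return False, i           # crash detected in this round
    return True, iterations


# With a single round-trip the crash goes unnoticed; with three it is caught.
single = run_migration_test(FakeVM(), 1)
multi = run_migration_test(FakeVM(), 3)
```

The point of the sketch is the loop structure: a single round-trip reports `(True, 1)` even though the guest dies on the next migration, whereas three rounds fail at round 2.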
* Re: Fwd: kvm-autotest: False PASS results
2009-06-01 15:03 ` Fwd: kvm-autotest: False PASS results Uri Lublin
@ 2009-06-02 7:12 ` sudhir kumar
0 siblings, 0 replies; 2+ messages in thread
From: sudhir kumar @ 2009-06-02 7:12 UTC (permalink / raw)
To: Uri Lublin; +Cc: kvm-devel
On Mon, Jun 1, 2009 at 8:33 PM, Uri Lublin <uril@redhat.com> wrote:
> On 05/10/2009 08:15 PM, sudhir kumar wrote:
>>
>> Hi Uri,
>> Any comments?
>>
>>
>> ---------- Forwarded message ----------
>> From: sudhir kumar<smalikphy@gmail.com>
>>
>> [original report and test logs quoted in full -- snipped; see above]
>
>
> I don't think it's a False PASS.
> It seems the test was able to ssh into the guest, and run a command on the
> guest.
>
> Currently we only run migration once (round-trip). I think we should run
> migration more than once (using iterations). If the guest crashes due to
> migration, it would fail following rounds of migration.
Also, I would like to have some scripts, like basic_test.py, that would
be executed inside the guest to check its health more thoroughly.
Though this would again need different scripts for Windows/Linux/Mac
etc. Do you agree?
>
> Sorry for the late reply,
It's OK. Thanks for the response.
> Uri.
>
--
Sudhir Kumar
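The basic_test.py idea above could look something like the sketch below. The individual checks are illustrative assumptions, not part of kvm-autotest, and (per the Windows/Linux/Mac caveat) a real version would need per-OS variants; in a real run the script would be copied into the guest and its exit status reported back over ssh.

```python
# Rough sketch of a guest-side health-check script (hypothetical).
# Each probe is cheap and portable; the harness only needs the exit status.
import os
import subprocess
import sys
import tempfile

def check_health():
    """Run a few cheap liveness probes; return a list of (name, passed)."""
    results = []
    # 1. Filesystem still writable (a crashed guest often ends up read-only).
    probe = os.path.join(tempfile.gettempdir(), "health_probe")
    try:
        with open(probe, "w") as f:
            f.write("ok")
        os.remove(probe)
        results.append(("fs_writable", True))
    except OSError:
        results.append(("fs_writable", False))
    # 2. Can we still spawn a new process? A wedged guest often cannot.
    rc = subprocess.call([sys.executable, "-c", "pass"])
    results.append(("can_spawn", rc == 0))
    return results

def healthy(results):
    return all(passed for _, passed in results)

if __name__ == "__main__":
    # Exit non-zero on any failed check so the test harness can report it.
    sys.exit(0 if healthy(check_health()) else 1)
```

Reporting per-check results rather than a single boolean would let the harness log *which* probe failed, which helps distinguish a crashed guest from, say, a full disk.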