From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ryan Harper
Subject: Re: [ANNOUNCE] kvm-autotest
Date: Sat, 12 Jul 2008 10:31:32 -0500
Message-ID: <20080712153132.GR4188@us.ibm.com>
References: <48709B6D.6030300@qumranet.com> <20080709154357.GA6217@dmt.cnet> <4875FC00.2090205@qumranet.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: Marcelo Tosatti , kvm@vger.kernel.org, Dror Russo
To: Uri Lublin
Return-path:
Received: from e6.ny.us.ibm.com ([32.97.182.146]:41183 "EHLO e6.ny.us.ibm.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1753740AbYGLPb7
	(ORCPT ); Sat, 12 Jul 2008 11:31:59 -0400
Received: from d01relay04.pok.ibm.com (d01relay04.pok.ibm.com [9.56.227.236])
	by e6.ny.us.ibm.com (8.13.8/8.13.8) with ESMTP id m6CFYBFb003966
	for ; Sat, 12 Jul 2008 11:34:11 -0400
Received: from d01av02.pok.ibm.com (d01av02.pok.ibm.com [9.56.224.216])
	by d01relay04.pok.ibm.com (8.13.8/8.13.8/NCO v9.0) with ESMTP id m6CFVab6167224
	for ; Sat, 12 Jul 2008 11:31:36 -0400
Received: from d01av02.pok.ibm.com (loopback [127.0.0.1])
	by d01av02.pok.ibm.com (8.12.11.20060308/8.13.3) with ESMTP id m6CFVZjY023565
	for ; Sat, 12 Jul 2008 11:31:36 -0400
Content-Disposition: inline
In-Reply-To: <4875FC00.2090205@qumranet.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

* Uri Lublin [2008-07-10 07:42]:
> Marcelo Tosatti wrote:
> > On Sun, Jul 06, 2008 at 01:16:13PM +0300, Uri Lublin wrote:
> >>
> >> The test framework is based on autotest (
> >> http://test.kernel.org/autotest ).
> >> Currently we are only using client tests; later we may want to use
> >> server tests for more complicated tests.
> >
> > This is looking great. Easy to use and fast. A few comments:
> >
> > - As you mention, it should reuse the server/client model for running
> >   tests inside guests.
> >   I hacked up a "kvm_autotest" test that basically does:
> >
> > tests = ["linus_stress", "bash_shared_mapping", "rmaptest", "tsc",
> >          "scrashme", "isic", "sleeptest", "libhugetlbfs", "..."]
> >
> > vm.ssh.scp_to_remote(autotest_tarball, '/root')
> > (s,o) = vm.ssh.ssh('tar zvxf kvm-autotest.tar.gz')
> > for i in range(0, len(tests)):
> >     (s,o) = vm.ssh.ssh('kvm-autotest/client/bin/autotest ' +
> >                        'kvm-autotest/client/tests/' + tests[i] +
> >                        '/control')
> >     print(o)
> >
> > Which poorly replicates what the client/server infrastructure already
> > provides. IMO it's a waste of time to write specialized client
> > tests (other than virt-specific ones).
>
> You see guests as clients and the host as the server.
> We were thinking of the host as a client, with multi-host operations
> done by a server; guest operations would be done using ssh (for Linux
> guests), as in your example above. You make a good point that we can
> use the server/client infrastructure for guest operations. As it is
> simpler to write autotest client tests, and we thought most of the
> tests would be run as client tests, we want to postpone the server
> tests and focus on adding tests and guests to the matrix.

It's definitely worth looking at the autotest server code/samples. There
is already code in-tree to build and deploy kvm via autotest server
mode, with which a single machine can drive building and installing kvm,
creating guests on N clients, directing each guest image to run various
autotest client tests, and collecting all of the results. See
autotest/server/samples/*kvm*. A proper server setup is a little
involved[1], but much more streamlined these days.

> You probably do not need to change the migration test. I think we need
> to run some load on the guest (we'd probably have many load options,
> and the user will choose/configure which one to use) before migration
> starts and keep it running during the migration process.
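Agreed. As a rough, VM-free sketch of that shape (all of the names here
are mine, not kvm-autotest's): keep a workload alive on a background
thread for the whole duration of the migration, and only afterwards ask
it to stop, reporting whether both the migration and the load survived:

```python
import threading

def migrate_under_load(load_fn, migrate_fn):
    # Run migrate_fn() while load_fn(stop_event) keeps generating load;
    # the load is only told to stop after the migration has finished.
    # Returns (migration_ok, load_ok).
    stop = threading.Event()
    load_errors = []

    def loader():
        try:
            load_fn(stop)
        except Exception as exc:  # the load died mid-migration
            load_errors.append(exc)

    t = threading.Thread(target=loader)
    t.start()
    try:
        migration_ok = bool(migrate_fn())
    finally:
        stop.set()   # only now may the load wind down
        t.join()
    return migration_ok, not load_errors
```

In the real test, load_fn would presumably kick off something like
bash_shared_mapping in the guest over ssh and migrate_fn would drive the
monitor's migrate command; here both are plain callables so only the
control flow is exercised.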
> >
> > - Currently it's difficult to debug client test failures inside
> >   guests, since the VM and its image are destroyed. Perhaps the
> >   client/server model handles error handling/reporting much more
> >   nicely.
>
> Agreed, we thought of that, but it's not cooked yet. Currently we
> always clean up. We should not remove the (temporary) image if the
> test fails. Should we keep the guest running upon failure? Currently
> we continue to test the next guest. We should probably have a
> configuration flag for that too.

I suppose it depends on the failure. If we're capturing the boot log,
then for boot failures I don't see any reason to keep the guest
running. If we succeed in booting but fail at some later point, then I
think it makes sense to keep it running.

-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
ryanh@us.ibm.com
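P.S. A toy sketch of the cleanup policy above (the function name and
keep_on_failure flag are made up for illustration, not autotest's):

```python
def keep_vm_after_test(stage, passed, keep_on_failure=True):
    # Decide whether to leave the guest (and its temporary image)
    # around after a test. stage is the phase the run reached, e.g.
    # "boot" or "test". A boot failure is already covered by the
    # captured boot log, so the guest is not kept; failures past boot
    # are kept for debugging when the (made-up) keep_on_failure
    # configuration flag is set.
    if passed or not keep_on_failure:
        return False
    return stage != "boot"
```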