From: Kevin Wolf
Date: Fri, 09 Mar 2012 13:13:39 +0100
Subject: Re: [Qemu-devel] [RFC] Future goals for autotest and virtualization tests
To: Anthony Liguori
Cc: Lucas Meneghel Rodrigues, Scott Zawalski, Ademar Reis, QEMU devel, Cleber Rosa

On 09.03.2012 12:59, Anthony Liguori wrote:
> On 03/09/2012 05:14 AM, Kevin Wolf wrote:
>> On 09.03.2012 00:51, Ademar Reis wrote:
>>> On Thu, Mar 08, 2012 at 05:21:44PM -0600, Anthony Liguori wrote:
>>>>> Plus it's not unconditional: the test runner will report tests
>>>>> SKIPPED if a dependency is not present.
>>>>
>>>> But then the tests aren't run, so if most developers didn't have it
>>>> installed, and most tests were written with it, most developers
>>>> wouldn't be running most tests, which defeats the purpose, no?
>>>
>>> Part of a TDD approach is to have build and test bots frequently
>>> running tests on multiple platforms with different configurations.
>>>
>>> You can't expect developers to run all tests all the time.
>>
>> I think this is one of the most important points: Not all developers
>> must run all the tests all the time.
>>
>> Actually, Anthony agreed with me when I said that developers should
>> run some sanity tests for all of qemu and maybe a few more tests for
>> the subsystems they're touching.
>
> And a small number of randomly chosen test cases.
>
> We don't want to have test cases that never run under normal
> circumstances or else they're prone to break. That's why I've talked a
> lot about keeping 'make check' time bound.

This sounds horrible. make check results must be reproducible, not
depend on a randomly chosen set. If you do it like this, a passed make
check means exactly _nothing_.

>> I agree that it would be bad to have an autotest dependency for those
>> basic tests that everyone should run and that should take a few
>> minutes at most.
>>
>> For the rest of the test cases, however, not everyone needs to run
>> them, and I think an external dependency (that is reasonably easy to
>> satisfy) is not a problem there.
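To make the SKIPPED behaviour mentioned above a bit more concrete, here
is a minimal sketch in plain Python unittest terms. This is not
autotest's actual API, and the test and binary names are made up; it is
only meant to illustrate "skip instead of fail when a dependency is
missing":

import shutil
import subprocess
import unittest

# Made-up example: the test needs an external qemu-img binary. If it is
# not installed, the runner reports the test as skipped rather than
# failed.
QEMU_IMG = shutil.which("qemu-img")

class QcowCreateTest(unittest.TestCase):
    @unittest.skipUnless(QEMU_IMG, "qemu-img not found in PATH")
    def test_create_qcow2(self):
        subprocess.check_call([QEMU_IMG, "create", "-f", "qcow2",
                               "/tmp/test.qcow2", "64M"])

if __name__ == "__main__":
    unittest.main()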
> I'd prefer to avoid external dependencies as it just encourages people
> not to test. There may be exceptions for certain types of tests, but
> it should be an exception, not the rule.
>
>> A test bot would be great, but even if people just run them
>> occasionally by hand, that would already detect bugs that are
>> currently left in the code for months. If maintainers do it before
>> pushing code into master, you'll even catch everything before it goes
>> into master. This is as good as it gets.
>
> We'll integrate make check into buildbot, which currently does look at
> submaintainer trees.

But make check will never be the full thing. And if you want to
integrate make check into automated testing, then choosing a random
subset of tests for each run is an even worse idea than it already is.

I believe the only sane thing to do is to distinguish between quick
sanity tests that are run by make check, and a full test suite that is
not run by every developer, but ideally by some test bots.

Kevin
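PS: A rough sketch of the quick/full split I have in mind, again in
plain Python unittest terms. The tier names and test cases are made up
and nothing like this exists in qemu.git; it is only meant to show the
structure: 'make check' would run the quick tier, the test bots would
run everything.

import sys
import unittest

# Made-up tiers: "quick" is the time-bound set every developer runs via
# 'make check'; "full" adds the long-running cases for the test bots.

class QuickSanity(unittest.TestCase):
    def test_qcow2_open(self):
        pass  # placeholder for a fast, dependency-free sanity check

class FullSuiteOnly(unittest.TestCase):
    def test_migration_stress(self):
        pass  # placeholder for a long-running case only the bots run

def quick_suite():
    return unittest.TestLoader().loadTestsFromTestCase(QuickSanity)

def full_suite():
    suite = unittest.TestSuite()
    loader = unittest.TestLoader()
    suite.addTests(loader.loadTestsFromTestCase(QuickSanity))
    suite.addTests(loader.loadTestsFromTestCase(FullSuiteOnly))
    return suite

if __name__ == "__main__":
    chosen = full_suite() if "--full" in sys.argv else quick_suite()
    result = unittest.TextTestRunner(verbosity=2).run(chosen)
    sys.exit(0 if result.wasSuccessful() else 1)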