From mboxrd@z Thu Jan 1 00:00:00 1970
From: George Dunlap
Subject: Re: Call to write tests for osstest
Date: Wed, 14 Aug 2013 16:03:09 +0100
Message-ID: <520B9C2D.6080601@eu.citrix.com>
References: <1369957247.20130808193221@eikelenboom.it>
 <1376047194.19531.112.camel@Solace>
 <676077209.20130809134642@eikelenboom.it>
 <20130809120008.GF18798@zion.uk.xensource.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Content-Transfer-Encoding: 7bit
Return-path:
In-Reply-To: <20130809120008.GF18798@zion.uk.xensource.com>
List-Unsubscribe: ,
List-Post:
List-Help:
List-Subscribe: ,
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Wei Liu
Cc: Sander Eikelenboom, Dario Faggioli, "xen-devel@lists.xen.org"
List-Id: xen-devel@lists.xenproject.org

On 09/08/13 13:00, Wei Liu wrote:
> On Fri, Aug 09, 2013 at 01:46:42PM +0200, Sander Eikelenboom wrote:
>> Friday, August 9, 2013, 1:19:54 PM, you wrote:
>>
>>> On gio, 2013-08-08 at 19:32 +0200, Sander Eikelenboom wrote:
>>>> * Some performance testing (network, block, cpu/mem/fork/real apps
>>>> benchmarks, some other metrics)
>>>> - perhaps make separate graphs of these, so one can see performance
>>>> increases or decreases over a larger time frame, and see around which
>>>> commits those occurred.
>>>> - only after all basic tests succeeded and a push was done.
>>>>
>>> We are after this too. I am looking at how to make it possible to do
>>> something like that on top of OSSTest.
>>> Perf benchmarking is a little bit different from regression
>>> smoke-testing, but still I think (hope? :-)) that most of the
>>> infrastructure can be reused.
>> Yes, it does require keeping the rest of the circumstances the same
>> for a longer period of time.
>> But it could be quite valuable; slowly introduced and minor
>> performance regressions are hard to discover.
>>
>>> Also, what hardware to use and how to properly schedule these kinds
>>> of "tests" is something that needs a bit more thinking/discussion, I
>>> think.
>> Yes, since you have to keep the "environment" the same, the machine
>> should only run the perf tests at that time.
>>
> This worries me. It is really hard to keep the "environment" the same.
> AIUI network / block etc. performance can be affected by other kernel
> subsystems. The kernel config could also have an impact on
> performance.

The same is true for functional regressions -- a bug could be introduced by Xen, qemu, or the kernel. The tester already keeps track of this, as far as I know, and only moves one thing at a time.

 -George
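The "slowly introduced" regressions mentioned above are exactly the kind that per-push pass/fail checks miss. As a minimal sketch (not osstest code; the function and threshold are hypothetical), comparing the mean of a recent window of per-push benchmark scores against an earlier baseline window makes gradual drift visible even when no single push looks anomalous:

```python
# Hypothetical sketch, not part of osstest: detect a gradual performance
# regression from a history of per-push benchmark scores (higher = better).

def regression_detected(scores, window=5, threshold=0.05):
    """Return True if the mean of the last `window` scores has dropped
    more than `threshold` (as a fraction) below the mean of the first
    `window` scores."""
    if len(scores) < 2 * window:
        return False  # not enough history to compare two windows
    baseline = sum(scores[:window]) / window
    recent = sum(scores[-window:]) / window
    return recent < baseline * (1 - threshold)

# A 3% drop per push is invisible push-to-push but clear over 12 pushes:
history = [100 * (0.97 ** i) for i in range(12)]
print(regression_detected(history))  # True
```

The window/threshold values are placeholders; in practice they would depend on the benchmark's run-to-run noise, which is why the thread's point about keeping the hardware and environment fixed matters so much.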