* [Qemu-devel] KVM call minutes for Apr 5
@ 2011-04-05 15:07 Chris Wright
2011-04-05 15:29 ` Stefan Hajnoczi
2011-04-05 16:01 ` [Qemu-devel] spice in kvm-autotest [was: Re: KVM call minutes for Apr 5] Alon Levy
0 siblings, 2 replies; 17+ messages in thread
From: Chris Wright @ 2011-04-05 15:07 UTC (permalink / raw)
To: kvm; +Cc: qemu-devel
KVM Forum
- save the date is out, cfp will follow later this week
- abstracts due in 6wks, 2wk review period, notifications by end of May
Improving process to scale project
- Trivial patch bot
- Sub-maintainership
Trivial patch monkeys^Wteam
- small/simple patches posted can fall through the cracks (esp. for
areas that aren't well maintained)
- patches should be simple and easy to review
- aiming to gather a team, so that the position can rotate
- patch submitters can rest assured their patches won't be lost
- Stefan and possibly Mike Roth are volunteering to get this started
- Cc: qemu-trivial@nongnu.org to send patches to the Trivial patch monkey
- details here:
http://wiki.qemu.org/Contribute/TrivialPatches
Sub-maintainership
- have MAINTAINERS file
- need to add git tree URLs
- needs another pass to make sure there are no missing subsystems
- make it clearer how maintained the subsystems are
- adding a wiki page to show how to become a subsystem maintainer
- one valuable step...write testing around the subsystem
- means you've had to learn the subsystem (builds expertise)
- allows for regression testing the subsystem (esp. validating new patches)
- sub-maintainers sometimes disappear
- can add another maintainer
- actively poke the maintainer when patches are languishing
- if you're going to be away, be sure to let list or backup know
- systematic patch tracking would help, patchwork doesn't quite cut it
- who receives pull request
- list + blue swirl/aurelien for tcg, anthony picking up plenty of
other bits
- infrastructure subsystems (qdev, migration, etc..)
- big invasive changes done externally, effective flag day for full merge
- subsystem localized change (e.g. vmstate fix for a specific device)
maintainers can work it out, be sure to have both
- facilitating patch review and hopefully improving subsystem over time
kvm-autotest
- roadmap...refactor to centralize testing (handle the xen-autotest split off)
- internally at RH, lmr and cleber maintain autotest server to test
branches (testing qemu.git daily)
- have good automation for installs and testing
- seems more QA focused than developers
- plenty of benefit for developers, so lack of developer use partly
cultural/visibility...
- kvm-autotest team always looking for feedback to improve for
developer use case
- kvm-autotest day to have folks use it, write test, give feedback?
- startup cost is/was steep, the day might be too much handholding
- install-fest? (to get it installed and up and running)
- buildbot or autotest for testing patches to verify building and working
- one goal is to reduce mailing list load (patch resubmission because
they haven't handled basic cases that buildbot or autotest would have
caught)
- fedora-virt test day coming up on April 14th. lucas will be on hand and
we can piggy back on that to include kvm-autotest install and virt testing
- kvm autotest run before qemu pull request and post merge to track
regressions, more frequent testing helps developers see breakage
quickly
- qemu.git daily testing already, only the "sanity" test subset
- run more comprehensive "stable" set of tests on weekends
- one issue is the large number of known failures, need to make these
easier to identify (and fix the failures one way or another)
- create database and verify (regressions) against that
- red/yellow/green (yellow shows area was already broken)
- autotest can be run against server, not just on laptop
- how to do remote client display testing (e.g. spice client)
- dogtail and LDTP
- graphics could be tested w/ screenshot compares
- WHQL testing automated as well
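The screenshot-compare idea above can be sketched as a small helper. This is a minimal illustration of the technique (flat pixel lists with a per-channel tolerance), not the actual kvm-autotest implementation:

```python
def images_match(pixels_a, pixels_b, tolerance=0, max_bad_ratio=0.0):
    """Compare two same-sized images given as flat lists of (R, G, B)
    tuples.  Returns True when the fraction of pixels whose per-channel
    difference exceeds `tolerance` is at most `max_bad_ratio`."""
    if len(pixels_a) != len(pixels_b):
        return False
    bad = 0
    for (r1, g1, b1), (r2, g2, b2) in zip(pixels_a, pixels_b):
        if max(abs(r1 - r2), abs(g1 - g2), abs(b1 - b2)) > tolerance:
            bad += 1
    return bad <= max_bad_ratio * len(pixels_a)
```

In practice you would decode the screendumps qemu produces (e.g. PPM files) into such pixel lists, and allow a non-zero tolerance when the display path is lossy.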
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [Qemu-devel] KVM call minutes for Apr 5
2011-04-05 15:07 [Qemu-devel] KVM call minutes for Apr 5 Chris Wright
@ 2011-04-05 15:29 ` Stefan Hajnoczi
2011-04-05 17:37 ` Lucas Meneghel Rodrigues
2011-04-05 16:01 ` [Qemu-devel] spice in kvm-autotest [was: Re: KVM call minutes for Apr 5] Alon Levy
1 sibling, 1 reply; 17+ messages in thread
From: Stefan Hajnoczi @ 2011-04-05 15:29 UTC (permalink / raw)
To: Lucas Meneghel Rodrigues; +Cc: Chris Wright, paul.larson, qemu-devel, kvm
On Tue, Apr 5, 2011 at 4:07 PM, Chris Wright <chrisw@redhat.com> wrote:
> kvm-autotest
> - roadmap...refactor to centralize testing (handle the xen-autotest split off)
> - internally at RH, lmr and cleber maintain autotest server to test
> branches (testing qemu.git daily)
> - have good automation for installs and testing
> - seems more QA focused than developers
> - plenty of benefit for developers, so lack of developer use partly
> cultural/visibility...
> - kvm-autotest team always looking for feedback to improve for
> developer use case
> - kvm-autotest day to have folks use it, write test, give feedback?
> - startup cost is/was steep, the day might be too much handholding
> - install-fest? (to get it installed and up and running)
> - buildbot or autotest for testing patches to verify building and working
> - one goal is to reduce mailing list load (patch resubmission because
> they haven't handled basic cases that buildbot or autotest would have
> caught)
> - fedora-virt test day coming up on April 14th. lucas will be on hand and
> we can piggy back on that to include kvm-autotest install and virt testing
> - kvm autotest run before qemu pull request and post merge to track
> regressions, more frequent testing helps developers see breakage
> quickly
> - qemu.git daily testing already, only the "sanity" test subset
> - run more comprehensive "stable" set of tests on weekends
> - one issue is the large number of known failures, need to make these
> easier to identify (and fix the failures one way or another)
> - create database and verify (regressions) against that
> - red/yellow/green (yellow shows area was already broken)
> - autotest can be run against server, not just on laptop
Features that I think are important for a qemu.git kvm-autotest:
* Public results display (sounds like you're working on this)
* Public notifications of breakage, qemu.git/master failures to
qemu-devel mailing list.
* A one-time contributor can get their code tested. No requirement to
set up a server because contributors may not have the resources.
Perhaps kvm-autotest is a good platform for the automated testing of
ARM TCG. Paul is CCed, I recently saw the Jenkins qemu build and boot
tests he has set up. Lucas, do you have ideas on how these efforts
can work together to bring testing to upstream QEMU?
http://validation.linaro.org/jenkins/job/qemu-boot-images/
Stefan
* [Qemu-devel] spice in kvm-autotest [was: Re: KVM call minutes for Apr 5]
2011-04-05 15:07 [Qemu-devel] KVM call minutes for Apr 5 Chris Wright
2011-04-05 15:29 ` Stefan Hajnoczi
@ 2011-04-05 16:01 ` Alon Levy
2011-04-05 16:27 ` [Qemu-devel] " Lucas Meneghel Rodrigues
1 sibling, 1 reply; 17+ messages in thread
From: Alon Levy @ 2011-04-05 16:01 UTC (permalink / raw)
To: Lucas Meneghel Rodrigues; +Cc: Swapna, qemu-devel, kvm
On Tue, Apr 05, 2011 at 08:07:03AM -0700, Chris Wright wrote:
[snip]
> kvm-autotest
> - roadmap...refactor to centralize testing (handle the xen-autotest split off)
> - internally at RH, lmr and cleber maintain autotest server to test
> branches (testing qemu.git daily)
> - have good automation for installs and testing
> - seems more QA focused than developers
> - plenty of benefit for developers, so lack of developer use partly
> cultural/visibility...
> - kvm-autotest team always looking for feedback to improve for
> developer use case
> - kvm-autotest day to have folks use it, write test, give feedback?
> - startup cost is/was steep, the day might be too much handholding
> - install-fest? (to get it installed and up and running)
> - buildbot or autotest for testing patches to verify building and working
> - one goal is to reduce mailing list load (patch resubmission because
> they haven't handled basic cases that buildbot or autotest would have
> caught)
> - fedora-virt test day coming up on April 14th. lucas will be on hand and
> we can piggy back on that to include kvm-autotest install and virt testing
> - kvm autotest run before qemu pull request and post merge to track
> regressions, more frequent testing helps developers see breakage
> quickly
> - qemu.git daily testing already, only the "sanity" test subset
> - run more comprehensive "stable" set of tests on weekends
> - one issue is the large number of known failures, need to make these
> easier to identify (and fix the failures one way or another)
> - create database and verify (regressions) against that
> - red/yellow/green (yellow shows area was already broken)
> - autotest can be run against server, not just on laptop
> - how to do remote client display testing (e.g. spice client)
> - dogtail and LDTP
> - graphics could be tested w/ screenshot compares
> - WHQL testing automated as well
Screenshots are already there, and they are a great start. But you can't
really do testing if you aren't recreating the same environment, and a
client-server test where no client is connected, while being a good test, doesn't
cover the case where a client is connected :)
So I was basically talking about the added requirement of creating a client
connection (one or more) to a single vm.
Note that I wasn't asking anyone to develop this - I'm just asking if patches
in that direction would be accepted/interesting. We (well, Swapna, cc-ed) are
still working on deciding exactly how to do automated testing as a project.
Regarding the dogtail/LDTP issue, they are about specific tests run inside the
guest, and they are certainly something we would leverage. But like I mentioned
on the call, there is a whole suite of whql tests that are display specific,
and don't require anything new. In fact, a few months ago I added support for
autotest to run one of them, resizing to all the possible modes - so I know I
don't need dogtail for significant portions of our testing. (sorry, no git link
- I'll clean it up and post; it was done 10 months ago so it probably won't
apply cleanly :)
For some ideas about what we are interested in see
http://spice-space.org/page/SpiceAutomatedTesting. (just the Requirements section).
* [Qemu-devel] Re: spice in kvm-autotest [was: Re: KVM call minutes for Apr 5]
2011-04-05 16:01 ` [Qemu-devel] spice in kvm-autotest [was: Re: KVM call minutes for Apr 5] Alon Levy
@ 2011-04-05 16:27 ` Lucas Meneghel Rodrigues
2011-04-05 17:08 ` Alon Levy
0 siblings, 1 reply; 17+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-04-05 16:27 UTC (permalink / raw)
To: Alon Levy; +Cc: Swapna, qemu-devel, kvm
On Tue, 2011-04-05 at 19:01 +0300, Alon Levy wrote:
> screenshots are already there, and they are a great start. But you can't
> really do testing if you aren't recreating the same environment, and having
> a client server where there is no client, while being a good test, doesn't
> cover the case where the client is connected :)
>
> So I was basically talking about the added requirement of creating a client
> connection (one or more) to a single vm.
Yeah, I think now I've got the idea. We can certainly think of some
strategy to coordinate test execution in VMs on one host, and have
another bare-metal machine that opens clients to the VMs on that host.
Autotest has infrastructure to coordinate tests among multiple bare
metal machines, we call that infrastructure 'barriers'. It works
basically through TCP socket communication, we have an autotest library
for that, and an API that allows coordination among multiple machines.
We use that for example, to coordinate cross host migration tests.
So, it might be tricky (especially setting up the environment on the
machine that will open displays and handle all the client-opening
part[1]), but certainly doable IMHO.
[1] This is the part where the LDTP thing would come into play - we need
infrastructure to support opening the display, starting the graphical
clients and interacting with them in an automated fashion; LDTP and
dogtail are the means to do that.
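The barrier rendezvous described above can be illustrated with a toy TCP version. This is NOT the real autotest barrier API, just a sketch of the idea: each participant blocks until the master has seen all expected peers check in, then everyone is released at once.

```python
import socket
import threading

def barrier_master(port, n_peers):
    """Accept n_peers connections, then release all of them at once."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(n_peers)
    # Block until every expected peer has connected.
    conns = [srv.accept()[0] for _ in range(n_peers)]
    for c in conns:
        c.sendall(b"go\n")  # all peers arrived: release them
        c.close()
    srv.close()

def barrier_wait(port):
    """Block until the master signals that everyone has arrived."""
    s = socket.create_connection(("127.0.0.1", port))
    msg = s.recv(16)
    s.close()
    return msg.strip() == b"go"
```

In a real multi-host test the master would run on one machine and each test host would call the wait side before proceeding to the synchronized step (e.g. starting migration on one host while a client connects from another).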
> Note that I wasn't asking anyone to develop this - I'm just asking if patches
> in that direction would be accepted/interesting. We (well, Swapna, cc-ed) are
> still working on deciding exactly how to do automated testing as a project.
Sure, absolutely, I've been talking to Swapna about this.
> Regarding the dogtail/LDTP issue, they are about specific tests run inside the
> guest, and they are certainly something we would leverage. But like I mentioned
> on the call, there is a whole suite of whql tests that are display specific,
> and don't require anything new. In fact, a few months ago I added support for
> autotest to run one of them, resizing to all the possible modes - so I know I
> don't need dogtail for significant portions of our testing. (sorry, no git link
> - I'll clean it up and post, it's been done 10 months ago so probably won't
> cleanly apply :)
The thing about WHQL is that it has its own test suite coordination
program (DTM), which has to run on a separate machine/VM. So they have
all the infrastructure in place there. If we need graphical clients
started and controlled on a Linux bare-metal machine, I am afraid we'll
have to resort to those GUI test automation frameworks I mentioned. If
our test is geared more towards Windows clients, then they won't be
needed, sure.
> For some ideas about what we are interested in see
> http://spice-space.org/page/SpiceAutomatedTesting. (just the Requirements section).
I took a look at it
* [Qemu-devel] Re: spice in kvm-autotest [was: Re: KVM call minutes for Apr 5]
2011-04-05 16:27 ` [Qemu-devel] " Lucas Meneghel Rodrigues
@ 2011-04-05 17:08 ` Alon Levy
2011-04-05 18:03 ` Lucas Meneghel Rodrigues
2011-04-05 18:08 ` Anthony Liguori
0 siblings, 2 replies; 17+ messages in thread
From: Alon Levy @ 2011-04-05 17:08 UTC (permalink / raw)
To: Lucas Meneghel Rodrigues; +Cc: Swapna, qemu-devel, kvm
On Tue, Apr 05, 2011 at 01:27:48PM -0300, Lucas Meneghel Rodrigues wrote:
> On Tue, 2011-04-05 at 19:01 +0300, Alon Levy wrote:
> > screenshots are already there, and they are a great start. But you can't
> > really do testing if you aren't recreating the same environment, and having
> > a client server where there is no client, while being a good test, doesn't
> > cover the case where the client is connected :)
> >
> > So I was basically talking about the added requirement of creating a client
> > connection (one or more) to a single vm.
>
> Yeah, I think now I've got the idea. We can certainly think of some
> strategy to coordinate test execution in vms in one host, and have
> another bare metal machine that opens clients to the vms on that host.
>
> Autotest has infrastructure to coordinate tests among multiple bare
> metal machines, we call that infrastructure 'barriers'. It works
> basically through TCP socket communication, we have an autotest library
> for that, and an API that allows coordination among multiple machines.
> We use that for example, to coordinate cross host migration tests.
>
> So, it might be tricky (specially setting up the environment on the
> machine that will open displays and handle all the client opening
> part[1]), but certainly doable IMHO.
>
> [1] This is the part where the LDTP thing would come to place - we need
> infrastructure to support opening the display, starting the graphical
> clients and interacting with them in an automated fashion, LDTP and
> dogtail are the means to do that.
This might be required just for stuff like automating applications in the guest
that have no existing infrastructure, but I think we should try to avoid this
because it's notoriously difficult to create and maintain GUI tests. WHQL doesn't
need this.
>
> > Note that I wasn't asking anyone to develop this - I'm just asking if patches
> > in that direction would be accepted/interesting. We (well, Swapna, cc-ed) are
> > still working on deciding exactly how to do automated testing as a project.
>
> Sure, absolutely, I've been talking to Swapna about this.
>
> > Regarding the dogtail/LDTP issue, they are about specific tests run inside the
> > guest, and they are certainly something we would leverage. But like I mentioned
> > on the call, there is a whole suite of whql tests that are display specific,
> > and don't require anything new. In fact, a few months ago I added support for
> > autotest to run one of them, resizing to all the possible modes - so I know I
> > don't need dogtail for significant portions of our testing. (sorry, no git link
> > - I'll clean it up and post, it's been done 10 months ago so probably won't
> > cleanly apply :)
>
> The thing about WHQL is that it has its own test suite coordination
> program (DTM), that has to run on a separate machine/VM. So they have
> all infrastructure in place there. If we need graphical clients being
> started and controlled on a linux bare metal machine, I am afraid we'll
> have to resort to those GUI test automation frameworks I mentioned. if
> our test is geared more towards windows clients, then they won't be
> needed, sure.
Oh, but the added requirement is just launching the client. So - say you already
have a guest running the WHQL test suite. The only addition would be to also
have a spice client connected to that VM at the time the WHQL test is running. You
don't need to do anything in that client - don't send it mouse events or keyboard
events. Just let spice-server do what it does in this case, which is sending
the spice protocol messages caused by the guest activity, which only happens when
a client is actually connected.
Another thing - the client doesn't have to run on a separate VM, or even in a separate
process. An additional VM seems like overkill: you are already building the qemu user
space from source or a specific tarball, and one of the patches on my private tree adds
building spice from git, which includes the client and server. But we also have Python
bindings for our newer spice-gtk client, so you could even import it from the autotest
Python process. The whole barrier thing seems way overkill for this.
>
> > For some ideas about what we are interested in see
> > http://spice-space.org/page/SpiceAutomatedTesting. (just the Requirements section).
>
> I took a look at it
>
* Re: [Qemu-devel] KVM call minutes for Apr 5
2011-04-05 15:29 ` Stefan Hajnoczi
@ 2011-04-05 17:37 ` Lucas Meneghel Rodrigues
2011-04-07 10:03 ` Stefan Hajnoczi
0 siblings, 1 reply; 17+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-04-05 17:37 UTC (permalink / raw)
To: Stefan Hajnoczi; +Cc: Chris Wright, paul.larson, qemu-devel, kvm
On Tue, 2011-04-05 at 16:29 +0100, Stefan Hajnoczi wrote:
> Features that I think are important for a qemu.git kvm-autotest:
> * Public results display (sounds like you're working on this)
^ Yes, we hope to get this set up soon.
> * Public notifications of breakage, qemu.git/master failures to
> qemu-devel mailing list.
^ The challenge is mainly to get enough data to distinguish a new
breakage from a known issue. It's more about having historical data
from test results than anything else, IMO.
> * A one-time contributor can get their code tested. No requirement to
> set up a server because contributors may not have the resources.
Coming back to the point that many colleagues made: we need a sort of
'make test' on the qemu trees that would fetch autotest and could set up
basic tests that people could run, and maybe suggest test sets...
The problem I see is that getting guests up and running using configs that
actually matter is not trivial (there are things such as ensuring that
all auxiliary utilities are installed in a distro-agnostic fashion,
having bridges and a DHCP server set up on a possibly disconnected work
laptop, and so on).
So, having a 'no brains involved at all' setup is quite a challenge;
suggestions welcome. Also, downloading ISOs, waiting for guests to
install and running thorough tests won't be fast. So J. Random Developer
might not bother to run tests even if we provide a foolproof,
perfectly automated setup, because at first it'd take a long time to get
the tests run. This is also a challenge.
About the ideas of how a 'make test' feature would work, please give me
some feedback. I picture something like:
-------------------------------------------------------------------
$ make test
Checking out autotest...
1 - Verifying directories (check if the directory structure expected by the default test config is there)
Creating /tmp/kvm_autotest_root/images
Creating /tmp/kvm_autotest_root/isos
Creating /tmp/kvm_autotest_root/steps_data
Do you want to setup NFS mounts or copy isos to some of those dirs? (y/n)
2 - Creating config files from samples (copy the default config samples to actual config files)
Creating config file /home/lmr/Code/autotest-git/client/tests/kvm/build.cfg from sample
Creating config file /home/lmr/Code/autotest-git/client/tests/kvm/cdkeys.cfg from sample
Creating config file /home/lmr/Code/autotest-git/client/tests/kvm/tests_base.cfg from sample
Creating config file /home/lmr/Code/autotest-git/client/tests/kvm/tests.cfg from sample
Creating config file /home/lmr/Code/autotest-git/client/tests/kvm/unittests.cfg from sample
3 - Do you want to test Linux guests? (y/n)
(if y)
Verifying iso (make sure we have the OS ISO needed for the default test set)
Verifying iso Fedora-14-x86_64-DVD.iso
/tmp/kvm_autotest_root/isos/linux/Fedora-14-x86_64-DVD.iso not found or corrupted
Would you like to download it? (y/n)
4 - Do you want to test windows guests? (y/n)
(if y)
Verifying winutils.iso (make sure we have the utility ISO needed for Windows testing)
In order to run the KVM autotests in Windows guests, we provide you an ISO that this script can download
Verifying iso winutils.iso
/tmp/kvm_autotest_root/isos/windows/winutils.iso not found or corrupted
Would you like to download it? (y/n)
5 - Checking for the KVM module (make sure kvm is loaded to accelerate qemu)
Running '/sbin/lsmod'
KVM module loaded
6 - Verify needed packages to get started
Please take a look at the online documentation http://www.linux-kvm.org/page/KVM-Autotest/Client_Install (section 'Install Prerequisite packages')
You can also edit the test config files (see output of step 2 for a list).
7 - What are the areas you want to verify (enter all that apply)
A) Unit tests (does not require installing guests)
B) Sanity (just install and boot guests)
C) Networking
D) Block device
E) Migration
F) Time drift
8 - Did you check your config and are ready to start testing (y/n)
Running tests...
-------------------------------------------------------------------
If someone answered n to both 3) and 4), only the unit tests would be
available to execute.
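The directory and config bootstrap steps in the mock run above could be sketched like this. Paths and file names are taken from the transcript; the helper names are purely illustrative, not actual autotest code:

```python
import os
import shutil

KVM_TEST_ROOT = "/tmp/kvm_autotest_root"

def verify_directories(root=KVM_TEST_ROOT):
    """Step 1: make sure the directory tree the default test
    config expects is actually there, creating what is missing."""
    created = []
    for sub in ("images", "isos", "steps_data"):
        path = os.path.join(root, sub)
        if not os.path.isdir(path):
            os.makedirs(path)
            created.append(path)
            print("Creating %s" % path)
    return created

def create_config_files(config_dir):
    """Step 2: copy *.cfg.sample files to the actual *.cfg files,
    never clobbering configs the user has already edited."""
    created = []
    for name in sorted(os.listdir(config_dir)):
        if not name.endswith(".cfg.sample"):
            continue
        dst = os.path.join(config_dir, name[:-len(".sample")])
        if not os.path.exists(dst):
            shutil.copyfile(os.path.join(config_dir, name), dst)
            created.append(dst)
            print("Creating config file %s from sample" % dst)
    return created
```

Both helpers are idempotent, so re-running 'make test' after an aborted setup would skip whatever is already in place.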
> Perhaps kvm-autotest is a good platform for the automated testing of
> ARM TCG. Paul is CCed, I recently saw the Jenkins qemu build and boot
> tests he has set up. Lucas, do you have ideas on how these efforts
> can work together to bring testing to upstream QEMU?
> http://validation.linaro.org/jenkins/job/qemu-boot-images/
I'd heard about Jenkins before and it is indeed a nice project. What they
do here, from what I could assess browsing the webpage you provided,
is:
1) Build qemu.git every time there are commits
2) Boot pre-made 'pristine' images, one is a lenny arm image and the
other is a linaro arm image.
It is possible to do the same with KVM autotest; it's just a matter of not
performing guest install tests and executing only the boot tests with
pre-made images. What Jenkins does here is an even quicker and shorter
version of our sanity jobs.
About how we can work together, I thought about some possibilities:
1) Modify the jenkins test step to execute a kvm autotest job after the
build, with the stripped down test set. We might gain some extra debug
info, that the current test step does not seem to provide
2) Do the normal test step and if that succeeds, trigger a kvm autotest
job that does more comprehensive testing, such as migration, time drift,
block layer, etc
The funny thing is that KVM autotest has the infrastructure to do the same
as Jenkins does, but Jenkins is highly streamlined for the buildbot use
case (continuous build and integration), and I see that as a very nice
advantage. So I'd rather keep using Jenkins and have KVM autotest plugged
into it conveniently.
* [Qemu-devel] Re: spice in kvm-autotest [was: Re: KVM call minutes for Apr 5]
2011-04-05 17:08 ` Alon Levy
@ 2011-04-05 18:03 ` Lucas Meneghel Rodrigues
2011-04-06 7:50 ` Alon Levy
2011-04-05 18:08 ` Anthony Liguori
1 sibling, 1 reply; 17+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-04-05 18:03 UTC (permalink / raw)
To: Alon Levy; +Cc: Swapna, qemu-devel, kvm
On Tue, 2011-04-05 at 20:08 +0300, Alon Levy wrote:
> On Tue, Apr 05, 2011 at 01:27:48PM -0300, Lucas Meneghel Rodrigues wrote:
> > On Tue, 2011-04-05 at 19:01 +0300, Alon Levy wrote:
>
> > > So I was basically talking about the added requirement of creating a client
> > > connection (one or more) to a single vm.
> >
> > Yeah, I think now I've got the idea. We can certainly think of some
> > strategy to coordinate test execution in vms in one host, and have
> > another bare metal machine that opens clients to the vms on that host.
> >
> > Autotest has infrastructure to coordinate tests among multiple bare
> > metal machines, we call that infrastructure 'barriers'. It works
> > basically through TCP socket communication, we have an autotest library
> > for that, and an API that allows coordination among multiple machines.
> > We use that for example, to coordinate cross host migration tests.
> >
> > So, it might be tricky (specially setting up the environment on the
> > machine that will open displays and handle all the client opening
> > part[1]), but certainly doable IMHO.
> >
> > [1] This is the part where the LDTP thing would come to place - we need
> > infrastructure to support opening the display, starting the graphical
> > clients and interacting with them in an automated fashion, LDTP and
> > dogtail are the means to do that.
>
> This might be required just for stuff like automating applications in the guest
> that have no existing infrastructure, but I think we should try to avoid this
> because it's notoriously difficult to create and maintain gui tests. WHQL doesn't
> need this.
Agreed.
> > > Regarding the dogtail/LDTP issue, they are about specific tests run inside the
> > > guest, and they are certainly something we would leverage. But like I mentioned
> > > on the call, there is a whole suite of whql tests that are display specific,
> > > and don't require anything new. In fact, a few months ago I added support for
> > > autotest to run one of them, resizing to all the possible modes - so I know I
> > > don't need dogtail for significant portions of our testing. (sorry, no git link
> > > - I'll clean it up and post, it's been done 10 months ago so probably won't
> > > cleanly apply :)
> >
> > The thing about WHQL is that it has its own test suite coordination
> > program (DTM), that has to run on a separate machine/VM. So they have
> > all infrastructure in place there. If we need graphical clients being
> > started and controlled on a linux bare metal machine, I am afraid we'll
> > have to resort to those GUI test automation frameworks I mentioned. if
> > our test is geared more towards windows clients, then they won't be
> > needed, sure.
>
> oh, but the added requirement is just launching the client. So - say you already
> have a guest running the whql test suite. The only addition would be to also
> have a spice client connected to that vm at the time the whql test is running. you
> don't need to do anything in that client - don't send it mouse events or keyboard
> events. Just let spice-server do what it does in this case, which would be sending
> the spice protocol messages caused by the guest activity, which only happens when
> a client is actually connected.
OK, I didn't know that it's not necessary to interact with the graphical
client.
> Another thing - the client doesn't have to run on a separate vm, or even on a separate
> process. An additional vm seems like overkill, you are already building the qemu user
> space from source or a specific tarball, one of the patches on my private tree adds
> building spice from git, this includes the client and server. But we also have python
> bindings for our newer spice-gtk client, so you could even import it from the autotest
> python process. The whole barrier thing seems way overkill for this.
Fair enough. At some point, if people demand that other scenarios
(remote clients running on remote machines) be tested, then we can
re-visit the barrier thing.
* Re: [Qemu-devel] Re: spice in kvm-autotest [was: Re: KVM call minutes for Apr 5]
2011-04-05 17:08 ` Alon Levy
2011-04-05 18:03 ` Lucas Meneghel Rodrigues
@ 2011-04-05 18:08 ` Anthony Liguori
2011-04-05 18:25 ` Lucas Meneghel Rodrigues
1 sibling, 1 reply; 17+ messages in thread
From: Anthony Liguori @ 2011-04-05 18:08 UTC (permalink / raw)
To: Lucas Meneghel Rodrigues, kvm, qemu-devel, Swapna
On 04/05/2011 12:08 PM, Alon Levy wrote:
>
>> The thing about WHQL is that it has its own test suite coordination
>> program (DTM), that has to run on a separate machine/VM. So they have
>> all infrastructure in place there. If we need graphical clients being
>> started and controlled on a linux bare metal machine, I am afraid we'll
>> have to resort to those GUI test automation frameworks I mentioned. if
>> our test is geared more towards windows clients, then they won't be
>> needed, sure.
> oh, but the added requirement is just launching the client
It would be interesting to have autotest support getting screen grabs
from a client vs. directly from qemu.
That would allow you to run the step files through the client.
Regards,
Anthony Liguori
* Re: [Qemu-devel] Re: spice in kvm-autotest [was: Re: KVM call minutes for Apr 5]
2011-04-05 18:08 ` Anthony Liguori
@ 2011-04-05 18:25 ` Lucas Meneghel Rodrigues
2011-04-05 18:41 ` Anthony Liguori
2011-04-05 18:44 ` Cleber Rosa
0 siblings, 2 replies; 17+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-04-05 18:25 UTC (permalink / raw)
To: Anthony Liguori; +Cc: Swapna, qemu-devel, kvm
On Tue, 2011-04-05 at 13:08 -0500, Anthony Liguori wrote:
> On 04/05/2011 12:08 PM, Alon Levy wrote:
> >
> >> The thing about WHQL is that it has its own test suite coordination
> >> program (DTM), that has to run on a separate machine/VM. So they have
> >> all infrastructure in place there. If we need graphical clients being
> >> started and controlled on a linux bare metal machine, I am afraid we'll
> >> have to resort to those GUI test automation frameworks I mentioned. if
> >> our test is geared more towards windows clients, then they won't be
> >> needed, sure.
> > oh, but the added requirement is just launching the client
>
> It would be interesting to have autotest support getting screen grabs
> from a client vs. directly from qemu.
We would need to write 'drivers' to get screenshots from KVM (keyboard,
video and mouse) switches or RSA on IBM boxes... never thought about that
before, no idea how difficult it is.
> That would allow you to run the step files through the client.
>
> Regards,
>
> Anthony Liguori
>
>
* Re: [Qemu-devel] Re: spice in kvm-autotest [was: Re: KVM call minutes for Apr 5]
2011-04-05 18:25 ` Lucas Meneghel Rodrigues
@ 2011-04-05 18:41 ` Anthony Liguori
2011-04-05 18:44 ` Cleber Rosa
1 sibling, 0 replies; 17+ messages in thread
From: Anthony Liguori @ 2011-04-05 18:41 UTC (permalink / raw)
To: Lucas Meneghel Rodrigues; +Cc: Swapna, qemu-devel, kvm
On 04/05/2011 01:25 PM, Lucas Meneghel Rodrigues wrote:
> On Tue, 2011-04-05 at 13:08 -0500, Anthony Liguori wrote:
>> On 04/05/2011 12:08 PM, Alon Levy wrote:
>>>> The thing about WHQL is that it has its own test suite coordination
>>>> program (DTM), that has to run on a separate machine/VM. So they have
>>>> all infrastructure in place there. If we need graphical clients being
>>>> started and controlled on a linux bare metal machine, I am afraid we'll
>>>> have to resort to those GUI test automation frameworks I mentioned. if
>>>> our test is geared more towards windows clients, then they won't be
>>>> needed, sure.
>>> oh, but the added requirement is just launching the client
>> It would be interesting to have autotest support getting screen grabs
>> from a client vs. directly from qemu.
> Would need to write 'drivers' to get screen shots from KVM (Keyboard,
> video and Mouse) or RSA, on IBM boxes...
Those things are lossy so it would get ugly pretty quick.
But for Spice or VNC, it ought to be pretty straightforward.
Regards,
Anthony Liguori
> never thought about that
> before, no idea of how difficult it is.
>
>> That would allow you to run the step files through the client.
>>
>> Regards,
>>
>> Anthony Liguori
>>
>>
>
>
* Re: [Qemu-devel] Re: spice in kvm-autotest [was: Re: KVM call minutes for Apr 5]
2011-04-05 18:25 ` Lucas Meneghel Rodrigues
2011-04-05 18:41 ` Anthony Liguori
@ 2011-04-05 18:44 ` Cleber Rosa
2011-04-05 20:32 ` Anthony Liguori
1 sibling, 1 reply; 17+ messages in thread
From: Cleber Rosa @ 2011-04-05 18:44 UTC (permalink / raw)
To: qemu-devel
On 04/05/2011 03:25 PM, Lucas Meneghel Rodrigues wrote:
> On Tue, 2011-04-05 at 13:08 -0500, Anthony Liguori wrote:
>> On 04/05/2011 12:08 PM, Alon Levy wrote:
>>>> The thing about WHQL is that it has its own test suite coordination
>>>> program (DTM), that has to run on a separate machine/VM. So they have
>>>> all infrastructure in place there. If we need graphical clients being
>>>> started and controlled on a linux bare metal machine, I am afraid we'll
>>>> have to resort to those GUI test automation frameworks I mentioned. if
>>>> our test is geared more towards windows clients, then they won't be
>>>> needed, sure.
>>> oh, but the added requirement is just launching the client
>> It would be interesting to have autotest support getting screen grabs
>> from a client vs. directly from qemu.
> Would need to write 'drivers' to get screen shots from KVM (Keyboard,
> video and Mouse) or RSA, on IBM boxes... never thought about that
> before, no idea of how difficult it is.
I guess Anthony meant grabbing screenshots from a VNC or SPICE client,
right?
>> That would allow you to run the step files through the client.
>>
>> Regards,
>>
>> Anthony Liguori
>>
>>
>
>
* Re: [Qemu-devel] Re: spice in kvm-autotest [was: Re: KVM call minutes for Apr 5]
2011-04-05 18:44 ` Cleber Rosa
@ 2011-04-05 20:32 ` Anthony Liguori
2011-04-06 7:58 ` Alon Levy
0 siblings, 1 reply; 17+ messages in thread
From: Anthony Liguori @ 2011-04-05 20:32 UTC (permalink / raw)
To: Cleber Rosa; +Cc: qemu-devel
On 04/05/2011 01:44 PM, Cleber Rosa wrote:
> On 04/05/2011 03:25 PM, Lucas Meneghel Rodrigues wrote:
>> On Tue, 2011-04-05 at 13:08 -0500, Anthony Liguori wrote:
>>> On 04/05/2011 12:08 PM, Alon Levy wrote:
>>>>> The thing about WHQL is that it has its own test suite coordination
>>>>> program (DTM), that has to run on a separate machine/VM. So they have
>>>>> all infrastructure in place there. If we need graphical clients being
>>>>> started and controlled on a linux bare metal machine, I am afraid
>>>>> we'll
>>>>> have to resort to those GUI test automation frameworks I
>>>>> mentioned. if
>>>>> our test is geared more towards windows clients, then they won't be
>>>>> needed, sure.
>>>> oh, but the added requirement is just launching the client
>>> It would be interesting to have autotest support getting screen grabs
>>> from a client vs. directly from qemu.
>> Would need to write 'drivers' to get screen shots from KVM (Keyboard,
>> video and Mouse) or RSA, on IBM boxes... never thought about that
>> before, no idea of how difficult it is.
>
> I guess Anthony meant grabbing screenshots from a VNC or SPICE client,
> right?
Yes.
Regards,
Anthony Liguori
>
>>> That would allow you to run the step files through the client.
>>>
>>> Regards,
>>>
>>> Anthony Liguori
>>>
>>>
>>
>>
>
>
* [Qemu-devel] Re: spice in kvm-autotest [was: Re: KVM call minutes for Apr 5]
2011-04-05 18:03 ` Lucas Meneghel Rodrigues
@ 2011-04-06 7:50 ` Alon Levy
0 siblings, 0 replies; 17+ messages in thread
From: Alon Levy @ 2011-04-06 7:50 UTC (permalink / raw)
To: Lucas Meneghel Rodrigues; +Cc: Swapna, qemu-devel, kvm
On Tue, Apr 05, 2011 at 03:03:00PM -0300, Lucas Meneghel Rodrigues wrote:
> On Tue, 2011-04-05 at 20:08 +0300, Alon Levy wrote:
> > On Tue, Apr 05, 2011 at 01:27:48PM -0300, Lucas Meneghel Rodrigues wrote:
> > > On Tue, 2011-04-05 at 19:01 +0300, Alon Levy wrote:
> >
> > > > So I was basically talking about the added requirement of creating a client
> > > > connection (one or more) to a single vm.
> > >
> > > Yeah, I think now I've got the idea. We can certainly think of some
> > > strategy to coordinate test execution in vms in one host, and have
> > > another bare metal machine that opens clients to the vms on that host.
> > >
> > > Autotest has infrastructure to coordinate tests among multiple bare
> > > metal machines, we call that infrastructure 'barriers'. It works
> > > basically through TCP socket communication, we have an autotest library
> > > for that, and an API that allows coordination among multiple machines.
> > > We use that for example, to coordinate cross host migration tests.
> > >
> > > So, it might be tricky (specially setting up the environment on the
> > > machine that will open displays and handle all the client opening
> > > part[1]), but certainly doable IMHO.
> > >
> > > [1] This is the part where the LDTP thing would come to place - we need
> > > infrastructure to support opening the display, starting the graphical
> > > clients and interacting with them in an automated fashion, LDTP and
> > > dogtail are the means to do that.
> >
> > This might be required just for stuff like automating applications in the guest
> > that have no existing infrastructure, but I think we should try to avoid this
> > because it's notoriously difficult to create and maintain gui tests. WHQL doesn't
> > need this.
>
> Agreed.
>
> > > > Regarding the dogtail/LDTP issue, they are about specific tests run inside the
> > > > guest, and they are certainly something we would leverage. But like I mentioned
> > > > on the call, there is a whole suite of whql tests that are display specific,
> > > > and don't require anything new. In fact, a few months ago I added support for
> > > > autotest to run one of them, resizing to all the possible modes - so I know I
> > > > don't need dogtail for significant portions of our testing. (sorry, no git link
> > > > - I'll clean it up and post, it's been done 10 months ago so probably won't
> > > > cleanly apply :)
> > >
> > > The thing about WHQL is that it has its own test suite coordination
> > > program (DTM), that has to run on a separate machine/VM. So they have
> > > all infrastructure in place there. If we need graphical clients being
> > > started and controlled on a linux bare metal machine, I am afraid we'll
> > > have to resort to those GUI test automation frameworks I mentioned. if
> > > our test is geared more towards windows clients, then they won't be
> > > needed, sure.
> >
> > oh, but the added requirement is just launching the client. So - say you already
> > have a guest running the whql test suite. The only addition would be to also
> > have a spice client connected to that vm at the time the whql test is running. you
> > don't need to do anything in that client - don't send it mouse events or keyboard
> > events. Just let spice-server do what it does in this case, which would be sending
> > the spice protocol messages caused by the guest activity, which only happens when
> > a client is actually connected.
>
> Ok, I didn't know that it's not necessary to interact with the graphical
> client.
I'm saying most of the tests, certainly the ones we will start with (i.e. WHQL), won't
require this. On the other hand, some tests will certainly need to inject events into the
client, to test the keyboard / mouse channels (like the sticky-key bug we have atm), and
that would require either scripting the client (still no dogtail etc.) or real gui
automation if we have to.
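To make the "just launch the client" requirement concrete: keep a viewer connected to the VM's spice port for the duration of the guest-side test, with no interaction at all. A rough sketch; the `remote-viewer` binary, the port number, and the (e.g. Xvfb-backed) display are illustrative assumptions:

```python
import subprocess

def spice_uri(host, port):
    """URI for a spice connection to the given VM."""
    return "spice://%s:%d" % (host, port)

def launch_idle_client(host, port, display=":99"):
    """Start a viewer and leave it connected. No mouse or keyboard events
    are sent; the point is only that spice-server sees a connected client
    (and thus streams protocol messages) while the guest test runs."""
    return subprocess.Popen(["remote-viewer", spice_uri(host, port)],
                            env={"DISPLAY": display})
```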
>
> > Another thing - the client doesn't have to run on a separate vm, or even on a separate
> > process. An additional vm seems like overkill, you are already building the qemu user
> > space from source or a specific tarball, one of the patches on my private tree adds
> > building spice from git, this includes the client and server. But we also have python
> > bindings for our newer spice-gtk client, so you could even import it from the autotest
> > python process. The whole barrier thing seems way overkill for this.
>
> Fair enough. At some point, if people demand that other scenarios
> (remote clients running on remote machines) be tested, then we can
> re-visit the barrier thing.
>
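For reference, the "barriers" rendezvous mentioned above boils down to a very small amount of TCP plumbing. A toy sketch (not autotest's actual implementation): each participant blocks until all n have checked in, then everyone is released together.

```python
import socket

def barrier_master(port, n):
    """Accept n participants on localhost, then release them all at once."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(n)
    conns = [srv.accept()[0] for _ in range(n)]  # blocks until all arrive
    for c in conns:
        c.sendall(b"go\n")  # release every participant together
        c.close()
    srv.close()

def barrier_wait(port):
    """Block until the master releases the barrier; returns the token."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("127.0.0.1", port))
    token = s.makefile("r").readline().strip()
    s.close()
    return token
```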
* Re: [Qemu-devel] Re: spice in kvm-autotest [was: Re: KVM call minutes for Apr 5]
2011-04-05 20:32 ` Anthony Liguori
@ 2011-04-06 7:58 ` Alon Levy
0 siblings, 0 replies; 17+ messages in thread
From: Alon Levy @ 2011-04-06 7:58 UTC (permalink / raw)
To: Anthony Liguori; +Cc: qemu-devel, Cleber Rosa
On Tue, Apr 05, 2011 at 03:32:35PM -0500, Anthony Liguori wrote:
> On 04/05/2011 01:44 PM, Cleber Rosa wrote:
> >On 04/05/2011 03:25 PM, Lucas Meneghel Rodrigues wrote:
> >>On Tue, 2011-04-05 at 13:08 -0500, Anthony Liguori wrote:
> >>>On 04/05/2011 12:08 PM, Alon Levy wrote:
> >>>>>The thing about WHQL is that it has its own test suite coordination
> >>>>>program (DTM), that has to run on a separate machine/VM. So they have
> >>>>>all infrastructure in place there. If we need graphical clients being
> >>>>>started and controlled on a linux bare metal machine, I am
> >>>>>afraid we'll
> >>>>>have to resort to those GUI test automation frameworks I
> >>>>>mentioned. if
> >>>>>our test is geared more towards windows clients, then they won't be
> >>>>>needed, sure.
> >>>>oh, but the added requirement is just launching the client
> >>>It would be interesting to have autotest support getting screen grabs
> >>>from a client vs. directly from qemu.
> >>Would need to write 'drivers' to get screen shots from KVM (Keyboard,
> >>video and Mouse) or RSA, on IBM boxes... never thought about that
> >>before, no idea of how difficult it is.
> >
> >I guess Anthony meant grabbing screenshots from a VNC or SPICE
> >client, right?
>
> Yes.
+1.
>
> Regards,
>
> Anthony Liguori
>
> >
> >>>That would allow you to run the step files through the client.
> >>>
> >>>Regards,
> >>>
> >>>Anthony Liguori
> >>>
> >>>
> >>
> >>
> >
> >
>
>
* Re: [Qemu-devel] KVM call minutes for Apr 5
2011-04-05 17:37 ` Lucas Meneghel Rodrigues
@ 2011-04-07 10:03 ` Stefan Hajnoczi
2011-04-08 12:58 ` Lucas Meneghel Rodrigues
0 siblings, 1 reply; 17+ messages in thread
From: Stefan Hajnoczi @ 2011-04-07 10:03 UTC (permalink / raw)
To: Lucas Meneghel Rodrigues; +Cc: Chris Wright, paul.larson, qemu-devel, kvm
On Tue, Apr 5, 2011 at 6:37 PM, Lucas Meneghel Rodrigues <lmr@redhat.com> wrote:
Thanks for your detailed response!
> On Tue, 2011-04-05 at 16:29 +0100, Stefan Hajnoczi wrote:
>> * Public notifications of breakage, qemu.git/master failures to
>> qemu-devel mailing list.
>
> ^ The challenge is to get enough data to determine what is a new
> breakage from a known issue, mainly. More related to have historical
> data from test results than anything else, IMO.
I agree. Does kvm-autotest currently archive test results?
>> * A one-time contributor can get their code tested. No requirement to
>> set up a server because contributors may not have the resources.
>
> Coming back to the point that many colleagues made: We need a sort of
> 'make test' on the qemu trees that would fetch autotest and could setup
> basic tests that people could run, maybe suggest test sets...
>
> The problem I see is, getting guests up and running using configs that
> actually matter is not trivial (there are things such as ensuring that
> all auxiliary utilities are installed in a distro agnostic fashion,
> having bridges and DHCP server setup on possibly a disconnected work
> laptop, and stuff).
>
> So, having a 'no brains involved at all' setup is quite a challenge,
> suggestions welcome. Also, downloading isos, waiting for guests to
> install and run thorough tests won't be fast. So J. Random Developer
> might not bother to run tests even if we can provide a fool proof,
> perfectly automated setup, because it'd take a long time at first to get
> the tests run. This is also a challenge.
I'm actually starting to think that there is no one-size-fits-all solution.
Developers need "make check"-type unit tests for various QEMU
subsystems. kvm-autotest could also run these unit tests as part of
its execution.
Then there are end-to-end acceptance tests. They simply require
storage, network, and time resources and there's no way around that.
These tests are more suited to centralized testing infrastructure that
periodically tests qemu.git.
On the community call I was trying to see if there is a "lightweight"
version of kvm-autotest that could be merged into qemu.git. But now I
think that this isn't realistic and it would be better to grow unit
tests in qemu.git while covering it with kvm-autotest for acceptance
testing.
>> Perhaps kvm-autotest is a good platform for the automated testing of
>> ARM TCG. Paul is CCed, I recently saw the Jenkins qemu build and boot
>> tests he has set up. Lucas, do you have ideas on how these efforts
>> can work together to bring testing to upstream QEMU?
>> http://validation.linaro.org/jenkins/job/qemu-boot-images/
>
> I heard about jenkins before and it is indeed a nice project. What they
> do here, from what I could assess browsing at the webpage you provided
> is:
>
> 1) Build qemu.git every time there are commits
> 2) Boot pre-made 'pristine' images, one is a lenny arm image and the
> other is a linaro arm image.
>
> It is possible to do the same with kvm autotest, just a matter of not
> performing guest install tests and executing only the boot tests with
> pre-made images. What jenkins does here is a even quicker and shorter
> version of our sanity jobs.
>
> About how we can work together, I thought about some possibilities:
>
> 1) Modify the jenkins test step to execute a kvm autotest job after the
> build, with the stripped down test set. We might gain some extra debug
> info, that the current test step does not seem to provide
> 2) Do the normal test step and if that succeeds, trigger a kvm autotest
> job that does more comprehensive testing, such as migration, time drift,
> block layer, etc
>
> The funny thing is that KVM autotest has infrastructure to do the same
> as jenkins does, but jenkins is highly streamlined for the buildbot use
> case (continuous build and integration), and I see that as a very nice
> advantage. So I'd rather keep use jenkins and have kvm autotest plugged
> into it conveniently.
That sounds good. I think the benefit of working together is that
different entities (Linaro, Red Hat, etc) can contribute QEMU tests
into a single place. That testing can then cover both upstream and
downstream to prevent breakage.
So kvm-autotest can run in single job mode and be kicked off from jenkins
or buildbot?
It sounds like kvm-autotest has or needs its own cron, result
archiving, etc infrastructure. Does it make sense to use a harness
like jenkins or buildbot instead and focus kvm-autotest purely as a
testing framework?
Stefan
* Re: [Qemu-devel] KVM call minutes for Apr 5
2011-04-07 10:03 ` Stefan Hajnoczi
@ 2011-04-08 12:58 ` Lucas Meneghel Rodrigues
2011-04-08 18:57 ` Stefan Hajnoczi
0 siblings, 1 reply; 17+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-04-08 12:58 UTC (permalink / raw)
To: Stefan Hajnoczi; +Cc: Chris Wright, paul.larson, qemu-devel, kvm
On Thu, 2011-04-07 at 11:03 +0100, Stefan Hajnoczi wrote:
> On Tue, Apr 5, 2011 at 6:37 PM, Lucas Meneghel Rodrigues <lmr@redhat.com> wrote:
>
> Thanks for your detailed response!
>
> > On Tue, 2011-04-05 at 16:29 +0100, Stefan Hajnoczi wrote:
> >> * Public notifications of breakage, qemu.git/master failures to
> >> qemu-devel mailing list.
> >
> > ^ The challenge is to get enough data to determine what is a new
> > breakage from a known issue, mainly. More related to have historical
> > data from test results than anything else, IMO.
>
> I agree. Does kvm-autotest currently archive test results?
It does. Our test layouts are currently evolving, and we are hoping to
reach a very good and sane format. We are also thinking more about how
to look at historical data and establish what counts as a regression.
> >> * A one-time contributor can get their code tested. No requirement to
> >> set up a server because contributors may not have the resources.
> >
> > Coming back to the point that many colleagues made: We need a sort of
> > 'make test' on the qemu trees that would fetch autotest and could setup
> > basic tests that people could run, maybe suggest test sets...
> >
> > The problem I see is, getting guests up and running using configs that
> > actually matter is not trivial (there are things such as ensuring that
> > all auxiliary utilities are installed in a distro agnostic fashion,
> > having bridges and DHCP server setup on possibly a disconnected work
> > laptop, and stuff).
> >
> > So, having a 'no brains involved at all' setup is quite a challenge,
> > suggestions welcome. Also, downloading isos, waiting for guests to
> > install and run thorough tests won't be fast. So J. Random Developer
> > might not bother to run tests even if we can provide a fool proof,
> > perfectly automated setup, because it'd take a long time at first to get
> > the tests run. This is also a challenge.
>
> I'm actually starting to think that there is no one-size-fits-all solution.
>
> Developers need "make check"-type unit tests for various QEMU
> subsystems. kvm-autotest could also run these unit tests as part of
> its execution.
>
> Then there are end-to-end acceptance tests. They simply require
> storage, network, and time resources and there's no way around that.
> These tests are more suited to centralized testing infrastructure that
> periodically tests qemu.git.
>
> On the community call I was trying to see if there is a "lightweight"
> version of kvm-autotest that could be merged into qemu.git. But now I
> think that this isn't realistic and it would be better to grow unit
> tests in qemu.git while covering it with kvm-autotest for acceptance
> testing.
The "make check" could check out autotest in the background and execute
a very simplistic set of tests, with pre-made small linux guests, very
much as jenkins + buildbot does. If we can figure out a sane, automated
bridge + dnsmasq setup, then we can provide both the unittests and very
simple and restricted guest tests. Need to think about it more.
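The bridge + dnsmasq part itself can be fairly small. A rough root-only fragment (the interface name and address range are examples, not a proposal for the actual 'make check' target):

```shell
# run as root; bridge name and subnet are only examples
ip link add name br0 type bridge
ip addr add 192.168.122.1/24 dev br0
ip link set br0 up
# hand out guest addresses on the bridge only
dnsmasq --interface=br0 --bind-interfaces \
        --dhcp-range=192.168.122.2,192.168.122.254,12h
```

The hard part is doing this distro-agnostically and without disturbing an existing network config, as noted above.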
> >> Perhaps kvm-autotest is a good platform for the automated testing of
> >> ARM TCG. Paul is CCed, I recently saw the Jenkins qemu build and boot
> >> tests he has set up. Lucas, do you have ideas on how these efforts
> >> can work together to bring testing to upstream QEMU?
> >> http://validation.linaro.org/jenkins/job/qemu-boot-images/
> >
> > I heard about jenkins before and it is indeed a nice project. What they
> > do here, from what I could assess browsing at the webpage you provided
> > is:
> >
> > 1) Build qemu.git every time there are commits
> > 2) Boot pre-made 'pristine' images, one is a lenny arm image and the
> > other is a linaro arm image.
> >
> > It is possible to do the same with kvm autotest, just a matter of not
> > performing guest install tests and executing only the boot tests with
> > pre-made images. What jenkins does here is a even quicker and shorter
> > version of our sanity jobs.
> >
> > About how we can work together, I thought about some possibilities:
> >
> > 1) Modify the jenkins test step to execute a kvm autotest job after the
> > build, with the stripped down test set. We might gain some extra debug
> > info, that the current test step does not seem to provide
> > 2) Do the normal test step and if that succeeds, trigger a kvm autotest
> > job that does more comprehensive testing, such as migration, time drift,
> > block layer, etc
> >
> > The funny thing is that KVM autotest has infrastructure to do the same
> > as jenkins does, but jenkins is highly streamlined for the buildbot use
> > case (continuous build and integration), and I see that as a very nice
> > advantage. So I'd rather keep use jenkins and have kvm autotest plugged
> > into it conveniently.
>
> That sounds good. I think the benefit of working together is that
> different entities (Linaro, Red Hat, etc) can contribute QEMU tests
> > into a single place. That testing can then cover both upstream and
> downstream to prevent breakage.
>
> > So kvm-autotest can run in single job mode and be kicked off from jenkins
> or buildbot?
>
> It sounds like kvm-autotest has or needs its own cron, result
> archiving, etc infrastructure. Does it make sense to use a harness
> like jenkins or buildbot instead and focus kvm-autotest purely as a
> testing framework?
In the context that there are already jenkins/buildbot servers running
for qemu, having only the KVM testing part (autotest client + kvm test)
is a possibility, to make things easier to plug and work with what is
already deployed.
However, it's not possible to treat KVM autotest purely as a 'test
framework'. What we call KVM autotest is, in actuality, a client test
of autotest. Autotest is a generic, large collection of programs and
libraries targeted at performing automated testing on the linux
platform; it was developed to test the linux kernel itself, and it is
used to do precisely that. Look at test.kernel.org: all those tests are
executed by autotest.
So, autotest is much more than KVM testing, and I am one of the
autotest maintainers, so I am committed to working on all parts of that
stack. Several testing projects unrelated to KVM use our code, and our
harnessing and infrastructure is already pretty good; we'll keep
developing it.
The whole thing was designed in a modular way, so it's doable to use
parts of it (such as the autotest client and the KVM test) and
integrate with stuff such as jenkins and buildbot, and if people need
and want to do that, awesome. But we are going to continue developing
autotest as a whole framework/automation utilities/API, while
developing the KVM test.
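A jenkins/buildbot build step wiring the autotest client in could be as small as the following; the clone URL and control-file path are assumptions about the tree layout, so double-check them against the current autotest repository:

```shell
# hypothetical CI build step: fetch autotest, run only the kvm client test
git clone --depth 1 git://github.com/autotest/autotest.git
sudo autotest/client/bin/autotest autotest/client/tests/kvm/control
```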
* Re: [Qemu-devel] KVM call minutes for Apr 5
2011-04-08 12:58 ` Lucas Meneghel Rodrigues
@ 2011-04-08 18:57 ` Stefan Hajnoczi
0 siblings, 0 replies; 17+ messages in thread
From: Stefan Hajnoczi @ 2011-04-08 18:57 UTC (permalink / raw)
To: Lucas Meneghel Rodrigues; +Cc: Chris Wright, paul.larson, qemu-devel, kvm
On Fri, Apr 08, 2011 at 09:58:22AM -0300, Lucas Meneghel Rodrigues wrote:
> On Thu, 2011-04-07 at 11:03 +0100, Stefan Hajnoczi wrote:
> > On Tue, Apr 5, 2011 at 6:37 PM, Lucas Meneghel Rodrigues <lmr@redhat.com> wrote:
> > >> Perhaps kvm-autotest is a good platform for the automated testing of
> > >> ARM TCG. Paul is CCed, I recently saw the Jenkins qemu build and boot
> > >> tests he has set up. Lucas, do you have ideas on how these efforts
> > >> can work together to bring testing to upstream QEMU?
> > >> http://validation.linaro.org/jenkins/job/qemu-boot-images/
> > >
> > > I heard about jenkins before and it is indeed a nice project. What they
> > > do here, from what I could assess browsing at the webpage you provided
> > > is:
> > >
> > > 1) Build qemu.git every time there are commits
> > > 2) Boot pre-made 'pristine' images, one is a lenny arm image and the
> > > other is a linaro arm image.
> > >
> > > It is possible to do the same with kvm autotest, just a matter of not
> > > performing guest install tests and executing only the boot tests with
> > > pre-made images. What jenkins does here is a even quicker and shorter
> > > version of our sanity jobs.
> > >
> > > About how we can work together, I thought about some possibilities:
> > >
> > > 1) Modify the jenkins test step to execute a kvm autotest job after the
> > > build, with the stripped down test set. We might gain some extra debug
> > > info, that the current test step does not seem to provide
> > > 2) Do the normal test step and if that succeeds, trigger a kvm autotest
> > > job that does more comprehensive testing, such as migration, time drift,
> > > block layer, etc
> > >
> > > The funny thing is that KVM autotest has infrastructure to do the same
> > > as jenkins does, but jenkins is highly streamlined for the buildbot use
> > > case (continuous build and integration), and I see that as a very nice
> > > advantage. So I'd rather keep use jenkins and have kvm autotest plugged
> > > into it conveniently.
> >
> > That sounds good. I think the benefit of working together is that
> > different entities (Linaro, Red Hat, etc) can contribute QEMU tests
> > into a single place. That testing can then cover both upstream and
> > downstream to prevent breakage.
> >
> > So kvm-autotest can run in single job mode and be kicked off from jenkins
> > or buildbot?
> >
> > It sounds like kvm-autotest has or needs its own cron, result
> > archiving, etc infrastructure. Does it make sense to use a harness
> > like jenkins or buildbot instead and focus kvm-autotest purely as a
> > testing framework?
>
> In the context that there are already jenkins/buildbot servers running
> for qemu, having only the KVM testing part (autotest client + kvm test)
> is a possibility, to make things easier to plug and work with what is
> already deployed.
>
> However, it's not possible to treat KVM autotest purely as a 'test
> framework'. What we call KVM autotest is, in actuality, a client test
> of autotest. Autotest is a generic, large collection of programs and
> libraries targeted at performing automated testing on the linux
> platform; it was developed to test the linux kernel itself, and it is
> used to do precisely that. Look at test.kernel.org: all those tests
> are executed by autotest.
>
> So, autotest is much more than KVM testing, and I am one of the
> autotest maintainers, so I am committed to working on all parts of
> that stack. Several testing projects unrelated to KVM use our code,
> and our harnessing and infrastructure is already pretty good; we'll
> keep developing it.
>
> The whole thing was designed in a modular way, so it's doable to use
> parts of it (such as the autotest client and the KVM test) and
> integrate with stuff such as jenkins and buildbot, and if people need
> and want to do that, awesome. But we are going to continue developing
> autotest as a whole framework/automation utilities/API, while
> developing the KVM test.
I wasn't aware of the scope of autotest and your involvement. I need to
look into autotest more :).
Stefan
Thread overview: 17+ messages
2011-04-05 15:07 [Qemu-devel] KVM call minutes for Apr 5 Chris Wright
2011-04-05 15:29 ` Stefan Hajnoczi
2011-04-05 17:37 ` Lucas Meneghel Rodrigues
2011-04-07 10:03 ` Stefan Hajnoczi
2011-04-08 12:58 ` Lucas Meneghel Rodrigues
2011-04-08 18:57 ` Stefan Hajnoczi
2011-04-05 16:01 ` [Qemu-devel] spice in kvm-autotest [was: Re: KVM call minutes for Apr 5] Alon Levy
2011-04-05 16:27 ` [Qemu-devel] " Lucas Meneghel Rodrigues
2011-04-05 17:08 ` Alon Levy
2011-04-05 18:03 ` Lucas Meneghel Rodrigues
2011-04-06 7:50 ` Alon Levy
2011-04-05 18:08 ` Anthony Liguori
2011-04-05 18:25 ` Lucas Meneghel Rodrigues
2011-04-05 18:41 ` Anthony Liguori
2011-04-05 18:44 ` Cleber Rosa
2011-04-05 20:32 ` Anthony Liguori
2011-04-06 7:58 ` Alon Levy