* [xen-unstable bisection] complete test-amd64-amd64-xl-qemuu-ovmf-amd64
@ 2015-07-29 14:26 osstest service owner
0 siblings, 0 replies; 7+ messages in thread
From: osstest service owner @ 2015-07-29 14:26 UTC (permalink / raw)
To: xen-devel, osstest-admin
branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-xl-qemuu-ovmf-amd64
testid debian-hvm-install
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/staging/qemu-xen-unstable.git
Tree: qemuu git://xenbits.xen.org/staging/qemu-upstream-unstable.git
Tree: xen git://xenbits.xen.org/xen.git
*** Found and reproduced problem changeset ***
Bug is in tree: xen git://xenbits.xen.org/xen.git
Bug introduced: 07d73925eaa066c3ae2fb161246863e99e79d9f7
Bug not present: 46648eb95f1a70940bb22c40d43703b7cff99a88
commit 07d73925eaa066c3ae2fb161246863e99e79d9f7
Author: Tiejun Chen <tiejun.chen@intel.com>
Date: Wed Jul 22 01:40:19 2015 +0000
hvmloader: get guest memory map into memory_map[]
Now we obtain this map layout by calling XENMEM_memory_map and
save it into one global variable, memory_map[]. It should
include the lowmem range, the RDM range and the highmem range;
note that the RDM and highmem ranges may not exist in some cases.
Here we also need to check whether any reserved memory conflicts with
[RESERVED_MEMORY_DYNAMIC_START, RESERVED_MEMORY_DYNAMIC_END).
This range is used to allocate memory at the hvmloader level, and
we make hvmloader fail in case of a conflict, since such a
conflict is only a rare possibility in the real world.
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Ian Jackson <ian.jackson@eu.citrix.com>
CC: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
CC: Ian Campbell <ian.campbell@citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Tiejun Chen <tiejun.chen@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: George Dunlap <george.dunlap@eu.citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
For bisection revision-tuple graph see:
http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-amd64-xl-qemuu-ovmf-amd64.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.
----------------------------------------
Searching for failure / basis pass:
60076 fail [host=fiano1] / 59833 [host=godello0] 59817 [host=italia1] 59795 [host=huxelrebe1] 59772 [host=italia0] 59699 [host=elbling0] 59681 [host=pinot1] 59654 [host=pinot1] 59631 [host=pinot1] 59611 [host=pinot1] 59590 [host=pinot1] 59568 [host=elbling1] 59544 [host=huxelrebe0] 59509 [host=huxelrebe0] 59472 [host=godello1] 59446 [host=chardonnay1] 59404 ok.
Failure / basis pass flights: 60076 / 59404
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/staging/qemu-xen-unstable.git
Tree: qemuu git://xenbits.xen.org/staging/qemu-upstream-unstable.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 3cdf91941d7490ba1d0a72729a667c42b489b23a c530a75c1e6a472b0eb9558310b518f0dfcd8860 3e2e51ecc1120bd59537ed19b6bc7066511c7e2e c4a962ec0c61aa9b860a3635c8424472e6c2cc2c 3e791ccb1d1d036ed25e880b1ef72ea8dcabe43a
Basis pass a0768244828d0da096ce957616150220da607be1 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3e2e51ecc1120bd59537ed19b6bc7066511c7e2e c4a962ec0c61aa9b860a3635c8424472e6c2cc2c 47b4d562b6a3441020fb6a7762603d1d3a74db27
Generating revisions with ./adhoc-revtuple-generator git://xenbits.xen.org/linux-pvops.git#a0768244828d0da096ce957616150220da607be1-3cdf91941d7490ba1d0a72729a667c42b489b23a git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/staging/qemu-xen-unstable.git#3e2e51ecc1120bd59537ed19b6bc7066511c7e2e-3e2e51ecc1120bd59537ed19b6bc7066511c7e2e git://xenbits.xen.org/staging/qemu-upstream-unstable.git#c4a962ec0c61aa9b860a3635c8424472e6c2cc2c-c4a962ec0c61aa9b860a3635c8424472e6c2cc2c git://xenbits.xen.org/xen.git#47b4d562b6a3441020fb6a7762603d1d3a74db27-3e791ccb1d1d036ed25e880b1ef72ea8dcabe43a
+ exec
+ sh -xe
+ cd /home/osstest/repos/linux-pvops
+ git remote set-url origin git://cache:9419/git://xenbits.xen.org/linux-pvops.git
+ git fetch -p origin +refs/heads/*:refs/remotes/origin/*
+ exec
+ sh -xe
+ cd /home/osstest/repos/xen
+ git remote set-url origin git://cache:9419/git://xenbits.xen.org/xen.git
+ git fetch -p origin +refs/heads/*:refs/remotes/origin/*
+ exec
+ sh -xe
+ cd /home/osstest/repos/linux-pvops
+ git remote set-url origin git://cache:9419/git://xenbits.xen.org/linux-pvops.git
+ git fetch -p origin +refs/heads/*:refs/remotes/origin/*
+ exec
+ sh -xe
+ cd /home/osstest/repos/xen
+ git remote set-url origin git://cache:9419/git://xenbits.xen.org/xen.git
+ git fetch -p origin +refs/heads/*:refs/remotes/origin/*
Loaded 2001 nodes in revision graph
Searching for test results:
59404 pass a0768244828d0da096ce957616150220da607be1 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3e2e51ecc1120bd59537ed19b6bc7066511c7e2e c4a962ec0c61aa9b860a3635c8424472e6c2cc2c 47b4d562b6a3441020fb6a7762603d1d3a74db27
59472 [host=godello1]
59446 [host=chardonnay1]
59509 [host=huxelrebe0]
59544 [host=huxelrebe0]
59568 [host=elbling1]
59611 [host=pinot1]
59590 [host=pinot1]
59654 [host=pinot1]
59631 [host=pinot1]
59681 [host=pinot1]
59699 [host=elbling0]
59736 []
59772 [host=italia0]
59817 [host=italia1]
59795 [host=huxelrebe1]
59833 [host=godello0]
59910 fail 3cdf91941d7490ba1d0a72729a667c42b489b23a c530a75c1e6a472b0eb9558310b518f0dfcd8860 3e2e51ecc1120bd59537ed19b6bc7066511c7e2e c4a962ec0c61aa9b860a3635c8424472e6c2cc2c 19b17e240c5f31eb1ff744946ce75afa729bfe91
60010 pass a0768244828d0da096ce957616150220da607be1 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3e2e51ecc1120bd59537ed19b6bc7066511c7e2e c4a962ec0c61aa9b860a3635c8424472e6c2cc2c 47b4d562b6a3441020fb6a7762603d1d3a74db27
60023 fail 3cdf91941d7490ba1d0a72729a667c42b489b23a c530a75c1e6a472b0eb9558310b518f0dfcd8860 3e2e51ecc1120bd59537ed19b6bc7066511c7e2e c4a962ec0c61aa9b860a3635c8424472e6c2cc2c 8398ec706ff60e17a5044470fa2e90a1b081f37a
60120 fail 3cdf91941d7490ba1d0a72729a667c42b489b23a c530a75c1e6a472b0eb9558310b518f0dfcd8860 3e2e51ecc1120bd59537ed19b6bc7066511c7e2e c4a962ec0c61aa9b860a3635c8424472e6c2cc2c 07d73925eaa066c3ae2fb161246863e99e79d9f7
60054 fail 3cdf91941d7490ba1d0a72729a667c42b489b23a c530a75c1e6a472b0eb9558310b518f0dfcd8860 3e2e51ecc1120bd59537ed19b6bc7066511c7e2e c4a962ec0c61aa9b860a3635c8424472e6c2cc2c 7985abb6dda04934894b2321fc657311595a66e1
60032 fail 3cdf91941d7490ba1d0a72729a667c42b489b23a c530a75c1e6a472b0eb9558310b518f0dfcd8860 3e2e51ecc1120bd59537ed19b6bc7066511c7e2e c4a962ec0c61aa9b860a3635c8424472e6c2cc2c 19b17e240c5f31eb1ff744946ce75afa729bfe91
60059 fail 3cdf91941d7490ba1d0a72729a667c42b489b23a c530a75c1e6a472b0eb9558310b518f0dfcd8860 3e2e51ecc1120bd59537ed19b6bc7066511c7e2e c4a962ec0c61aa9b860a3635c8424472e6c2cc2c 25652f232cbee8f4c6c740c86e9f12b45fa655e9
60073 pass 3cdf91941d7490ba1d0a72729a667c42b489b23a c530a75c1e6a472b0eb9558310b518f0dfcd8860 3e2e51ecc1120bd59537ed19b6bc7066511c7e2e c4a962ec0c61aa9b860a3635c8424472e6c2cc2c 46648eb95f1a70940bb22c40d43703b7cff99a88
60036 pass 3cdf91941d7490ba1d0a72729a667c42b489b23a c530a75c1e6a472b0eb9558310b518f0dfcd8860 3e2e51ecc1120bd59537ed19b6bc7066511c7e2e c4a962ec0c61aa9b860a3635c8424472e6c2cc2c afba7c6d9566f1556117b78e767f9678433a1f01
60101 fail 3cdf91941d7490ba1d0a72729a667c42b489b23a c530a75c1e6a472b0eb9558310b518f0dfcd8860 3e2e51ecc1120bd59537ed19b6bc7066511c7e2e c4a962ec0c61aa9b860a3635c8424472e6c2cc2c 07d73925eaa066c3ae2fb161246863e99e79d9f7
60063 fail 3cdf91941d7490ba1d0a72729a667c42b489b23a c530a75c1e6a472b0eb9558310b518f0dfcd8860 3e2e51ecc1120bd59537ed19b6bc7066511c7e2e c4a962ec0c61aa9b860a3635c8424472e6c2cc2c 07d73925eaa066c3ae2fb161246863e99e79d9f7
60044 pass 3cdf91941d7490ba1d0a72729a667c42b489b23a c530a75c1e6a472b0eb9558310b518f0dfcd8860 3e2e51ecc1120bd59537ed19b6bc7066511c7e2e c4a962ec0c61aa9b860a3635c8424472e6c2cc2c 132231d10343608faf5892785a08acc500326d04
60067 pass 3cdf91941d7490ba1d0a72729a667c42b489b23a c530a75c1e6a472b0eb9558310b518f0dfcd8860 3e2e51ecc1120bd59537ed19b6bc7066511c7e2e c4a962ec0c61aa9b860a3635c8424472e6c2cc2c 5ae03990c120a7b3067a52d9784c9aa72c0705a6
60050 pass 3cdf91941d7490ba1d0a72729a667c42b489b23a c530a75c1e6a472b0eb9558310b518f0dfcd8860 3e2e51ecc1120bd59537ed19b6bc7066511c7e2e c4a962ec0c61aa9b860a3635c8424472e6c2cc2c 742ab6ac1976ad6e3ec9f9d49fbf74aff27e6eb4
60070 pass 3cdf91941d7490ba1d0a72729a667c42b489b23a c530a75c1e6a472b0eb9558310b518f0dfcd8860 3e2e51ecc1120bd59537ed19b6bc7066511c7e2e c4a962ec0c61aa9b860a3635c8424472e6c2cc2c ab13e3dc788bf06701dd0104feb214b5870e2694
60106 fail 3cdf91941d7490ba1d0a72729a667c42b489b23a c530a75c1e6a472b0eb9558310b518f0dfcd8860 3e2e51ecc1120bd59537ed19b6bc7066511c7e2e c4a962ec0c61aa9b860a3635c8424472e6c2cc2c 3e791ccb1d1d036ed25e880b1ef72ea8dcabe43a
60076 fail 3cdf91941d7490ba1d0a72729a667c42b489b23a c530a75c1e6a472b0eb9558310b518f0dfcd8860 3e2e51ecc1120bd59537ed19b6bc7066511c7e2e c4a962ec0c61aa9b860a3635c8424472e6c2cc2c 3e791ccb1d1d036ed25e880b1ef72ea8dcabe43a
60104 pass 3cdf91941d7490ba1d0a72729a667c42b489b23a c530a75c1e6a472b0eb9558310b518f0dfcd8860 3e2e51ecc1120bd59537ed19b6bc7066511c7e2e c4a962ec0c61aa9b860a3635c8424472e6c2cc2c 46648eb95f1a70940bb22c40d43703b7cff99a88
60080 pass a0768244828d0da096ce957616150220da607be1 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3e2e51ecc1120bd59537ed19b6bc7066511c7e2e c4a962ec0c61aa9b860a3635c8424472e6c2cc2c 47b4d562b6a3441020fb6a7762603d1d3a74db27
60099 fail 3cdf91941d7490ba1d0a72729a667c42b489b23a c530a75c1e6a472b0eb9558310b518f0dfcd8860 3e2e51ecc1120bd59537ed19b6bc7066511c7e2e c4a962ec0c61aa9b860a3635c8424472e6c2cc2c 8398ec706ff60e17a5044470fa2e90a1b081f37a
60114 fail 3cdf91941d7490ba1d0a72729a667c42b489b23a c530a75c1e6a472b0eb9558310b518f0dfcd8860 3e2e51ecc1120bd59537ed19b6bc7066511c7e2e c4a962ec0c61aa9b860a3635c8424472e6c2cc2c 07d73925eaa066c3ae2fb161246863e99e79d9f7
60116 pass 3cdf91941d7490ba1d0a72729a667c42b489b23a c530a75c1e6a472b0eb9558310b518f0dfcd8860 3e2e51ecc1120bd59537ed19b6bc7066511c7e2e c4a962ec0c61aa9b860a3635c8424472e6c2cc2c 46648eb95f1a70940bb22c40d43703b7cff99a88
Searching for interesting versions
Result found: flight 59404 (pass), for basis pass
Result found: flight 60076 (fail), for basis failure
Repro found: flight 60080 (pass), for basis pass
Repro found: flight 60106 (fail), for basis failure
0 revisions at 3cdf91941d7490ba1d0a72729a667c42b489b23a c530a75c1e6a472b0eb9558310b518f0dfcd8860 3e2e51ecc1120bd59537ed19b6bc7066511c7e2e c4a962ec0c61aa9b860a3635c8424472e6c2cc2c 46648eb95f1a70940bb22c40d43703b7cff99a88
No revisions left to test, checking graph state.
Result found: flight 60073 (pass), for last pass
Result found: flight 60101 (fail), for first failure
Repro found: flight 60104 (pass), for last pass
Repro found: flight 60114 (fail), for first failure
Repro found: flight 60116 (pass), for last pass
Repro found: flight 60120 (fail), for first failure
*** Found and reproduced problem changeset ***
Bug is in tree: xen git://xenbits.xen.org/xen.git
Bug introduced: 07d73925eaa066c3ae2fb161246863e99e79d9f7
Bug not present: 46648eb95f1a70940bb22c40d43703b7cff99a88
+ exec
+ sh -xe
+ cd /home/osstest/repos/xen
+ git remote set-url origin git://cache:9419/git://xenbits.xen.org/xen.git
+ git fetch -p origin +refs/heads/*:refs/remotes/origin/*
commit 07d73925eaa066c3ae2fb161246863e99e79d9f7
Author: Tiejun Chen <tiejun.chen@intel.com>
Date: Wed Jul 22 01:40:19 2015 +0000
hvmloader: get guest memory map into memory_map[]
Now we obtain this map layout by calling XENMEM_memory_map and
save it into one global variable, memory_map[]. It should
include the lowmem range, the RDM range and the highmem range;
note that the RDM and highmem ranges may not exist in some cases.
Here we also need to check whether any reserved memory conflicts with
[RESERVED_MEMORY_DYNAMIC_START, RESERVED_MEMORY_DYNAMIC_END).
This range is used to allocate memory at the hvmloader level, and
we make hvmloader fail in case of a conflict, since such a
conflict is only a rare possibility in the real world.
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Ian Jackson <ian.jackson@eu.citrix.com>
CC: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
CC: Ian Campbell <ian.campbell@citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Tiejun Chen <tiejun.chen@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: George Dunlap <george.dunlap@eu.citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
Revision graph left in /home/logs/results/bisect/xen-unstable/test-amd64-amd64-xl-qemuu-ovmf-amd64.debian-hvm-install.{dot,ps,png,html}.
----------------------------------------
60120: tolerable ALL FAIL
flight 60120 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/60120/
Failures :-/ but no regressions.
Tests which did not succeed,
including tests which could not be run:
test-amd64-amd64-xl-qemuu-ovmf-amd64 9 debian-hvm-install fail baseline untested
jobs:
test-amd64-amd64-xl-qemuu-ovmf-amd64 fail
------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images
Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs
Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master
Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary
* [xen-unstable bisection] complete test-amd64-amd64-xl-qemuu-ovmf-amd64
@ 2016-05-22 5:37 osstest service owner
2016-05-23 10:03 ` Wei Liu
2016-05-24 17:50 ` Wei Liu
0 siblings, 2 replies; 7+ messages in thread
From: osstest service owner @ 2016-05-22 5:37 UTC (permalink / raw)
To: xen-devel, osstest-admin
branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-xl-qemuu-ovmf-amd64
testid guest-start/debianhvm.repeat
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
*** Found and reproduced problem changeset ***
Bug is in tree: xen git://xenbits.xen.org/xen.git
Bug introduced: 1542efcea893df874b13b1ea78101e1ff6a55c41
Bug not present: c32381352cce9744e640bf239d2704dae4af4fc7
Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/94689/
commit 1542efcea893df874b13b1ea78101e1ff6a55c41
Author: Wei Liu <wei.liu2@citrix.com>
Date: Wed May 18 11:48:25 2016 +0100
Config.mk: update ovmf changeset
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
For bisection revision-tuple graph see:
http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-amd64-xl-qemuu-ovmf-amd64.guest-start--debianhvm.repeat.html
Revision IDs in each graph node refer, respectively, to the Trees above.
----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-amd64-amd64-xl-qemuu-ovmf-amd64.guest-start--debianhvm.repeat --summary-out=tmp/94689.bisection-summary --basis-template=94580 --blessings=real,real-bisect xen-unstable test-amd64-amd64-xl-qemuu-ovmf-amd64 guest-start/debianhvm.repeat
Searching for failure / basis pass:
94639 fail [host=merlot1] / 94580 [host=nocera0] 94563 [host=italia1] 94536 [host=baroque0] 94507 [host=chardonnay0] 94495 [host=pinot0] 94487 [host=elbling0] 94461 [host=rimava0] 94442 [host=baroque1] 94070 [host=pinot1] 94021 [host=italia1] 93963 [host=elbling1] 93920 [host=huxelrebe1] 93873 [host=elbling0] 93813 [host=chardonnay1] 93725 [host=fiano1] 93638 [host=chardonnay0] 93612 [host=rimava0] 93587 [host=baroque0] 93563 ok.
Failure / basis pass flights: 94639 / 93563
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c977650a67e6ca6c3cff9548b031d072d00db80a c530a75c1e6a472b0eb9558310b518f0dfcd8860 df553c056104e3dd8a2bd2e72539a57c4c085bae 62b3d206425c245ed0a020390a64640d40d97471 a396c2549e079ab2f644aab8b2e7f47a8d0e3937
Basis pass 48763742b1bceb119b04656b8dd06e0617dfa89a c530a75c1e6a472b0eb9558310b518f0dfcd8860 21f6526d1da331611ac5fe12967549d1a04e149b ae69b059498e8a563c6d64c4aa4cb95e53d76680 f8d4c1d5c59eb328480957ff6f1bccaf113b4921
Generating revisions with ./adhoc-revtuple-generator git://xenbits.xen.org/linux-pvops.git#48763742b1bceb119b04656b8dd06e0617dfa89a-c977650a67e6ca6c3cff9548b031d072d00db80a git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#21f6526d1da331611ac5fe12967549d1a04e149b-df553c056104e3dd8a2bd2e72539a57c4c085bae git://xenbits.xen.org/qemu-xen.git#ae69b059498e8a563c6d64c4aa4cb95e53d76680-62b3d206425c245ed0a020390a64640d40d97471 git://xenbits.xen.org/xen.git#f8d4c1d5c59eb328480957ff6f1bccaf113b4921-a396c2549e079ab2f644aab8b2e7f47a8d0e3937
Loaded 10941 nodes in revision graph
Searching for test results:
93563 pass 48763742b1bceb119b04656b8dd06e0617dfa89a c530a75c1e6a472b0eb9558310b518f0dfcd8860 21f6526d1da331611ac5fe12967549d1a04e149b ae69b059498e8a563c6d64c4aa4cb95e53d76680 f8d4c1d5c59eb328480957ff6f1bccaf113b4921
93587 [host=baroque0]
93612 [host=rimava0]
93638 [host=chardonnay0]
93725 [host=fiano1]
93813 [host=chardonnay1]
93873 [host=elbling0]
93963 [host=elbling1]
93920 [host=huxelrebe1]
94021 [host=italia1]
94070 [host=pinot1]
94442 [host=baroque1]
94461 [host=rimava0]
94495 [host=pinot0]
94487 [host=elbling0]
94507 [host=chardonnay0]
94536 [host=baroque0]
94563 [host=italia1]
94622 pass 1c767107ef341cdc080d44d3ee1c9ca1b6957ce0 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e4ceb77cf88bc44f0b7fe39225c49d660735f327 62b3d206425c245ed0a020390a64640d40d97471 46699c7393bd991234b5642763c5c24b6b39a6c4
94620 pass 88ed791c43aad64fc2f3707bc3e82205031a73e3 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e4ceb77cf88bc44f0b7fe39225c49d660735f327 62b3d206425c245ed0a020390a64640d40d97471 46ed6a814c2867260c0ebd9a7399466c801637be
94580 [host=nocera0]
94610 fail c977650a67e6ca6c3cff9548b031d072d00db80a c530a75c1e6a472b0eb9558310b518f0dfcd8860 df553c056104e3dd8a2bd2e72539a57c4c085bae 62b3d206425c245ed0a020390a64640d40d97471 a396c2549e079ab2f644aab8b2e7f47a8d0e3937
94642 fail 1c767107ef341cdc080d44d3ee1c9ca1b6957ce0 c530a75c1e6a472b0eb9558310b518f0dfcd8860 df553c056104e3dd8a2bd2e72539a57c4c085bae 62b3d206425c245ed0a020390a64640d40d97471 667a7120d006007389435976071f0b89f94ec7cc
94615 pass 48763742b1bceb119b04656b8dd06e0617dfa89a c530a75c1e6a472b0eb9558310b518f0dfcd8860 21f6526d1da331611ac5fe12967549d1a04e149b ae69b059498e8a563c6d64c4aa4cb95e53d76680 f8d4c1d5c59eb328480957ff6f1bccaf113b4921
94617 fail c977650a67e6ca6c3cff9548b031d072d00db80a c530a75c1e6a472b0eb9558310b518f0dfcd8860 df553c056104e3dd8a2bd2e72539a57c4c085bae 62b3d206425c245ed0a020390a64640d40d97471 a396c2549e079ab2f644aab8b2e7f47a8d0e3937
94616 fail c977650a67e6ca6c3cff9548b031d072d00db80a c530a75c1e6a472b0eb9558310b518f0dfcd8860 df553c056104e3dd8a2bd2e72539a57c4c085bae 62b3d206425c245ed0a020390a64640d40d97471 a396c2549e079ab2f644aab8b2e7f47a8d0e3937
94629 fail 9e3f55f5045c542a48c560c503f949b8e80adcf4 c530a75c1e6a472b0eb9558310b518f0dfcd8860 df553c056104e3dd8a2bd2e72539a57c4c085bae 62b3d206425c245ed0a020390a64640d40d97471 bab2bd8e222de9e596699ac080ea985af828c4c4
94653 fail 1c767107ef341cdc080d44d3ee1c9ca1b6957ce0 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e4ceb77cf88bc44f0b7fe39225c49d660735f327 62b3d206425c245ed0a020390a64640d40d97471 1542efcea893df874b13b1ea78101e1ff6a55c41
94647 pass 1c767107ef341cdc080d44d3ee1c9ca1b6957ce0 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e4ceb77cf88bc44f0b7fe39225c49d660735f327 62b3d206425c245ed0a020390a64640d40d97471 a5adcee740df2679cf6828535279d8f8cbe2eff1
94639 fail c977650a67e6ca6c3cff9548b031d072d00db80a c530a75c1e6a472b0eb9558310b518f0dfcd8860 df553c056104e3dd8a2bd2e72539a57c4c085bae 62b3d206425c245ed0a020390a64640d40d97471 a396c2549e079ab2f644aab8b2e7f47a8d0e3937
94689 fail 1c767107ef341cdc080d44d3ee1c9ca1b6957ce0 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e4ceb77cf88bc44f0b7fe39225c49d660735f327 62b3d206425c245ed0a020390a64640d40d97471 1542efcea893df874b13b1ea78101e1ff6a55c41
94664 pass 1c767107ef341cdc080d44d3ee1c9ca1b6957ce0 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e4ceb77cf88bc44f0b7fe39225c49d660735f327 62b3d206425c245ed0a020390a64640d40d97471 c32381352cce9744e640bf239d2704dae4af4fc7
94673 fail 1c767107ef341cdc080d44d3ee1c9ca1b6957ce0 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e4ceb77cf88bc44f0b7fe39225c49d660735f327 62b3d206425c245ed0a020390a64640d40d97471 1542efcea893df874b13b1ea78101e1ff6a55c41
94675 pass 1c767107ef341cdc080d44d3ee1c9ca1b6957ce0 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e4ceb77cf88bc44f0b7fe39225c49d660735f327 62b3d206425c245ed0a020390a64640d40d97471 c32381352cce9744e640bf239d2704dae4af4fc7
94677 fail 1c767107ef341cdc080d44d3ee1c9ca1b6957ce0 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e4ceb77cf88bc44f0b7fe39225c49d660735f327 62b3d206425c245ed0a020390a64640d40d97471 1542efcea893df874b13b1ea78101e1ff6a55c41
94680 pass 1c767107ef341cdc080d44d3ee1c9ca1b6957ce0 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e4ceb77cf88bc44f0b7fe39225c49d660735f327 62b3d206425c245ed0a020390a64640d40d97471 c32381352cce9744e640bf239d2704dae4af4fc7
Searching for interesting versions
Result found: flight 93563 (pass), for basis pass
Result found: flight 94610 (fail), for basis failure
Repro found: flight 94615 (pass), for basis pass
Repro found: flight 94616 (fail), for basis failure
0 revisions at 1c767107ef341cdc080d44d3ee1c9ca1b6957ce0 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e4ceb77cf88bc44f0b7fe39225c49d660735f327 62b3d206425c245ed0a020390a64640d40d97471 c32381352cce9744e640bf239d2704dae4af4fc7
No revisions left to test, checking graph state.
Result found: flight 94664 (pass), for last pass
Result found: flight 94673 (fail), for first failure
Repro found: flight 94675 (pass), for last pass
Repro found: flight 94677 (fail), for first failure
Repro found: flight 94680 (pass), for last pass
Repro found: flight 94689 (fail), for first failure
*** Found and reproduced problem changeset ***
Bug is in tree: xen git://xenbits.xen.org/xen.git
Bug introduced: 1542efcea893df874b13b1ea78101e1ff6a55c41
Bug not present: c32381352cce9744e640bf239d2704dae4af4fc7
Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/94689/
commit 1542efcea893df874b13b1ea78101e1ff6a55c41
Author: Wei Liu <wei.liu2@citrix.com>
Date: Wed May 18 11:48:25 2016 +0100
Config.mk: update ovmf changeset
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
pnmtopng: 245 colors found
Revision graph left in /home/logs/results/bisect/xen-unstable/test-amd64-amd64-xl-qemuu-ovmf-amd64.guest-start--debianhvm.repeat.{dot,ps,png,html,svg}.
----------------------------------------
94689: tolerable ALL FAIL
flight 94689 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/94689/
Failures :-/ but no regressions.
Tests which did not succeed,
including tests which could not be run:
test-amd64-amd64-xl-qemuu-ovmf-amd64 17 guest-start/debianhvm.repeat fail baseline untested
jobs:
test-amd64-amd64-xl-qemuu-ovmf-amd64 fail
------------------------------------------------------------
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
* Re: [xen-unstable bisection] complete test-amd64-amd64-xl-qemuu-ovmf-amd64
2016-05-22 5:37 [xen-unstable bisection] complete test-amd64-amd64-xl-qemuu-ovmf-amd64 osstest service owner
@ 2016-05-23 10:03 ` Wei Liu
2016-05-23 10:16 ` Wei Liu
2016-05-24 17:50 ` Wei Liu
1 sibling, 1 reply; 7+ messages in thread
From: Wei Liu @ 2016-05-23 10:03 UTC (permalink / raw)
To: osstest service owner
Cc: Andrew Cooper, xen-devel, Wei Liu, Ian Jackson, Jan Beulich
On Sun, May 22, 2016 at 05:37:51AM +0000, osstest service owner wrote:
> branch xen-unstable
> xenbranch xen-unstable
> job test-amd64-amd64-xl-qemuu-ovmf-amd64
> testid guest-start/debianhvm.repeat
>
> Tree: linux git://xenbits.xen.org/linux-pvops.git
> Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
> Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
> Tree: qemuu git://xenbits.xen.org/qemu-xen.git
> Tree: xen git://xenbits.xen.org/xen.git
>
> *** Found and reproduced problem changeset ***
>
> Bug is in tree: xen git://xenbits.xen.org/xen.git
> Bug introduced: 1542efcea893df874b13b1ea78101e1ff6a55c41
> Bug not present: c32381352cce9744e640bf239d2704dae4af4fc7
> Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/94689/
>
>
> commit 1542efcea893df874b13b1ea78101e1ff6a55c41
> Author: Wei Liu <wei.liu2@citrix.com>
> Date: Wed May 18 11:48:25 2016 +0100
>
> Config.mk: update ovmf changeset
>
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
>
Though the ovmf commit passed our ovmf branch tests, those were run on
different hosts. The test history in [0] shows that there is no recent
flight that runs on merlot*.
Given that this is blocking the push gate, and I don't think I can come
up with a fix today, I'm going to revert this patch in xen-unstable.
Hopefully OVMF branch testing will be scheduled on merlot* at some point,
and we can let the bisector kick in to figure out what went wrong.
Wei.
[0] http://logs.test-lab.xenproject.org/osstest/results/history/test-amd64-amd64-xl-qemuu-ovmf-amd64/ovmf
* Re: [xen-unstable bisection] complete test-amd64-amd64-xl-qemuu-ovmf-amd64
2016-05-23 10:03 ` Wei Liu
@ 2016-05-23 10:16 ` Wei Liu
0 siblings, 0 replies; 7+ messages in thread
From: Wei Liu @ 2016-05-23 10:16 UTC (permalink / raw)
To: osstest service owner
Cc: Andrew Cooper, xen-devel, Wei Liu, Ian Jackson, Jan Beulich
On Mon, May 23, 2016 at 11:03:52AM +0100, Wei Liu wrote:
> On Sun, May 22, 2016 at 05:37:51AM +0000, osstest service owner wrote:
> > branch xen-unstable
> > xenbranch xen-unstable
> > job test-amd64-amd64-xl-qemuu-ovmf-amd64
> > testid guest-start/debianhvm.repeat
> >
> > Tree: linux git://xenbits.xen.org/linux-pvops.git
> > Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
> > Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
> > Tree: qemuu git://xenbits.xen.org/qemu-xen.git
> > Tree: xen git://xenbits.xen.org/xen.git
> >
> > *** Found and reproduced problem changeset ***
> >
> > Bug is in tree: xen git://xenbits.xen.org/xen.git
> > Bug introduced: 1542efcea893df874b13b1ea78101e1ff6a55c41
> > Bug not present: c32381352cce9744e640bf239d2704dae4af4fc7
> > Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/94689/
> >
> >
> > commit 1542efcea893df874b13b1ea78101e1ff6a55c41
> > Author: Wei Liu <wei.liu2@citrix.com>
> > Date: Wed May 18 11:48:25 2016 +0100
> >
> > Config.mk: update ovmf changeset
> >
> > Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> >
>
> Though the ovmf commit passed our ovmf branch tests, those were run on
> different hosts. The test history in [0] shows that there is no recent
> flight that runs on merlot*.
>
> Given that this is blocking the push gate and I don't think I can come
> up with a fix today, I'm going to revert this patch in xen-unstable.
>
Now done.
* Re: [xen-unstable bisection] complete test-amd64-amd64-xl-qemuu-ovmf-amd64
2016-05-22 5:37 [xen-unstable bisection] complete test-amd64-amd64-xl-qemuu-ovmf-amd64 osstest service owner
2016-05-23 10:03 ` Wei Liu
@ 2016-05-24 17:50 ` Wei Liu
2016-05-25 10:10 ` Wei Liu
1 sibling, 1 reply; 7+ messages in thread
From: Wei Liu @ 2016-05-24 17:50 UTC (permalink / raw)
To: osstest service owner; +Cc: xen-devel, Wei Liu
On Sun, May 22, 2016 at 05:37:51AM +0000, osstest service owner wrote:
> branch xen-unstable
> xenbranch xen-unstable
> job test-amd64-amd64-xl-qemuu-ovmf-amd64
> testid guest-start/debianhvm.repeat
>
> Tree: linux git://xenbits.xen.org/linux-pvops.git
> Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
> Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
> Tree: qemuu git://xenbits.xen.org/qemu-xen.git
> Tree: xen git://xenbits.xen.org/xen.git
>
> *** Found and reproduced problem changeset ***
>
> Bug is in tree: xen git://xenbits.xen.org/xen.git
> Bug introduced: 1542efcea893df874b13b1ea78101e1ff6a55c41
> Bug not present: c32381352cce9744e640bf239d2704dae4af4fc7
> Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/94689/
>
>
> commit 1542efcea893df874b13b1ea78101e1ff6a55c41
> Author: Wei Liu <wei.liu2@citrix.com>
> Date: Wed May 18 11:48:25 2016 +0100
>
> Config.mk: update ovmf changeset
>
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
>
I did some tests and analysis today.
* Manual tests
Seconds from starting a guest to receiving a ping reply, tested three times:
xl create guest.cfg ; \
s=`date +%s`; date --date="@$s"; \
while true; do \
    ping -c 1 -q -W 1 172.16.147.190 >/dev/null 2>&1; \
    if [ $? = 0 ]; then break; fi; \
done; \
e=`date +%s`; date --date="@$e"; \
expr $e - $s
                       merlot0          tg06
old ovmf, 5000mb ram    98  99  96      33 32 31
old ovmf,  768mb ram    97 100 100      31 31 32
new ovmf, 5000mb ram   158 158 157      25 26 25
new ovmf,  768mb ram   151 156 160      26 25 25
Old ovmf refers to 52a99493 (currently in master).
New ovmf refers to b41ef325 (the fingered one).
tg06 and merlot0 have the same changeset, git:983aae0.
Note that the guest run on tg06 has a different version of Debian, so it is
not really comparable to the guest on merlot0. Also note that we can't
extrapolate from my manual tests whether osstest will or will not see a
timeout on merlot0, because the testing technique is not the same.
The conclusions are: the results are consistent, and the guest
memory size doesn't affect the time taken to start the guest.
* Osstest report
Pick the ovmf flight that tested the fingered changeset:
http://logs.test-lab.xenproject.org/osstest/logs/94519/test-amd64-amd64-xl-qemuu-ovmf-amd64/17.ts-repeat-test.log
2016-05-18 05:40:26 Z guest debianhvm.guest.osstest 5a:36:0e:37:00:01 22 link/ip/tcp: ok. (185s)
2016-05-18 05:44:13 Z guest debianhvm.guest.osstest 5a:36:0e:37:00:01 22 link/ip/tcp: ok. (184s)
2016-05-18 05:47:55 Z guest debianhvm.guest.osstest 5a:36:0e:37:00:01 22 link/ip/tcp: ok. (184s)
...
The timeout for checking whether a guest is up is 200 seconds, so 180 seconds
should be fine.
The new ovmf failure reported by the bisector is that the controller timed out
while checking whether the guest was up.
http://logs.test-lab.xenproject.org/osstest/logs/94689/test-amd64-amd64-xl-qemuu-ovmf-amd64/17.ts-repeat-test.log
The old ovmf passed on merlot0:
http://logs.test-lab.xenproject.org/osstest/logs/94680/test-amd64-amd64-xl-qemuu-ovmf-amd64/17.ts-repeat-test.log
2016-05-22 02:49:57 Z guest debianhvm.guest.osstest 5a:36:0e:d8:00:01 22 link/ip/tcp: ok. (141s)
The old ovmf passed on another machine:
http://logs.test-lab.xenproject.org/osstest/logs/94580/test-amd64-amd64-xl-qemuu-ovmf-amd64/17.ts-repeat-test.log
2016-05-19 22:45:39 Z guest debianhvm.guest.osstest 5a:36:0e:74:00:3c 22 link/ip/tcp: ok. (122s)
These two numbers suggest that merlot is slower than the other machine.
Picking one of the recent test reports for OVMF:
http://logs.test-lab.xenproject.org/osstest/logs/94739/test-amd64-amd64-xl-qemuu-ovmf-amd64/17.ts-repeat-test.log
The same metric (guest creation to guest responding to network traffic) is a
lot shorter (on a non-merlot machine):
2016-05-24 14:21:59 Z guest debianhvm.guest.osstest 5a:36:0e:13:00:02 22 link/ip/tcp: ok. (49s)
2016-05-24 14:23:28 Z guest debianhvm.guest.osstest 5a:36:0e:13:00:02 22 link/ip/tcp: ok. (49s)
2016-05-24 14:25:03 Z guest debianhvm.guest.osstest 5a:36:0e:13:00:02 22 link/ip/tcp: ok. (48s)
2016-05-24 14:26:45 Z guest debianhvm.guest.osstest 5a:36:0e:13:00:02 22 link/ip/tcp: ok. (48s)
...
It looks like it's getting better.
I don't have a conclusion on this issue because I can't eliminate all the
variables. I'm inclined to push a newer ovmf changeset to see what happens,
because:
1. merlot is slower than the other machine; the time difference is about 20s.
2. new ovmf on the other machine already takes ~180s to come up (less than 20s
short of the 200s timeout).
3. the time taken to come up seems to be getting shorter, though I couldn't see
why when I skimmed the ovmf changelog.
Wei.
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
* Re: [xen-unstable bisection] complete test-amd64-amd64-xl-qemuu-ovmf-amd64
2016-05-24 17:50 ` Wei Liu
@ 2016-05-25 10:10 ` Wei Liu
2016-05-25 10:29 ` Anthony PERARD
0 siblings, 1 reply; 7+ messages in thread
From: Wei Liu @ 2016-05-25 10:10 UTC (permalink / raw)
To: osstest service owner; +Cc: xen-devel, Wei Liu
On Tue, May 24, 2016 at 06:50:23PM +0100, Wei Liu wrote:
> On Sun, May 22, 2016 at 05:37:51AM +0000, osstest service owner wrote:
> > branch xen-unstable
> > xenbranch xen-unstable
> > job test-amd64-amd64-xl-qemuu-ovmf-amd64
> > testid guest-start/debianhvm.repeat
> >
> > Tree: linux git://xenbits.xen.org/linux-pvops.git
> > Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
> > Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
> > Tree: qemuu git://xenbits.xen.org/qemu-xen.git
> > Tree: xen git://xenbits.xen.org/xen.git
> >
> > *** Found and reproduced problem changeset ***
> >
> > Bug is in tree: xen git://xenbits.xen.org/xen.git
> > Bug introduced: 1542efcea893df874b13b1ea78101e1ff6a55c41
> > Bug not present: c32381352cce9744e640bf239d2704dae4af4fc7
> > Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/94689/
> >
> >
> > commit 1542efcea893df874b13b1ea78101e1ff6a55c41
> > Author: Wei Liu <wei.liu2@citrix.com>
> > Date: Wed May 18 11:48:25 2016 +0100
> >
> > Config.mk: update ovmf changeset
> >
> > Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> >
>
> I did some tests and analysis today.
>
> * Manual tests
>
> Seconds from starting a guest to receiving a ping reply; each configuration
> tested three times:
>
> xl create guest.cfg ;\
> s=`date +%s`; date --date="@$s"; \
> while true; do \
>     ping -c 1 -q -W 1 172.16.147.190 >/dev/null 2>&1;\
>     if [ $? = 0 ]; then break; fi ;\
> done;\
> e=`date +%s`; date --date="@$e";\
> expr $e - $s
>
>                        merlot0            tg06
> old ovmf, 5000MB RAM    98  99  96       33 32 31
> old ovmf,  768MB RAM    97 100 100       31 31 32
> new ovmf, 5000MB RAM   158 158 157       25 26 25
> new ovmf,  768MB RAM   151 156 160       26 25 25
>
> Old ovmf refers to 52a99493 (currently in master).
> New ovmf refers to b41ef325 (the fingered one).
>
> tg06 and merlot0 run the same changeset, git:983aae0.
>
> Note that the guest running on tg06 has a different version of Debian, so it is
> not directly comparable to the guest on merlot0. Also note that we can't
> extrapolate from my manual tests whether osstest will or will not see a timeout
> on merlot0, because the testing technique is not the same.
>
> The conclusions are: the results are consistent, and guest memory size doesn't
> affect the time taken to start the guest.
>
> * Osstest report
>
> Pick the ovmf flight that tested the fingered changeset:
>
> http://logs.test-lab.xenproject.org/osstest/logs/94519/test-amd64-amd64-xl-qemuu-ovmf-amd64/17.ts-repeat-test.log
>
> 2016-05-18 05:40:26 Z guest debianhvm.guest.osstest 5a:36:0e:37:00:01 22 link/ip/tcp: ok. (185s)
> 2016-05-18 05:44:13 Z guest debianhvm.guest.osstest 5a:36:0e:37:00:01 22 link/ip/tcp: ok. (184s)
> 2016-05-18 05:47:55 Z guest debianhvm.guest.osstest 5a:36:0e:37:00:01 22 link/ip/tcp: ok. (184s)
> ...
>
> The timeout for checking whether a guest is up is 200 seconds, so ~185 seconds
> should be fine.
>
> The new ovmf failure reported by the bisector is that the controller timed out
> while checking whether the guest was up.
>
> http://logs.test-lab.xenproject.org/osstest/logs/94689/test-amd64-amd64-xl-qemuu-ovmf-amd64/17.ts-repeat-test.log
>
> The old ovmf passed on merlot0:
>
> http://logs.test-lab.xenproject.org/osstest/logs/94680/test-amd64-amd64-xl-qemuu-ovmf-amd64/17.ts-repeat-test.log
> 2016-05-22 02:49:57 Z guest debianhvm.guest.osstest 5a:36:0e:d8:00:01 22 link/ip/tcp: ok. (141s)
>
> The old ovmf passed on another machine:
>
> http://logs.test-lab.xenproject.org/osstest/logs/94580/test-amd64-amd64-xl-qemuu-ovmf-amd64/17.ts-repeat-test.log
>
> 2016-05-19 22:45:39 Z guest debianhvm.guest.osstest 5a:36:0e:74:00:3c 22 link/ip/tcp: ok. (122s)
>
> These two numbers suggest that merlot is slower than the other machine.
>
> Picking one of the recent test reports for OVMF:
>
> http://logs.test-lab.xenproject.org/osstest/logs/94739/test-amd64-amd64-xl-qemuu-ovmf-amd64/17.ts-repeat-test.log
>
> The same metric (guest creation to guest responding to network traffic) is a
> lot shorter (on a non-merlot machine):
>
> 2016-05-24 14:21:59 Z guest debianhvm.guest.osstest 5a:36:0e:13:00:02 22 link/ip/tcp: ok. (49s)
> 2016-05-24 14:23:28 Z guest debianhvm.guest.osstest 5a:36:0e:13:00:02 22 link/ip/tcp: ok. (49s)
> 2016-05-24 14:25:03 Z guest debianhvm.guest.osstest 5a:36:0e:13:00:02 22 link/ip/tcp: ok. (48s)
> 2016-05-24 14:26:45 Z guest debianhvm.guest.osstest 5a:36:0e:13:00:02 22 link/ip/tcp: ok. (48s)
> ...
>
> It looks like it's getting better.
>
> I don't have a conclusion on this issue because I can't eliminate all the
> variables. I'm inclined to push a newer ovmf changeset to see what happens,
> because:
>
> 1. merlot is slower than the other machine; the time difference is about 20s.
> 2. new ovmf on the other machine already takes ~180s to come up (less than 20s
>    short of the 200s timeout).
> 3. the time taken to come up seems to be getting shorter, though I couldn't see
>    why when I skimmed the ovmf changelog.
I'm going to hold off on that attempt because the latest ovmf flight was
scheduled on merlot1 and failed.
http://logs.test-lab.xenproject.org/osstest/logs/94753/
Wei.
>
> Wei.
* Re: [xen-unstable bisection] complete test-amd64-amd64-xl-qemuu-ovmf-amd64
2016-05-25 10:10 ` Wei Liu
@ 2016-05-25 10:29 ` Anthony PERARD
0 siblings, 0 replies; 7+ messages in thread
From: Anthony PERARD @ 2016-05-25 10:29 UTC (permalink / raw)
To: Wei Liu; +Cc: xen-devel, osstest service owner
On Wed, May 25, 2016 at 11:10:21AM +0100, Wei Liu wrote:
> On Tue, May 24, 2016 at 06:50:23PM +0100, Wei Liu wrote:
> > On Sun, May 22, 2016 at 05:37:51AM +0000, osstest service owner wrote:
> > > branch xen-unstable
> > > xenbranch xen-unstable
> > > job test-amd64-amd64-xl-qemuu-ovmf-amd64
> > > testid guest-start/debianhvm.repeat
> > >
> > > Tree: linux git://xenbits.xen.org/linux-pvops.git
> > > Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
> > > Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
> > > Tree: qemuu git://xenbits.xen.org/qemu-xen.git
> > > Tree: xen git://xenbits.xen.org/xen.git
> > >
> > > *** Found and reproduced problem changeset ***
> > >
> > > Bug is in tree: xen git://xenbits.xen.org/xen.git
> > > Bug introduced: 1542efcea893df874b13b1ea78101e1ff6a55c41
> > > Bug not present: c32381352cce9744e640bf239d2704dae4af4fc7
> > > Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/94689/
> > >
> > >
> > > commit 1542efcea893df874b13b1ea78101e1ff6a55c41
> > > Author: Wei Liu <wei.liu2@citrix.com>
> > > Date: Wed May 18 11:48:25 2016 +0100
> > >
> > > Config.mk: update ovmf changeset
> > >
> > > Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> > >
> >
> > I did some tests and analysis today.
> >
> > * Manual tests
> >
> > Seconds from starting a guest to receiving a ping reply; each configuration
> > tested three times:
> >
> > xl create guest.cfg ;\
> > s=`date +%s`; date --date="@$s"; \
> > while true; do \
> >     ping -c 1 -q -W 1 172.16.147.190 >/dev/null 2>&1;\
> >     if [ $? = 0 ]; then break; fi ;\
> > done;\
> > e=`date +%s`; date --date="@$e";\
> > expr $e - $s
> >
> >                        merlot0            tg06
> > old ovmf, 5000MB RAM    98  99  96       33 32 31
> > old ovmf,  768MB RAM    97 100 100       31 31 32
> > new ovmf, 5000MB RAM   158 158 157       25 26 25
> > new ovmf,  768MB RAM   151 156 160       26 25 25
> >
> > Old ovmf refers to 52a99493 (currently in master).
> > New ovmf refers to b41ef325 (the fingered one).
> >
> > tg06 and merlot0 run the same changeset, git:983aae0.
> >
> > Note that the guest running on tg06 has a different version of Debian, so it is
> > not directly comparable to the guest on merlot0. Also note that we can't
> > extrapolate from my manual tests whether osstest will or will not see a timeout
> > on merlot0, because the testing technique is not the same.
> >
> > The conclusions are: the results are consistent, and guest memory size doesn't
> > affect the time taken to start the guest.
> >
> > * Osstest report
> >
> > Pick the ovmf flight that tested the fingered changeset:
> >
> > http://logs.test-lab.xenproject.org/osstest/logs/94519/test-amd64-amd64-xl-qemuu-ovmf-amd64/17.ts-repeat-test.log
> >
> > 2016-05-18 05:40:26 Z guest debianhvm.guest.osstest 5a:36:0e:37:00:01 22 link/ip/tcp: ok. (185s)
> > 2016-05-18 05:44:13 Z guest debianhvm.guest.osstest 5a:36:0e:37:00:01 22 link/ip/tcp: ok. (184s)
> > 2016-05-18 05:47:55 Z guest debianhvm.guest.osstest 5a:36:0e:37:00:01 22 link/ip/tcp: ok. (184s)
> > ...
> >
> > The timeout for checking whether a guest is up is 200 seconds, so ~185 seconds
> > should be fine.
> >
> > The new ovmf failure reported by the bisector is that the controller timed out
> > while checking whether the guest was up.
> >
> > http://logs.test-lab.xenproject.org/osstest/logs/94689/test-amd64-amd64-xl-qemuu-ovmf-amd64/17.ts-repeat-test.log
> >
> > The old ovmf passed on merlot0:
> >
> > http://logs.test-lab.xenproject.org/osstest/logs/94680/test-amd64-amd64-xl-qemuu-ovmf-amd64/17.ts-repeat-test.log
> > 2016-05-22 02:49:57 Z guest debianhvm.guest.osstest 5a:36:0e:d8:00:01 22 link/ip/tcp: ok. (141s)
> >
> > The old ovmf passed on another machine:
> >
> > http://logs.test-lab.xenproject.org/osstest/logs/94580/test-amd64-amd64-xl-qemuu-ovmf-amd64/17.ts-repeat-test.log
> >
> > 2016-05-19 22:45:39 Z guest debianhvm.guest.osstest 5a:36:0e:74:00:3c 22 link/ip/tcp: ok. (122s)
> >
> > These two numbers suggest that merlot is slower than the other machine.
> >
> > Picking one of the recent test reports for OVMF:
> >
> > http://logs.test-lab.xenproject.org/osstest/logs/94739/test-amd64-amd64-xl-qemuu-ovmf-amd64/17.ts-repeat-test.log
> >
> > The same metric (guest creation to guest responding to network traffic) is a
> > lot shorter (on a non-merlot machine):
> >
> > 2016-05-24 14:21:59 Z guest debianhvm.guest.osstest 5a:36:0e:13:00:02 22 link/ip/tcp: ok. (49s)
> > 2016-05-24 14:23:28 Z guest debianhvm.guest.osstest 5a:36:0e:13:00:02 22 link/ip/tcp: ok. (49s)
> > 2016-05-24 14:25:03 Z guest debianhvm.guest.osstest 5a:36:0e:13:00:02 22 link/ip/tcp: ok. (48s)
> > 2016-05-24 14:26:45 Z guest debianhvm.guest.osstest 5a:36:0e:13:00:02 22 link/ip/tcp: ok. (48s)
> > ...
> >
> > It looks like it's getting better.
> >
> > I don't have a conclusion on this issue because I can't eliminate all the
> > variables. I'm inclined to push a newer ovmf changeset to see what happens,
> > because:
> >
> > 1. merlot is slower than the other machine; the time difference is about 20s.
> > 2. new ovmf on the other machine already takes ~180s to come up (less than 20s
> >    short of the 200s timeout).
> > 3. the time taken to come up seems to be getting shorter, though I couldn't see
> >    why when I skimmed the ovmf changelog.
>
> I'm going to hold off on that attempt because the latest ovmf flight was
> scheduled on merlot1 and failed.
>
> http://logs.test-lab.xenproject.org/osstest/logs/94753/
In my experience, OVMF takes a lot longer to start on an AMD box than on an
Intel one. I think the time is spent on self-decompression, so very early in
boot.
--
Anthony PERARD
2016-05-22 5:37 [xen-unstable bisection] complete test-amd64-amd64-xl-qemuu-ovmf-amd64 osstest service owner
2016-05-23 10:03 ` Wei Liu
2016-05-23 10:16 ` Wei Liu
2016-05-24 17:50 ` Wei Liu
2016-05-25 10:10 ` Wei Liu
2016-05-25 10:29 ` Anthony PERARD