From: osstest service owner <osstest-admin@xenproject.org>
To: xen-devel@lists.xensource.com, osstest-admin@xenproject.org
Subject: [xen-unstable bisection] complete test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm
Date: Wed, 01 Mar 2017 23:53:52 +0000
Message-ID: <E1cjE48-00007j-JB@osstest.test-lab.xenproject.org>
branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm
testid xen-boot
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
*** Found and reproduced problem changeset ***
Bug is in tree: xen git://xenbits.xen.org/xen.git
Bug introduced: c5b9805bc1f793177779ae342c65fcc201a15a47
Bug not present: b199c44afa3a0d18d0e968e78a590eb9e69e20ad
Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/106324/
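For anyone wanting to double-check this result outside osstest, a minimal
sketch follows (the build/boot steps are placeholders for whatever
xen-boot test setup is available to you; only the repository URL and the
two revisions come from this report):

    git clone git://xenbits.xen.org/xen.git && cd xen
    # last known-good revision, expected to pass xen-boot:
    git checkout b199c44afa3a0d18d0e968e78a590eb9e69e20ad
    # ... build, install, boot the test host ...
    # first known-bad revision, expected to fail xen-boot:
    git checkout c5b9805bc1f793177779ae342c65fcc201a15a47
    # ... build, install, boot the test host ...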
commit c5b9805bc1f793177779ae342c65fcc201a15a47
Author: Daniel Kiper <daniel.kiper@oracle.com>
Date: Wed Feb 22 14:38:06 2017 +0100
efi: create new early memory allocator
There is a problem with place_string(), which is used as an early memory
allocator. It hands out memory chunks starting from the start symbol and
works its way down. Sadly, this does not work when Xen is loaded via the
multiboot2 protocol, because then start lives at the 1 MiB address and we
must not allocate memory below it. So, I tried to use the mem_lower
address calculated by GRUB2. However, this works only on some machines:
there are machines in the wild (e.g. Dell PowerEdge R820) which use the
first ~640 KiB for boot services code or data... :-(((

Hence, we need a new memory allocator for the Xen EFI boot code that is
quite simple and generic and can be used by place_string() and
efi_arch_allocate_mmap_buffer(). I considered the following solutions:

1) We could use the native EFI allocation functions (e.g. AllocatePool()
or AllocatePages()) to get a memory chunk. However, later (somewhere in
__start_xen()) we would have to copy its contents to a safe place, or
reserve it in the e820 memory map and map it into the Xen virtual address
space. This means that the code referring to the Xen command line, the
loaded modules and the EFI memory map, mostly in __start_xen(), would be
further complicated and diverge from the legacy BIOS case. Additionally,
the first two of those have to be placed below 4 GiB, because their
addresses are stored in the multiboot_info_t structure, whose relevant
members are 32 bits wide.

2) We could statically allocate a memory area somewhere in the Xen image
to be used as a pool for early dynamic allocations. This looks quite
simple. Additionally, it would not depend on EFI at all and could be used
on legacy BIOS platforms too if needed. However, we must choose the size
of this pool carefully: we do not want to increase the Xen binary size or
waste too much memory, but we must fit at least the memory map on x86 EFI
platforms. Even on a small machine, e.g. an IBM System x3550 M2 with
8 GiB RAM, the memory map may contain more than 200 entries, and every
entry on an x86-64 platform is 40 bytes in size, so we need more than
8 KiB for the EFI memory map alone. If we additionally use this pool to
store the Xen and module command lines (needed when xen.efi is executed
as an EFI application), then we should add, I think, about 1 KiB. In that
case, to be on the safe side, we should assume a pool of at least 64 KiB
for early memory allocations, about four times the earlier estimate.
However, during discussion on xen-devel, Jan Beulich suggested that, just
in case, we use a 1 MiB memory pool, as in the original place_string()
implementation. So, let's use 1 MiB as proposed. If we decide we should
not waste the unallocated part of the pool on the running system, we can
mark the region as __initdata and move all required data to dynamically
allocated places somewhere in __start_xen().

2a) We could put the memory pool into the .bss.page_aligned section and
allocate memory chunks starting from the lowest address. After the init
phase we can free the unused portion of the pool, as is done for the
.init.text and .init.data sections. This way we do not need to allocate
any space in the image file, and freeing the unused area of the pool is
very simple.

Solution #2a is the one implemented here, because it is quite simple and
requires a limited number of changes, especially in __start_xen().

The new allocator is quite generic and can be used on ARM platforms too.
It is not enabled on ARM yet due to some missing prerequisites; a list of
them is placed before the ebmalloc code.
Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Doug Goldstein <cardoe@cardoe.com>
Tested-by: Doug Goldstein <cardoe@cardoe.com>
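For illustration, the #2a scheme adopted above boils down to a bump
allocator over a static, page-aligned pool. Below is a minimal standalone
sketch in plain C, compilable on its own; only the ebmalloc name, the
1 MiB pool size and the lowest-address-first policy come from the commit
message, while the section placement, error handling and everything else
are simplified stand-ins for the real Xen code:

    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* 1 MiB pool, as settled on in the commit message.  In Xen it sits
     * in .bss.page_aligned; here it is just a page-aligned static
     * array, so it occupies no space in the image file. */
    #define EBMALLOC_SIZE (1UL << 20)

    static unsigned char ebmalloc_mem[EBMALLOC_SIZE]
        __attribute__((aligned(4096)));
    static size_t ebmalloc_allocated;

    /* Hand out chunks from the lowest address upward, keeping
     * pointer-size alignment.  There is no per-chunk free(); after init
     * the unused tail of the pool can be released wholesale, as is done
     * for .init.text/.init.data. */
    static void *ebmalloc(size_t size)
    {
        size_t chunk = (size + sizeof(void *) - 1) & ~(sizeof(void *) - 1);
        void *ptr = ebmalloc_mem + ebmalloc_allocated;

        if ( chunk > EBMALLOC_SIZE - ebmalloc_allocated )
        {
            fprintf(stderr, "ebmalloc: out of static memory\n");
            exit(1);
        }

        ebmalloc_allocated += chunk;
        return ptr;
    }

    int main(void)
    {
        /* e.g. a >200-entry, 40-bytes-per-entry EFI memory map... */
        void *memmap = ebmalloc(200 * 40);
        /* ...plus ~1 KiB of command line storage */
        char *cmdline = ebmalloc(1024);

        printf("used %zu of %lu bytes (memmap=%p, cmdline=%p)\n",
               ebmalloc_allocated, EBMALLOC_SIZE,
               memmap, (void *)cmdline);
        return 0;
    }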
For bisection revision-tuple graph see:
http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm.xen-boot.html
Revision IDs in each graph node refer, respectively, to the Trees above.
----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm.xen-boot --summary-out=tmp/106324.bisection-summary --basis-template=105933 --blessings=real,real-bisect xen-unstable test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm xen-boot
Searching for failure / basis pass:
106283 fail [host=elbling1] / 106160 [host=italia1] 106138 [host=nocera1] 106122 [host=merlot1] 106102 [host=huxelrebe0] 106081 [host=fiano1] 105994 [host=merlot0] 105966 [host=nobling0] 105946 [host=nocera0] 105933 [host=baroque0] 105919 [host=rimava0] 105900 [host=fiano0] 105896 [host=pinot0] 105873 [host=huxelrebe1] 105861 [host=nobling1] 105840 [host=chardonnay1] 105821 [host=elbling0] 105804 [host=huxelrebe0] 105790 [host=italia1] 105784 [host=chardonnay0] 105766 [host=pinot1] 105756 ok.
Failure / basis pass flights: 106283 / 105756
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8b4834ee1202852ed83a9fc61268c65fb6961ea7 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 8222557798768995e81c53aee3c273ea9503afb5
Basis pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 728e90b41d46c1c1c210ac496204efd51936db75 6f6d3b10ec8168e2a78cf385d89803397f116397
Generating revisions with ./adhoc-revtuple-generator git://xenbits.xen.org/linux-pvops.git#b65f2f457c49b2cfd7967c34b7a0b04c25587f13-b65f2f457c49b2cfd7967c34b7a0b04c25587f13 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#b669e922b37b8957248798a5eb7aa96a666cd3fe-8b4834ee1202852ed83a9fc61268c65fb6961ea7 git://xenbits.xen.org/qemu-xen.git#728e90b41d46c1c1c210ac496204efd51936db75-57e8fbb2f702001a18bd81e9fe31b26d94247ac9 git://xenbits.xen.org/xen.git#6f6d3b10ec8168e2a78cf385d89803397f116397-8222557798768995e81c53aee3c273ea9503afb5
Loaded 7004 nodes in revision graph
Searching for test results:
105707 [host=merlot1]
105728 [host=nocera1]
105790 [host=italia1]
105756 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 728e90b41d46c1c1c210ac496204efd51936db75 6f6d3b10ec8168e2a78cf385d89803397f116397
105742 [host=nobling0]
105784 [host=chardonnay0]
105766 [host=pinot1]
105804 [host=huxelrebe0]
105821 [host=elbling0]
105840 [host=chardonnay1]
105896 [host=pinot0]
105919 [host=rimava0]
105861 [host=nobling1]
105873 [host=huxelrebe1]
105900 [host=fiano0]
105933 [host=baroque0]
105946 [host=nocera0]
105966 [host=nobling0]
105994 [host=merlot0]
106102 [host=huxelrebe0]
106081 [host=fiano1]
106122 [host=merlot1]
106138 [host=nocera1]
106160 [host=italia1]
106289 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 728e90b41d46c1c1c210ac496204efd51936db75 6f6d3b10ec8168e2a78cf385d89803397f116397
106186 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8b4834ee1202852ed83a9fc61268c65fb6961ea7 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 1f24be6c945c8f8e25547aed4a56c092133df713
106218 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8b4834ee1202852ed83a9fc61268c65fb6961ea7 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 12f687bf28e23fa662bb518311c4ec71e5b39ab8
106261 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8b4834ee1202852ed83a9fc61268c65fb6961ea7 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 12f687bf28e23fa662bb518311c4ec71e5b39ab8
106292 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8b4834ee1202852ed83a9fc61268c65fb6961ea7 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 12f687bf28e23fa662bb518311c4ec71e5b39ab8
106299 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 2c31b07ec74a29a81fdc278256c3517ae724f5e9
106283 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8b4834ee1202852ed83a9fc61268c65fb6961ea7 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 8222557798768995e81c53aee3c273ea9503afb5
106293 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe e88462aaa2f19e1238e77c1bcebbab7ef5380d7a 71af7d4220227529ea43b898683d4d2e68a90ffd
106300 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 08c008de9c7d3ac71f71c87cc04a47819ca228dc 78da0c2a7a9c621ba64e515528e11e5f28f15050
106303 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 08c008de9c7d3ac71f71c87cc04a47819ca228dc b908131167a67a16fbe9c7a7826b67e2d93d9ec5
106305 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8b4834ee1202852ed83a9fc61268c65fb6961ea7 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 8222557798768995e81c53aee3c273ea9503afb5
106306 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 24682c89d05c997115555f02ec280f82ea24cef2
106309 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 b199c44afa3a0d18d0e968e78a590eb9e69e20ad
106313 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 c5b9805bc1f793177779ae342c65fcc201a15a47
106317 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 b199c44afa3a0d18d0e968e78a590eb9e69e20ad
106319 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 c5b9805bc1f793177779ae342c65fcc201a15a47
106321 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 b199c44afa3a0d18d0e968e78a590eb9e69e20ad
106324 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 c5b9805bc1f793177779ae342c65fcc201a15a47
Searching for interesting versions
Result found: flight 105756 (pass), for basis pass
Result found: flight 106283 (fail), for basis failure
Repro found: flight 106289 (pass), for basis pass
Repro found: flight 106305 (fail), for basis failure
0 revisions at b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 b199c44afa3a0d18d0e968e78a590eb9e69e20ad
No revisions left to test, checking graph state.
Result found: flight 106309 (pass), for last pass
Result found: flight 106313 (fail), for first failure
Repro found: flight 106317 (pass), for last pass
Repro found: flight 106319 (fail), for first failure
Repro found: flight 106321 (pass), for last pass
Repro found: flight 106324 (fail), for first failure
*** Found and reproduced problem changeset ***
Bug is in tree: xen git://xenbits.xen.org/xen.git
Bug introduced: c5b9805bc1f793177779ae342c65fcc201a15a47
Bug not present: b199c44afa3a0d18d0e968e78a590eb9e69e20ad
Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/106324/
commit c5b9805bc1f793177779ae342c65fcc201a15a47
Author: Daniel Kiper <daniel.kiper@oracle.com>
Date: Wed Feb 22 14:38:06 2017 +0100
efi: create new early memory allocator
(full commit message as quoted above)
pnmtopng: 184 colors found
Revision graph left in /home/logs/results/bisect/xen-unstable/test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm.xen-boot.{dot,ps,png,html,svg}.
----------------------------------------
106324: tolerable ALL FAIL
flight 106324 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/106324/
Failures :-/ but no regressions.
Tests which did not succeed, including tests which could not be run:
test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 6 xen-boot fail baseline untested
jobs:
test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm fail
------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images
Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs
Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master
Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel