From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: Roger Pau Monne <roger.pau@citrix.com>,
Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
xen-devel@lists.xensource.com, Wei Liu <Wei.Liu2@citrix.com>,
osstest service owner <osstest-admin@xenproject.org>,
Jan Beulich <JBeulich@suse.com>
Subject: Re: Removing PVHv1 code
Date: Tue, 21 Feb 2017 08:51:55 -0500
Message-ID: <007543dd-0d77-c5bd-b88e-2f918ab4fac1@oracle.com>
In-Reply-To: <20170220104224.io4mwyoy2mpg6sks@dhcp-3-221.uk.xensource.com>
On 02/20/2017 05:42 AM, Roger Pau Monne wrote:
> On Mon, Feb 20, 2017 at 12:20:10AM +0000, Andrew Cooper wrote:
>> From
>> http://logs.test-lab.xenproject.org/osstest/logs/105917/test-amd64-amd64-xl-pvh-intel/serial-fiano0.log
>> around Feb 19 23:12:06.269706
>>
>> (XEN) ----[ Xen-4.9-unstable x86_64 debug=y Not tainted ]----
>> (XEN) CPU: 2
>> (XEN) RIP: e008:[<ffff82d08016795a>] domain.c#__context_switch+0x1a3/0x3e3
>> (XEN) RFLAGS: 0000000000010046 CONTEXT: hypervisor (d1v0)
>> (XEN) rax: 0000000000000000 rbx: 0000000000000002 rcx: 0000000000000000
>> (XEN) rdx: 00000031fd44b600 rsi: 0000000000000003 rdi: ffff83007de27000
>> (XEN) rbp: ffff83027d78fdb0 rsp: ffff83027d78fd60 r8: 0000000000000000
>> (XEN) r9: 0000005716f6126f r10: 0000000000007ff0 r11: 0000000000000246
>> (XEN) r12: ffff83007de27000 r13: ffff83027fb74000 r14: ffff83007dafd000
>> (XEN) r15: ffff83027d7c8000 cr0: 000000008005003b cr4: 00000000001526e0
>> (XEN) cr3: 000000007dd05000 cr2: 0000000000000008
>> (XEN) ds: 002b es: 002b fs: 0000 gs: 0000 ss: e010 cs: e008
>> (XEN) Xen code around <ffff82d08016795a> (domain.c#__context_switch+0x1a3/0x3e3):
>> (XEN) 85 68 07 00 00 4c 89 e7 <ff> 50 08 4c 89 ef e8 36 e1 02 00 41 80 bd 78 08
>> (XEN) Xen stack trace from rsp=ffff83027d78fd60:
>> (XEN) ffff83027d78ffff 0000000000000003 0000000000000000 0000000000000000
>> (XEN) 0000000000000000 ffff83007de27000 ffff83007dafd000 ffff83027fb74000
>> (XEN) 0000000000000002 ffff83027d7c8000 ffff83027d78fe20 ffff82d08016bf1f
>> (XEN) ffff82d080131ae2 ffff83027d78fde0 0000000000000000 0000000000000000
>> (XEN) 0000000000000000 0000000000000000 ffff83027d78fe20 ffff83007dafd000
>> (XEN) ffff83007de27000 0000005716f5e5da ffff83027d796148 0000000000000001
>> (XEN) ffff83027d78feb0 ffff82d08012def9 ffff83027d7955a0 ffff83027d796160
>> (XEN) 0000000200000004 ffff83027d796140 ffff83027d78fe70 ffff82d08014af39
>> (XEN) ffff83027d78fe70 ffff83007de27000 0000000001c9c380 ffff82d0801bf800
>> (XEN) 000000107dafd000 ffff82d080322b80 ffff82d080322a80 ffffffffffffffff
>> (XEN) ffff83027d78ffff ffff83027d780000 ffff83027d78fee0 ffff82d08013128f
>> (XEN) ffff83027d78ffff ffff83007dd4c000 ffff83027d7c8000 00000000ffffffff
>> (XEN) ffff83027d78fef0 ffff82d0801312e4 ffff83027d78ff10 ffff82d080167582
>> (XEN) ffff82d0801312e4 ffff83007dafd000 ffff83027d78fdc8 0000000000000000
>> (XEN) 0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN) 0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN) 0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN) ffffffff82374000 0000000000000000 0000000000000000 ffffffff81f59180
>> (XEN) 0000000000000000 0000000000000200 ffffffff82390000 0000000000000000
>> (XEN) 0000000000000000 02ffff8000000000 0000000000000000 0000000000000000
>> (XEN) Xen call trace:
>> (XEN) [<ffff82d08016795a>] domain.c#__context_switch+0x1a3/0x3e3
>> (XEN) [<ffff82d08016bf1f>] context_switch+0x147/0xf0d
>> (XEN) [<ffff82d08012def9>] schedule.c#schedule+0x5ba/0x615
>> (XEN) [<ffff82d08013128f>] softirq.c#__do_softirq+0x7f/0x8a
>> (XEN) [<ffff82d0801312e4>] do_softirq+0x13/0x15
>> (XEN) [<ffff82d080167582>] domain.c#idle_loop+0x55/0x62
>> (XEN)
>> (XEN) Pagetable walk from 0000000000000008:
>> (XEN) L4[0x000] = 000000027d7cd063 ffffffffffffffff
>> (XEN) L3[0x000] = 000000027d7cc063 ffffffffffffffff
>> (XEN) L2[0x000] = 000000027d7cb063 ffffffffffffffff
>> (XEN) L1[0x000] = 0000000000000000 ffffffffffffffff
>> (XEN)
>> (XEN) ****************************************
>> (XEN) Panic on CPU 2:
>> (XEN) FATAL PAGE FAULT
>> (XEN) [error_code=0000]
>> (XEN) Faulting linear address: 0000000000000008
>> (XEN) ****************************************
>> (XEN)
>>
>> We have followed the ->to() hook on a domain with a NULL ctxt_switch
>> pointer (confirmed by the disassembly: the faulting instruction is a
>> call through offset 8 of a NULL pointer, matching cr2 = 8). n is
>> derived from current, which is d1v0, but that would mean we are trying
>> to schedule a vcpu before its domain structure has been fully
>> constructed.
>>
>> The problem is with hvm_domain_initialise()
>>
>> int hvm_domain_initialise(struct domain *d)
>> {
>>     ...
>>     if ( is_pvh_domain(d) )
>>     {
>>         register_portio_handler(d, 0, 0x10003, handle_pvh_io);
>>         return 0;
>>     }
>>     ...
>>     rc = hvm_funcs.domain_initialise(d);
>>     ...
>> }
>>
>> So PVH domains return from hvm_domain_initialise() before the
>> vendor-specific initialisation hooks are ever called.
>>
>> Rather than fixing this specific issue, can I suggest we properly kill
>> PVH v1 at this point? Given what else it skips in
>> hvm_domain_initialise(), it clearly hasn't functioned properly in the past.
> I'm completely fine with that. I'm currently in the middle of something else,
> but I can hopefully prepare a patch either later today or tomorrow.
Note also that Linux will drop PVHv1 support in 4.11 --- the patch is in
the staging tree, ready for a pull request, probably this week.
The same pull request will add PVHv2 domU support, so perhaps osstest
should replace 'pvh=1' with 'device_model_version="none"'.
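[Editorial note: an illustrative xl domU config fragment for that switch. Option names are as in xl.cfg(5) of that era; the builder line and the kernel path are assumptions for completeness, not part of the suggestion above.]

```
# PVHv1 (being removed):
#   pvh = 1

# PVHv2 domU, i.e. an HVM guest run without a device model:
builder = "hvm"                    # assumed; PVHv2 used the HVM builder then
device_model_version = "none"
kernel = "/path/to/vmlinuz"        # placeholder path
```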
-boris
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
Thread overview: 6+ messages
2017-02-19 23:20 [linux-linus bisection] complete test-amd64-amd64-xl-pvh-intel osstest service owner
2017-02-20 0:20 ` Andrew Cooper
2017-02-20 0:26 ` Andrew Cooper
2017-02-20 0:36 ` Andrew Cooper
2017-02-20 10:42 ` Removing PVHv1 code (was: Re: [linux-linus bisection] complete test-amd64-amd64-xl-pvh-intel) Roger Pau Monne
2017-02-21 13:51 ` Boris Ostrovsky [this message]