From: George Dunlap <george.dunlap@citrix.com>
To: Jan Beulich <JBeulich@suse.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [PATCH RFC 01/12] x86/paging: introduce paging_set_allocation
Date: Wed, 3 Aug 2016 16:28:37 +0100
Message-ID: <8ee96cc2-f914-692e-ac5e-a5d6e387e5bb@citrix.com>
In-Reply-To: <57A229200200007800102581@prv-mh.provo.novell.com>

On 03/08/16 16:25, Jan Beulich wrote:
>>>> On 03.08.16 at 17:11, <George.Dunlap@eu.citrix.com> wrote:
>> On Tue, Aug 2, 2016 at 5:12 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>> On 02.08.16 at 17:49, <roger.pau@citrix.com> wrote:
>>>> On Tue, Aug 02, 2016 at 11:47:22AM +0200, Roger Pau Monne wrote:
>>>>> On Fri, Jul 29, 2016 at 05:47:24PM +0100, Andrew Cooper wrote:
>>>>>> As this is for the construction of dom0, it would be better to take a
>>>>>> preemptible pointer, loop in construct_dom0(), with a
>>>>>> process_pending_softirqs() call in between.
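
(For reference, the shape Andrew describes would be roughly the
following sketch; the exact names and the loop structure are an
illustration of the idea, not the actual patch:

    int rc;

    /* In construct_dom0(): retry the allocation, letting pending
     * softirqs run in between, until the call completes without
     * being preempted. */
    for ( ;; )
    {
        bool preempted = false;

        rc = paging_set_allocation(d, nr_pages, &preempted);
        if ( !preempted )
            break;              /* done (or failed outright) */
        process_pending_softirqs();
    }
)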
>>>>>
>>>>> Now fixed.
>>>>
>>>> Hm, I stand corrected: using hypercall_preempt_check (as all of the
>>>> *_set_allocation functions do) is not safe at this point:
>>>>
>>>> (XEN) ----[ Xen-4.8-unstable  x86_64  debug=y  Tainted:    C  ]----
>>>> (XEN) CPU:    0
>>>> (XEN) RIP:    e008:[<ffff82d08022fd47>] hap.c#local_events_need_delivery+0x27/0x40
>>>> (XEN) RFLAGS: 0000000000010246   CONTEXT: hypervisor
>>>> (XEN) rax: 0000000000000000   rbx: ffff83023f5a5000   rcx: ffff82d080312900
>>>> (XEN) rdx: 0000000000000001   rsi: ffff83023f5a56c8   rdi: ffff8300b213d000
>>>> (XEN) rbp: ffff82d080307cc8   rsp: ffff82d080307cc8   r8:  0180000000000000
>>>> (XEN) r9:  0000000000000000   r10: 0000000000247000   r11: ffff82d08029a5b0
>>>> (XEN) r12: 0000000000000011   r13: 00000000000023ac   r14: ffff82d080307d4c
>>>> (XEN) r15: ffff83023f5a56c8   cr0: 000000008005003b   cr4: 00000000001526e0
>>>> (XEN) cr3: 00000000b20fc000   cr2: 0000000000000000
>>>> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
>>>> (XEN) Xen code around <ffff82d08022fd47> (hap.c#local_events_need_delivery+0x27/0x40):
>>>> (XEN)  0d ad fa ff 48 8b 47 08 <80> 38 00 74 09 80 78 01 00 0f 94 c0 eb 02 31 c0
>>>> (XEN) Xen stack trace from rsp=ffff82d080307cc8:
>>>> (XEN)    ffff82d080307d08 ffff82d08022fc47 0000000000000000 ffff83023f5a5000
>>>> (XEN)    ffff83023f5a5648 0000000000000000 ffff82d080307d4c 0000000000002400
>>>> (XEN)    ffff82d080307d38 ffff82d08020008c 00000000000ffffd ffff8300b1efd000
>>>> (XEN)    ffff83023f5a5000 ffff82d080307d4c ffff82d080307d78 ffff82d0802cad30
>>>> (XEN)    0000000000203000 ffff83023f5a5000 ffff82d0802bf860 0000000000000000
>>>> (XEN)    0000000000000001 ffff83000008bef0 ffff82d080307de8 ffff82d0802c91e0
>>>> (XEN)    ffff82d080307de8 ffff82d080143900 ffff82d080307de8 0000000000000000
>>>> (XEN)    ffff83000008bf00 ffff82d0802eb480 ffff82d080307dc4 ffff82d08028b1cd
>>>> (XEN)    ffff83000008bf00 0000000000000000 0000000000000001 ffff83023f5a5000
>>>> (XEN)    ffff82d080307f08 ffff82d0802bf0c9 0000000000000000 0000000000000000
>>>> (XEN)    0000000000000000 ffff82d080307f18 ffff83000008bee0 0000000000000001
>>>> (XEN)    0000000000000001 0000000000000001 0000000000000000 0000000000100000
>>>> (XEN)    0000000000000001 0000000000247000 ffff83000008bef4 0000000000100000
>>>> (XEN)    ffff830100000000 0000000000247001 0000000000000014 0000000100000000
>>>> (XEN)    ffff8300ffffffec ffff83000008bef0 ffff82d0802e0640 ffff83000008bfb0
>>>> (XEN)    0000000000000000 0000000000000000 0000000000000111 0000000800000000
>>>> (XEN)    000000010000006e 0000000000000003 00000000000002f8 0000000000000000
>>>> (XEN)    00000000ad5c0bd0 0000000000000000 0000000000000001 0000000000000008
>>>> (XEN)    0000000000000000 ffff82d080100073 0000000000000000 0000000000000000
>>>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>>>> (XEN) Xen call trace:
>>>> (XEN)    [<ffff82d08022fd47>] hap.c#local_events_need_delivery+0x27/0x40
>>>> (XEN)    [<ffff82d08022fc47>] hap_set_allocation+0x107/0x130
>>>> (XEN)    [<ffff82d08020008c>] paging_set_allocation+0x4c/0x80
>>>> (XEN)    [<ffff82d0802cad30>] domain_build.c#hvm_setup_p2m+0x70/0x1a0
>>>> (XEN)    [<ffff82d0802c91e0>] domain_build.c#construct_dom0_hvm+0x60/0x120
>>>> (XEN)    [<ffff82d0802bf0c9>] __start_xen+0x1ea9/0x23a0
>>>> (XEN)    [<ffff82d080100073>] __high_start+0x53/0x60
>>>> (XEN)
>>>> (XEN) Pagetable walk from 0000000000000000:
>>>
>>> Sadly you don't make clear which pointer is NULL at that point.
>>
>> From what he says in the following paragraph, it sounds like current is NULL.
> 
> I don't recall us re-setting current to this late in the boot process.
> Even during early boot we set it to a bogus non-NULL value rather
> than NULL.
> 
>>>> I've tried setting current to d->vcpu[0], but that just makes the call to
>>>> hypercall_preempt_check trip an assertion in the scheduler. In any case, I've
>>>> added the preempt parameter to the paging_set_allocation function, but I
>>>> don't plan to use it in the domain builder for the time being. Does that
>>>> sound right?
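
(I.e., presumably something along the lines of

    int paging_set_allocation(struct domain *d, unsigned int pages,
                              bool *preempted);

with the domain builder passing NULL for preempted for the time being;
that's my reading of what's described above, not the actual signature
from the patch.)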
>>>
>>> Not really; huge new latency issues like this shouldn't be
>>> reintroduced; we've been fighting hard to get rid of them (and we
>>> still occasionally find some that no-one had noticed before).
>>
>> You mean latency in processing softirqs?
>>
>> Maybe what we need to do is to make local_events_need_delivery() safe
>> to call at this point by having it return 0 if current is NULL rather
>> than crashing?
> 
> That would have the same effect - no softirq processing, and hence
> possible timing issues on huge systems.

No, local_events_need_delivery() only checks whether the current vcpu
has outstanding virtual interrupts.  The other half of
hypercall_preempt_check() checks for pending softirqs, which doesn't
appear to rely on having current available.
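
(From memory, hypercall_preempt_check() is roughly

    #define hypercall_preempt_check() (unlikely(    \
            softirq_pending(smp_processor_id()) |   \
            local_events_need_delivery()            \
        ))

so only the second half dereferences current.  The guard I have in
mind would be no more than an early

    if ( current == NULL )
        return 0;

at the top of hap.c's local_events_need_delivery(); given your point
that early boot sets current to a bogus non-NULL value rather than
NULL, the test would presumably need to be against that sentinel
instead.  A sketch, anyway, not a tested patch.)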

 -George

> 
> Jan
> 



