From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: xen-devel@lists.xenproject.org,
	David Vrabel <david.vrabel@citrix.com>,
	Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
Subject: Re: POD: soft lockups in dom0 kernel
Date: Mon, 20 Jan 2014 14:39:31 +0000	[thread overview]
Message-ID: <52DD3523.1080402@citrix.com> (raw)
In-Reply-To: <52D7CC3E020000780011435C@nat28.tlf.novell.com>

On 16/01/14 11:10, Jan Beulich wrote:
>>>> On 05.12.13 at 14:55, Dietmar Hahn <dietmar.hahn@ts.fujitsu.com> wrote:
>> When creating a larger (> 50 GB) HVM guest with maxmem > memory, we
>> get soft lockups from time to time.
>>
>> kernel: [  802.084335] BUG: soft lockup - CPU#1 stuck for 22s! [xend:31351]
>>
>> I tracked this down to the call to xc_domain_set_pod_target() and,
>> from there, to p2m_pod_set_mem_target().
>>
>> Unfortunately I can check this only with xen-4.2.2, as I don't have a
>> machine with enough memory for current hypervisors. But the code seems
>> to be nearly the same.
> While I still haven't seen a formal report of this against SLE11,
> attached is a draft patch against the SP3 code base adding manual
> preemption to the hypercall path of privcmd. It is only lightly
> tested, and therefore still has a little debugging code left in it.
> Mind giving it a try (perhaps together with the patch David sent for
> the other issue)? There may still be a need for further preemption
> points in the IOCTL_PRIVCMD_MMAP* handling, but without knowing for
> sure whether that matters to you, I didn't want to add those right
> away.
>
> Jan
>
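
For reference, the ioctl path Dietmar traced this to looks roughly like
the following in mainline (a simplified sketch based on
drivers/xen/privcmd.c, details trimmed).  Xen restarts the long-running
XENMEM_set_pod_target work via hypercall continuations, so the vCPU
keeps re-entering the hypercall and the kernel scheduler never gets a
look-in, which is what trips the soft lockup watchdog:

#include <linux/uaccess.h>
#include <xen/privcmd.h>            /* struct privcmd_hypercall */
#include <asm/xen/hypercall.h>      /* privcmd_call() */

static long privcmd_ioctl_hypercall(void __user *udata)
{
        struct privcmd_hypercall hypercall;

        if (copy_from_user(&hypercall, udata, sizeof(hypercall)))
                return -EFAULT;

        /*
         * May not return for a very long time on large PoD targets:
         * Xen's continuations resume the hypercall, not the kernel.
         */
        return privcmd_call(hypercall.op,
                            hypercall.arg[0], hypercall.arg[1],
                            hypercall.arg[2], hypercall.arg[3],
                            hypercall.arg[4]);
}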

With my Xen 4.4-rc2 testing, these soft lockups are becoming more of a
problem, especially during construction and migration of 128GB guests.

I have been looking at doing a similar patch against mainline.

Having talked it through with David, it seems more sensible to have a
second hypercall page, at which point in_hypercall() becomes
in_preemptable_hypercall().
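
Something like this untested sketch, where preemptable_hypercall_page
is an assumed second, page-aligned area filled with the same stubs as
hypercall_page:

#include <linux/types.h>
#include <linux/ptrace.h>           /* instruction_pointer() */
#include <asm/page.h>               /* PAGE_SIZE */

extern char preemptable_hypercall_page[];   /* assumed new symbol */

/* Was the interrupted context executing in the preemptable page? */
static bool in_preemptable_hypercall(struct pt_regs *regs)
{
        unsigned long ip = instruction_pointer(regs);

        return ip >= (unsigned long)preemptable_hypercall_page &&
               ip <  (unsigned long)preemptable_hypercall_page + PAGE_SIZE;
}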

Any task (even a kernel task) could use the preemptable page rather
than the main hypercall page, and the asm code wouldn't need to care
whether the task was in privcmd.
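
Populating the second page looks easy at least for HVM guests, where
the hypercall MSR writes the stubs into whatever page it is pointed at,
so writing the MSR a second time with the new page's address ought to
work (an untested sketch, assuming that reading of the ABI is right;
the PV case, where the page comes from an ELF note, would need more
thought):

#include <linux/types.h>
#include <linux/init.h>
#include <asm/processor.h>          /* cpuid() */
#include <asm/msr.h>                /* wrmsr_safe() */
#include <asm/page.h>               /* __pa() */
#include <asm/xen/hypervisor.h>     /* xen_cpuid_base() */

extern char preemptable_hypercall_page[];   /* assumed new symbol */

static void __init init_preemptable_hypercall_page(void)
{
        u32 base = xen_cpuid_base();
        u32 pages, msr, ecx, edx;
        u64 pa = __pa(preemptable_hypercall_page);

        /* Leaf base+2: EAX = number of hypercall pages, EBX = MSR. */
        cpuid(base + 2, &pages, &msr, &ecx, &edx);
        wrmsr_safe(msr, (u32)pa, (u32)(pa >> 32));
}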

This would avoid having to maintain extra state to identify whether the
hypercall was preemptable, and would avoid modifying
evtchn_do_upcall().
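
The interrupt-exit side would then reduce to something like this, using
in_preemptable_hypercall() from the sketch above (again untested;
whether preempt_schedule_irq() is safe at the exact spot this would be
called from needs checking):

#include <linux/sched.h>            /* need_resched() */
#include <linux/preempt.h>          /* preempt_schedule_irq() */
#include <linux/ptrace.h>

/*
 * Hypothetical hook, called on the interrupt-exit path with the
 * interrupted register frame: no per-task or per-cpu state, and
 * evtchn_do_upcall() stays untouched.
 */
void xen_maybe_preempt_hypercall(struct pt_regs *regs)
{
        if (in_preemptable_hypercall(regs) && need_resched())
                preempt_schedule_irq();   /* Xen resumes the continuation */
}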

I shall see about hacking up a patch to this effect.

~Andrew

Thread overview: 12+ messages
2013-12-05 13:55 POD: soft lockups in dom0 kernel Dietmar Hahn
2013-12-06 10:00 ` Jan Beulich
2013-12-06 11:07   ` David Vrabel
2013-12-06 11:30     ` Jan Beulich
2013-12-06 12:00       ` David Vrabel
2013-12-06 13:52         ` Dietmar Hahn
2013-12-06 14:58           ` David Vrabel
2013-12-06 14:50         ` Boris Ostrovsky
2014-01-16 11:10 ` Jan Beulich
2014-01-20 14:39   ` Andrew Cooper [this message]
2014-01-20 15:16     ` Jan Beulich
2014-01-29 14:12   ` Dietmar Hahn
