From: John McCullough <jmccullo@cs.ucsd.edu>
To: Keir Fraser <keir.fraser@eu.citrix.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
Yuvraj Agarwal <yuvraj@cs.ucsd.edu>
Subject: Re: XEN 4.0 + 2.6.31.13 pvops kernel : system crashes on starting 155th domU
Date: Tue, 27 Apr 2010 23:47:30 -0700
Message-ID: <4BD7DA02.3030107@cs.ucsd.edu>
In-Reply-To: <C7FD6DD5.10D5B%keir.fraser@eu.citrix.com>
I did a little testing.
With no kernel option:
# dmesg | grep -i nr_irqs
[ 0.000000] nr_irqs_gsi: 88
[ 0.000000] NR_IRQS:4352 nr_irqs:256
w/nr_irqs=65536:
# dmesg | grep -i nr_irqs
[ 0.000000] Command line: root=/dev/sda1 ro quiet console=hvc0 nr_irqs=65536
[ 0.000000] nr_irqs_gsi: 88
[ 0.000000] Kernel command line: root=/dev/sda1 ro quiet console=hvc0 nr_irqs=65536
[ 0.000000] NR_IRQS:4352 nr_irqs:256
Tweaking the NR_IRQS macro in the kernel changes the NR_IRQS output, but
unfortunately that doesn't change nr_irqs, and I run into the same limit
(36 domUs on a less beefy dual-core machine).
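In case it helps anyone else poking at this, here is roughly how I went
hunting for the definitions (just a sketch, run from the top of the kernel
source tree; exact paths may differ between trees):
# grep -rn "define NR_IRQS" arch/x86/include include
# grep -rn "int nr_irqs" kernel/irq include
The first shows where the compile-time NR_IRQS macro comes from, the second
where the runtime nr_irqs variable gets its value.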
I did find this:
http://blogs.sun.com/fvdl/entry/a_million_vms
which references NR_DYNIRQS; that macro exists in the 2.6.18 tree but not
in the pvops kernel.
Watching /proc/interrupts, the domain IRQs appear to be allocated from
248 downward until they hit some other limit:
...
64: 59104 xen-pirq-ioapic-level ioc0
89: 1 xen-dyn-event evtchn:xenconsoled
90: 1 xen-dyn-event evtchn:xenstored
91: 6 xen-dyn-event vif36.0
92: 140 xen-dyn-event blkif-backend
93: 97 xen-dyn-event evtchn:xenconsoled
94: 139 xen-dyn-event evtchn:xenstored
95: 7 xen-dyn-event vif35.0
96: 301 xen-dyn-event blkif-backend
97: 261 xen-dyn-event evtchn:xenconsoled
98: 145 xen-dyn-event evtchn:xenstored
99: 7 xen-dyn-event vif34.0
...
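A quick way to watch the dynamic IRQ consumption while guests start (just a
sketch, using the same /proc/interrupts names as above):
# grep -c xen-dyn-event /proc/interrupts
# watch -n1 'grep -c xen-dyn-event /proc/interrupts'
The first counts the dynamic event-channel IRQs currently allocated; the
second repeats the count every second so you can see it climb per domU.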
Perhaps the Xen IRQs are being allocated out of the runtime nr_irqs pool
when they could instead be allocated from the larger NR_IRQS pool?
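If each guest really does take four dynamic IRQs (vif, blkif-backend, and
one event channel each for xenconsoled and xenstored, as in the snippet
above), then a back-of-the-envelope estimate, not a measurement: with
nr_irqs = 256, nr_irqs_gsi = 88, and allocation running down from 248,
there is room for roughly (248 - 88) / 4 ~= 40 guests, minus the handful
of dom0 event channels -- which lands right around the 36 domUs where the
dual-core box falls over.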
-John
On 04/27/2010 08:45 PM, Keir Fraser wrote:
> I think nr_irqs is specifiable on the command line on newer kernels. You may
> be able to do nr_irqs=65536 as a kernel boot parameter, or something like
> that, without needing to rebuild the kernel.
>
> -- Keir
>
> On 28/04/2010 02:02, "Yuvraj Agarwal"<yuvraj@cs.ucsd.edu> wrote:
>
>
>> Actually, I did identify the problem (don't know the fix), at least from
>> the console logs. It's related to running out of nr_irqs (attached JPG
>> of the console log).
>>
>>
>> -----Original Message-----
>> From: xen-devel-bounces@lists.xensource.com
>> [mailto:xen-devel-bounces@lists.xensource.com] On Behalf Of Keir Fraser
>> Sent: Tuesday, April 27, 2010 5:44 PM
>> To: Yuvraj Agarwal; xen-devel@lists.xensource.com
>> Subject: Re: [Xen-devel] XEN 4.0 + 2.6.31.13 pvops kernel : system crashes
>> on starting 155th domU
>>
>> On 27/04/2010 08:41, "Yuvraj Agarwal"<yuvraj@cs.ucsd.edu> wrote:
>>
>>
>>> Attached is the output of /var/log/daemon.log and /var/log/xen/xend.log,
>>> but as far as we can see we don't quite know what might be causing the
>>> system to crash (no console access anymore; the system becomes
>>> unresponsive and needs to be power-cycled). I have pasted only the
>>> relevant bits of information (the last domU that did successfully start
>>> and the next one that failed). It may be the case that all the log
>>> messages weren't flushed before the system crashed...
>>>
>>> Does anyone know where this limit of 155 domUs is coming from and how
>>> we can fix/increase it?
>>>
>> Get a serial line on a test box, and capture Xen logging output on it. You
>> can both see if any crash messages come from Xen when the 155th domain is
>> created, and also try the serial debug keys (e.g., try 'h' to get help to
>> start with) to see whether Xen itself is still alive.
>>
>> -- Keir