From: "Yuvraj Agarwal" <yuvraj@cs.ucsd.edu>
To: 'Konrad Rzeszutek Wilk' <konrad.wilk@oracle.com>,
	'John McCullough' <jmccullo@cs.ucsd.edu>
Cc: xen-devel@lists.xensource.com, 'Keir Fraser' <keir.fraser@eu.citrix.com>
Subject: RE: XEN 4.0 + 2.6.31.13 pvops kernel : system crashes on starting 155th domU
Date: Wed, 28 Apr 2010 15:51:19 -0700 (PDT)
Message-ID: <006e01cae725$54b95a00$fe2c0e00$@ucsd.edu>
In-Reply-To: <20100428140437.GA29653@phenom.dumpdata.com>

I tried making the change in 
linux-2.6-pvops.git/arch/x86/include/asm/irq_vectors.h

It was:
#define NR_VECTORS                     256
I changed it to:
#define NR_VECTORS                     1024

I still get the same nr_irqs value (dmesg | grep -i nr_irq) before and
after the change.

[    0.000000] nr_irqs_gsi: 48
[    0.500076] NR_IRQS:5120 nr_irqs:944
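
(If the arch_probe_nr_irqs() sizing sketched in Konrad's message below is
what actually runs here, 944 is exactly what 16 logical CPUs would give:

  nr_irqs = nr_irqs_gsi + 8 * nr_cpu_ids + 16 * nr_irqs_gsi
          = 48 + 8 * 16 + 16 * 48 = 944

so NR_VECTORS only enters as an upper cap and never wins.)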

Also, as before, it crashes at the same domU count (154). I didn't
mention this earlier: this is a dual-socket Nehalem machine -- 2 sockets *
4 cores per socket * 2 (hyperthreading) = 16 logical CPUs.

--Yuvraj

-----Original Message-----
From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
Sent: Wednesday, April 28, 2010 7:05 AM
To: John McCullough
Cc: Keir Fraser; xen-devel@lists.xensource.com; Yuvraj Agarwal
Subject: Re: [Xen-devel] XEN 4.0 + 2.6.31.13 pvops kernel : system crashes 
on starting 155th domU

On Tue, Apr 27, 2010 at 11:47:30PM -0700, John McCullough wrote:
> I did a little testing.
>
> With no kernel option:
> # dmesg | grep -i nr_irqs
> [    0.000000] nr_irqs_gsi: 88
> [    0.000000] NR_IRQS:4352 nr_irqs:256
>
> w/nr_irqs=65536:
> # dmesg | grep -i nr_irqs
> [    0.000000] Command line: root=/dev/sda1 ro quiet console=hvc0
> nr_irqs=65536
> [    0.000000] nr_irqs_gsi: 88
> [    0.000000] Kernel command line: root=/dev/sda1 ro quiet console=hvc0
> nr_irqs=65536
> [    0.000000] NR_IRQS:4352 nr_irqs:256
>
> Tweaking the NR_IRQS macro in the kernel changes the NR_IRQS output,
> but unfortunately that doesn't change nr_irqs, and I run into the same
> limit (36 domUs on a less-beefy dual-core machine).

If you have CONFIG_SPARSE_IRQ defined in your .config, nr_irqs gets
overwritten by code that figures out how many IRQs you need based on your
CPU count.
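
The sizing lives in arch_probe_nr_irqs() in
arch/x86/kernel/apic/io_apic.c; roughly this (a paraphrased sketch of the
2.6.31 code from memory, so check your tree):

int __init arch_probe_nr_irqs(void)
{
	int nr;

	/* NR_VECTORS only acts as an upper cap, scaled by possible CPUs */
	if (nr_irqs > (NR_VECTORS * nr_cpu_ids))
		nr_irqs = NR_VECTORS * nr_cpu_ids;

	/* GSIs plus a small per-CPU allowance ... */
	nr = nr_irqs_gsi + 8 * nr_cpu_ids;
#if defined(CONFIG_PCI_MSI) || defined(CONFIG_HT_IRQ)
	/* ... plus room for MSI/HT dynamic IRQs */
	nr += nr_irqs_gsi * 16;
#endif
	/* this minimum, not the cap, is normally what wins */
	if (nr < nr_irqs)
		nr_irqs = nr;

	return 0;
}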

So can you change NR_VECTORS in arch/x86/include/asm/irq_vectors.h to a
higher value and see what happens?

>
> I did find this:
> http://blogs.sun.com/fvdl/entry/a_million_vms
> which references NR_DYNIRQS, which is in 2.6.18, but not in the pvops
> kernel.
>
> Watching /proc/interrupts, the domain irqs seem to be getting allocated
> from 248 downward until they hit some other limit:

Yeah. They hit the nr_irqs_gsi and don't go below that.
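
The allocation is done by find_unbound_irq() in the pvops
drivers/xen/events.c; the gist (again paraphrased, not a verbatim copy):

static int find_unbound_irq(void)
{
	int irq;

	/* scan downward from the top of the dynamic IRQ space ... */
	for (irq = nr_irqs - 1; irq > nr_irqs_gsi; irq--)
		if (irq_info[irq].type == IRQT_UNBOUND)
			break;

	/* ... and give up once we run into the GSI range */
	if (irq == nr_irqs_gsi)
		panic("No available IRQ to bind to: increase nr_irqs!\n");

	return irq;
}

That is why the domain IRQs in /proc/interrupts walk down from the top.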

> ...
>  64:      59104  xen-pirq-ioapic-level  ioc0
>  89:          1   xen-dyn-event     evtchn:xenconsoled
>  90:          1   xen-dyn-event     evtchn:xenstored
>  91:          6   xen-dyn-event     vif36.0
>  92:        140   xen-dyn-event     blkif-backend
>  93:         97   xen-dyn-event     evtchn:xenconsoled
>  94:        139   xen-dyn-event     evtchn:xenstored
>  95:          7   xen-dyn-event     vif35.0
>  96:        301   xen-dyn-event     blkif-backend
>  97:        261   xen-dyn-event     evtchn:xenconsoled
>  98:        145   xen-dyn-event     evtchn:xenstored
>  99:          7   xen-dyn-event     vif34.0
> ...
> Perhaps the Xen IRQs are getting allocated out of the nr_irqs pool,
> while they could be allocated from the NR_IRQS pool?
>
> -John
>
>
>
>
> On 04/27/2010 08:45 PM, Keir Fraser wrote:
>> I think nr_irqs is specifiable on the command line on newer kernels.
>> You may be able to do nr_irqs=65536 as a kernel boot parameter, or
>> something like that, without needing to rebuild the kernel.
>>
>>   -- Keir
>>
>> On 28/04/2010 02:02, "Yuvraj Agarwal" <yuvraj@cs.ucsd.edu> wrote:
>>
>>
>>> Actually, I did identify the problem (I don't know the fix yet), at
>>> least from the console logs. It's related to running out of nr_irqs
>>> (attached JPG of the console log).
>>>
>>>
>>> -----Original Message-----
>>> From: xen-devel-bounces@lists.xensource.com
>>> [mailto:xen-devel-bounces@lists.xensource.com] On Behalf Of Keir Fraser
>>> Sent: Tuesday, April 27, 2010 5:44 PM
>>> To: Yuvraj Agarwal; xen-devel@lists.xensource.com
>>> Subject: Re: [Xen-devel] XEN 4.0 + 2.6.31.13 pvops kernel : system 
>>> crashes
>>> on starting 155th domU
>>>
>>> On 27/04/2010 08:41, "Yuvraj Agarwal" <yuvraj@cs.ucsd.edu> wrote:
>>>
>>>> Attached is the output of /var/log/daemon.log and /var/log/xen/xend.log,
>>>> but as far as we can see we don't quite know what might be causing the
>>>> system to crash (no console access anymore, and the system becomes
>>>> unresponsive and needs to be power-cycled). I have pasted only the
>>>> relevant bits of information (the last domU that did successfully start
>>>> and the next one that failed). It may be the case that all the log
>>>> messages weren't flushed before the system crashed...
>>>>
>>>> Does anyone know where this limit of 155 domUs is coming from and how
>>>> we can fix/increase it?
>>>>
>>> Get a serial line on a test box, and capture Xen logging output on it.
>>> You can both see if any crash messages come from Xen when the 155th
>>> domain is created, and also try the serial debug keys (e.g., try 'h' to
>>> get help to start with) to see whether Xen itself is still alive.
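>>>
>>> For example, something like this in the GRUB entry puts Xen on the
>>> first serial port (the device, baud rate, and kernel paths here are
>>> illustrative -- adjust for your box):
>>>
>>>   kernel /boot/xen.gz com1=115200,8n1 console=com1,vga loglvl=all guest_loglvl=all
>>>   module /boot/vmlinuz-2.6.31.13 root=/dev/sda1 ro console=hvc0
>>>
>>> Hit Ctrl-A three times on the serial terminal to toggle input between
>>> dom0 and Xen; with input directed at Xen, the debug keys work directly.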
>>>
>>>   -- Keir
>>>

