xen-devel.lists.xenproject.org archive mirror
* increase evtchn limits
@ 2010-05-21  3:41 Mukesh Rathor
  2010-05-21  5:07 ` Zhigang Wang
  2010-05-21  7:01 ` Keir Fraser
  0 siblings, 2 replies; 11+ messages in thread
From: Mukesh Rathor @ 2010-05-21  3:41 UTC (permalink / raw)
  To: Xen-devel@lists.xensource.com

Hi,

I'm trying to boot with a lot more than 32 VCPUs on a very large box.
I overcame the vcpu_info[MAX_VIRT_CPUS] limit by doing the VCPU
placement hypercall in the guest, but am now running into the event
channel limit (lots of devices):

       unsigned long evtchn_pending[sizeof(unsigned long) * 8];

which limits me to 512 max for my 64-bit dom0. The only recourse seems
to be to create a new struct shared_info_v2{} and rearrange it a bit
with a lot more event channels. Since start_info carries a magic string
with version info, I can just check that in the guest and use the new
shared_info... (designing on the fly here). I could add a new VCPU op
saying the guest is using the newer version. Or, forgetting a new
version of shared_info{}, I could just put the evtchn arrays in a
guest-supplied MFN and tell the hypervisor to relocate them there
(just like vcpu_info does) via a new VCPUOP_ call.
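
Roughly, a sketch of that second option (entirely hypothetical -- the
op number, struct, and field names below are made up, loosely modelled
on the existing VCPUOP_register_vcpu_info):

    /* Hypothetical, not an existing Xen interface. */
    #define VCPUOP_register_evtchn_area  99   /* made-up op number */

    struct vcpu_register_evtchn_area {
        uint64_t mfn;      /* guest frame holding the larger        */
        uint32_t offset;   /* evtchn_pending[]/evtchn_mask[] arrays */
        uint32_t nr_ports; /* how many event channels it covers     */
    };

The guest would allocate the page, issue the op once, and the
hypervisor would use the new arrays instead of the ones in
shared_info{}.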

Keir, what do you think?

thanks,
Mukesh

* Re: increase evtchn limits
  2010-05-21  3:41 increase evtchn limits Mukesh Rathor
@ 2010-05-21  5:07 ` Zhigang Wang
  2010-05-21  7:12   ` Keir Fraser
  2010-05-21  7:01 ` Keir Fraser
  1 sibling, 1 reply; 11+ messages in thread
From: Zhigang Wang @ 2010-05-21  5:07 UTC (permalink / raw)
  To: Mukesh Rathor; +Cc: xen-devel

On 05/21/2010 11:41 AM, Mukesh Rathor wrote:
> Hi,
> 
> I'm trying to boot with a lot more than 32 VCPUs on a very large box.
> I overcame the vcpu_info[MAX_VIRT_CPUS] limit by doing the VCPU
> placement hypercall in the guest, but am now running into the event
> channel limit (lots of devices):
> 
>        unsigned long evtchn_pending[sizeof(unsigned long) * 8];
> 
I'm not sure, but it seems to be 1024 for 32-bit and 4096 for 64-bit:

32-bit: (4 * 8 entries) * 4 bytes * 8 bits = 1024
64-bit: (8 * 8 entries) * 8 bytes * 8 bits = 4096
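
For what it's worth, a trivial standalone C check of that arithmetic
(not Xen code, just illustration):

    #include <stdio.h>

    int main(void)
    {
        /* evtchn_pending[] has sizeof(unsigned long) * 8 entries;
         * each entry holds sizeof(unsigned long) * 8 bits, and each
         * bit is one event channel. */
        unsigned long entries = sizeof(unsigned long) * 8;
        unsigned long bits    = sizeof(unsigned long) * 8;

        printf("%lu event channels\n", entries * bits);
        return 0;   /* prints 1024 on 32-bit, 4096 on 64-bit */
    }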

zhigang

> which limits me to 512 max for my 64-bit dom0. The only recourse seems
> to be to create a new struct shared_info_v2{} and rearrange it a bit
> with a lot more event channels. Since start_info carries a magic string
> with version info, I can just check that in the guest and use the new
> shared_info... (designing on the fly here). I could add a new VCPU op
> saying the guest is using the newer version. Or, forgetting a new
> version of shared_info{}, I could just put the evtchn arrays in a
> guest-supplied MFN and tell the hypervisor to relocate them there
> (just like vcpu_info does) via a new VCPUOP_ call.
> 
> Keir, what do you think?
> 
> thanks,
> Mukesh
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xensource.com
> http://lists.xensource.com/xen-devel

* Re: increase evtchn limits
  2010-05-21  3:41 increase evtchn limits Mukesh Rathor
  2010-05-21  5:07 ` Zhigang Wang
@ 2010-05-21  7:01 ` Keir Fraser
  1 sibling, 0 replies; 11+ messages in thread
From: Keir Fraser @ 2010-05-21  7:01 UTC (permalink / raw)
  To: Mukesh Rathor, Xen-devel@lists.xensource.com

On 21/05/2010 04:41, "Mukesh Rathor" <mukesh.rathor@oracle.com> wrote:

> 
> I'm trying to boot with a lot more than 32 VCPUs on a very large box.
> I overcame the vcpu_info[MAX_VIRT_CPUS] limit by doing the VCPU
> placement hypercall in the guest, but am now running into the event
> channel limit (lots of devices):
> 
>        unsigned long evtchn_pending[sizeof(unsigned long) * 8];
> 
> which limits me to 512 max for my 64-bit dom0.

How many CPUs do you have to bring up? How many event channels are we
squandering per CPU in PV Linux right now (can you enumerate them)?

 -- Keir

* Re: increase evtchn limits
  2010-05-21  5:07 ` Zhigang Wang
@ 2010-05-21  7:12   ` Keir Fraser
  2010-05-21 18:52     ` Mukesh Rathor
  0 siblings, 1 reply; 11+ messages in thread
From: Keir Fraser @ 2010-05-21  7:12 UTC (permalink / raw)
  To: Zhigang Wang, Mukesh Rathor; +Cc: xen-devel

On 21/05/2010 06:07, "Zhigang Wang" <zhigang.x.wang@oracle.com> wrote:

>>        unsigned long evtchn_pending[sizeof(unsigned long) * 8];
>> 
> I'm not sure, but it seems to be 1024 for 32-bit and 4096 for 64-bit:
> 
> 32-bit: (4 * 8 entries) * 4 bytes * 8 bits = 1024
> 64-bit: (8 * 8 entries) * 8 bytes * 8 bits = 4096

This is correct, which is why I wonder how many CPUs you are dealing
with, and how many event channels are being allocated per CPU. 4096
event channels ought to be plenty for dom0 bring-up on even a very
big system.

 K.

* Re: increase evtchn limits
@ 2010-05-21  8:14 Jan Beulich
  2010-05-21  9:25 ` Keir Fraser
  0 siblings, 1 reply; 11+ messages in thread
From: Jan Beulich @ 2010-05-21  8:14 UTC (permalink / raw)
  To: keir.fraser, mukesh.rathor; +Cc: xen-devel

>>> Keir Fraser  05/21/10 9:06 AM >>>
>How many CPUs do you have to bring up? How many event channels are we
>squandering per CPU in PV Linux right now (can you enumerate them)?

There are 5 event channels per CPU (timer, reschedule IPI, SMP call
function IPI, SMP call function single IPI, and spinlock wakeup IPI).

While 4096 may seem plenty, I don't think it really is: with larger
systems you not only have more CPUs, you also generally have more
devices (MSI-X ones, for example, possibly requiring quite a number of
event channels per device), and you also expect to be able to run more
guests, which in turn requires more event channels in Dom0. So I agree
with Mukesh that we ought to find a reasonable way to overcome this
limit.
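
For illustration, with made-up but not implausible numbers: 256 CPUs *
5 per-CPU channels = 1280, plus 50 MSI-X devices * 8 channels = 400,
plus 300 guests * 8 channels each in Dom0 = 2400, comes to about 4080
-- already brushing against the 64-bit limit of 4096.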

Jan

* Re: increase evtchn limits
  2010-05-21  8:14 Jan Beulich
@ 2010-05-21  9:25 ` Keir Fraser
  2010-05-21  9:29   ` Keir Fraser
  2010-05-21 10:52   ` Jan Beulich
  0 siblings, 2 replies; 11+ messages in thread
From: Keir Fraser @ 2010-05-21  9:25 UTC (permalink / raw)
  To: Jan Beulich, mukesh.rathor@oracle.com; +Cc: xen-devel@lists.xensource.com

On 21/05/2010 09:14, "Jan Beulich" <jbeulich@novell.com> wrote:

>>>> Keir Fraser  05/21/10 9:06 AM >>>
>> How many CPUs do you have to bring up? How many event channels are we
>> squandering per CPU in PV Linux right now (can you enumerate them)?
> 
> There are 5 event channels per CPU (timer, reschedule IPI, SMP call function
> IPI, SMP call function single IPI, and spinlock wakeup IPI).
> 
> While 4096 may seem plenty, I don't think it really is: with larger
> systems you not only have more CPUs, you also generally have more
> devices (MSI-X ones, for example, possibly requiring quite a number of
> event channels per device), and you also expect to be able to run more
> guests, which in turn requires more event channels in Dom0. So I agree
> with Mukesh that we ought to find a reasonable way to overcome this
> limit.

Has the event channel limit *actually* been hit? Or is this another "what if
we had over a thousand CPUs" scenario?

 -- Keir

* Re: increase evtchn limits
  2010-05-21  9:25 ` Keir Fraser
@ 2010-05-21  9:29   ` Keir Fraser
  2010-05-21 11:09     ` Jan Beulich
  2010-05-21 10:52   ` Jan Beulich
  1 sibling, 1 reply; 11+ messages in thread
From: Keir Fraser @ 2010-05-21  9:29 UTC (permalink / raw)
  To: Jan Beulich, mukesh.rathor@oracle.com; +Cc: xen-devel@lists.xensource.com

On 21/05/2010 10:25, "Keir Fraser" <keir.fraser@eu.citrix.com> wrote:

>> There are 5 event channels per CPU (timer, reschedule IPI, SMP call function
>> IPI, SMP call function single IPI, and spinlock wakeup IPI).


It would be very easy to collapse the four IPIs onto one event channel plus
a bitmask of IPI reasons. Then that would be a much more economical two
event channels per CPU. And the change is a hell of a lot easier to
implement than extensible event channels. I bet it could be knocked together
in an hour.
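
Something like this, roughly (a guest-side sketch with invented,
untested names -- not existing Linux/Xen code):

    /* One event channel per CPU carries all IPIs; a per-CPU bitmask
     * records which kind(s) of IPI are pending. */
    enum ipi_reason {
        IPI_RESCHEDULE,
        IPI_CALL_FUNC,
        IPI_CALL_FUNC_SINGLE,
        IPI_SPINLOCK_WAKEUP,
    };

    static DEFINE_PER_CPU(unsigned long, ipi_reasons);
    static DEFINE_PER_CPU(int, ipi_port);   /* the one shared evtchn */

    static void xen_send_ipi(int cpu, enum ipi_reason reason)
    {
        /* Publish the reason first, then kick the target CPU. */
        set_bit(reason, &per_cpu(ipi_reasons, cpu));
        notify_remote_via_evtchn(per_cpu(ipi_port, cpu));
    }

    static irqreturn_t xen_ipi_handler(int irq, void *dev_id)
    {
        /* Atomically consume every reason pending for this CPU. */
        unsigned long reasons = xchg(&__get_cpu_var(ipi_reasons), 0);

        if (test_bit(IPI_CALL_FUNC, &reasons))
            generic_smp_call_function_interrupt();
        if (test_bit(IPI_CALL_FUNC_SINGLE, &reasons))
            generic_smp_call_function_single_interrupt();
        /* ...and similarly for reschedule and spinlock wakeup. */
        return IRQ_HANDLED;
    }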

 -- Keir

* Re: increase evtchn limits
  2010-05-21  9:25 ` Keir Fraser
  2010-05-21  9:29   ` Keir Fraser
@ 2010-05-21 10:52   ` Jan Beulich
  1 sibling, 0 replies; 11+ messages in thread
From: Jan Beulich @ 2010-05-21 10:52 UTC (permalink / raw)
  To: Keir Fraser, mukesh.rathor@oracle.com; +Cc: xen-devel@lists.xensource.com

>>> On 21.05.10 at 11:25, Keir Fraser <keir.fraser@eu.citrix.com> wrote:
> Has the event channel limit *actually* been hit? Or is this another "what if
> we had over a thousand CPUs" scenario?

I haven't seen it hit, but I have seen the dump from a box with about
1000 of them in use (and without many guests running). So this is more
of a "let's think in good time about the not-too-distant future"
consideration.

Jan

* Re: increase evtchn limits
  2010-05-21  9:29   ` Keir Fraser
@ 2010-05-21 11:09     ` Jan Beulich
  0 siblings, 0 replies; 11+ messages in thread
From: Jan Beulich @ 2010-05-21 11:09 UTC (permalink / raw)
  To: Keir Fraser, mukesh.rathor@oracle.com; +Cc: xen-devel@lists.xensource.com

>>> On 21.05.10 at 11:29, Keir Fraser <keir.fraser@eu.citrix.com> wrote:
> On 21/05/2010 10:25, "Keir Fraser" <keir.fraser@eu.citrix.com> wrote:
> 
>>> There are 5 event channels per CPU (timer, reschedule IPI, SMP call function
>>> IPI, SMP call function single IPI, and spinlock wakeup IPI).
> 
> 
> It would be very easy to collapse the four IPIs onto one event channel plus
> a bitmask of IPI reasons. Then that would be a much more economical two
> event channels per CPU. And the change is a hell of a lot easier to
> implement than extensible event channels. I bet it could be knocked together
> in an hour.

Indeed, it didn't occur to me to do it that way, but this sounds
pretty straightforward to implement.

Thanks, Jan

* Re: increase evtchn limits
  2010-05-21  7:12   ` Keir Fraser
@ 2010-05-21 18:52     ` Mukesh Rathor
  2010-05-21 20:15       ` Keir Fraser
  0 siblings, 1 reply; 11+ messages in thread
From: Mukesh Rathor @ 2010-05-21 18:52 UTC (permalink / raw)
  To: Keir Fraser; +Cc: Zhigang Wang, xen-devel

On Fri, 21 May 2010 08:12:13 +0100
Keir Fraser <keir.fraser@eu.citrix.com> wrote:

> On 21/05/2010 06:07, "Zhigang Wang" <zhigang.x.wang@oracle.com> wrote:
> 
> >>        unsigned long evtchn_pending[sizeof(unsigned long) * 8];
> >> 
> > I'm not sure, but it seems to be 1024 for 32-bit and 4096 for 64-bit:
> > 
> > 32-bit: (4 * 8 entries) * 4 bytes * 8 bits = 1024
> > 64-bit: (8 * 8 entries) * 8 bytes * 8 bits = 4096
> 
> This is correct, which is why I wonder how many CPUs you are dealing
> with, and how many event channels are being allocated per CPU. 4096
> event channels ought to be plenty for dom0 bring-up on even a very
> big system.
> 
>  K.
> 

My bad, it's ulong[]; not sure why I thought it was uchar[]. Sorry for
the false alarm. I'm hitting BUG_ON(!test_bit(chn, s->evtchn_mask));
in bind_evtchn_to_cpu(), and when I saw it fire at chn == 520,
combined with my thinking it was uchar, I drew the wrong conclusion.
Apologies. Anyway, I'll debug further.

But looking forward, I can see us hitting the limit of 4096 in the
not-too-distant future, so I think it would be a great idea to
'collapse the four IPIs onto one event channel', perhaps in the next
Xen release.

Thanks again.
Mukesh

* Re: increase evtchn limits
  2010-05-21 18:52     ` Mukesh Rathor
@ 2010-05-21 20:15       ` Keir Fraser
  0 siblings, 0 replies; 11+ messages in thread
From: Keir Fraser @ 2010-05-21 20:15 UTC (permalink / raw)
  To: Mukesh Rathor; +Cc: Zhigang Wang, xen-devel

On 21/05/2010 19:52, "Mukesh Rathor" <mukesh.rathor@oracle.com> wrote:

> But looking forward, I can see us hitting the limit of 4096 in the
> not-too-distant future, so I think it would be a great idea to
> 'collapse the four IPIs onto one event channel', perhaps in the next
> Xen release.

This doesn't require any Xen changes, and hence is not an issue to be dealt
with in a Xen release. It can be implemented in any guest kernel as soon as
anyone cares to.

 -- Keir
