From mboxrd@z Thu Jan  1 00:00:00 1970
From: David Vrabel
Subject: Re: [PATCH 6/8] evtchn: alter internal object handling scheme
Date: Thu, 15 Aug 2013 16:46:08 +0100
Message-ID: <520CF7C0.5090403@citrix.com>
References: <1376071720-17644-1-git-send-email-david.vrabel@citrix.com> <1376071720-17644-7-git-send-email-david.vrabel@citrix.com> <520D001F02000078000EC473@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
In-Reply-To: <520D001F02000078000EC473@nat28.tlf.novell.com>
To: Jan Beulich
Cc: xen-devel, Malcolm Crossley, Keir Fraser, Wei Liu
List-Id: xen-devel@lists.xenproject.org

On 15/08/13 15:21, Jan Beulich wrote:
>>>> On 09.08.13 at 20:08, David Vrabel wrote:
>> +#define BUCKETS_PER_GROUP (PAGE_SIZE/sizeof(struct evtchn *))
>> +/* Round size of struct evtchn up to power of 2 size */
>> +#define b2(x)  (  (x) | (  (x) >> 1) )
>> +#define b4(x)  ( b2(x) | ( b2(x) >> 2) )
>> +#define b8(x)  ( b4(x) | ( b4(x) >> 4) )
>> +#define b16(x) ( b8(x) | ( b8(x) >> 8) )
>> +#define b32(x) (b16(x) | (b16(x) >>16) )
>> +/* Maximum number of event channels for any ABI. */
>> +#define MAX_NR_EVTCHNS (max_t(unsigned, NR_EVENT_CHANNELS, \
>> +                              1 << EVTCHN_FIFO_LINK_BITS))
>
>> +#define EVTCHNS_PER_BUCKET (PAGE_SIZE / next_power_of_2(sizeof(struct evtchn)))
>> +#define EVTCHNS_PER_GROUP  (BUCKETS_PER_GROUP * EVTCHNS_PER_BUCKET)
>> +#define NR_EVTCHN_GROUPS   DIV_ROUND_UP(MAX_NR_EVTCHNS, EVTCHNS_PER_GROUP)
>
> So for the 2-level case this still results in a full page allocation for
> the top level structure.  Not too nice...
Without XSM enabled, sizeof(struct evtchn) is 32, so:

  EVTCHNS_PER_BUCKET = 128
  BUCKETS_PER_GROUP  = 512
  NR_EVTCHN_GROUPS   = 2

The minimal allocation is then: 1 bucket page, 1 group page and 16
bytes[*] for the group pointers.  Which I agree is 1 page more than
previously.  Is saving 1 xenheap page per domain worth adding extra
complexity to the lookup?

We could drop patch 4 (dynamically allocate d->evtchns) and have an
array in struct domain, since it is only a 2 element array.

David

[*] not sure what the minimum size for an xmalloc() allocation is.  Is
it actually 128 bytes?