From: Keir Fraser
Subject: Re: [XEN][vNUMA][PATCH 3/9] public interface
Date: Tue, 6 Jul 2010 13:57:02 +0100
To: Dulloor
Cc: "xen-devel@lists.xensource.com"
List-Id: xen-devel@lists.xenproject.org

On 06/07/2010 06:57, "Dulloor" wrote:

>> What are xc_cpumask (a libxc concept) related definitions doing in a
>> hypervisor public header? These aren't even used in this header file.
>> Below I suggest a vcpu_to_vnode[] array, which probably gets rid of the
>> need for this bitmask stuff anyway.
>
> Stale comment with xc_cpumask .. sorry!
> I did think of the vcpu_to_vnode array, but then we use the bitmask in
> hvm_info anyway (with vcpu_online). I thought I could at least fold them
> into a single structure. I could change that if you insist.

I think overall vcpu_to_vnode[] is the better way round, unless the
per-node vcpu maps are particularly handy for some reason.

>> A small number to be statically defined. Better to make your structure
>> extensible I think, perhaps including pointers out to vnode-indexed
>> arrays?
>
> This structure is passed in the hvm_info page. Should I use offset/len
> for these dynamic-sized, vnode-indexed arrays?

The 'hvm_info page' is a slightly restrictive concept really. Actually the
hvm_info data gets plopped down at a fixed location below 1MB in the
guest's memory map, and you can just extend from there, even across a page
boundary. I would simply include pointers out to the dynamically-sized
arrays; their sizes should be implicit given nr_vnodes.

>> How do vnodes and mnodes differ? Why should a guest care about or need
>> to know about both, whatever they are?
> vnode_id is the node-id in the guest and mnode_id refers to the real node
> it maps to. Actually I don't need vnode_id. Will take that out.

Yes, that's a completely unnecessary distinction.

>>> +    uint32_t nr_pages;
>>
>> Not an address range? Is that implicitly worked out somehow? Should be
>> commented, but even better just a range explicitly given?
>
> The node address ranges are assumed contiguous and increasing. I will
> change that to ranges.

Thanks.

>>> +    struct xen_cpumask vcpu_mask; /* vnode_to_vcpumask */
>>> +};
>>
>> Why not have a single integer array vcpu_to_vnode[] in the main
>> xen_domain_numa_info structure?
>
> No specific reason, except that all the vnode-related info is
> folded into a single structure. I will change that if you insist.

Personally I think it would be neater to change it. A whole bunch of
cpumask machinery disappears.

 -- Keir

>>> +#define XEN_DOM_NUMA_INTERFACE_VERSION  0x01
>>> +
>>> +#define XEN_DOM_NUMA_CONFINE    0x01
>>> +#define XEN_DOM_NUMA_SPLIT      0x02
>>> +#define XEN_DOM_NUMA_STRIPE     0x03
>>> +#define XEN_DOM_NUMA_DONTCARE   0x04
>>
>> What should the guest do with these? You're rather light on comments in
>> this critical interface-defining header file.
>
> I will add comments. The intent is to share this information with the
> hypervisor and PV guests (for ballooning).

>>> +struct xen_domain_numa_info {
>>> +    uint8_t version;
>>> +    uint8_t type;
>>> +
>>> +    uint8_t nr_vcpus;
>>> +    uint8_t nr_vnodes;
>>> +
>>> +    /* XXX: hvm_info_table uses 32 bits for high_mem_pgend,
>>> +     * so we should be fine with 32 bits too */
>>> +    uint32_t nr_pages;
>>
>> If this is going to be visible outside HVMloader (e.g., in PV guests)
>> then just make it a uint64_aligned_t and be done with it.
>
> Will do that.
>>> +    /* Only (nr_vnodes) entries are filled */
>>> +    struct xen_vnode_info vnode_info[XEN_MAX_VNODES];
>>> +    /* Only (nr_vnodes*nr_vnodes) entries are filled */
>>> +    uint8_t vnode_distance[XEN_MAX_VNODES*XEN_MAX_VNODES];
>>
>> As suggested above, make these pointers out to dynamic-sized arrays. No
>> need for XEN_MAX_VNODES at all.
>
> In general, I realise I should add more comments.

>>  -- Keir
>>
>>> +};
>>> +
>>> +#endif

>> On 05/07/2010 09:52, "Dulloor" wrote:
>>
>>> oops .. sorry, here it is.
>>>
>>> -dulloor
>>>
>>> On Mon, Jul 5, 2010 at 12:39 AM, Keir Fraser wrote:
>>>> This patch is incomplete.
>>>>
>>>> On 03/07/2010 00:54, "Dulloor" wrote:
>>>>
>>>>> Implement the structure that will be shared with hvmloader (with HVMs)
>>>>> and directly with the VMs (with PV).
>>>>>
>>>>> -dulloor
>>>>>
>>>>> Signed-off-by: Dulloor