From: Keir Fraser <keir.fraser@eu.citrix.com>
To: Dulloor <dulloor@gmail.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [XEN][vNUMA][PATCH 3/9] public interface
Date: Tue, 6 Jul 2010 13:57:02 +0100
Message-ID: <C858E6AE.197BE%keir.fraser@eu.citrix.com>
In-Reply-To: <AANLkTinMEypyPK_R6eBgx_7ar95SUKQWSHczEhINDuzQ@mail.gmail.com>

On 06/07/2010 06:57, "Dulloor" <dulloor@gmail.com> wrote:

>> What are xc_cpumask (a libxc concept) related definitions doing in a
>> hypervisor public header? These aren't even used in this header file. Below
>> I suggest a vcpu_to_vnode[] array, which probably gets rid of the need for
>> this bitmask stuff anyway.
> 
> Stale comment with xc_cpumask... sorry!
> I did think of the vcpu_to_vnode array, but then we use the bitmask in
> hvm_info anyway (with vcpu_online). I thought I could at least fold
> them into a single structure. I could change that if you insist.

I think overall vcpu_to_vnode[] is a better way round, unless the per-node
vcpu maps are particularly handy for some reason.
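
Concretely, something like this (a rough sketch only; HVM_MAX_VCPUS is the
existing hvm_info limit, and the rest of the naming is made up):

    /* Sketch: one byte per vcpu giving that vcpu's virtual node.
     * vcpu_to_vnode[i] is the vnode of vcpu i, which replaces the
     * per-vnode cpumask machinery entirely. */
    uint8_t vcpu_to_vnode[HVM_MAX_VCPUS];  /* only nr_vcpus entries valid */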

>> A small number to be statically defined. Better to make your structure
>> extensible I think, perhaps including pointers out to vnode-indexed arrays?
> This structure is passed in the hvm_info page. Should I use offset/len
> for these dynamically-sized, vnode-indexed arrays?

The 'hvm_info page' is a slightly restrictive concept really. Actually the
hvm_info data gets plopped down at a fixed location below 1MB in the guest's
memory map, and you can just extend from there even across a page boundary.
I would simply include pointers out to the dynamically-sized arrays; and
their sizes should be implicit given nr_vnodes.
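
For instance (a sketch of one possible layout; offsets relative to the
header are one way to express such pointers, and the field names are
invented):

    /* Sketch: a fixed header with offsets out to variably-sized,
     * vnode-indexed arrays.  Offsets are relative to the start of
     * the header, so the block can sit anywhere in guest memory;
     * the array lengths are implied by nr_vnodes. */
    struct xen_domain_numa_info {
        uint8_t  version;
        uint8_t  type;
        uint8_t  nr_vcpus;
        uint8_t  nr_vnodes;
        uint32_t vnode_info_off;      /* nr_vnodes xen_vnode_info entries */
        uint32_t vnode_distance_off;  /* nr_vnodes*nr_vnodes bytes        */
    };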

>> How do vnodes and mnodes differ? Why should a guest care about or need to
>> know about both, whatever they are?
> vnode_id is the node-id in the guest and mnode_id refers to the real node
> it maps to. Actually I don't need vnode_id. Will take that out.

Yes, that's a completely unnecessary distinction.

>> 
>>> +    uint32_t nr_pages;
>> 
>> Not an address range? Is that implicitly worked out somehow? Should be
>> commented, but even better just a <start,end> range explicitly given?
> 
> The node address ranges are assumed contiguous and increasing. I will
> change that to <start,end> ranges.

Thanks.
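
Something along these lines would do (a sketch; uint64_aligned_t per the
comment further down, field names invented):

    /* Sketch: explicit physical address range per vnode instead of
     * an implicit, cumulative nr_pages. */
    struct xen_vnode_info {
        uint8_t          mnode_id;  /* physical node backing this vnode */
        uint64_aligned_t start;     /* first byte of the range          */
        uint64_aligned_t end;       /* exclusive end of the range       */
    };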

>> 
>>> +    struct xen_cpumask vcpu_mask; /* vnode_to_vcpumask */
>>> +};
>> 
>> Why not have a single integer array vcpu_to_vnode[] in the main
>> xen_domain_numa_info structure?
> 
> No specific reason, except that all the vnode-related info is
> folded into a single structure. I will change that if you insist.

Personally I think it would be neater to change it. A whole bunch of
cpumask machinery disappears.
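
With the flat array, a consumer's lookup collapses to a single index,
e.g. (sketch):

    /* Sketch: no cpumask scanning required. */
    unsigned int vnode = numa_info->vcpu_to_vnode[vcpu];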

 -- Keir

>> 
>>> +#define XEN_DOM_NUMA_INTERFACE_VERSION  0x01
>>> +
>>> +#define XEN_DOM_NUMA_CONFINE    0x01
>>> +#define XEN_DOM_NUMA_SPLIT      0x02
>>> +#define XEN_DOM_NUMA_STRIPE     0x03
>>> +#define XEN_DOM_NUMA_DONTCARE   0x04
>> 
>> What should the guest do with these? You're rather light on comments in this
>> critical interface-defining header file.
> I will add comments. The intent is to share this information with the
> hypervisor and PV guests (for ballooning).
> 
>> 
>>> +struct xen_domain_numa_info {
>>> +    uint8_t version;
>>> +    uint8_t type;
>>> +
>>> +    uint8_t nr_vcpus;
>>> +    uint8_t nr_vnodes;
>>> +
>>> +    /* XXX: hvm_info_table uses 32-bit for high_mem_pgend,
>>> +     * so we should be fine with 32 bits too. */
>>> +    uint32_t nr_pages;
>> 
>> If this is going to be visible outside HVMloader (e.g., in PV guests) then
>> just make it a uint64_aligned_t and be done with it.
> 
> Will do that.
>> 
>>> +    /* Only (nr_vnodes) entries are filled */
>>> +    struct xen_vnode_info vnode_info[XEN_MAX_VNODES];
>>> +    /* Only (nr_vnodes*nr_vnodes) entries are filled */
>>> +    uint8_t vnode_distance[XEN_MAX_VNODES*XEN_MAX_VNODES];
>> 
>> As suggested above, make these pointers out to dynamic-sized arrays. No need
>> for XEN_MAX_VNODES at all.
> 
> In general, I realise I should add more comments.
>> 
>>  -- Keir
>> 
>>> +};
>>> +
>>> +#endif
>> 
>> On 05/07/2010 09:52, "Dulloor" <dulloor@gmail.com> wrote:
>> 
>>> oops .. sorry, here it is.
>>> 
>>> -dulloor
>>> 
>>> On Mon, Jul 5, 2010 at 12:39 AM, Keir Fraser <keir.fraser@eu.citrix.com>
>>> wrote:
>>>> This patch is incomplete.
>>>> 
>>>> 
>>>> On 03/07/2010 00:54, "Dulloor" <dulloor@gmail.com> wrote:
>>>> 
>>>>> Implement the structure that will be shared with hvmloader (with HVMs)
>>>>> and directly with the VMs (with PV).
>>>>> 
>>>>> -dulloor
>>>>> 
>>>>> Signed-off-by : Dulloor <dulloor@gmail.com>
>>>> 
>>>> 
>>>> 
>> 
>> 
>> 
