From mboxrd@z Thu Jan 1 00:00:00 1970
From: George Dunlap
Subject: Re: [PATCH 1/1] Xen ARINC 653 Scheduler
Date: Tue, 4 May 2010 10:54:06 -0500
Mime-Version: 1.0
Sender: xen-devel-bounces@lists.xensource.com
Errors-To: xen-devel-bounces@lists.xensource.com
To: Kathy Hadley
Cc: Keir.Fraser@citrix.com, xen-devel@lists.xensource.com
List-Id: xen-devel@lists.xenproject.org

Kathy, thanks for your work on this.  Unfortunately, with the new cpupools
feature, your scheduler will need a little more modification before it can
be merged into -unstable.  Cpupools carves the cpus up into several "pools",
each of which has its own independent scheduler.  This means that a
scheduler needs to keep pointers to per-pool "global" structures, rather
than having global static structures.  It should be a fairly straightforward
transformation; you can see example transformations done on the credit and
sedf schedulers.  Let us know if you need any help.
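To sketch the shape of that change (illustrative only: the .init hook, the
ops parameter, and the sched_data field below are assumptions about the
cpupools interface, so treat the converted credit and sedf schedulers as the
authoritative reference), the file-scope statics in sched_arinc653.c would
move into a per-instance private structure along these lines:

typedef struct a653sched_private_s
{
    /* Per-pool copies of what are file-scope statics in the patch below. */
    sched_entry_t    schedule[ARINC653_MAX_DOMAINS_PER_SCHEDULE];
    int              num_schedule_entries;
    s_time_t         major_frame;
    s_time_t         next_major_frame;
    struct list_head vcpu_list;
} a653sched_private_t;

static int arinc653_init(struct scheduler *ops)
{
    a653sched_private_t *prv = xmalloc(a653sched_private_t);

    if ( prv == NULL )
        return -ENOMEM;

    memset(prv, 0, sizeof(*prv));
    /* Same boot-time default as the static initializer: dom0, VCPU 0, 10 ms. */
    prv->num_schedule_entries = 1;
    prv->schedule[0].runtime = MILLISECS(10);
    prv->major_frame = MILLISECS(10);
    INIT_LIST_HEAD(&prv->vcpu_list);
    ops->sched_data = prv;    /* field name is an assumption here */

    return 0;
}

Each callback would then reach its own pool's data through the scheduler
instance (prv->schedule[] instead of the static arinc653_schedule[]), and the
static variables inside arinc653_do_schedule() would need to move there too.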
 -George

On Fri, Apr 16, 2010 at 9:18 AM, Kathy Hadley wrote:
> This patch adds an ARINC 653 scheduler to Xen.  This is a modification of
> an earlier patch ([Xen-devel] [PATCH 1/1] Xen ARINC653 scheduler).  In
> particular, it has been modified to use the new .adjust_global callback
> function, which was added in "[Xen-devel] [PATCH 1/1] Add .adjust_global
> callback".
>
> Thanks and regards,
>
> Kathy Hadley
> DornerWorks, Ltd.
> Embedded Systems Engineering
> 3445 Lake Eastbrook Blvd SE
> Grand Rapids, MI  49546
> Direct: 616.389.6127
> Tel:    616.245.8369
> Fax:    616.245.8372
> Kathy.Hadley@DornerWorks.com
> www.DornerWorks.com
> Honored as one of the 2010 "Michigan 50 Companies to Watch"
>
> diff -rupN a/tools/libxc/Makefile b/tools/libxc/Makefile
> --- a/tools/libxc/Makefile      2010-04-13 10:49:37.573793000 -0400
> +++ b/tools/libxc/Makefile      2010-04-14 17:49:26.952638000 -0400
> @@ -17,6 +17,7 @@ CTRL_SRCS-y       += xc_physdev.c
>  CTRL_SRCS-y       += xc_private.c
>  CTRL_SRCS-y       += xc_sedf.c
>  CTRL_SRCS-y       += xc_csched.c
> +CTRL_SRCS-y       += xc_arinc653.c
>  CTRL_SRCS-y       += xc_tbuf.c
>  CTRL_SRCS-y       += xc_pm.c
>  CTRL_SRCS-y       += xc_cpu_hotplug.c
>
> diff -rupN a/tools/libxc/xc_arinc653.c b/tools/libxc/xc_arinc653.c
> --- a/tools/libxc/xc_arinc653.c 1969-12-31 19:00:00.000000000 -0500
> +++ b/tools/libxc/xc_arinc653.c 2010-04-14 17:49:26.952638000 -0400
> @@ -0,0 +1,28 @@
> +/****************************************************************************
> + * (C) 2010 - DornerWorks, Ltd <DornerWorks.com>
> + ****************************************************************************
> + *
> + *        File: xc_arinc653.c
> + *      Author: Josh Holtrop <DornerWorks.com>
> + *
> + * Description: XC Interface to the ARINC 653 scheduler
> + *
> + */
> +
> +#include "xc_private.h"
> +
> +int
> +xc_sched_arinc653_sched_set(
> +    int xc_handle,
> +    xen_domctl_sched_arinc653_schedule_t * sched)
> +{
> +    DECLARE_DOMCTL;
> +
> +    domctl.cmd = XEN_DOMCTL_scheduler_op;
> +    domctl.domain = (domid_t) 0;
> +    domctl.u.scheduler_op.sched_id = XEN_SCHEDULER_ARINC653;
> +    domctl.u.scheduler_op.cmd = XEN_DOMCTL_SCHEDOP_put_global_info;
> +    set_xen_guest_handle(domctl.u.scheduler_op.u.arinc653.schedule, sched);
> +
> +    return do_domctl(xc_handle, &domctl);
> +}
>
> diff -rupN a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
> --- a/tools/libxc/xenctrl.h     2010-04-13 10:49:37.573793000 -0400
> +++ b/tools/libxc/xenctrl.h     2010-04-14 17:49:26.952638000 -0400
> @@ -476,6 +476,16 @@ int xc_sched_credit_domain_get(int xc_ha
>                                 struct xen_domctl_sched_credit *sdom);
>
>  /**
> + * This function sets the global ARINC 653 schedule.
> + *
> + * @parm xc_handle a handle to an open hypervisor interface
> + * @parm sched a pointer to the new ARINC 653 schedule
> + * return 0 on success
> + */
> +int xc_sched_arinc653_sched_set(int xc_handle,
> +                                xen_domctl_sched_arinc653_schedule_t * sched);
> +
> +/**
>   * This function sends a trigger to a domain.
>   *
>   * @parm xc_handle a handle to an open hypervisor interface
>
> diff -rupN a/xen/common/Makefile b/xen/common/Makefile
> --- a/xen/common/Makefile       2010-04-13 10:49:37.573793000 -0400
> +++ b/xen/common/Makefile       2010-04-13 13:00:31.651749000 -0400
> @@ -14,6 +14,7 @@ obj-y += page_alloc.o
>  obj-y += rangeset.o
>  obj-y += sched_credit.o
>  obj-y += sched_sedf.o
> +obj-y += sched_arinc653.o
>  obj-y += schedule.o
>  obj-y += shutdown.o
>  obj-y += softirq.o
>
> diff -rupN a/xen/common/sched_arinc653.c b/xen/common/sched_arinc653.c
> --- a/xen/common/sched_arinc653.c       1969-12-31 19:00:00.000000000 -0500
> +++ b/xen/common/sched_arinc653.c       2010-04-14 18:13:26.163404000 -0400
> @@ -0,0 +1,590 @@
> +/*
> + * File: sched_arinc653.c
> + * Copyright (c) 2010, DornerWorks, Ltd. <DornerWorks.com>
> > + * > > + * Description: > > + * This file provides an ARINC653-compatible scheduling algorithm > > + * for use in Xen. > > + * > > + * This program is free software; you can redistribute it and/or modify = it > > + * under the terms of the GNU General Public License as published by the > Free > > + * software Foundation; either version 2 of the License, or (at your > option) > > + * any later version. > > + * > > + * This program is distributed in the hope that it will be useful, > > + * but WITHOUT ANY WARRANTY; without even the implied warranty of > > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. > > + * See the GNU General Public License for more details. > > + */ > > + > > + > > > +/***********************************************************************= *** > > + * Includes > * > > + > *************************************************************************= / > > +#include > > +#include > > +#include > > +#include > > +#include > > +#include > > +#include > > +#include > > +#include /* ARINC653_MAX_DOMAINS_PER_SCHEDULE > */ > > +#include > > + > > + > > > +/***********************************************************************= *** > > + * Private Macros > * > > + > *************************************************************************= */ > > + > > +/** > > + * Retrieve the idle VCPU for a given physical CPU > > + */ > > +#define IDLETASK(cpu) ((struct vcpu *) per_cpu(schedule_data, > (cpu)).idle) > > + > > +/** > > + * Return a pointer to the ARINC 653-specific scheduler data information > > + * associated with the given VCPU (vc) > > + */ > > +#define AVCPU(vc) ((arinc653_vcpu_t *)(vc)->sched_priv) > > + > > > +/***********************************************************************= *** > > + * Private Type Definitions > * > > + > *************************************************************************= */ > > + > > +/** > > + * The sched_entry_t structure holds a single entry of the > > + * ARINC 653 schedule. > > + */ > > +typedef struct sched_entry_s > > +{ > > + /* dom_handle holds the handle ("UUID") for the domain that this > > + * schedule entry refers to. */ > > + xen_domain_handle_t dom_handle; > > + /* vcpu_id holds the VCPU number for the VCPU that this schedule > > + * entry refers to. */ > > + int vcpu_id; > > + /* runtime holds the number of nanoseconds that the VCPU for this > > + * schedule entry should be allowed to run per major frame. */ > > + s_time_t runtime; > > + /* vc holds a pointer to the Xen VCPU structure */ > > + struct vcpu * vc; > > +} sched_entry_t; > > + > > +/** > > + * The arinc653_vcpu_t structure holds ARINC 653-scheduler-specific > > + * information for all non-idle VCPUs > > + */ > > +typedef struct arinc653_vcpu_s > > +{ > > + /* vc points to Xen's struct vcpu so we can get to it from an > > + * arinc653_vcpu_t pointer. */ > > + struct vcpu * vc; > > + /* awake holds whether the VCPU has been woken with vcpu_wake() */ > > + bool_t awake; > > + /* list holds the linked list information for the list this VCPU > > + * is stored in */ > > + struct list_head list; > > +} arinc653_vcpu_t; > > + > > + > > > +/***********************************************************************= *** > > + * Global Data > * > > + > *************************************************************************= */ > > + > > +/** > > + * This array holds the active ARINC 653 schedule. > > + * > > + * When the system tries to start a new VCPU, this schedule is scanned > > + * to look for a matching (handle, VCPU #) pair. 
If both the handle > ("UUID") > > + * and VCPU number match, then the VCPU is allowed to run. Its run time > > + * (per major frame) is given in the third entry of the schedule. > > + */ > > +static sched_entry_t arinc653_schedule[ARINC653_MAX_DOMAINS_PER_SCHEDULE= ] > =3D { > > + { "", 0, MILLISECS(10), NULL } > > +}; > > + > > +/** > > + * This variable holds the number of entries that are valid in > > + * the arinc653_schedule table. > > + * > > + * This is not necessarily the same as the number of domains in the > > + * schedule. A domain could be listed multiple times within the schedule= , > > + * or a domain with multiple VCPUs could have a different > > + * schedule entry for each VCPU. > > + * > > + * A value of 1 means that only 1 domain (Dom0) will initially be starte= d. > > + */ > > +static int num_schedule_entries =3D 1; > > + > > +/** > > + * arinc653_major_frame holds the major frame time for the ARINC 653 > schedule. > > + */ > > +static s_time_t arinc653_major_frame =3D MILLISECS(10); > > + > > +/** > > + * next_major_frame holds the time that the next major frame starts > > + */ > > +static s_time_t next_major_frame =3D 0; > > + > > +/** > > + * vcpu_list holds pointers to all Xen VCPU structures for iterating > through > > + */ > > +static LIST_HEAD(vcpu_list); > > + > > > +/***********************************************************************= *** > > + * Scheduler functions > * > > + > *************************************************************************= */ > > + > > +/** > > + * This function compares two domain handles. > > + * > > + * @param h1 Pointer to handle 1 > > + * @param h2 Pointer to handle 2 > > + * > > + * @return
> > + *   • <0: handle 1 is less than handle 2
> > + *   •  0: handle 1 is equal to handle 2
> > + *   • >0: handle 1 is greater than handle 2
> > + *
> > + */ > > +static int dom_handle_cmp(const xen_domain_handle_t h1, > > + const xen_domain_handle_t h2) > > +{ > > + return memcmp(h1, h2, sizeof(xen_domain_handle_t)); > > +} /* end dom_handle_cmp */ > > + > > +/** > > + * This function searches the vcpu list to find a VCPU that matches > > + * the domain handle and VCPU ID specified. > > + * > > + * @param handle Pointer to handler > > + * @param vcpu_id VCPU ID > > + * > > + * @return
> > + *   • Pointer to the matching VCPU if one is found
> > + *   • NULL otherwise
> > + *
> > + */ > > +static struct vcpu * find_vcpu(xen_domain_handle_t handle, int vcpu_id) > > +{ > > + arinc653_vcpu_t * avcpu; /* loop index variable */ > > + struct vcpu * vc =3D NULL; > > + > > + /* loop through the vcpu_list looking for the specified VCPU */ > > + list_for_each_entry(avcpu, &vcpu_list, list) > > + { > > + /* If the handles & VCPU IDs match, we've found a matching VCPU = */ > > + if ((dom_handle_cmp(avcpu->vc->domain->handle, handle) =3D=3D 0) > > + && (vcpu_id =3D=3D avcpu->vc->vcpu_id)) > > + { > > + vc =3D avcpu->vc; > > + /* > > + * "break" statement used instead of loop control variable > because > > + * the macro used for this loop does not support using loop > control > > + * variables > > + */ > > + break; > > + } > > + } > > + > > + return vc; > > +} /* end find_vcpu */ > > + > > +/** > > + * This function updates the pointer to the Xen VCPU structure for each > entry in > > + * the ARINC 653 schedule. > > + * > > + * @param > > + * @return > > + */ > > +static void update_schedule_vcpus(void) > > +{ > > + /* Loop through the number of entries in the schedule */ > > + for (int i =3D 0; i < num_schedule_entries; i++) > > + { > > + /* Update the pointer to the Xen VCPU structure for the current > entry */ > > + arinc653_schedule[i].vc =3D > > + find_vcpu(arinc653_schedule[i].dom_handle, > > + arinc653_schedule[i].vcpu_id); > > + } > > +} /* end update_schedule_vcpus */ > > + > > +/** > > + * This function is called by the arinc653_adjust_global scheduler > > + * callback function in response to a domain control hypercall with > > + * a scheduler operation. > > + * > > + * The parameter schedule is set to be the address of a local variable > from > > + * within arinc653_adjust_global(), so it is guaranteed to not be NULL. > > + * > > + * @param schedule Pointer to the new ARINC 653 schedule. > > + * > > + * @return
> > + *   • 0 = success
> > + *   • !0 = error
> > + *
> > + */ > > +static int arinc653_sched_set(xen_domctl_sched_arinc653_schedule_t * > schedule) > > +{ > > + int ret =3D 0; > > + s_time_t total_runtime =3D 0; > > + bool_t found_dom0 =3D 0; > > + const static xen_domain_handle_t dom0_handle =3D {0}; > > + > > + /* check for valid major frame and number of schedule entries */ > > + if ( (schedule->major_frame <=3D 0) > > + || (schedule->num_sched_entries < 1) > > + || (schedule->num_sched_entries > ARINC653_MAX_DOMAINS_PER_SCHEDUL= E) > ) > > + { > > + ret =3D -EINVAL; > > + } > > + > > + if (ret =3D=3D 0) > > + { > > + for (int i =3D 0; i < schedule->num_sched_entries; i++) > > + { > > + /* > > + * look for domain 0 handle - every schedule must contain > > + * some time for domain 0 to run > > + */ > > + if (dom_handle_cmp(schedule->sched_entries[i].dom_handle, > > + dom0_handle) =3D=3D 0) > > + { > > + found_dom0 =3D 1; > > + } > > + > > + /* check for a valid VCPU ID and run time */ > > + if ( (schedule->sched_entries[i].vcpu_id < 0) > > + || (schedule->sched_entries[i].runtime <=3D 0) ) > > + { > > + ret =3D -EINVAL; > > + } > > + else > > + { > > + /* Add this entry's run time to total run time */ > > + total_runtime +=3D schedule->sched_entries[i].runtime; > > + } > > + } /* end loop through schedule entries */ > > + } > > + > > + if (ret =3D=3D 0) > > + { > > + /* error if the schedule doesn't contain a slot for domain 0 */ > > + if (found_dom0 =3D=3D 0) > > + { > > + ret =3D -EINVAL; > > + } > > + } > > + > > + if (ret =3D=3D 0) > > + { > > + /* > > + * error if the major frame is not large enough to run all entri= es > > + * as indicated by comparing the total run time to the major fra= me > > + * length > > + */ > > + if (total_runtime > schedule->major_frame) > > + { > > + ret =3D -EINVAL; > > + } > > + } > > + > > + if (ret =3D=3D 0) > > + { > > + /* copy the new schedule into place */ > > + num_schedule_entries =3D schedule->num_sched_entries; > > + arinc653_major_frame =3D schedule->major_frame; > > + for (int i =3D 0; i < num_schedule_entries; i++) > > + { > > + memcpy(arinc653_schedule[i].dom_handle, > > + schedule->sched_entries[i].dom_handle, > > + sizeof(arinc653_schedule[i].dom_handle)); > > + arinc653_schedule[i].vcpu_id =3D > schedule->sched_entries[i].vcpu_id; > > + arinc653_schedule[i].runtime =3D > schedule->sched_entries[i].runtime; > > + } > > + update_schedule_vcpus(); > > + > > + /* > > + * The newly-installed schedule takes effect immediately. > > + * We do not even wait for the current major frame to expire. > > + * > > + * Signal a new major frame to begin. The next major frame > > + * is set up by the do_schedule callback function when it > > + * is next invoked. > > + */ > > + next_major_frame =3D NOW(); > > + } > > + > > + return ret; > > +} /* end arinc653_sched_set */ > > + > > +/** > > + * Xen scheduler callback function to adjust global scheduling parameter= s > > + * > > + * @param op Pointer to the domain control scheduler operation > structure > > + * > > + * @return
> > + *   • 0 for success
> > + *   • !0 if there is an error
> > + *
> > + */ > > +static int arinc653_adjust_global(struct xen_domctl_scheduler_op * op) > > +{ > > + int ret =3D -1; > > + xen_domctl_sched_arinc653_schedule_t new_sched; > > + > > + if (op->cmd =3D=3D XEN_DOMCTL_SCHEDOP_put_global_info) > > + { > > + if (copy_from_guest(&new_sched, op->u.arinc653.schedule, 1) !=3D= 0) > > + { > > + ret =3D -EFAULT; > > + } > > + else > > + { > > + ret =3D arinc653_sched_set(&new_sched); > > + } > > + } > > + > > + return ret; > > +} /* end arinc653_adjust_global */ > > + > > +/** > > + * Xen scheduler callback function to initialize a virtual CPU (VCPU). > > + * > > + * @param v Pointer to the VCPU structure > > + * > > + * @return
> > + *   • 0 if the VCPU is allowed to run
> > + *   • !0 if there is an error
> > + *
> > + */ > > +static int arinc653_init_vcpu(struct vcpu * v) > > +{ > > + int ret =3D -1; > > + > > + if (is_idle_vcpu(v)) > > + { > > + /* > > + * The idle VCPU is created by Xen to run when no domains > > + * are runnable or require CPU time. > > + * It is similar to an "idle task" or "halt loop" process > > + * in an operating system. > > + * We do not track any scheduler information for the idle VCPU. > > + */ > > + v->sched_priv =3D NULL; > > + ret =3D 0; > > + } > > + else > > + { > > + /* > > + * Allocate memory for the ARINC 653-specific scheduler data > information > > + * associated with the given VCPU (vc). > > + */ > > + v->sched_priv =3D xmalloc(arinc653_vcpu_t); > > + if (AVCPU(v) !=3D NULL) > > + { > > + /* > > + * Initialize our ARINC 653 scheduler-specific information > > + * for the VCPU. > > + * The VCPU starts "asleep." > > + * When Xen is ready for the VCPU to run, it will call > > + * the vcpu_wake scheduler callback function and our > > + * scheduler will mark the VCPU awake. > > + */ > > + AVCPU(v)->vc =3D v; > > + AVCPU(v)->awake =3D 0; > > + list_add(&AVCPU(v)->list, &vcpu_list); > > + ret =3D 0; > > + update_schedule_vcpus(); > > + } > > + } > > + > > + return ret; > > +} /* end arinc653_init_vcpu */ > > + > > +/** > > + * Xen scheduler callback function to remove a VCPU > > + * > > + * @param v Pointer to the VCPU structure to remove > > + * > > + * @return > > + */ > > +static void arinc653_destroy_vcpu(struct vcpu * v) > > +{ > > + if (AVCPU(v) !=3D NULL) > > + { > > + /* remove the VCPU from whichever list it is on */ > > + list_del(&AVCPU(v)->list); > > + /* free the arinc653_vcpu structure */ > > + xfree(AVCPU(v)); > > + update_schedule_vcpus(); > > + } > > +} /* end arinc653_destroy_vcpu */ > > + > > +/** > > + * Xen scheduler callback function to select a VCPU to run. > > + * This is the main scheduler routine. > > + * > > + * @param t Current time > > + * > > + * @return Time slice and address of the VCPU structure for the > chosen > > + * domain > > + */ > > +static struct task_slice arinc653_do_schedule(s_time_t t) > > +{ > > + struct task_slice ret; /* hold the chosen domai= n > */ > > + struct vcpu * new_task =3D NULL; > > + static int sched_index =3D 0; > > + static s_time_t last_major_frame; > > + static s_time_t last_switch_time; > > + static s_time_t next_switch_time; > > + > > + if (t >=3D next_major_frame) > > + { > > + /* time to enter a new major frame > > + * the first time this function is called, this will be true */ > > + sched_index =3D 0; > > + last_major_frame =3D last_switch_time =3D t; > > + next_major_frame =3D t + arinc653_major_frame; > > + } > > + else if (t >=3D next_switch_time) > > + { > > + /* time to switch to the next domain in this major frame */ > > + sched_index++; > > + last_switch_time =3D next_switch_time; > > + } > > + > > + /* > > + * If there are more domains to run in the current major frame, set > > + * next_switch_time equal to the last switch time + this domain's ru= n > time. > > + * Otherwise, set next_switch_time equal to the start of the next > major > > + * frame. > > + */ > > + next_switch_time =3D (sched_index < num_schedule_entries) > > + ? last_switch_time + > arinc653_schedule[sched_index].runtime > > + : next_major_frame; > > + > > + /* > > + * If there are more domains to run in the current major frame, set > > + * new_task equal to the address of next domain's VCPU structure. > > + * Otherwise, set new_task equal to the address of the idle task's > VCPU > > + * structure. 
> > + */ > > + new_task =3D (sched_index < num_schedule_entries) > > + ? arinc653_schedule[sched_index].vc > > + : IDLETASK(0); > > + > > + /* Check to see if the new task can be run (awake & runnable). */ > > + if (!((new_task !=3D NULL) > > + && AVCPU(new_task)->awake > > + && vcpu_runnable(new_task)) ) > > + { > > + new_task =3D IDLETASK(0); > > + } > > + BUG_ON(new_task =3D=3D NULL); > > + > > + /* > > + * Check to make sure we did not miss a major frame. > > + * This is a good test for robust partitioning. > > + */ > > + BUG_ON(t >=3D next_major_frame); > > + > > + /* > > + * Return the amount of time the next domain has to run and the > address > > + * of the selected task's VCPU structure. > > + */ > > + ret.time =3D next_switch_time - t; > > + ret.task =3D new_task; > > + > > + BUG_ON(ret.time <=3D 0); > > + > > + return ret; > > +} /* end arinc653_do_schedule */ > > + > > +/** > > + * Xen scheduler callback function to select a CPU for the VCPU to run o= n > > + * > > + * @param v Pointer to the VCPU structure for the current domain > > + * > > + * @return Number of selected physical CPU > > + */ > > +static int arinc653_pick_cpu(struct vcpu * v) > > +{ > > + /* this implementation only supports one physical CPU */ > > + return 0; > > +} /* end arinc653_pick_cpu */ > > + > > +/** > > + * Xen scheduler callback function to wake up a VCPU > > + * > > + * @param vc Pointer to the VCPU structure for the current domain > > + * > > + * @return > > + */ > > +static void arinc653_vcpu_wake(struct vcpu * vc) > > +{ > > + /* boolean flag to indicate first run */ > > + static bool_t dont_raise_softirq =3D 0; > > + > > + if (AVCPU(vc) !=3D NULL) /* check that this is a VCPU we are tracki= ng > */ > > + { > > + AVCPU(vc)->awake =3D 1; > > + } > > + > > + /* the first time the vcpu_wake function is called, we should raise > > + * a softirq to invoke the do_scheduler callback */ > > + if (!dont_raise_softirq) > > + { > > + cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ); > > + dont_raise_softirq =3D 1; > > + } > > +} /* end arinc653_vcpu_wake */ > > + > > +/** > > + * Xen scheduler callback function to sleep a VCPU > > + * > > + * @param vc Pointer to the VCPU structure for the current domain > > + * > > + * @return > > + */ > > +static void arinc653_vcpu_sleep(struct vcpu * vc) > > +{ > > + if (AVCPU(vc) !=3D NULL) /* check that this is a VCPU we are tracki= ng > */ > > + { > > + AVCPU(vc)->awake =3D 0; > > + } > > + > > + /* if the VCPU being put to sleep is the same one that is currently > > + * running, raise a softirq to invoke the scheduler to switch domain= s > */ > > + if (per_cpu(schedule_data, vc->processor).curr =3D=3D vc) > > + { > > + cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ); > > + } > > +} /* end arinc653_vcpu_sleep */ > > + > > +/** > > + * This structure defines our scheduler for Xen. > > + * The entries tell Xen where to find our scheduler-specific > > + * callback functions. > > + * The symbol must be visible to the rest of Xen at link time. 
> > + */ > > +struct scheduler sched_arinc653_def =3D { > > + .name =3D "ARINC 653 Scheduler", > > + .opt_name =3D "arinc653", > > + .sched_id =3D XEN_SCHEDULER_ARINC653, > > + > > + .init_domain =3D NULL, > > + .destroy_domain =3D NULL, > > + > > + .init_vcpu =3D arinc653_init_vcpu, > > + .destroy_vcpu =3D arinc653_destroy_vcpu, > > + > > + .do_schedule =3D arinc653_do_schedule, > > + .pick_cpu =3D arinc653_pick_cpu, > > + .dump_cpu_state =3D NULL, > > + .sleep =3D arinc653_vcpu_sleep, > > + .wake =3D arinc653_vcpu_wake, > > + .adjust =3D NULL, > > + .adjust_global =3D arinc653_adjust_global, > > +}; > > diff -rupN a/xen/common/schedule.c b/xen/common/schedule.c > > --- a/xen/common/schedule.c 2010-04-14 10:57:11.262796000 -0400 > > +++ b/xen/common/schedule.c 2010-04-14 16:40:21.543608000 -0400 > > @@ -7,7 +7,8 @@ > > * File: common/schedule.c > > * Author: Rolf Neugebauer & Keir Fraser > > * Updated for generic API by Mark Williamson > > - * > > + * ARINC653 scheduler added by DornerWorks > > + * > > * Description: Generic CPU scheduling code > > * implements support functionality for the Xen scheduler > API. > > * > > @@ -56,9 +57,11 @@ DEFINE_PER_CPU(struct schedule_data, sch > > > > extern const struct scheduler sched_sedf_def; > > extern const struct scheduler sched_credit_def; > > +extern const struct scheduler sched_arinc653_def; > > static const struct scheduler *__initdata schedulers[] =3D { > > &sched_sedf_def, > > &sched_credit_def, > > + &sched_arinc653_def, > > NULL > > }; > > > > diff -rupN a/xen/include/public/domctl.h b/xen/include/public/domctl.h > > --- a/xen/include/public/domctl.h 2010-04-14 10:57:11.262796000 -0400 > > +++ b/xen/include/public/domctl.h 2010-04-14 16:40:21.543608000 > -0400 > > @@ -23,6 +23,8 @@ > > * > > * Copyright (c) 2002-2003, B Dragovic > > * Copyright (c) 2002-2006, K Fraser > > + * > > + * ARINC653 Scheduler type added by DornerWorks . > > */ > > > > #ifndef __XEN_PUBLIC_DOMCTL_H__ > > @@ -303,11 +305,43 @@ DEFINE_XEN_GUEST_HANDLE(xen_domctl_max_v > > /* Scheduler types. */ > > #define XEN_SCHEDULER_SEDF 4 > > #define XEN_SCHEDULER_CREDIT 5 > > +#define XEN_SCHEDULER_ARINC653 6 > > + > > /* Set or get info? */ > > #define XEN_DOMCTL_SCHEDOP_putinfo 0 > > #define XEN_DOMCTL_SCHEDOP_getinfo 1 > > #define XEN_DOMCTL_SCHEDOP_put_global_info 2 > > #define XEN_DOMCTL_SCHEDOP_get_global_info 3 > > + > > +/* > > + * This structure is used to pass a new ARINC653 schedule from a > > + * privileged domain (ie dom0) to Xen. > > + */ > > +#define ARINC653_MAX_DOMAINS_PER_SCHEDULE 64 > > +struct xen_domctl_sched_arinc653_schedule { > > + /* major_frame holds the time for the new schedule's major frame > > + * in nanoseconds. */ > > + int64_t major_frame; > > + /* num_sched_entries holds how many of the entries in the > > + * sched_entries[] array are valid. */ > > + uint8_t num_sched_entries; > > + /* The sched_entries array holds the actual schedule entries. */ > > + struct { > > + /* dom_handle must match a domain's UUID */ > > + xen_domain_handle_t dom_handle; > > + /* If a domain has multiple VCPUs, vcpu_id specifies which one > > + * this schedule entry applies to. It should be set to 0 if > > + * there is only one VCPU for the domain. */ > > + int vcpu_id; > > + /* runtime specifies the amount of time that should be allocated > > + * to this VCPU per major frame. 
It is specified in nanoseconds. */
> +        int64_t runtime;
> +    } sched_entries[ARINC653_MAX_DOMAINS_PER_SCHEDULE];
> +};
> +typedef struct xen_domctl_sched_arinc653_schedule
> +    xen_domctl_sched_arinc653_schedule_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_domctl_sched_arinc653_schedule_t);
> +
>  struct xen_domctl_scheduler_op {
>      uint32_t sched_id;  /* XEN_SCHEDULER_* */
>      uint32_t cmd;       /* XEN_DOMCTL_SCHEDOP_* */
> @@ -323,6 +357,9 @@ struct xen_domctl_scheduler_op {
>          uint16_t weight;
>          uint16_t cap;
>      } credit;
> +    struct xen_domctl_sched_arinc653 {
> +        XEN_GUEST_HANDLE(xen_domctl_sched_arinc653_schedule_t) schedule;
> +    } arinc653;
>      } u;
>  };
>  typedef struct xen_domctl_scheduler_op xen_domctl_scheduler_op_t;
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xensource.com
> http://lists.xensource.com/xen-devel
=A0-George

On Fri, Apr 16, 2010 at 9:= 18 AM, Kathy Hadley <Kathy.Hadley@dornerworks.com> wrote:

This patch adds an ARINC 653 scheduler to Xen.=A0 Th= is is a modification of an earlier patch ([Xen-devel] [PATCH 1/1] Xen ARINC653 scheduler).=A0 In particular, it has been modified to use the new .adjust_g= lobal callback function, which was added in =93[Xen-devel] [PATCH 1/1] Add .adjust_global callback=94.

=A0

Thanks and regards,

=A0

=A0

3D"cid:image00=

Kathy Hadley
DornerWorks, Ltd.
Embedded Systems Engineering

3445 Lake Eastbrook Blvd SE
Grand Rapids, MI=A0 49546

Direct: 616.389.6127

Tel:=A0=A0=A0=A0=A0 616.245.8369

Fax:=A0=A0=A0=A0 616.245.8372

=A0

Kathy.Hadley@DornerWorks.com

www.DornerWorks.com

3D"cid:image002.jpg@01CAD7E0.50=

Honored as one of the 2010 =93Michigan 50 Companies to Watch=94

=A0

diff -rupN a/tools/libxc/Makefile b/tools/libxc/Makefile

--- a/tools/libxc/Makefile=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 2010-04-13 10:49:37.573793000 -0400

+++ b/tools/libxc/Makefile=A0=A0=A0=A0=A0=A0=A0=A0 2010-04-14 17:49:26.952638000 -0400

@@ -17,6 +17,7 @@ CTRL_SRCS-y=A0=A0=A0=A0=A0=A0 +=3D xc_physdev.c

=A0CTRL_SRCS-y=A0=A0=A0=A0=A0=A0 +=3D xc_private.c

=A0CTRL_SRCS-y=A0=A0=A0=A0=A0=A0 +=3D xc_sedf.c

=A0CTRL_SRCS-y=A0=A0=A0=A0=A0=A0 +=3D xc_csched.c

+CTRL_SRCS-y=A0=A0=A0=A0=A0=A0 +=3D xc_arinc653.c

=A0CTRL_SRCS-y=A0=A0=A0=A0=A0=A0 +=3D xc_tbuf.c

=A0CTRL_SRCS-y=A0=A0=A0=A0=A0=A0 +=3D xc_pm.c

=A0CTRL_SRCS-y=A0=A0=A0=A0=A0=A0 +=3D xc_cpu_hotplug.c

diff -rupN a/tools/libxc/xc_arinc653.c b/tools/libxc/xc_arinc653.c

--- a/tools/libxc/xc_arinc653.c=A0=A0=A0=A0 1969-12-31 19:00:00.000000000 -0500

+++ b/tools/libxc/xc_arinc653.c 2010-04-14 17:49:26.952638000 -0400

@@ -0,0 +1,28 @@

+/*************************************************= ***************************

+ * (C) 2010 - DornerWorks, Ltd <DornerWorks.com>

+ *************************************************************************= ***

+ *

+ *=A0=A0=A0=A0=A0=A0=A0 File: xc_arinc653.c

+ *=A0=A0=A0=A0=A0 Author: Josh Holtrop <DornerWorks.com>

+ *

+ * Description: XC Interface to the ARINC 653 scheduler

+ *

+ */

+

+#include "xc_private.h"

+

+int

+xc_sched_arinc653_sched_set(

+=A0=A0=A0 int xc_handle,

+=A0=A0=A0 xen_domctl_sched_arinc653_schedule_t * sched)

+{

+=A0=A0=A0 DECLARE_DOMCTL;

+

+=A0=A0=A0 domctl.cmd =3D XEN_DOMCTL_scheduler_op;

+=A0=A0=A0 domctl.domain =3D (domid_t) 0;

+=A0=A0=A0 domctl.u.scheduler_op.sched_id =3D XEN_SCHEDULER_ARINC653;

+=A0=A0=A0 domctl.u.scheduler_op.cmd =3D XEN_DOMCTL_SCHEDOP_put_global_info;<= /p>

+=A0=A0=A0 set_xen_guest_handle(domctl.u.scheduler_op.u.arinc653.schedule, sched);

+

+=A0=A0=A0 return do_domctl(xc_handle, &domctl);

+}

diff -rupN a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h

--- a/tools/libxc/xenctrl.h=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 2010-04-13 10:49:37.573793000 -0400

+++ b/tools/libxc/xenctrl.h=A0=A0=A0=A0=A0=A0=A0=A0 2010-04-14 17:49:26.952638000 -0400

@@ -476,6 +476,16 @@ int xc_sched_credit_domain_get(int xc_ha

=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0= =A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 struct xen_domctl_sched_credit *sdom);

=A0

=A0/**

+ * This function sets the global ARINC 653 schedule.

+ *

+ * @parm xc_handle a handle to an open hypervisor interface

+ * @parm sched a pointer to the new ARINC 653 schedule

+ * return 0 on success

+ */

+int xc_sched_arinc653_sched_set(int xc_handle,

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0= =A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 xen_domctl_sched_arinc653_schedule_t * sched);

+

+/**

=A0 * This function sends a trigger to a domain.

=A0 *

=A0 * @parm xc_handle a handle to an open hypervisor interface

diff -rupN a/xen/common/Makefile b/xen/common/Makefile

--- a/xen/common/Makefile=A0=A0=A0=A0=A0=A0=A0=A0 2010-04-13 10:49:37.573793000 -0400

+++ b/xen/common/Makefile=A0=A0=A0=A0 2010-04-13 13:00:31.651749000 -0400

@@ -14,6 +14,7 @@ obj-y +=3D page_alloc.o

=A0obj-y +=3D rangeset.o

=A0obj-y +=3D sched_credit.o

=A0obj-y +=3D sched_sedf.o

+obj-y +=3D sched_arinc653.o

=A0obj-y +=3D schedule.o

=A0obj-y +=3D shutdown.o

=A0obj-y +=3D softirq.o

diff -rupN a/xen/common/sched_arinc653.c b/xen/common/sched_arinc653.c<= /p>

--- a/xen/common/sched_arinc653.c=A0=A0=A0=A0=A0=A0 1969-12-31 19:00:00.000000000 -0500

+++ b/xen/common/sched_arinc653.c=A0=A0=A0 2010-04-14 18:13:26.163404000 -0400

@@ -0,0 +1,590 @@

+/*

+ * File: sched_arinc653.c

+ * Copyright (c) 2010, DornerWorks, Ltd. <DornerWorks.com>

+ *

+ * Description:

+ *=A0=A0 This file provides an ARINC653-compatible scheduling algorithm

+ *=A0=A0 for use in Xen.

+ *

+ * This program is free software; you can redistribute it and/or modify it

+ * under the terms of the GNU General Public License as published by the Fre= e

+ * software Foundation; either version 2 of the License, or (at your option)=

+ * any later version.

+ *

+ * This program is distributed in the hope that it will be useful,

+ * but WITHOUT ANY WARRANTY; without even the implied warranty of

+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

+ * See the GNU General Public License for more details.

+ */

+

+

+/*************************************************= *************************

+ * Includes=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0= =A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0= =A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 *

+ *************************************************************************= /

+#include <xen/lib.h>

+#include <xen/sched.h>

+#include <xen/sched-if.h>

+#include <xen/timer.h>

+#include <xen/softirq.h>

+#include <xen/time.h>

+#include <xen/errno.h>

+#include <xen/list.h>

+#include <public/domctl.h>=A0=A0=A0=A0=A0=A0=A0=A0=A0 /* ARINC653_MAX_DOMAINS_PER_SCHEDULE */

+#include <xen/guest_access.h>

+

+

+/*************************************************= *************************

+ * Private Macros=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0= =A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0= =A0=A0=A0=A0=A0=A0=A0=A0=A0 *

+ *************************************************************************= */

+

+/**

+ * Retrieve the idle VCPU for a given physical CPU

+ */

+#define IDLETASK(cpu)=A0 ((struct vcpu *) per_cpu(schedule_data, (cpu)).idle)

+

+/**

+ * Return a pointer to the ARINC 653-specific scheduler data information

+ * associated with the given VCPU (vc)

+ */

+#define AVCPU(vc) ((arinc653_vcpu_t *)(vc)->sched_priv)

+

+/*************************************************= *************************

+ * Private Type Definitions=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0= =A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0= =A0 *

+ *************************************************************************= */

+

+/**

+ * The sched_entry_t structure holds a single entry of the

+ * ARINC 653 schedule.

+ */

+typedef struct sched_entry_s

+{

+=A0=A0=A0 /* dom_handle holds the handle ("UUID") for the domain that thi= s

+=A0=A0=A0=A0 * schedule entry refers to. */

+=A0=A0=A0 xen_domain_handle_t dom_handle;

+=A0=A0=A0 /* vcpu_id holds the VCPU number for the VCPU that this schedule

+=A0=A0=A0=A0 * entry refers to. */

+=A0=A0=A0 int=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 vcpu_id;

+=A0=A0=A0 /* runtime holds the number of nanoseconds that the VCPU for this<= /p>

+=A0=A0=A0=A0 * schedule entry should be allowed to run per major frame. */

+=A0=A0=A0 s_time_t=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 runtime;

+=A0=A0=A0 /* vc holds a pointer to the Xen VCPU structure */

+=A0=A0=A0 struct vcpu *=A0=A0=A0=A0=A0=A0 vc;

+} sched_entry_t;

+

+/**

+ * The arinc653_vcpu_t structure holds ARINC 653-scheduler-specific

+ * information for all non-idle VCPUs

+ */

+typedef struct arinc653_vcpu_s

+{

+=A0=A0=A0 /* vc points to Xen's struct vcpu so we can get to it from an<= /p>

+=A0=A0=A0=A0 * arinc653_vcpu_t pointer. */

+=A0=A0=A0 struct vcpu *=A0=A0=A0=A0=A0=A0 vc;

+=A0=A0=A0 /* awake holds whether the VCPU has been woken with vcpu_wake() */=

+=A0=A0=A0 bool_t=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 awake;

+=A0=A0=A0 /* list holds the linked list information for the list this VCPU

+=A0=A0=A0=A0 * is stored in */

+=A0=A0=A0 struct list_head=A0=A0=A0 list;

+} arinc653_vcpu_t;

+

+

+/*************************************************= *************************

+ * Global Data=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0= =A0=A0=A0=A0=A0=A0=A0=A0=A0 =A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0= =A0=A0=A0*

+ *************************************************************************= */

+

+/**

+ * This array holds the active ARINC 653 schedule.

+ *

+ * When the system tries to start a new VCPU, this schedule is scanned

+ * to look for a matching (handle, VCPU #) pair. If both the handle ("UUID")

+ * and VCPU number match, then the VCPU is allowed to run. Its run time

+ * (per major frame) is given in the third entry of the schedule.

+ */

+static sched_entry_t arinc653_schedule[ARINC653_MAX_DOMAINS_PER_SCHEDULE] =3D {<= /span>

+=A0=A0=A0 { "", 0, MILLISECS(10), NULL }

+};

+

+/**

+ * This variable holds the number of entries that are valid in

+ * the arinc653_schedule table.

+ *

+ * This is not necessarily the same as the number of domains in the

+ * schedule. A domain could be listed multiple times within the schedule,

+ * or a domain with multiple VCPUs could have a different

+ * schedule entry for each VCPU.

+ *

+ * A value of 1 means that only 1 domain (Dom0) will initially be started.

+ */

+static int num_schedule_entries =3D 1;

+

+/**

+ * arinc653_major_frame holds the major frame time for the ARINC 653 schedul= e.

+ */

+static s_time_t arinc653_major_frame =3D MILLISECS(10);

+

+/**

+ * next_major_frame holds the time that the next major frame starts

+ */

+static s_time_t next_major_frame =3D 0;

+

+/**

+ * vcpu_list holds pointers to all Xen VCPU structures for iterating through=

+ */

+static LIST_HEAD(vcpu_list);

+

+/*************************************************= *************************

+ * Scheduler functions=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0= =A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0= =A0=A0=A0=A0=A0 *

+ *************************************************************************= */

+

+/**

+ * This function compares two domain handles.

+ *

+ * @param h1=A0=A0=A0=A0=A0=A0=A0 Pointer to handle 1

+ * @param h2=A0=A0=A0=A0=A0=A0=A0 Pointer to handle 2

+ *

+ * @return=A0=A0=A0=A0=A0=A0=A0=A0=A0 <ul>

+ *=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 <li> <0:=A0 handle 1 is less than handle 2

+ *=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 <li>=A0 0:=A0 handle 1 is equal to handle 2

+ *=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 <li> >0:=A0 handle 1 is greater than handle 2

+ *=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 </ul>

+ */

+static int dom_handle_cmp(const xen_domain_handle_t h1,

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0= =A0=A0=A0=A0=A0=A0=A0=A0=A0 const xen_domain_handle_t h2)

+{

+=A0=A0=A0 return memcmp(h1, h2, sizeof(xen_domain_handle_t));

+} /* end dom_handle_cmp */

+

+/**

+ * This function searches the vcpu list to find a VCPU that matches

+ * the domain handle and VCPU ID specified.

+ *

+ * @param handle=A0=A0=A0 Pointer to handler

+ * @param vcpu_id=A0=A0 VCPU ID

+ *

+ * @return=A0=A0=A0=A0=A0=A0=A0=A0=A0 <ul>

+ *=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 <li> Pointer to the matching VCPU if one is found

+ *=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 <li> NULL otherwise

+ *=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 </ul>

+ */

+static struct vcpu * find_vcpu(xen_domain_handle_t handle, int vcpu_id)

+{

+=A0=A0=A0 arinc653_vcpu_t * avcpu; /* loop index variable */

+=A0=A0=A0 struct vcpu * vc =3D NULL;

+

+=A0=A0=A0 /* loop through the vcpu_list looking for the specified VCPU */

+=A0=A0=A0 list_for_each_entry(avcpu, &vcpu_list, list)

+=A0=A0=A0 {

+=A0=A0=A0=A0=A0=A0=A0 /* If the handles & VCPU IDs match, we've found a matching VCPU *= /

+=A0=A0=A0=A0=A0=A0=A0 if ((dom_handle_cmp(avcpu->vc->domain->handle, handle) =3D=3D 0)=

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 && (vcpu_id =3D=3D avcpu->vc->vcpu_id))

+=A0=A0=A0=A0=A0=A0=A0 {

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 vc =3D avcpu->vc;

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 /*

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 * "break" statement used instead of loop control variable becau= se

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 * the macro used for this loop does not support using loop control=

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 * variables

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 */

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 break;

+=A0=A0=A0=A0=A0=A0=A0 }

+=A0=A0=A0 }

+

+=A0=A0=A0 return vc;

+} /* end find_vcpu */

+

+/**

+ * This function updates the pointer to the Xen VCPU structure for each entr= y in

+ * the ARINC 653 schedule.

+ *

+ * @param=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 <None>

+ * @return=A0=A0=A0=A0=A0=A0=A0=A0=A0 <None>

+ */

+static void update_schedule_vcpus(void)

+{

+=A0=A0=A0 /* Loop through the number of entries in the schedule */

+=A0=A0=A0 for (int i =3D 0; i < num_schedule_entries; i++)

+=A0=A0=A0 {

+=A0=A0=A0=A0=A0=A0=A0 /* Update the pointer to the Xen VCPU structure for the current entry */<= /span>

+=A0=A0=A0=A0=A0=A0=A0 arinc653_schedule[i].vc =3D

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 find_vcpu(arinc653_schedule[i].dom_handle,

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0= =A0=A0=A0=A0=A0 arinc653_schedule[i].vcpu_id);

+=A0=A0=A0 }

+} /* end update_schedule_vcpus */

+

+/**

+ * This function is called by the arinc653_adjust_global scheduler

+ * callback function in response to a domain control hypercall with

+ * a scheduler operation.

+ *

+ * The parameter schedule is set to be the address of a local variable from

+ * within arinc653_adjust_global(), so it is guaranteed to not be NULL.

+ *

+ * @param schedule=A0 Pointer to the new ARINC 653 schedule.

+ *

+ * @return=A0=A0=A0=A0=A0=A0=A0=A0=A0 <ul>

+ *=A0=A0=A0=A0=A0=A0=A0=A0 =A0=A0=A0=A0=A0=A0=A0=A0=A0<li> 0 =3D success

+ *=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 <li> !0 =3D error

+ *=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 </ul>

+ */

+static int arinc653_sched_set(xen_domctl_sched_arinc653_schedule_t * schedule)

+{

+=A0=A0=A0 int ret =3D 0;

+=A0=A0=A0 s_time_t total_runtime =3D 0;

+=A0=A0=A0 bool_t found_dom0 =3D 0;

+=A0=A0=A0 const static xen_domain_handle_t dom0_handle =3D {0};

+

+=A0=A0=A0 /* check for valid major frame and number of schedule entries */

+=A0=A0=A0 if ( (schedule->major_frame <=3D 0)

+=A0=A0=A0=A0=A0 || (schedule->num_sched_entries < 1)

+=A0=A0=A0=A0=A0 || (schedule->num_sched_entries > ARINC653_MAX_DOMAINS_PER_SCHEDULE= ) )

+=A0=A0=A0 {

+=A0=A0=A0=A0=A0=A0=A0 ret =3D -EINVAL;

+=A0=A0=A0 }

+

+=A0=A0=A0 if (ret =3D=3D 0)

+=A0=A0=A0 {

+=A0=A0=A0=A0=A0=A0=A0 for (int i =3D 0; i < schedule->num_sched_entries; i++)

+=A0=A0=A0=A0=A0=A0=A0 {

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 /*

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 * look for domain 0 handle - every schedule must contain

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 * some time for domain 0 to run

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 */

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 if (dom_handle_cmp(schedule->sched_entries[i].dom_handle,

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0= =A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 dom0_handle) =3D=3D 0)

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 {

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 found_dom0 =3D 1;

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 }

+

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 /* check for a valid VCPU ID and run time */

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 if ( (schedule->sched_entries[i].vcpu_id < 0)

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 || (schedule->sched_entries[i].runtime <=3D 0) )

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 =A0{

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 ret =3D -EINVAL;

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 }

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 else

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 {

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 /* Add this entry's run time to total run time */

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 total_runtime +=3D schedule->sched_entries[i].runtime;

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 }

+=A0=A0=A0=A0=A0=A0=A0 } /* end loop through schedule entries */

+=A0=A0=A0 }

+

+=A0=A0=A0 if (ret =3D=3D 0)

+=A0=A0=A0 {

+=A0=A0=A0=A0=A0=A0=A0 /* error if the schedule doesn't contain a slot for domain 0 */

+=A0=A0=A0=A0=A0=A0=A0 if (found_dom0 =3D=3D 0)

+=A0=A0=A0=A0=A0=A0=A0 {

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 ret =3D -EINVAL;

+=A0=A0=A0=A0=A0=A0=A0 }

+=A0=A0=A0 }

+

+=A0=A0=A0 if (ret =3D=3D 0)

+=A0=A0=A0 {

+=A0=A0=A0=A0=A0=A0=A0 /*

+=A0=A0=A0=A0=A0=A0=A0=A0 * error if the major frame is not large enough to run all entries<= /p>

+=A0=A0=A0=A0=A0=A0=A0=A0 * as indicated by comparing the total run time to the major frame<= /p>

+=A0=A0=A0=A0=A0=A0=A0=A0 * length

+=A0=A0=A0=A0=A0=A0=A0=A0 */

+=A0=A0=A0=A0=A0=A0=A0 if (total_runtime > schedule->major_frame)

+=A0=A0=A0=A0=A0=A0 =A0{

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 ret =3D -EINVAL;

+=A0=A0=A0=A0=A0=A0=A0 }

+=A0=A0=A0 }

+

+=A0=A0=A0 if (ret =3D=3D 0)

+=A0=A0=A0 {

+=A0=A0=A0=A0=A0=A0=A0 /* copy the new schedule into place */

+=A0=A0=A0=A0=A0=A0=A0 num_schedule_entries =3D schedule->num_sched_entries;

+=A0=A0=A0=A0=A0=A0=A0 arinc653_major_frame =3D schedule->major_frame;

+=A0=A0=A0=A0=A0=A0=A0 for (int i =3D 0; i < num_schedule_entries; i++)

+=A0=A0=A0=A0=A0=A0=A0 {

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 memcpy(arinc653_schedule[i].dom_handle,

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0= =A0=A0 schedule->sched_entries[i].dom_handle,

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0= =A0=A0 sizeof(arinc653_schedule[i].dom_handle));

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 arinc653_schedule[i].vcpu_id =3D schedule->sched_entries[i].vcpu_id;

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 arinc653_schedule[i].runtime =3D schedule->sched_entries[i].runtime;

+=A0=A0=A0=A0=A0=A0=A0 }

+=A0=A0=A0=A0=A0=A0=A0 update_schedule_vcpus();

+

+=A0=A0=A0=A0=A0=A0=A0 /*

+=A0=A0=A0=A0=A0=A0=A0=A0 * The newly-installed schedule takes effect immediately.

+=A0=A0=A0=A0=A0=A0=A0=A0 * We do not even wait for the current major frame to expire.

+=A0=A0=A0=A0=A0=A0=A0=A0 *

+=A0=A0=A0=A0=A0=A0=A0=A0 * Signal a new major frame to begin. The next major frame

+=A0=A0=A0=A0=A0=A0=A0=A0 * is set up by the do_schedule callback function when it

+=A0=A0=A0=A0=A0=A0 =A0=A0* is next invoked.

+=A0=A0=A0=A0=A0=A0=A0=A0 */

+=A0=A0=A0=A0=A0=A0=A0 next_major_frame =3D NOW();

+=A0=A0=A0 }

+

+=A0=A0=A0 return ret;

+} /* end arinc653_sched_set */

+

+/**

+ * Xen scheduler callback function to adjust global scheduling parameters=

+ *

+ * @param op=A0=A0=A0 Pointer to the domain control scheduler operation structure

+ *

+ * @return=A0=A0=A0=A0=A0=A0=A0=A0=A0 <ul>

+ *=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 <li> 0 for success

+ *=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 <li> !0 if there is an error

+ *=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 </ul>

+ */

+static int arinc653_adjust_global(struct xen_domctl_scheduler_op * op)

+{

+=A0=A0=A0 int ret =3D -1;

+=A0=A0=A0 xen_domctl_sched_arinc653_schedule_t new_sched;

+

+=A0=A0=A0 if (op->cmd =3D=3D XEN_DOMCTL_SCHEDOP_put_global_info)

+=A0=A0=A0 {

+=A0=A0=A0=A0=A0=A0=A0 if (copy_from_guest(&new_sched, op->u.arinc653.schedule, 1) !=3D 0= )

+=A0=A0=A0=A0=A0=A0=A0 {

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 ret =3D -EFAULT;

+=A0=A0=A0=A0=A0=A0=A0 }

+=A0=A0=A0=A0=A0=A0=A0 else

+=A0=A0=A0=A0=A0=A0=A0 {

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 ret =3D arinc653_sched_set(&new_sched);

+=A0=A0=A0=A0=A0=A0=A0 }

+=A0=A0=A0 }

+

+=A0=A0=A0 return ret;

+} /* end arinc653_adjust_global */

+

+/**

+ * Xen scheduler callback function to initialize a virtual CPU (VCPU).

+ *

+ * @param v=A0=A0=A0=A0=A0=A0=A0=A0 Pointer to the VCPU structure

+ *

+ * @return=A0=A0=A0=A0=A0=A0=A0=A0=A0 <ul>

+ *=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 <li> 0 if the VCPU is allowed to run

+ *=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 <li> !0 if there is an error

+ *=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 </ul>

+ */

+static int arinc653_init_vcpu(struct vcpu * v)

+{

+=A0=A0=A0 int ret =3D -1;

+

+=A0=A0=A0 if (is_idle_vcpu(v))

+=A0=A0=A0 {

+=A0=A0=A0=A0=A0=A0=A0 /*

+=A0=A0=A0=A0=A0=A0=A0=A0 * The idle VCPU is created by Xen to run when no domains

+=A0=A0=A0=A0=A0=A0=A0=A0 * are runnable or require CPU time.

+=A0=A0=A0=A0=A0=A0=A0=A0 * It is similar to an "idle task" or "halt loop" proc= ess

+=A0=A0=A0=A0=A0=A0=A0=A0 * in an operating system.

+=A0=A0=A0=A0=A0=A0=A0=A0 * We do not track any scheduler information for the idle VCPU.

+=A0=A0=A0=A0=A0=A0=A0=A0 */

+=A0=A0=A0=A0=A0=A0=A0 v->sched_priv =3D NULL;

+=A0=A0=A0=A0=A0=A0=A0 ret =3D 0;

+=A0=A0=A0 }

+=A0=A0=A0 else

+=A0=A0=A0 {

+=A0=A0=A0=A0=A0=A0=A0 /*

+=A0=A0=A0=A0=A0=A0=A0=A0 * Allocate memory for the ARINC 653-specific scheduler data information

+=A0=A0=A0=A0=A0=A0=A0=A0 * associated with the given VCPU (vc).

+=A0=A0=A0=A0=A0=A0=A0=A0 */

+=A0=A0=A0=A0=A0=A0=A0 v->sched_priv =3D xmalloc(arinc653_vcpu_t);

+=A0=A0=A0=A0=A0=A0=A0 if (AVCPU(v) !=3D NULL)

+=A0=A0=A0=A0 =A0=A0=A0{

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 /*

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 * Initialize our ARINC 653 scheduler-specific information

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 * for the VCPU.

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 * The VCPU starts "asleep."

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 * When Xen is ready for the VCPU to run, it will call

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 * the vcpu_wake scheduler callback function and our

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 * scheduler will mark the VCPU awake.

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 */

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 AVCPU(v)->vc =3D v;

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 AVCPU(v)->awake =3D 0;

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 list_add(&AVCPU(v)->list, &vcpu_list);

+=A0=A0 =A0=A0=A0=A0=A0=A0=A0=A0=A0ret =3D 0;

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 update_schedule_vcpus();

+=A0=A0=A0=A0=A0=A0=A0 }

+=A0=A0=A0 }

+

+=A0=A0=A0 return ret;

+} /* end arinc653_init_vcpu */

+

+/**

+ * Xen scheduler callback function to remove a VCPU

+ *

+ * @param v=A0=A0=A0=A0=A0=A0=A0=A0 Pointer to the VCPU structure to remove

+ *

+ * @return=A0=A0=A0=A0=A0=A0=A0=A0=A0 <None>

+ */

+static void arinc653_destroy_vcpu(struct vcpu * v)

+{

+=A0=A0=A0 if (AVCPU(v) !=3D NULL)

+=A0=A0=A0 {

+=A0=A0=A0=A0=A0=A0=A0 /* remove the VCPU from whichever list it is on */

+=A0=A0=A0=A0=A0=A0=A0 list_del(&AVCPU(v)->list);

+=A0=A0=A0=A0=A0=A0=A0 /* free the arinc653_vcpu structure */

+=A0=A0=A0=A0=A0=A0=A0 xfree(AVCPU(v));

+=A0=A0=A0=A0=A0=A0=A0 update_schedule_vcpus();

+=A0=A0=A0 }

+} /* end arinc653_destroy_vcpu */

+

+/**

+ * Xen scheduler callback function to select a VCPU to run.

+ * This is the main scheduler routine.

+ *

+ * @param t=A0=A0=A0=A0=A0=A0=A0=A0 Current time

+ *

+ * @return=A0=A0=A0=A0=A0=A0=A0=A0=A0 Time slice and address of the VCPU structure for the chosen

+ *=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 domain

+ */

+static struct task_slice arinc653_do_schedule(s_time_t t)

+{

+=A0=A0=A0 struct task_slice ret;=A0=A0=A0=A0=A0=A0=A0=A0 =A0=A0=A0=A0=A0=A0=A0=A0= =A0=A0=A0=A0=A0/* hold the chosen domain */

+=A0=A0=A0 struct vcpu * new_task =3D NULL;

+=A0=A0=A0 static int sched_index =3D 0;

+=A0=A0=A0 static s_time_t last_major_frame;

+=A0=A0=A0 static s_time_t last_switch_time;

+=A0=A0=A0 static s_time_t next_switch_time;

+

+=A0=A0=A0 if (t >=3D next_major_frame)

+=A0=A0=A0 {

+=A0=A0=A0=A0=A0=A0=A0 /* time to enter a new major frame

+=A0=A0=A0=A0=A0=A0=A0=A0 * the first time this function is called, this will be true */

+=A0=A0=A0=A0=A0=A0=A0 sched_index =3D 0;

+=A0=A0=A0=A0=A0=A0=A0 last_major_frame =3D last_switch_time =3D t;

+=A0=A0=A0=A0=A0=A0=A0 next_major_frame =3D t + arinc653_major_frame;

+=A0=A0=A0 }

+=A0=A0=A0 else if (t >=3D next_switch_time)

+=A0=A0=A0 {

+=A0=A0=A0=A0=A0=A0=A0 /* time to switch to the next domain in this major frame */

+=A0=A0=A0=A0=A0=A0=A0 sched_index++;

+=A0=A0=A0=A0=A0=A0=A0 last_switch_time =3D next_switch_time;

+=A0=A0=A0 }

+

+=A0 =A0=A0/*

+=A0=A0=A0=A0 * If there are more domains to run in the current major frame, set=

+=A0=A0=A0=A0 * next_switch_time equal to the last switch time + this domain's run = time.

+=A0=A0=A0=A0 * Otherwise, set next_switch_time equal to the start of the next major

+=A0=A0=A0=A0 * frame.

+=A0=A0=A0 =A0*/

+=A0=A0=A0 next_switch_time =3D (sched_index < num_schedule_entries)

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0= =A0=A0=A0=A0 ? last_switch_time + arinc653_schedule[sched_index].runtime

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0= =A0=A0=A0=A0 : next_major_frame;

+

+=A0=A0=A0 /*

+=A0=A0=A0=A0 * If there are more domains to run in the current major frame, set=

+=A0=A0=A0=A0 * new_task equal to the address of next domain's VCPU structure.

+=A0=A0=A0=A0 * Otherwise, set new_task equal to the address of the idle task's VCP= U

+=A0=A0=A0=A0 * structure.

+=A0=A0=A0=A0 */

+=A0=A0=A0 new_task =3D (sched_index < num_schedule_entries)

+=A0=A0=A0=A0=A0=A0=A0=A0=A0 =A0=A0=A0=A0=A0=A0=A0=A0=A0? arinc653_schedule[sched_index].vc

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0= =A0=A0 : IDLETASK(0);

+

+=A0=A0=A0 /* Check to see if the new task can be run (awake & runnable). */

+=A0=A0=A0 if (!((new_task !=3D NULL)

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 && AVCPU(new_task)->awake

+=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 && vcpu_runnable(new_task)) )

+=A0=A0=A0 {

+=A0=A0=A0=A0=A0=A0=A0 new_task =3D IDLETASK(0);

+=A0=A0=A0 }

+=A0=A0=A0 BUG_ON(new_task =3D=3D NULL);

+

+=A0=A0=A0 /*

+=A0=A0=A0=A0 * Check to make sure we did not miss a major frame.

+=A0=A0=A0=A0 * This is a good test for robust partitioning.

+=A0=A0=A0=A0 */

+=A0=A0=A0 BUG_ON(t >=3D next_major_frame);

+

+=A0=A0=A0 /*

+=A0=A0=A0=A0 * Return the amount of time the next domain has to run and the address

+=A0=A0=A0=A0 * of the selected task's VCPU structure.

+=A0=A0=A0=A0 */

+=A0=A0=A0 ret.time =3D next_switch_time - t;

+=A0=A0=A0 ret.task =3D new_task;

+

+=A0=A0=A0 BUG_ON(ret.time <=3D 0);

+

+=A0=A0=A0 return ret;

+} /* end arinc653_do_schedule */

+

+/**

+ * Xen scheduler callback function to select a CPU for the VCPU to run on=

+ *

+ * @param v=A0=A0=A0=A0=A0=A0=A0=A0 Pointer to the VCPU structure for the current domain

+ *

+ * @return=A0=A0=A0=A0=A0=A0=A0=A0=A0 Number of selected physical CPU

+ */

+static int arinc653_pick_cpu(struct vcpu * v)

+{

+=A0=A0=A0 /* this implementation only supports one physical CPU */

+=A0=A0=A0 return 0;

+} /* end arinc653_pick_cpu */

+

+/**

+ * Xen scheduler callback function to wake up a VCPU

+ *

+ * @param vc=A0=A0=A0=A0=A0=A0=A0 Pointer to the VCPU structure for the current domain

+ *

+ * @return=A0=A0=A0=A0=A0=A0=A0=A0=A0 <None>

+ */

+static void arinc653_vcpu_wake(struct vcpu * vc)

+{

+=A0=A0=A0 /* boolean flag to indicate first run */

+=A0=A0=A0 static bool_t dont_raise_softirq =3D 0;

+

+=A0=A0=A0 if (AVCPU(vc) !=3D NULL)=A0 /* check that this is a VCPU we are tracking = */

+=A0=A0=A0 {

+=A0=A0=A0=A0=A0=A0=A0 AVCPU(vc)->awake =3D 1;

+=A0=A0=A0 }

+

+=A0=A0=A0 /* the first time the vcpu_wake function is called, we should raise

+=A0=A0=A0=A0 * a softirq to invoke the do_scheduler callback */

+=A0=A0=A0 if (!dont_raise_softirq)

+=A0=A0=A0 {

+=A0=A0=A0=A0=A0=A0=A0 cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);

+=A0=A0=A0=A0=A0=A0=A0 dont_raise_softirq =3D 1;

+=A0=A0=A0 }

+} /* end arinc653_vcpu_wake */

+
+/**
+ * Xen scheduler callback function to sleep a VCPU
+ *
+ * @param vc        Pointer to the VCPU structure for the current domain
+ *
+ * @return          <None>
+ */
+static void arinc653_vcpu_sleep(struct vcpu * vc)
+{
+    if (AVCPU(vc) != NULL)  /* check that this is a VCPU we are tracking */
+    {
+        AVCPU(vc)->awake = 0;
+    }
+
+    /* if the VCPU being put to sleep is the same one that is currently
+     * running, raise a softirq to invoke the scheduler to switch domains */
+    if (per_cpu(schedule_data, vc->processor).curr == vc)
+    {
+        cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
+    }
+} /* end arinc653_vcpu_sleep */
+
+/**
+ * This structure defines our scheduler for Xen.
+ * The entries tell Xen where to find our scheduler-specific
+ * callback functions.
+ * The symbol must be visible to the rest of Xen at link time.
+ */
+struct scheduler sched_arinc653_def = {
+    .name           = "ARINC 653 Scheduler",
+    .opt_name       = "arinc653",
+    .sched_id       = XEN_SCHEDULER_ARINC653,
+
+    .init_domain    = NULL,
+    .destroy_domain = NULL,
+
+    .init_vcpu      = arinc653_init_vcpu,
+    .destroy_vcpu   = arinc653_destroy_vcpu,
+
+    .do_schedule    = arinc653_do_schedule,
+    .pick_cpu       = arinc653_pick_cpu,
+    .dump_cpu_state = NULL,
+    .sleep          = arinc653_vcpu_sleep,
+    .wake           = arinc653_vcpu_wake,
+    .adjust         = NULL,
+    .adjust_global  = arinc653_adjust_global,
+};
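Not part of the patch, but perhaps useful context: since .opt_name above is "arinc653", the new scheduler should be selectable the usual way through the hypervisor's sched= boot parameter once it is compiled in. The bootloader line below is purely illustrative; any other options stay as before.

    # GRUB entry for the hypervisor (illustrative)
    kernel /boot/xen.gz sched=arinc653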

diff -rupN a/xen/common/schedule.c b/xen/common/schedule.c

--- a/xen/common/schedule.c     2010-04-14 10:57:11.262796000 -0400

+++ b/xen/common/schedule.c  2010-04-14 16:40:21.543608000 -0400

@@ -7,7 +7,8 @@
  *        File: common/schedule.c
  *      Author: Rolf Neugebauer & Keir Fraser
  *              Updated for generic API by Mark Williamson
- *
+ *              ARINC653 scheduler added by DornerWorks <DornerWorks.com>
+ *
  * Description: Generic CPU scheduling code
  *              implements support functionality for the Xen scheduler API.
  *
@@ -56,9 +57,11 @@ DEFINE_PER_CPU(struct schedule_data, sch
 
 extern const struct scheduler sched_sedf_def;
 extern const struct scheduler sched_credit_def;
+extern const struct scheduler sched_arinc653_def;
 static const struct scheduler *__initdata schedulers[] = {
     &sched_sedf_def,
     &sched_credit_def,
+    &sched_arinc653_def,
     NULL
 };
 

diff -rupN a/xen/include/public/domctl.h b/xen/include/public/domctl.h

--- a/xen/include/public/domctl.h 2010-04-14 10:57:11.262796000 -0400

+++ b/xen/include/public/domctl.h         2010-04-14 16:40:21.543608000 -0400

@@ -23,6 +23,8 @@
  *
  * Copyright (c) 2002-2003, B Dragovic
  * Copyright (c) 2002-2006, K Fraser
+ *
+ * ARINC653 Scheduler type added by DornerWorks <DornerWorks.com>.
  */
 
 #ifndef __XEN_PUBLIC_DOMCTL_H__
@@ -303,11 +305,43 @@ DEFINE_XEN_GUEST_HANDLE(xen_domctl_max_v
 /* Scheduler types. */
 #define XEN_SCHEDULER_SEDF     4
 #define XEN_SCHEDULER_CREDIT   5
+#define XEN_SCHEDULER_ARINC653 6
+
 /* Set or get info? */
 #define XEN_DOMCTL_SCHEDOP_putinfo 0
 #define XEN_DOMCTL_SCHEDOP_getinfo 1
 #define XEN_DOMCTL_SCHEDOP_put_global_info 2
 #define XEN_DOMCTL_SCHEDOP_get_global_info 3
+
+/*
+ * This structure is used to pass a new ARINC653 schedule from a
+ * privileged domain (ie dom0) to Xen.
+ */
+#define ARINC653_MAX_DOMAINS_PER_SCHEDULE   64
+struct xen_domctl_sched_arinc653_schedule {
+    /* major_frame holds the time for the new schedule's major frame
+     * in nanoseconds. */
+    int64_t     major_frame;
+    /* num_sched_entries holds how many of the entries in the
+     * sched_entries[] array are valid. */
+    uint8_t     num_sched_entries;
+    /* The sched_entries array holds the actual schedule entries. */
+    struct {
+        /* dom_handle must match a domain's UUID */
+        xen_domain_handle_t dom_handle;
+        /* If a domain has multiple VCPUs, vcpu_id specifies which one
+         * this schedule entry applies to. It should be set to 0 if
+         * there is only one VCPU for the domain. */
+        int                 vcpu_id;
+        /* runtime specifies the amount of time that should be allocated
+         * to this VCPU per major frame. It is specified in nanoseconds */
+        int64_t             runtime;
+    } sched_entries[ARINC653_MAX_DOMAINS_PER_SCHEDULE];
+};
+typedef struct xen_domctl_sched_arinc653_schedule
+    xen_domctl_sched_arinc653_schedule_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_sched_arinc653_schedule_t);
+
 struct xen_domctl_scheduler_op {
     uint32_t sched_id;  /* XEN_SCHEDULER_* */
     uint32_t cmd;       /* XEN_DOMCTL_SCHEDOP_* */
@@ -323,6 +357,9 @@ struct xen_domctl_scheduler_op {
             uint16_t weight;
             uint16_t cap;
         } credit;
+        struct xen_domctl_sched_arinc653 {
+            XEN_GUEST_HANDLE(xen_domctl_sched_arinc653_schedule_t) schedule;
+        } arinc653;
     } u;
 };
 typedef struct xen_domctl_scheduler_op xen_domctl_scheduler_op_t;
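Again as an aside, not part of the patch: a rough sketch of how a dom0 toolstack might fill in the new schedule structure before passing it down through the arinc653 member of xen_domctl_scheduler_op. The helper name, the handle argument, and the 10 ms / 5 ms figures are invented for illustration; only the field and type names come from the header above, which is assumed to be included.

    #include <string.h>

    /* Build a one-entry schedule: a single-VCPU domain gets 5 ms of every
     * 10 ms major frame; the rest of the frame is left to the idle task. */
    static void build_example_schedule(xen_domctl_sched_arinc653_schedule_t *s,
                                       const xen_domain_handle_t handle)
    {
        memset(s, 0, sizeof(*s));
        s->major_frame = 10000000;               /* major frame length, in ns */
        s->num_sched_entries = 1;                /* one valid sched_entries[] slot */
        memcpy(s->sched_entries[0].dom_handle, handle,
               sizeof(xen_domain_handle_t));     /* must match the domain's UUID */
        s->sched_entries[0].vcpu_id = 0;         /* single-VCPU domain */
        s->sched_entries[0].runtime = 5000000;   /* 5 ms per major frame, in ns */
    }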

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

