From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tianyang Chen
Subject: Re: [PATCH v5][RFC]xen: sched: convert RTDS from time to event driven model
Date: Wed, 17 Feb 2016 20:55:34 -0500
Message-ID: <56C52496.7020306@seas.upenn.edu>
References: <1454992407-5436-1-git-send-email-tiche@seas.upenn.edu>
To: Meng Xu
Cc: "xen-devel@lists.xenproject.org", Dario Faggioli, George Dunlap, Dagaen Golomb, Meng Xu
List-Id: xen-devel@lists.xenproject.org

On 2/15/2016 10:55 PM, Meng Xu wrote:
> Hi Tianyang,
>
> Thanks for the patch! Great work and really quick action! :-)
> I will just comment on the things I found on a quick pass, and look
> forward to Dario's comments.
>
> On Mon, Feb 8, 2016 at 11:33 PM, Tianyang Chen wrote:
> > Changes since v4:
> > removed unnecessary replenishment queue checks in vcpu_wake()
> > extended replq_remove() to all cases in vcpu_sleep()
> > used _deadline_queue_insert() helper function for both queues
> > _replq_insert() and _replq_remove() program timer internally
> >
> > Changes since v3:
> > removed running queue.
> > added repl queue to keep track of repl events.
> > timer is now per scheduler.
> > timer is init on a valid cpu in a cpupool.
> >
> > Signed-off-by: Tianyang Chen
> > Signed-off-by: Meng Xu
> > Signed-off-by: Dagaen Golomb
> > ---
> >  xen/common/sched_rt.c | 337 ++++++++++++++++++++++++++++++++++++-------------
> >  1 file changed, 251 insertions(+), 86 deletions(-)
> >
> > diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
> > index 2e5430f..1f0bb7b 100644
> > --- a/xen/common/sched_rt.c
> > +++ b/xen/common/sched_rt.c
> > @@ -16,6 +16,7 @@
> >  #include
> >  #include
> >  #include
> > +#include
> >  #include
> >  #include
> >  #include
> > @@ -87,7 +88,7 @@
> >  #define RTDS_DEFAULT_BUDGET (MICROSECS(4000))
> >
> >  #define UPDATE_LIMIT_SHIFT 10
> > -#define MAX_SCHEDULE (MILLISECS(1))
> > +
> >  /*
> >   * Flags
> >   */
> > @@ -142,6 +143,12 @@ static cpumask_var_t *_cpumask_scratch;
> >   */
> >  static unsigned int nr_rt_ops;
> >
> > +/* handler for the replenishment timer */
> > +static void repl_handler(void *data);
> > +
> > +/* checks if a timer is active or not */
> > +bool_t active_timer(struct timer* t);
> > +
> >  /*
> >   * Systme-wide private data, include global RunQueue/DepletedQ
> >   * Global lock is referenced by schedule_data.schedule_lock from all
> > @@ -152,7 +159,9 @@ struct rt_private {
> >      struct list_head sdom; /* list of availalbe domains, used for dump */
> >      struct list_head runq; /* ordered list of runnable vcpus */
> >      struct list_head depletedq; /* unordered list of depleted vcpus */
> > +    struct list_head replq; /* ordered list of vcpus that need replenishment */
> >      cpumask_t tickled; /* cpus been tickled */
> > +    struct timer *repl_timer; /* replenishment timer */
> >  };
> >
> >  /*
> > @@ -160,6 +169,7 @@ struct rt_private {
> >   */
> >  struct rt_vcpu {
> >      struct list_head q_elem; /* on the runq/depletedq list */
> > +    struct list_head replq_elem; /* on the repl event list */
> >
> >      /* Up-pointers */
> >      struct rt_dom *sdom;
> > @@ -213,8 +223,14 @@ static inline struct list_head *rt_depletedq(const struct scheduler *ops)
> >      return &rt_priv(ops)->depletedq;
> >  }
> >
> > +static inline struct list_head *rt_replq(const struct scheduler *ops)
> > +{
> > +    return &rt_priv(ops)->replq;
> > +}
> > +
> >  /*
> > - * Queue helper functions for runq and depletedq
> > + * Queue helper functions for runq, depletedq
> > + * and replenishment event queue
> >   */
> >  static int
> >  __vcpu_on_q(const struct rt_vcpu *svc)
> > @@ -228,6 +244,18 @@ __q_elem(struct list_head *elem)
> >      return list_entry(elem, struct rt_vcpu, q_elem);
> >  }
> >
> > +static struct rt_vcpu *
> > +__replq_elem(struct list_head *elem)
> > +{
> > +    return list_entry(elem, struct rt_vcpu, replq_elem);
> > +}
> > +
> > +static int
> > +__vcpu_on_replq(const struct rt_vcpu *svc)
> > +{
> > +    return !list_empty(&svc->replq_elem);
> > +}
> > +
> >  /*
> >   * Debug related code, dump vcpu/cpu information
> >   */
> > @@ -288,7 +316,7 @@ rt_dump_pcpu(const struct scheduler *ops, int cpu)
> >  static void
> >  rt_dump(const struct scheduler *ops)
> >  {
> > -    struct list_head *runq, *depletedq, *iter;
> > +    struct list_head *runq, *depletedq, *replq, *iter;
> >      struct rt_private *prv = rt_priv(ops);
> >      struct rt_vcpu *svc;
> >      struct rt_dom *sdom;
> > @@ -301,6 +329,7 @@ rt_dump(const struct scheduler *ops)
> >
> >      runq = rt_runq(ops);
> >      depletedq = rt_depletedq(ops);
> > +    replq = rt_replq(ops);
> >
> >      printk("Global RunQueue info:\n");
> >      list_for_each( iter, runq )
> > @@ -316,6 +345,13 @@ rt_dump(const struct scheduler *ops)
> >          rt_dump_vcpu(ops, svc);
> >      }
> >
> > +    printk("Global Replenishment Event info:\n");
> > +    list_for_each( iter, replq )
> > +    {
> > +        svc = __replq_elem(iter);
> > +        rt_dump_vcpu(ops, svc);
> > +    }
> > +
> >      printk("Domain info:\n");
> >      list_for_each( iter, &prv->sdom )
> >      {
> > @@ -388,6 +424,66 @@ __q_remove(struct rt_vcpu *svc)
> >  }
> >
> >  /*
> > + * Removing a vcpu from the replenishment queue could
> > + * re-program the timer for the next replenishment event
> > + * if the timer is currently active
> > + */
> > +static inline void
> > +__replq_remove(const struct scheduler *ops, struct rt_vcpu *svc)
> > +{
> > +    struct rt_private *prv = rt_priv(ops);
> > +    struct list_head *replq = rt_replq(ops);
> > +    struct timer* repl_timer = prv->repl_timer;
> > +
> > +    if ( __vcpu_on_replq(svc) )
> > +    {
> > +        /*
> > +         * disarm the timer if removing the first replenishment event
> > +         * which is going to happen next
> > +         */
> > +        if( active_timer(repl_timer) )
> > +        {
> > +            struct rt_vcpu *next_repl = __replq_elem(replq->next);
> > +
> > +            if( next_repl->cur_deadline == svc->cur_deadline )
> > +                repl_timer->expires = 0;
> > +
> > +            list_del_init(&svc->replq_elem);
> > +
> > +            /* re-arm the timer for the next replenishment event */
> > +            if( !list_empty(replq) )
> > +            {
> > +                struct rt_vcpu *svc_next = __replq_elem(replq->next);
> > +                set_timer(repl_timer, svc_next->cur_deadline);
> > +            }
> > +
>
> This blank line should go away; it is quite misleading. At first glance,
> I thought it was the else for if ( __vcpu_on_replq(svc) ), but after a
> second read I found it actually belongs to the
> if ( active_timer(repl_timer) ) block.

Sure.
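To make sure we are looking at the same thing, here is how I read the whole
function once that blank line is dropped. The closing braces past the quoted
snippet are written out here only to show the nesting; the logic itself is
unchanged from the patch:

static inline void
__replq_remove(const struct scheduler *ops, struct rt_vcpu *svc)
{
    struct rt_private *prv = rt_priv(ops);
    struct list_head *replq = rt_replq(ops);
    struct timer *repl_timer = prv->repl_timer;

    if ( __vcpu_on_replq(svc) )
    {
        /*
         * Disarm the timer if we are removing the replenishment event
         * that is due to fire next.
         */
        if ( active_timer(repl_timer) )
        {
            struct rt_vcpu *next_repl = __replq_elem(replq->next);

            if ( next_repl->cur_deadline == svc->cur_deadline )
                repl_timer->expires = 0;

            list_del_init(&svc->replq_elem);

            /* Re-arm the timer for the next replenishment event. */
            if ( !list_empty(replq) )
            {
                struct rt_vcpu *svc_next = __replq_elem(replq->next);
                set_timer(repl_timer, svc_next->cur_deadline);
            }
        } /* closes if ( active_timer(repl_timer) ) */
    } /* closes if ( __vcpu_on_replq(svc) ) */
}

Nothing behavioural is meant to change here; the removal and the re-arm still
only happen when the timer was already active, exactly as in the quoted code.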
> > @@ -880,9 +981,11 @@ rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_sched
> >              snext->vcpu->processor = cpu;
> >              ret.migrated = 1;
> >          }
> > +
> > +        ret.time = snext->budget; /* invoke the scheduler next time */
> > +
> >      }
> >
> > -    ret.time = MIN(snext->budget, MAX_SCHEDULE); /* sched quantum */
> >      ret.task = snext->vcpu;
> >
> >      /* TRACE */
> > @@ -914,7 +1017,7 @@ static void
> >  rt_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
> >  {
> >      struct rt_vcpu * const svc = rt_vcpu(vc);
> > -
> > +
>
> The next patch should avoid this: you may have some trailing whitespace
> on that line. git has commands to mark trailing whitespace; you can have
> a look at this post:
> http://stackoverflow.com/questions/5257553/coloring-white-space-in-git-diffs-output

OK, I will check that out.

Dario: is there anything else we need to pick up in this patch?

Thanks,
Tianyang
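P.S. For the record, git's built-in whitespace check already catches this sort
of thing; something along these lines should be enough to spot the trailing
space (both are standard git options, nothing scheduler-specific):

# warn about whitespace errors in not-yet-committed changes
git diff --check

# warn about whitespace errors introduced by the last commit
git diff --check HEAD~1 HEAD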