From: Meng Xu <xumengpanda@gmail.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>,
	Sisu Xi <xisisu@gmail.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Chenyang Lu <lu@cse.wustl.edu>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Linh Thi Xuan Phan <ptxlinh@gmail.com>,
	Meng Xu <mengxu@cis.upenn.edu>, Jan Beulich <JBeulich@suse.com>,
	Chao Wang <chaowang@wustl.edu>, Chong Li <lichong659@gmail.com>,
	Dagaen Golomb <dgolomb@seas.upenn.edu>
Subject: Re: [PATCH v3 1/4] xen: add real time scheduler rtds
Date: Sat, 20 Sep 2014 17:10:03 -0400	[thread overview]
Message-ID: <CAENZ-+nUeH6cWn+jUtUtLLn4m0QnwmC7m_rCdLOStp5kMkCPRg@mail.gmail.com> (raw)
In-Reply-To: <541B038D.4030307@eu.citrix.com>

Hi George,

2014-09-18 12:08 GMT-04:00 George Dunlap <george.dunlap@eu.citrix.com>:
> On 09/14/2014 10:37 PM, Meng Xu wrote:
>>
>> This scheduler implements preemptive Global Earliest Deadline First
>> (EDF) scheduling, as studied in the real-time systems field.
>> At any scheduling point, the VCPU with the earlier deadline has the
>> higher priority. The scheduler always picks the highest-priority VCPU
>> to run on a feasible PCPU.
>> A PCPU is feasible if the VCPU is allowed to run on it and the PCPU is
>> either idle or running a lower-priority VCPU.
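
(For concreteness, the feasibility test above could be sketched as
below. This is illustrative only: pcpu_is_feasible() is a hypothetical
helper, while rt_vcpu(), curr_on_cpu() and cpu_hard_affinity are used
as in the patch code quoted later in this mail.)

    static int
    pcpu_is_feasible(const struct rt_vcpu *svc, unsigned int cpu)
    {
        const struct rt_vcpu *cur = rt_vcpu(curr_on_cpu(cpu));

        /* The pcpu must be allowed by the vcpu's hard affinity... */
        if ( !cpumask_test_cpu(cpu, svc->vcpu->cpu_hard_affinity) )
            return 0;

        /* ... and be idle, or running a vcpu with a later deadline. */
        return is_idle_vcpu(cur->vcpu) ||
               cur->cur_deadline > svc->cur_deadline;
    }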
>>
>> Each VCPU has a dedicated period and budget.
>> The deadline of a VCPU is at the end of each period;
>> a VCPU has its budget replenished at the beginning of each period;
>> while scheduled, a VCPU burns its budget.
>> A VCPU needs to finish its budget before its deadline in each period;
>> it discards its unused budget at the end of each period.
>> If a VCPU runs out of budget in a period, it has to wait until the
>> next period.
>>
>> Each VCPU is implemented as a deferrable server.
>> When a VCPU has a task running on it, its budget is continuously burned;
>> when a VCPU has no task but budget left, its budget is preserved.
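
(The per-period replenishment described above amounts to something like
the following sketch; the function name is hypothetical, and the fields
match the rt_vcpu code quoted later in this mail.)

    static void
    rt_replenish(s_time_t now, struct rt_vcpu *svc)
    {
        ASSERT(svc->period != 0);

        /* Move the deadline forward by whole periods until it lies in
         * the future; leftover budget from the old period is discarded
         * and the new period starts with a full budget
         * (deferrable-server behaviour). */
        while ( svc->cur_deadline <= now )
            svc->cur_deadline += svc->period;

        svc->cur_budget = svc->budget;
    }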
>>
>> Queue scheme:
>> There is one global runqueue and one global depletedq per CPU pool.
>> The runqueue holds all runnable VCPUs with remaining budget, sorted by
>> deadline; the depletedq holds all VCPUs without budget, unsorted.
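
(Keeping the runqueue deadline-sorted implies an ordered insert roughly
like the sketch below; __q_elem() and the q_elem list field are assumed
from the patch, and the rest is illustrative.)

    static void
    __runq_insert(const struct scheduler *ops, struct rt_vcpu *svc)
    {
        struct list_head *iter;

        /* Out of budget: park the vcpu on the (unsorted) depletedq. */
        if ( svc->cur_budget <= 0 )
        {
            list_add_tail(&svc->q_elem, rt_depletedq(ops));
            return;
        }

        /* Walk the runq and stop at the first later deadline. */
        list_for_each ( iter, rt_runq(ops) )
            if ( svc->cur_deadline <= __q_elem(iter)->cur_deadline )
                break;

        /* list_add_tail() on iter inserts svc just before it. */
        list_add_tail(&svc->q_elem, iter);
    }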
>>
>> Note: cpumask and cpupool are supported.
>>
>> This is an experimental scheduler.
>>
>> Signed-off-by: Meng Xu <mengxu@cis.upenn.edu>
>> Signed-off-by: Sisu Xi <xisisu@gmail.com>
>
>
> Getting there, but unfortunately I've got a number of further comments.
>
> Konrad, I think this is very close to being ready -- when is the deadline
> again, and how hard is it?  Would it be better to check it in before the
> deadline, and then address the things I'm bringing up here?  Or would it be
> better to wait until all the issues are sorted and then check it in (even if
> it's after the deadline)?
>
>
> +/*
> + * Debug related code, dump vcpu/cpu information
> + */
> +static void
> +rt_dump_vcpu(const struct scheduler *ops, const struct rt_vcpu *svc)
> +{
> +    char cpustr[1024];
> +    cpumask_t *cpupool_mask;
> +
> +    ASSERT(svc != NULL);
> +    /* flag vcpu */
> +    if( svc->sdom == NULL )
> +        return;
> +
> +    cpumask_scnprintf(cpustr, sizeof(cpustr), svc->vcpu->cpu_hard_affinity);
> +    printk("[%5d.%-2u] cpu %u, (%"PRI_stime", %"PRI_stime"),"
> +           " cur_b=%"PRI_stime" cur_d=%"PRI_stime" last_start=%"PRI_stime"\n"
> +           " \t\t onQ=%d runnable=%d cpu_hard_affinity=%s ",
> +            svc->vcpu->domain->domain_id,
> +            svc->vcpu->vcpu_id,
> +            svc->vcpu->processor,
> +            svc->period,
> +            svc->budget,
> +            svc->cur_budget,
> +            svc->cur_deadline,
> +            svc->last_start,
> +            __vcpu_on_q(svc),
> +            vcpu_runnable(svc->vcpu),
> +            cpustr);
> +    memset(cpustr, 0, sizeof(cpustr));
> +    cpupool_mask = cpupool_scheduler_cpumask(svc->vcpu->domain->cpupool);
> +    cpumask_scnprintf(cpustr, sizeof(cpustr), cpupool_mask);
> +    printk("cpupool=%s\n", cpustr);
> +}
> +
> +static void
> +rt_dump_pcpu(const struct scheduler *ops, int cpu)
> +{
> +    struct rt_vcpu *svc = rt_vcpu(curr_on_cpu(cpu));
> +
> +    rt_dump_vcpu(ops, svc);
> +}
>
>
> These svc structures are allocated dynamically and may disappear at any
> time... I think you need the lock to cover this as well.
>
> And since this is called both externally (via ops->dump_cpu_state) and
> internally (below), you probably need a locking version and a non-locking
> version (normally you'd make rt_dump_pcpu() a locking "wrapper" around
> __rt_dump_pcpu()).
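
Just to spell out the pattern you describe, the wrapper would look
roughly like this (a sketch only; the lock choice, the scheduler-wide
prv->lock, is mine):

    /* Non-locking worker: the caller must already hold the lock. */
    static void
    __rt_dump_pcpu(const struct scheduler *ops, int cpu)
    {
        rt_dump_vcpu(ops, rt_vcpu(curr_on_cpu(cpu)));
    }

    /* Locking wrapper for the ops->dump_cpu_state entry point. */
    static void
    rt_dump_pcpu(const struct scheduler *ops, int cpu)
    {
        struct rt_private *prv = rt_priv(ops);
        unsigned long flags;

        spin_lock_irqsave(&prv->lock, flags);
        __rt_dump_pcpu(ops, cpu);
        spin_unlock_irqrestore(&prv->lock, flags);
    }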


I tried adding the schedule lock (prv->lock) here, much as above, and
it causes a system freeze. The reason is that when this function is
reached from schedule_dump() in schedule.c, the lock has already been
taken: dump_cpu_state is invoked under pcpu_schedule_lock(i), and in
this scheduler the per-CPU schedule locks all point at prv->lock, so
taking it again deadlocks. So I don't need the lock here, IMO. :-)

void schedule_dump(struct cpupool *c)
{
    int               i;
    struct scheduler *sched;
    cpumask_t        *cpus;

    sched = (c == NULL) ? &ops : c->sched;
    cpus = cpupool_scheduler_cpumask(c);
    printk("Scheduler: %s (%s)\n", sched->name, sched->opt_name);
    SCHED_OP(sched, dump_settings);

    for_each_cpu (i, cpus)
    {
        spinlock_t *lock = pcpu_schedule_lock(i);

        printk("CPU[%02d] ", i);
        SCHED_OP(sched, dump_cpu_state, i);
        pcpu_schedule_unlock(lock, i);
    }
}


>
>> +
>> +static void
>> +rt_dump(const struct scheduler *ops)
>> +{
>> +    struct list_head *iter_sdom, *iter_svc, *runq, *depletedq, *iter;
>> +    struct rt_private *prv = rt_priv(ops);
>> +    struct rt_vcpu *svc;
>> +    unsigned int cpu = 0;
>> +    cpumask_t *online;
>> +    struct rt_dom *sdom;
>> +    unsigned long flags;
>> +
>> +    ASSERT(!list_empty(&prv->sdom));
>> +    sdom = list_entry(prv->sdom.next, struct rt_dom, sdom_elem);
>> +    online = cpupool_scheduler_cpumask(sdom->dom->cpupool);
>> +    runq = rt_runq(ops);
>> +    depletedq = rt_depletedq(ops);
>
>
> Same thing with all these -- other CPUs may be modifying prv->sdom.
>

I do need the lock here, though: when this function is called, no lock
is held. As you can see above, it is invoked via

    SCHED_OP(sched, dump_settings);

which runs before the per-CPU locked loop.

So I add the lock in rt_dump() but not in rt_dump_pcpu().
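
Concretely, the locking in rt_dump() looks roughly like this (a sketch
with the dump body abbreviated; __q_elem() is the list-entry helper
from the patch):

    static void
    rt_dump(const struct scheduler *ops)
    {
        struct rt_private *prv = rt_priv(ops);
        struct list_head *iter;
        unsigned long flags;

        /* dump_settings is invoked with no scheduler lock held, so
         * take prv->lock to protect the runq/depletedq/sdom lists. */
        spin_lock_irqsave(&prv->lock, flags);

        printk("Global RunQueue info:\n");
        list_for_each ( iter, rt_runq(ops) )
            rt_dump_vcpu(ops, __q_elem(iter));

        printk("Global DepletedQueue info:\n");
        list_for_each ( iter, rt_depletedq(ops) )
            rt_dump_vcpu(ops, __q_elem(iter));

        spin_unlock_irqrestore(&prv->lock, flags);
    }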

Thanks,

Meng

-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania

