From: Peter Zijlstra <a.p.zijlstra@chello.nl>
To: Rik van Riel <riel@redhat.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
Avi Kivity <avi@redhat.com>,
Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>,
Mike Galbraith <efault@gmx.de>,
Chris Wright <chrisw@sous-sol.org>,
ttracy@redhat.com, dshaks@redhat.com, "Nakajima,
Jun" <jun.nakajima@intel.com>
Subject: Re: [RFC -v6 PATCH 4/8] sched: Add yield_to(task, preempt) functionality
Date: Mon, 24 Jan 2011 19:12:28 +0100 [thread overview]
Message-ID: <1295892748.28776.463.camel@laptop> (raw)
In-Reply-To: <20110120163443.762c2409@annuminas.surriel.com>
On Thu, 2011-01-20 at 16:34 -0500, Rik van Riel wrote:
> From: Mike Galbraith <efault@gmx.de>
>
> Currently only implemented for fair class tasks.
>
> Add a yield_to_task() method to the fair scheduling class, allowing the
> caller of yield_to() to accelerate another thread in its thread group or
> task group.
>
> Implemented via a scheduler hint, using cfs_rq->next to encourage
> selection of the target. We can rely on pick_next_entity to keep things
> fair, so no one can accelerate a thread that has already used its fair
> share of CPU time.
>
> This also means callers should only call yield_to when they really
> mean it. Calling it too often can result in the scheduler just
> ignoring the hint.
>
> Signed-off-by: Rik van Riel <riel@redhat.com>
> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
> Signed-off-by: Mike Galbraith <efault@gmx.de>
Patch 5 wants to be merged back in here I think..
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 2c79e92..6c43fc4 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1047,6 +1047,7 @@ struct sched_class {
> void (*enqueue_task) (struct rq *rq, struct task_struct *p, int flags);
> void (*dequeue_task) (struct rq *rq, struct task_struct *p, int flags);
> void (*yield_task) (struct rq *rq);
> + bool (*yield_to_task) (struct rq *rq, struct task_struct *p, bool preempt);
>
> void (*check_preempt_curr) (struct rq *rq, struct task_struct *p, int flags);
>
> @@ -1943,6 +1944,7 @@ static inline int rt_mutex_getprio(struct task_struct *p)
> # define rt_mutex_adjust_pi(p) do { } while (0)
> #endif
>
> +extern bool yield_to(struct task_struct *p, bool preempt);
> extern void set_user_nice(struct task_struct *p, long nice);
> extern int task_prio(const struct task_struct *p);
> extern int task_nice(const struct task_struct *p);
> diff --git a/kernel/sched.c b/kernel/sched.c
> index e4e57ff..1f38ed2 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -5270,6 +5270,64 @@ void __sched yield(void)
> }
> EXPORT_SYMBOL(yield);
>
> +/**
> + * yield_to - yield the current processor to another thread in
> + * your thread group, or accelerate that thread toward the
> + * processor it's on.
> + *
> + * It's the caller's job to ensure that the target task struct
> + * can't go away on us before we can do any checks.
> + *
> + * Returns true if we indeed boosted the target task.
> + */
> +bool __sched yield_to(struct task_struct *p, bool preempt)
> +{
> + struct task_struct *curr = current;
> + struct rq *rq, *p_rq;
> + unsigned long flags;
> + bool yielded = 0;
> +
> + local_irq_save(flags);
> + rq = this_rq();
> +
> +again:
> + p_rq = task_rq(p);
> + double_rq_lock(rq, p_rq);
> + while (task_rq(p) != p_rq) {
> + double_rq_unlock(rq, p_rq);
> + goto again;
> + }
> +
> + if (!curr->sched_class->yield_to_task)
> + goto out;
> +
> + if (curr->sched_class != p->sched_class)
> + goto out;
> +
> + if (task_running(p_rq, p) || p->state)
> + goto out;
> +
> + if (!same_thread_group(p, curr))
> + goto out;
> +
> +#ifdef CONFIG_FAIR_GROUP_SCHED
> + if (task_group(p) != task_group(curr))
> + goto out;
> +#endif
> +
> + yielded = curr->sched_class->yield_to_task(rq, p, preempt);
> +
> +out:
> + double_rq_unlock(rq, p_rq);
> + local_irq_restore(flags);
> +
> + if (yielded)
> + yield();
Calling yield() here is funny; you just had all the locks needed to
actually do it..
> + return yielded;
> +}
> +EXPORT_SYMBOL_GPL(yield_to);
> diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> index f701a51..097e936 100644
> --- a/kernel/sched_fair.c
> +++ b/kernel/sched_fair.c
> @@ -1800,6 +1800,23 @@ static void yield_task_fair(struct rq *rq)
> set_yield_buddy(se);
> }
>
> +static bool yield_to_task_fair(struct rq *rq, struct task_struct *p, bool preempt)
> +{
> + struct sched_entity *se = &p->se;
> +
> + if (!se->on_rq)
> + return false;
> +
> + /* Tell the scheduler that we'd really like pse to run next. */
> + set_next_buddy(se);
> +
> + /* Make p's CPU reschedule; pick_next_entity takes care of fairness. */
> + if (preempt)
> + resched_task(rq->curr);
> +
> + return true;
> +}
So here we set ->next; we could be ->last; and after this we'll set
->yield to curr by calling yield().
So if you do this cyclically, I can see ->yield == {->next, ->last}
happening.