From: Steven Rostedt <rostedt@goodmis.org>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Ingo Molnar <mingo@elte.hu>,
Gregory Haskins <ghaskins@novell.com>,
Peter Zijlstra <a.p.zijlstra@chello.nl>,
Christoph Lameter <clameter@sgi.com>,
Steven Rostedt <srostedt@redhat.com>
Subject: [PATCH v4 07/20] disable CFS RT load balancing.
Date: Tue, 20 Nov 2007 20:01:01 -0500 [thread overview]
Message-ID: <20071121011249.984603310@goodmis.org> (raw)
In-Reply-To: <20071121010054.663842380@goodmis.org>
[-- Attachment #1: disable-CFS-rt-balance.patch --]
[-- Type: text/plain, Size: 3650 bytes --]
Since we now take an active approach to load balancing, we don't need to
balance RT tasks via CFS. In fact, this code was found to pull RT tasks
away from the very CPUs that the active movement had just pushed them to,
resulting in large latencies.
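For context, the iterator being removed below used a dequeue-safe walk of the
RT priority array: the cursor is always pre-advanced before the current task
is returned, so the caller may dequeue that task without invalidating the
iteration. The pattern can be sketched outside the kernel with a simplified
priority array and circular lists (all names here are illustrative, not
kernel API):

```c
#include <stddef.h>

#define MAX_PRIO 4

struct node { int id; struct node *prev, *next; };
struct queue { struct node head; };     /* circular list; head is a sentinel */

struct iter {
	struct queue *q;                /* array of MAX_PRIO queues */
	unsigned bitmap;                /* bit i set => q[i] is non-empty */
	int idx;
	struct node *head, *curr;
};

static void queue_init(struct queue *q)
{
	q->head.prev = q->head.next = &q->head;
}

static void enqueue(struct queue *q, struct node *n)
{
	n->prev = q->head.prev;
	n->next = &q->head;
	q->head.prev->next = n;
	q->head.prev = n;
}

/* like sched_find_first_bit()/find_next_bit(), but over a plain unsigned */
static int find_next_set_bit(unsigned bm, int from)
{
	for (int i = from; i < MAX_PRIO; i++)
		if (bm & (1u << i))
			return i;
	return MAX_PRIO;
}

struct node *iter_start(struct iter *it)
{
	struct node *p;

	it->idx = find_next_set_bit(it->bitmap, 0);
	if (it->idx >= MAX_PRIO)
		return NULL;
	it->head = &it->q[it->idx].head;
	it->curr = it->head->prev;      /* start at the tail, as the kernel did */
	p = it->curr;
	it->curr = it->curr->prev;      /* pre-advance: p may now be dequeued */
	return p;
}

struct node *iter_next(struct iter *it)
{
	struct node *p;

	if (it->curr == it->head) {     /* wrapped: move to the next priority */
		it->idx = find_next_set_bit(it->bitmap, it->idx + 1);
		if (it->idx >= MAX_PRIO)
			return NULL;
		it->head = &it->q[it->idx].head;
		it->curr = it->head->prev;
	}
	p = it->curr;
	it->curr = it->curr->prev;      /* pre-advance again before returning */
	return p;
}
```

The pre-advance is the whole trick: once the cursor already points at the
*previous* element, removing the element just handed to the caller cannot
break the walk.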
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
---
kernel/sched_rt.c | 95 ++----------------------------------------------------
1 file changed, 4 insertions(+), 91 deletions(-)
Index: linux-compile.git/kernel/sched_rt.c
===================================================================
--- linux-compile.git.orig/kernel/sched_rt.c 2007-11-20 19:53:00.000000000 -0500
+++ linux-compile.git/kernel/sched_rt.c 2007-11-20 19:53:01.000000000 -0500
@@ -564,109 +564,22 @@ static void wakeup_balance_rt(struct rq
push_rt_tasks(rq);
}
-/*
- * Load-balancing iterator. Note: while the runqueue stays locked
- * during the whole iteration, the current task might be
- * dequeued so the iterator has to be dequeue-safe. Here we
- * achieve that by always pre-iterating before returning
- * the current task:
- */
-static struct task_struct *load_balance_start_rt(void *arg)
-{
- struct rq *rq = arg;
- struct rt_prio_array *array = &rq->rt.active;
- struct list_head *head, *curr;
- struct task_struct *p;
- int idx;
-
- idx = sched_find_first_bit(array->bitmap);
- if (idx >= MAX_RT_PRIO)
- return NULL;
-
- head = array->queue + idx;
- curr = head->prev;
-
- p = list_entry(curr, struct task_struct, run_list);
-
- curr = curr->prev;
-
- rq->rt.rt_load_balance_idx = idx;
- rq->rt.rt_load_balance_head = head;
- rq->rt.rt_load_balance_curr = curr;
-
- return p;
-}
-
-static struct task_struct *load_balance_next_rt(void *arg)
-{
- struct rq *rq = arg;
- struct rt_prio_array *array = &rq->rt.active;
- struct list_head *head, *curr;
- struct task_struct *p;
- int idx;
-
- idx = rq->rt.rt_load_balance_idx;
- head = rq->rt.rt_load_balance_head;
- curr = rq->rt.rt_load_balance_curr;
-
- /*
- * If we arrived back to the head again then
- * iterate to the next queue (if any):
- */
- if (unlikely(head == curr)) {
- int next_idx = find_next_bit(array->bitmap, MAX_RT_PRIO, idx+1);
-
- if (next_idx >= MAX_RT_PRIO)
- return NULL;
-
- idx = next_idx;
- head = array->queue + idx;
- curr = head->prev;
-
- rq->rt.rt_load_balance_idx = idx;
- rq->rt.rt_load_balance_head = head;
- }
-
- p = list_entry(curr, struct task_struct, run_list);
-
- curr = curr->prev;
-
- rq->rt.rt_load_balance_curr = curr;
-
- return p;
-}
-
static unsigned long
load_balance_rt(struct rq *this_rq, int this_cpu, struct rq *busiest,
unsigned long max_load_move,
struct sched_domain *sd, enum cpu_idle_type idle,
int *all_pinned, int *this_best_prio)
{
- struct rq_iterator rt_rq_iterator;
-
- rt_rq_iterator.start = load_balance_start_rt;
- rt_rq_iterator.next = load_balance_next_rt;
- /* pass 'busiest' rq argument into
- * load_balance_[start|next]_rt iterators
- */
- rt_rq_iterator.arg = busiest;
-
- return balance_tasks(this_rq, this_cpu, busiest, max_load_move, sd,
- idle, all_pinned, this_best_prio, &rt_rq_iterator);
+ /* don't touch RT tasks */
+ return 0;
}
static int
move_one_task_rt(struct rq *this_rq, int this_cpu, struct rq *busiest,
struct sched_domain *sd, enum cpu_idle_type idle)
{
- struct rq_iterator rt_rq_iterator;
-
- rt_rq_iterator.start = load_balance_start_rt;
- rt_rq_iterator.next = load_balance_next_rt;
- rt_rq_iterator.arg = busiest;
-
- return iter_move_one_task(this_rq, this_cpu, busiest, sd, idle,
- &rt_rq_iterator);
+ /* don't touch RT tasks */
+ return 0;
}
#else /* CONFIG_SMP */
# define schedule_tail_balance_rt(rq) do { } while (0)
--
Thread overview: 36+ messages
2007-11-21 1:00 [PATCH v4 00/20] New RT Balancing version 4 Steven Rostedt
2007-11-21 1:00 ` [PATCH v4 01/20] Add rt_nr_running accounting Steven Rostedt
2007-11-21 1:00 ` [PATCH v4 02/20] track highest prio queued on runqueue Steven Rostedt
2007-11-21 1:00 ` [PATCH v4 03/20] push RT tasks Steven Rostedt
2007-11-21 1:00 ` [PATCH v4 04/20] RT overloaded runqueues accounting Steven Rostedt
2007-11-21 1:00 ` [PATCH v4 05/20] pull RT tasks Steven Rostedt
2007-11-21 1:01 ` [PATCH v4 06/20] wake up balance RT Steven Rostedt
2007-11-21 1:01 ` Steven Rostedt [this message]
2007-11-21 1:01 ` [PATCH v4 08/20] Cache cpus_allowed weight for optimizing migration Steven Rostedt
2007-11-21 1:01 ` [PATCH v4 09/20] RT: Consistency cleanup for this_rq usage Steven Rostedt
2007-11-21 1:01 ` [PATCH v4 10/20] RT: Remove some CFS specific code from the wakeup path of RT tasks Steven Rostedt
2007-11-21 1:01 ` [PATCH v4 11/20] RT: Break out the search function Steven Rostedt
2007-11-21 1:01 ` [PATCH v4 12/20] RT: Allow current_cpu to be included in search Steven Rostedt
2007-11-21 1:01 ` [PATCH v4 13/20] RT: Pre-route RT tasks on wakeup Steven Rostedt
2007-11-21 1:01 ` [PATCH v4 14/20] RT: Optimize our cpu selection based on topology Steven Rostedt
2007-11-21 1:01 ` [PATCH v4 15/20] RT: Optimize rebalancing Steven Rostedt
2007-11-21 1:01 ` [PATCH v4 16/20] Avoid overload Steven Rostedt
2007-11-21 1:01 ` [PATCH v4 17/20] RT: restore the migratable conditional Steven Rostedt
2007-11-21 1:01 ` [PATCH v4 18/20] Optimize cpu search with hamming weight Steven Rostedt
2007-11-21 1:01 ` [PATCH v4 19/20] Optimize out cpu_clears Steven Rostedt
2007-11-21 2:10 ` Steven Rostedt
2007-11-21 3:10 ` [PATCH] Fix optimized search Gregory Haskins
2007-11-21 4:15 ` Steven Rostedt
2007-11-21 4:26 ` Steven Rostedt
2007-11-21 5:14 ` Gregory Haskins
2007-11-21 1:01 ` [PATCH v4 20/20] balance RT tasks no new wake up Steven Rostedt
2007-11-21 4:44 ` [PATCH 0/4] more RT balancing enhancements Gregory Haskins
2007-11-21 4:44 ` [PATCH 1/4] Fix optimized search Gregory Haskins
2007-11-21 4:44 ` [PATCH 2/4] RT: Add sched-domain roots Gregory Haskins
2007-11-21 4:44 ` [PATCH 3/4] RT: Only balance our RT tasks within our root-domain Gregory Haskins
2007-11-21 4:44 ` [PATCH 4/4] RT: Use a 2-d bitmap for searching lowest-pri CPU Gregory Haskins
2007-11-21 19:51 ` [PATCH 0/4] more RT balancing enhancements v6a Gregory Haskins
2007-11-21 19:52 ` [PATCH 1/4] SCHED: Add sched-domain roots Gregory Haskins
2007-11-21 19:52 ` [PATCH 2/4] SCHED: Track online cpus in the root-domain Gregory Haskins
2007-11-21 19:52 ` [PATCH 3/4] SCHED: Only balance our RT tasks within our root-domain Gregory Haskins
2007-11-21 19:52 ` [PATCH 4/4] SCHED: Use a 2-d bitmap for searching lowest-pri CPU Gregory Haskins