From: Steven Rostedt <rostedt@goodmis.org>
To: LKML <linux-kernel@vger.kernel.org>, RT <linux-rt-users@vger.kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>,
Andrew Morton <akpm@linux-foundation.org>,
Ingo Molnar <mingo@elte.hu>,
Dmitry Adamushko <dmitry.adamushko@gmail.com>,
Paul Jackson <pj@sgi.com>, Gregory Haskins <ghaskins@novell.com>,
Peter Zijlstra <a.p.zijlstra@chello.nl>
Subject: [PATCH -v2 7/7] disable CFS RT load balancing.
Date: Mon, 22 Oct 2007 22:59:07 -0400 [thread overview]
Message-ID: <20071023032917.119896476@goodmis.org> (raw)
In-Reply-To: 20071023025900.927578809@goodmis.org
[-- Attachment #1: disable-CFS-rt-balance.patch --]
[-- Type: text/plain, Size: 3361 bytes --]
Now that we take an active (push/pull) approach to balancing RT tasks, we no
longer need to balance them via the CFS load balancer. In fact, this code was
found to pull RT tasks away from the CPUs that the active movement had placed
them on, resulting in large latencies.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
kernel/sched_rt.c | 91 +-----------------------------------------------------
1 file changed, 2 insertions(+), 89 deletions(-)
Index: linux-test.git/kernel/sched_rt.c
===================================================================
--- linux-test.git.orig/kernel/sched_rt.c 2007-10-22 22:38:04.000000000 -0400
+++ linux-test.git/kernel/sched_rt.c 2007-10-22 22:38:33.000000000 -0400
@@ -576,101 +576,14 @@ static void wakeup_balance_rt(struct rq
# define wakeup_balance_rt(rq, p) do { } while (0)
#endif /* CONFIG_SMP */
-
-/*
- * Load-balancing iterator. Note: while the runqueue stays locked
- * during the whole iteration, the current task might be
- * dequeued so the iterator has to be dequeue-safe. Here we
- * achieve that by always pre-iterating before returning
- * the current task:
- */
-static struct task_struct *load_balance_start_rt(void *arg)
-{
- struct rq *rq = arg;
- struct rt_prio_array *array = &rq->rt.active;
- struct list_head *head, *curr;
- struct task_struct *p;
- int idx;
-
- idx = sched_find_first_bit(array->bitmap);
- if (idx >= MAX_RT_PRIO)
- return NULL;
-
- head = array->queue + idx;
- curr = head->prev;
-
- p = list_entry(curr, struct task_struct, run_list);
-
- curr = curr->prev;
-
- rq->rt.rt_load_balance_idx = idx;
- rq->rt.rt_load_balance_head = head;
- rq->rt.rt_load_balance_curr = curr;
-
- return p;
-}
-
-static struct task_struct *load_balance_next_rt(void *arg)
-{
- struct rq *rq = arg;
- struct rt_prio_array *array = &rq->rt.active;
- struct list_head *head, *curr;
- struct task_struct *p;
- int idx;
-
- idx = rq->rt.rt_load_balance_idx;
- head = rq->rt.rt_load_balance_head;
- curr = rq->rt.rt_load_balance_curr;
-
- /*
- * If we arrived back to the head again then
- * iterate to the next queue (if any):
- */
- if (unlikely(head == curr)) {
- int next_idx = find_next_bit(array->bitmap, MAX_RT_PRIO, idx+1);
-
- if (next_idx >= MAX_RT_PRIO)
- return NULL;
-
- idx = next_idx;
- head = array->queue + idx;
- curr = head->prev;
-
- rq->rt.rt_load_balance_idx = idx;
- rq->rt.rt_load_balance_head = head;
- }
-
- p = list_entry(curr, struct task_struct, run_list);
-
- curr = curr->prev;
-
- rq->rt.rt_load_balance_curr = curr;
-
- return p;
-}
-
static unsigned long
load_balance_rt(struct rq *this_rq, int this_cpu, struct rq *busiest,
unsigned long max_nr_move, unsigned long max_load_move,
struct sched_domain *sd, enum cpu_idle_type idle,
int *all_pinned, int *this_best_prio)
{
- int nr_moved;
- struct rq_iterator rt_rq_iterator;
- unsigned long load_moved;
-
- rt_rq_iterator.start = load_balance_start_rt;
- rt_rq_iterator.next = load_balance_next_rt;
- /* pass 'busiest' rq argument into
- * load_balance_[start|next]_rt iterators
- */
- rt_rq_iterator.arg = busiest;
-
- nr_moved = balance_tasks(this_rq, this_cpu, busiest, max_nr_move,
- max_load_move, sd, idle, all_pinned, &load_moved,
- this_best_prio, &rt_rq_iterator);
-
- return load_moved;
+ /* don't touch RT tasks */
+ return 0;
}
static void task_tick_rt(struct rq *rq, struct task_struct *p)
--
Thread overview: 15+ messages
2007-10-23 2:59 [PATCH -v2 0/7] New RT Task Balancing -v2 Steven Rostedt
2007-10-23 2:59 ` [PATCH -v2 1/7] Add rt_nr_running accounting Steven Rostedt
2007-10-23 2:59 ` [PATCH -v2 2/7] track highest prio queued on runqueue Steven Rostedt
2007-10-23 2:59 ` [PATCH -v2 3/7] push RT tasks Steven Rostedt
2007-10-23 2:59 ` [PATCH -v2 4/7] RT overloaded runqueues accounting Steven Rostedt
2007-10-23 4:17 ` Paul Jackson
2007-10-23 6:11 ` Paul Menage
2007-10-23 6:19 ` Paul Jackson
2007-10-23 13:43 ` Steven Rostedt
2007-10-23 21:46 ` Paul Jackson
2008-01-29 13:00 ` Paul Jackson
2007-10-23 2:59 ` [PATCH -v2 5/7] pull RT tasks Steven Rostedt
2007-10-23 2:59 ` [PATCH -v2 6/7] wake up balance RT Steven Rostedt
2007-10-23 2:59 ` Steven Rostedt [this message]
2007-10-23 8:39 ` [PATCH -v2 0/7] New RT Task Balancing -v2 Ingo Molnar