linux-rt-users.vger.kernel.org archive mirror
* [RT] [PATCH] Make scheduler root_domain modular (sched_class specific)
@ 2008-03-22 14:29 Ankita Garg
  2008-03-22 18:04 ` [RT] [PATCH] Make scheduler root_domain modular (sched_class specific) Gregory Haskins
  0 siblings, 1 reply; 6+ messages in thread
From: Ankita Garg @ 2008-03-22 14:29 UTC (permalink / raw)
  To: linux-rt-users; +Cc: Ingo Molnar, Steven Rostedt, Gregory Haskins, LKML

Hello,

Thanks, Gregory, for clarifying my question on the root_domains infrastructure.
What I was effectively suggesting on IRC the other day was to make the
root_domain infrastructure modular, i.e., sched_class specific. Currently, only
rt makes use of this infrastructure. Making it modular would make it easier to
extend to other sched_classes if required. Trivial patch to that effect.

The patch is compile and boot tested.


Signed-off-by: Ankita Garg <ankita@in.ibm.com> 

Index: linux-2.6.24.3-rt3/kernel/sched.c
===================================================================
--- linux-2.6.24.3-rt3.orig/kernel/sched.c	2008-03-21 22:57:04.000000000 +0530
+++ linux-2.6.24.3-rt3/kernel/sched.c	2008-03-21 23:04:56.000000000 +0530
@@ -337,11 +337,8 @@
  * object.
  *
  */
-struct root_domain {
-	atomic_t refcount;
-	cpumask_t span;
-	cpumask_t online;
 
+struct rt_root_domain {
 	/*
 	 * The "RT overload" flag: it gets set if a CPU has more than
 	 * one runnable RT task.
@@ -353,6 +350,14 @@
 #endif
 };
 
+struct root_domain {
+	atomic_t refcount;
+	cpumask_t span;
+	cpumask_t online;
+
+	struct rt_root_domain rt_dom;
+};
+
 /*
  * By default the system creates a single root-domain with all cpus as
  * members (mimicking the global state we have today).
@@ -6332,7 +6337,7 @@
 	cpus_clear(rd->span);
 	cpus_clear(rd->online);
 
-	cpupri_init(&rd->cpupri);
+	cpupri_init(&rd->rt_dom.cpupri);
 
 }
 
Index: linux-2.6.24.3-rt3/kernel/sched_rt.c
===================================================================
--- linux-2.6.24.3-rt3.orig/kernel/sched_rt.c	2008-03-21 22:57:04.000000000 +0530
+++ linux-2.6.24.3-rt3/kernel/sched_rt.c	2008-03-21 23:04:39.000000000 +0530
@@ -7,12 +7,12 @@
 
 static inline int rt_overloaded(struct rq *rq)
 {
-	return atomic_read(&rq->rd->rto_count);
+	return atomic_read(&rq->rd->rt_dom.rto_count);
 }
 
 static inline void rt_set_overload(struct rq *rq)
 {
-	cpu_set(rq->cpu, rq->rd->rto_mask);
+	cpu_set(rq->cpu, rq->rd->rt_dom.rto_mask);
 	/*
 	 * Make sure the mask is visible before we set
 	 * the overload count. That is checked to determine
@@ -21,14 +21,14 @@
 	 * updated yet.
 	 */
 	wmb();
-	atomic_inc(&rq->rd->rto_count);
+	atomic_inc(&rq->rd->rt_dom.rto_count);
 }
 
 static inline void rt_clear_overload(struct rq *rq)
 {
 	/* the order here really doesn't matter */
-	atomic_dec(&rq->rd->rto_count);
-	cpu_clear(rq->cpu, rq->rd->rto_mask);
+	atomic_dec(&rq->rd->rt_dom.rto_count);
+	cpu_clear(rq->cpu, rq->rd->rt_dom.rto_mask);
 }
 
 static void update_rt_migration(struct rq *rq)
@@ -78,7 +78,7 @@
 #ifdef CONFIG_SMP
 	if (p->prio < rq->rt.highest_prio) {
 		rq->rt.highest_prio = p->prio;
-		cpupri_set(&rq->rd->cpupri, rq->cpu, p->prio);
+		cpupri_set(&rq->rd->rt_dom.cpupri, rq->cpu, p->prio);
 	}
 	if (p->nr_cpus_allowed > 1)
 		rq->rt.rt_nr_migratory++;
@@ -114,7 +114,7 @@
 	}
 
 	if (rq->rt.highest_prio != highest_prio)
-		cpupri_set(&rq->rd->cpupri, rq->cpu, rq->rt.highest_prio);
+		cpupri_set(&rq->rd->rt_dom.cpupri, rq->cpu, rq->rt.highest_prio);
 
 	update_rt_migration(rq);
 #endif /* CONFIG_SMP */
@@ -363,7 +363,7 @@
 {
 	int count;
 
-	count = cpupri_find(&task_rq(task)->rd->cpupri, task, lowest_mask);
+	count = cpupri_find(&task_rq(task)->rd->rt_dom.cpupri, task, lowest_mask);
 
 	/*
 	 * cpupri cannot efficiently tell us how many bits are set, so it only
@@ -599,7 +599,7 @@
 
 	next = pick_next_task_rt(this_rq);
 
-	for_each_cpu_mask(cpu, this_rq->rd->rto_mask) {
+	for_each_cpu_mask(cpu, this_rq->rd->rt_dom.rto_mask) {
 		if (this_cpu == cpu)
 			continue;
 
@@ -763,7 +763,7 @@
 	if (rq->rt.overloaded)
 		rt_set_overload(rq);
 
-	cpupri_set(&rq->rd->cpupri, rq->cpu, rq->rt.highest_prio);
+	cpupri_set(&rq->rd->rt_dom.cpupri, rq->cpu, rq->rt.highest_prio);
 }
 
 /* Assumes rq->lock is held */
@@ -772,7 +772,7 @@
 	if (rq->rt.overloaded)
 		rt_clear_overload(rq);
 
-	cpupri_set(&rq->rd->cpupri, rq->cpu, CPUPRI_INVALID);
+	cpupri_set(&rq->rd->rt_dom.cpupri, rq->cpu, CPUPRI_INVALID);
 }
 
 /*

-- 
Regards,
Ankita Garg (ankita@in.ibm.com)
Linux Technology Center
IBM India Systems & Technology Labs, 
Bangalore, India   


* Re: [RT] [PATCH] Make scheduler root_domain modular (sched_class specific)
  2008-03-22 14:29 [RT] [PATCH] Make scheduler root_domain modular (sched_class specific) Ankita Garg
@ 2008-03-22 18:04 ` Gregory Haskins
  2008-03-23  9:02   ` Ankita Garg
  0 siblings, 1 reply; 6+ messages in thread
From: Gregory Haskins @ 2008-03-22 18:04 UTC (permalink / raw)
  To: Ankita Garg, linux-rt-users; +Cc: Ingo Molnar, Steven Rostedt, LKML

>>> On Sat, Mar 22, 2008 at 10:29 AM, in message
<20080322142915.GA9478@in.ibm.com>, Ankita Garg <ankita@in.ibm.com> wrote: 
> Hello,
> 
> Thanks Gregory for clarifying my question on root_domains infrastructure. 
> What
> I was effectively mentioning on irc the other day was to make the 
> root_domain
> infrastructure modular, ie sched_class specific. Currently, only rt is 
> making
> use of this infrasture. Making it modular would enable ease of extension to
> other sched_classes if required. Trivial patch to that effect.
> 
> Patch compile and boot tested.

Hi Ankita,
  Very nice, thanks!  Couple of minor nits and further cleanup opportunities inline, but otherwise:

Acked-by: Gregory Haskins <ghaskins@novell.com>

> 
> 
> Signed-off-by: Ankita Garg <ankita@in.ibm.com> 
> 
> Index: linux-2.6.24.3-rt3/kernel/sched.c
> ===================================================================
> --- linux-2.6.24.3-rt3.orig/kernel/sched.c	2008-03-21 22:57:04.000000000 +0530
> +++ linux-2.6.24.3-rt3/kernel/sched.c	2008-03-21 23:04:56.000000000 +0530
> @@ -337,11 +337,8 @@
>   * object.
>   *
>   */
> -struct root_domain {
> -	atomic_t refcount;
> -	cpumask_t span;
> -	cpumask_t online;
>  
> +struct rt_root_domain {
>  	/*
>  	 * The "RT overload" flag: it gets set if a CPU has more than
>  	 * one runnable RT task.
> @@ -353,6 +350,14 @@
>  #endif
>  };
>  
> +struct root_domain {
> +	atomic_t refcount;
> +	cpumask_t span;
> +	cpumask_t online;
> +
> +	struct rt_root_domain rt_dom;

Perhaps this should just be s/rt_dom/rt since it is already implicitly a domain just by being a subordinate member of a domain structure.


> +};
> +
>  /*
>   * By default the system creates a single root-domain with all cpus as
>   * members (mimicking the global state we have today).
> @@ -6332,7 +6337,7 @@
>  	cpus_clear(rd->span);
>  	cpus_clear(rd->online);
>  
> -	cpupri_init(&rd->cpupri);
> +	cpupri_init(&rd->rt_dom.cpupri);
>  
>  }
>  
> Index: linux-2.6.24.3-rt3/kernel/sched_rt.c
> ===================================================================
> --- linux-2.6.24.3-rt3.orig/kernel/sched_rt.c	2008-03-21 22:57:04.000000000 +0530
> +++ linux-2.6.24.3-rt3/kernel/sched_rt.c	2008-03-21 23:04:39.000000000 +0530
> @@ -7,12 +7,12 @@
>  
>  static inline int rt_overloaded(struct rq *rq)
>  {
> -	return atomic_read(&rq->rd->rto_count);
> +	return atomic_read(&rq->rd->rt_dom.rto_count);

Perhaps we should change s/rto_count/overload_count and s/rto_mask/overload_mask, since "rt" is now implicit with the association with rt_root_domain?


>  }
>  
>  static inline void rt_set_overload(struct rq *rq)
>  {
> -	cpu_set(rq->cpu, rq->rd->rto_mask);
> +	cpu_set(rq->cpu, rq->rd->rt_dom.rto_mask);
>  	/*
>  	 * Make sure the mask is visible before we set
>  	 * the overload count. That is checked to determine
> @@ -21,14 +21,14 @@
>  	 * updated yet.
>  	 */
>  	wmb();
> -	atomic_inc(&rq->rd->rto_count);
> +	atomic_inc(&rq->rd->rt_dom.rto_count);
>  }
>  
>  static inline void rt_clear_overload(struct rq *rq)
>  {
>  	/* the order here really doesn't matter */
> -	atomic_dec(&rq->rd->rto_count);
> -	cpu_clear(rq->cpu, rq->rd->rto_mask);
> +	atomic_dec(&rq->rd->rt_dom.rto_count);
> +	cpu_clear(rq->cpu, rq->rd->rt_dom.rto_mask);
>  }
>  
>  static void update_rt_migration(struct rq *rq)
> @@ -78,7 +78,7 @@
>  #ifdef CONFIG_SMP
>  	if (p->prio < rq->rt.highest_prio) {
>  		rq->rt.highest_prio = p->prio;
> -		cpupri_set(&rq->rd->cpupri, rq->cpu, p->prio);
> +		cpupri_set(&rq->rd->rt_dom.cpupri, rq->cpu, p->prio);
>  	}
>  	if (p->nr_cpus_allowed > 1)
>  		rq->rt.rt_nr_migratory++;
> @@ -114,7 +114,7 @@
>  	}
>  
>  	if (rq->rt.highest_prio != highest_prio)
> -		cpupri_set(&rq->rd->cpupri, rq->cpu, rq->rt.highest_prio);
> +		cpupri_set(&rq->rd->rt_dom.cpupri, rq->cpu, rq->rt.highest_prio);
>  
>  	update_rt_migration(rq);
>  #endif /* CONFIG_SMP */
> @@ -363,7 +363,7 @@
>  {
>  	int count;
>  
> -	count = cpupri_find(&task_rq(task)->rd->cpupri, task, lowest_mask);
> +	count = cpupri_find(&task_rq(task)->rd->rt_dom.cpupri, task, lowest_mask);
>  
>  	/*
>  	 * cpupri cannot efficiently tell us how many bits are set, so it only
> @@ -599,7 +599,7 @@
>  
>  	next = pick_next_task_rt(this_rq);
>  
> -	for_each_cpu_mask(cpu, this_rq->rd->rto_mask) {
> +	for_each_cpu_mask(cpu, this_rq->rd->rt_dom.rto_mask) {
>  		if (this_cpu == cpu)
>  			continue;
>  
> @@ -763,7 +763,7 @@
>  	if (rq->rt.overloaded)
>  		rt_set_overload(rq);
>  
> -	cpupri_set(&rq->rd->cpupri, rq->cpu, rq->rt.highest_prio);
> +	cpupri_set(&rq->rd->rt_dom.cpupri, rq->cpu, rq->rt.highest_prio);
>  }
>  
>  /* Assumes rq->lock is held */
> @@ -772,7 +772,7 @@
>  	if (rq->rt.overloaded)
>  		rt_clear_overload(rq);
>  
> -	cpupri_set(&rq->rd->cpupri, rq->cpu, CPUPRI_INVALID);
> +	cpupri_set(&rq->rd->rt_dom.cpupri, rq->cpu, CPUPRI_INVALID);
>  }
>  
>  /*





* Re: [RT] [PATCH] Make scheduler root_domain modular (sched_class specific)
  2008-03-22 18:04 ` [RT] [PATCH] Make scheduler root_domain modular (sched_class specific) Gregory Haskins
@ 2008-03-23  9:02   ` Ankita Garg
  2008-03-23 11:27     ` Peter Zijlstra
  0 siblings, 1 reply; 6+ messages in thread
From: Ankita Garg @ 2008-03-23  9:02 UTC (permalink / raw)
  To: Gregory Haskins; +Cc: linux-rt-users, Ingo Molnar, Steven Rostedt, LKML

Hi Gregory,

On Sat, Mar 22, 2008 at 12:04:04PM -0600, Gregory Haskins wrote:
> >>> On Sat, Mar 22, 2008 at 10:29 AM, in message
> <20080322142915.GA9478@in.ibm.com>, Ankita Garg <ankita@in.ibm.com> wrote: 
> > Hello,
> > 
> > Thanks Gregory for clarifying my question on root_domains infrastructure. 
> > What
> > I was effectively mentioning on irc the other day was to make the 
> > root_domain
> > infrastructure modular, ie sched_class specific. Currently, only rt is 
> > making
> > use of this infrasture. Making it modular would enable ease of extension to
> > other sched_classes if required. Trivial patch to that effect.
> > 
> > Patch compile and boot tested.
> 
> Hi Ankita,
>   Very nice, thanks!  Couple of minor nits and further cleanup opportunities inline, but otherwise:
>
> Acked-by: Gregory Haskins <ghaskins@novell.com>
> 
> > 
The changes you have suggested are consistent with what we do for rt_rq
and cfs_rq. Here is the patch with these modifications.
 

Signed-off-by: Ankita Garg <ankita@in.ibm.com> 

Index: linux-2.6.24.3-rt3/kernel/sched.c
===================================================================
--- linux-2.6.24.3-rt3.orig/kernel/sched.c	2008-03-21 22:57:04.000000000 +0530
+++ linux-2.6.24.3-rt3/kernel/sched.c	2008-03-23 14:09:22.000000000 +0530
@@ -337,22 +337,27 @@
  * object.
  *
  */
-struct root_domain {
-	atomic_t refcount;
-	cpumask_t span;
-	cpumask_t online;
 
+struct rt_root_domain {
 	/*
 	 * The "RT overload" flag: it gets set if a CPU has more than
 	 * one runnable RT task.
 	 */
-	cpumask_t rto_mask;
-	atomic_t rto_count;
+	cpumask_t overload_mask;
+	atomic_t overload_count;
 #ifdef CONFIG_SMP
 	struct cpupri cpupri;
 #endif
 };
 
+struct root_domain {
+	atomic_t refcount;
+	cpumask_t span;
+	cpumask_t online;
+
+	struct rt_root_domain rt;
+};
+
 /*
  * By default the system creates a single root-domain with all cpus as
  * members (mimicking the global state we have today).
@@ -6332,7 +6337,7 @@
 	cpus_clear(rd->span);
 	cpus_clear(rd->online);
 
-	cpupri_init(&rd->cpupri);
+	cpupri_init(&rd->rt.cpupri);
 
 }
 
Index: linux-2.6.24.3-rt3/kernel/sched_rt.c
===================================================================
--- linux-2.6.24.3-rt3.orig/kernel/sched_rt.c	2008-03-21 22:57:04.000000000 +0530
+++ linux-2.6.24.3-rt3/kernel/sched_rt.c	2008-03-23 14:12:45.000000000 +0530
@@ -7,12 +7,12 @@
 
 static inline int rt_overloaded(struct rq *rq)
 {
-	return atomic_read(&rq->rd->rto_count);
+	return atomic_read(&rq->rd->rt.overload_count);
 }
 
 static inline void rt_set_overload(struct rq *rq)
 {
-	cpu_set(rq->cpu, rq->rd->rto_mask);
+	cpu_set(rq->cpu, rq->rd->rt.overload_mask);
 	/*
 	 * Make sure the mask is visible before we set
 	 * the overload count. That is checked to determine
@@ -21,14 +21,14 @@
 	 * updated yet.
 	 */
 	wmb();
-	atomic_inc(&rq->rd->rto_count);
+	atomic_inc(&rq->rd->rt.overload_count);
 }
 
 static inline void rt_clear_overload(struct rq *rq)
 {
 	/* the order here really doesn't matter */
-	atomic_dec(&rq->rd->rto_count);
-	cpu_clear(rq->cpu, rq->rd->rto_mask);
+	atomic_dec(&rq->rd->rt.overload_count);
+	cpu_clear(rq->cpu, rq->rd->rt.overload_mask);
 }
 
 static void update_rt_migration(struct rq *rq)
@@ -78,7 +78,7 @@
 #ifdef CONFIG_SMP
 	if (p->prio < rq->rt.highest_prio) {
 		rq->rt.highest_prio = p->prio;
-		cpupri_set(&rq->rd->cpupri, rq->cpu, p->prio);
+		cpupri_set(&rq->rd->rt.cpupri, rq->cpu, p->prio);
 	}
 	if (p->nr_cpus_allowed > 1)
 		rq->rt.rt_nr_migratory++;
@@ -114,7 +114,7 @@
 	}
 
 	if (rq->rt.highest_prio != highest_prio)
-		cpupri_set(&rq->rd->cpupri, rq->cpu, rq->rt.highest_prio);
+		cpupri_set(&rq->rd->rt.cpupri, rq->cpu, rq->rt.highest_prio);
 
 	update_rt_migration(rq);
 #endif /* CONFIG_SMP */
@@ -363,7 +363,7 @@
 {
 	int count;
 
-	count = cpupri_find(&task_rq(task)->rd->cpupri, task, lowest_mask);
+	count = cpupri_find(&task_rq(task)->rd->rt.cpupri, task, lowest_mask);
 
 	/*
 	 * cpupri cannot efficiently tell us how many bits are set, so it only
@@ -599,7 +599,7 @@
 
 	next = pick_next_task_rt(this_rq);
 
-	for_each_cpu_mask(cpu, this_rq->rd->rto_mask) {
+	for_each_cpu_mask(cpu, this_rq->rd->rt.overload_mask) {
 		if (this_cpu == cpu)
 			continue;
 
@@ -763,7 +763,7 @@
 	if (rq->rt.overloaded)
 		rt_set_overload(rq);
 
-	cpupri_set(&rq->rd->cpupri, rq->cpu, rq->rt.highest_prio);
+	cpupri_set(&rq->rd->rt.cpupri, rq->cpu, rq->rt.highest_prio);
 }
 
 /* Assumes rq->lock is held */
@@ -772,7 +772,7 @@
 	if (rq->rt.overloaded)
 		rt_clear_overload(rq);
 
-	cpupri_set(&rq->rd->cpupri, rq->cpu, CPUPRI_INVALID);
+	cpupri_set(&rq->rd->rt.cpupri, rq->cpu, CPUPRI_INVALID);
 }
 
 /*

-- 
Regards,
Ankita Garg (ankita@in.ibm.com)
Linux Technology Center
IBM India Systems & Technology Labs, 
Bangalore, India   


* Re: [RT] [PATCH] Make scheduler root_domain modular (sched_class specific)
  2008-03-23  9:02   ` Ankita Garg
@ 2008-03-23 11:27     ` Peter Zijlstra
  2008-03-23 11:37       ` Ankita Garg
  0 siblings, 1 reply; 6+ messages in thread
From: Peter Zijlstra @ 2008-03-23 11:27 UTC (permalink / raw)
  To: Ankita Garg
  Cc: Gregory Haskins, linux-rt-users, Ingo Molnar, Steven Rostedt,
	LKML

On Sun, 2008-03-23 at 14:32 +0530, Ankita Garg wrote:
> Hi Gregory,
> 
> On Sat, Mar 22, 2008 at 12:04:04PM -0600, Gregory Haskins wrote:
> > >>> On Sat, Mar 22, 2008 at 10:29 AM, in message
> > <20080322142915.GA9478@in.ibm.com>, Ankita Garg <ankita@in.ibm.com> wrote: 
> > > Hello,
> > > 
> > > Thanks Gregory for clarifying my question on root_domains infrastructure. 
> > > What
> > > I was effectively mentioning on irc the other day was to make the 
> > > root_domain
> > > infrastructure modular, ie sched_class specific. Currently, only rt is 
> > > making
> > > use of this infrasture. Making it modular would enable ease of extension to
> > > other sched_classes if required. Trivial patch to that effect.
> > > 
> > > Patch compile and boot tested.
> > 
> > Hi Ankita,
> >   Very nice, thanks!  Couple of minor nits and further cleanup opportunities inline, but otherwise:
> >
> > Acked-by: Gregory Haskins <ghaskins@novell.com>
> > 
> > > 
> The changes you have suggested are consistent with what we do for rt_rq
> and cfs_rq. Here is the patch with these modifications.

As this patch doesn't touch -rt specific code, you should have provided a
patch against the upstream code in sched-devel/latest.

patching file kernel/sched.c
Hunk #1 FAILED at 337.
Hunk #2 FAILED at 6337.
2 out of 2 hunks FAILED -- rejects in file kernel/sched.c
patching file kernel/sched_rt.c
Hunk #3 FAILED at 78.
Hunk #4 FAILED at 114.
Hunk #5 FAILED at 363.
Hunk #6 succeeded at 1005 (offset 406 lines).
Hunk #7 FAILED at 1169.
Hunk #8 FAILED at 1178.
5 out of 8 hunks FAILED -- rejects in file kernel/sched_rt.c





* Re: [RT] [PATCH] Make scheduler root_domain modular (sched_class specific)
  2008-03-23 11:27     ` Peter Zijlstra
@ 2008-03-23 11:37       ` Ankita Garg
  2008-03-23 11:53         ` Peter Zijlstra
  0 siblings, 1 reply; 6+ messages in thread
From: Ankita Garg @ 2008-03-23 11:37 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Gregory Haskins, linux-rt-users, Ingo Molnar, Steven Rostedt,
	LKML

Hi Peter,

On Sun, Mar 23, 2008 at 12:27:07PM +0100, Peter Zijlstra wrote:
> On Sun, 2008-03-23 at 14:32 +0530, Ankita Garg wrote:
> > Hi Gregory,
> > 
> > On Sat, Mar 22, 2008 at 12:04:04PM -0600, Gregory Haskins wrote:
> > > >>> On Sat, Mar 22, 2008 at 10:29 AM, in message
> > > <20080322142915.GA9478@in.ibm.com>, Ankita Garg <ankita@in.ibm.com> wrote: 
> > > > Hello,
> > > > 
> > > > Thanks Gregory for clarifying my question on root_domains infrastructure. 
> > > > What
> > > > I was effectively mentioning on irc the other day was to make the 
> > > > root_domain
> > > > infrastructure modular, ie sched_class specific. Currently, only rt is 
> > > > making
> > > > use of this infrasture. Making it modular would enable ease of extension to
> > > > other sched_classes if required. Trivial patch to that effect.
> > > > 
> > > > Patch compile and boot tested.
> > > 
> > > Hi Ankita,
> > >   Very nice, thanks!  Couple of minor nits and further cleanup opportunities inline, but otherwise:
> > >
> > > Acked-by: Gregory Haskins <ghaskins@novell.com>
> > > 
> > > > 
> > The changes you have suggested are consistent with what we do for rt_rq
> > and cfs_rq. Here is the patch with these modifications.
> 
> As this patch doesn't touch -rt specific code you should have provided a
> patch against the upstream code in sched-devel/latest.
>

The cpupri bits have not been added to the sched-devel tree yet, and this
patch involves linking to the cpupri from the rt_root_domain; hence the
patch is against the latest RT tree. Please let me know if my understanding
is incorrect.


-- 
Regards,
Ankita Garg (ankita@in.ibm.com)
Linux Technology Center
IBM India Systems & Technology Labs, 
Bangalore, India   


* Re: [RT] [PATCH] Make scheduler root_domain modular (sched_class specific)
  2008-03-23 11:37       ` Ankita Garg
@ 2008-03-23 11:53         ` Peter Zijlstra
  0 siblings, 0 replies; 6+ messages in thread
From: Peter Zijlstra @ 2008-03-23 11:53 UTC (permalink / raw)
  To: Ankita Garg
  Cc: Gregory Haskins, linux-rt-users, Ingo Molnar, Steven Rostedt,
	LKML

On Sun, 2008-03-23 at 17:07 +0530, Ankita Garg wrote:
> Hi Peter,
> 
> On Sun, Mar 23, 2008 at 12:27:07PM +0100, Peter Zijlstra wrote:
> > On Sun, 2008-03-23 at 14:32 +0530, Ankita Garg wrote:
> > > Hi Gregory,
> > > 
> > > On Sat, Mar 22, 2008 at 12:04:04PM -0600, Gregory Haskins wrote:
> > > > >>> On Sat, Mar 22, 2008 at 10:29 AM, in message
> > > > <20080322142915.GA9478@in.ibm.com>, Ankita Garg <ankita@in.ibm.com> wrote: 
> > > > > Hello,
> > > > > 
> > > > > Thanks Gregory for clarifying my question on root_domains infrastructure. 
> > > > > What
> > > > > I was effectively mentioning on irc the other day was to make the 
> > > > > root_domain
> > > > > infrastructure modular, ie sched_class specific. Currently, only rt is 
> > > > > making
> > > > > use of this infrasture. Making it modular would enable ease of extension to
> > > > > other sched_classes if required. Trivial patch to that effect.
> > > > > 
> > > > > Patch compile and boot tested.
> > > > 
> > > > Hi Ankita,
> > > >   Very nice, thanks!  Couple of minor nits and further cleanup opportunities inline, but otherwise:
> > > >
> > > > Acked-by: Gregory Haskins <ghaskins@novell.com>
> > > > 
> > > > > 
> > > The changes you have suggested are consistent with what we do for rt_rq
> > > and cfs_rq. Here is the patch with these modifications.
> > 
> > As this patch doesn't touch -rt specific code you should have provided a
> > patch against the upstream code in sched-devel/latest.
> >
> 
> The cpupri bits have not been added to the sched-devel tree yet. This
> patch involves linking to the cpupri from the rt_root_domain. Thus  the
> patch against the latest RT tree. Pl let me know if I understand it
> incorrectly.

The root_domain code is upstream and not -rt specific; that -rt carries a
patch which touches this code is perhaps unfortunate.

We strive to keep the -rt patch as small as possible, which means pushing
stuff upstream whenever possible. As your patch doesn't change anything
specific to -rt, upstream is the right place to restructure the root_domain
code. The next time the -rt tree gets forward-ported, the cpupri bits will
be made to match.





Thread overview: 6+ messages
2008-03-22 14:29 [RT] [PATCH] Make scheduler root_domain modular (sched_class specific) Ankita Garg
2008-03-22 18:04 ` [RT] [PATCH] Make scheduler root_domain modular (sched_class specific) Gregory Haskins
2008-03-23  9:02   ` Ankita Garg
2008-03-23 11:27     ` Peter Zijlstra
2008-03-23 11:37       ` Ankita Garg
2008-03-23 11:53         ` Peter Zijlstra
