From: "Jan Schönherr" <schnhrr@cs.tu-berlin.de>
To: Paul Turner <pjt@google.com>
Cc: Ingo Molnar <mingo@elte.hu>,
	Peter Zijlstra <peterz@infradead.org>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/2] sched: Enforce order of leaf CFS runqueues
Date: Tue, 19 Jul 2011 17:17:48 +0200	[thread overview]
Message-ID: <4E25A01C.70402@cs.tu-berlin.de> (raw)
In-Reply-To: <CAPM31RLqq0hgdhoh3GA-38petKeE1zxcL=uJYba9o3yd+_7jjw@mail.gmail.com>

On 2011-07-19 01:24, Paul Turner wrote:
> hmmm, what about something like the below (only boot tested)? It
> should make the insert case always safe, meaning we don't need to do
> anything funky around delete:

Seems to work, too, with two modifications...


> diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> index eb98f77..a7e0966 100644
> --- a/kernel/sched_fair.c
> +++ b/kernel/sched_fair.c
> @@ -143,26 +143,39 @@ static inline struct cfs_rq *cpu_cfs_rq(struct cfs_rq *cfs_rq, int this_cpu)
>  	return cfs_rq->tg->cfs_rq[this_cpu];
>  }
>
> -static inline void list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
> +/*
> + * rq->leaf_cfs_rq_list has an order constraint that specifies children must
> + * appear before parents.  For the (!on_list) chain starting at cfs_rq this
> + * finds a satisfactory insertion point.  If no ancestor is yet on_list, this
> + * choice is arbitrary.
> + */
> +static inline struct list_head *find_leaf_cfs_rq_insertion(struct cfs_rq *cfs_rq)
>  {
> -	if (!cfs_rq->on_list) {
> -		/*
> -		 * Ensure we either appear before our parent (if already
> -		 * enqueued) or force our parent to appear after us when it is
> -		 * enqueued.  The fact that we always enqueue bottom-up
> -		 * reduces this to two cases.
> -		 */
> -		if (cfs_rq->tg->parent &&
> -		    cfs_rq->tg->parent->cfs_rq[cpu_of(rq_of(cfs_rq))]->on_list) {
> -			list_add_rcu(&cfs_rq->leaf_cfs_rq_list,
> -				&rq_of(cfs_rq)->leaf_cfs_rq_list);
> -		} else {
> -			list_add_tail_rcu(&cfs_rq->leaf_cfs_rq_list,
> -				&rq_of(cfs_rq)->leaf_cfs_rq_list);
> -		}
> +	struct sched_entity *se;
> +
> +	se = cfs_rq->tg->se[cpu_of(rq_of(cfs_rq))];
> +	for_each_sched_entity(se)
> +		if (cfs_rq->on_list)
> +			return &cfs_rq->leaf_cfs_rq_list;

We need to use the cfs_rq corresponding to the current se, not the
cfs_rq passed in (which the loop tests on every iteration):

-	for_each_sched_entity(se)
-		if (cfs_rq->on_list)
-			return &cfs_rq->leaf_cfs_rq_list;
+	for_each_sched_entity(se) {
+		struct cfs_rq *se_cfs_rq = cfs_rq_of(se);
+		if (se_cfs_rq->on_list)
+			return &se_cfs_rq->leaf_cfs_rq_list;
+	}

>
> -		cfs_rq->on_list = 1;
> +	return &rq_of(cfs_rq)->leaf_cfs_rq_list;
> +}
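
For reference, with that fix folded in, the new helper would read
roughly like this (only a sketch -- cpu_of(), rq_of(), cfs_rq_of() and
for_each_sched_entity() are the existing helpers in kernel/sched_fair.c):

static inline struct list_head *find_leaf_cfs_rq_insertion(struct cfs_rq *cfs_rq)
{
	struct sched_entity *se;

	se = cfs_rq->tg->se[cpu_of(rq_of(cfs_rq))];
	for_each_sched_entity(se) {
		/* check the ancestor's cfs_rq, not the one passed in */
		struct cfs_rq *se_cfs_rq = cfs_rq_of(se);

		if (se_cfs_rq->on_list)
			return &se_cfs_rq->leaf_cfs_rq_list;
	}

	/* no ancestor is on the list yet, so any position works */
	return &rq_of(cfs_rq)->leaf_cfs_rq_list;
}

The caller is not quoted above; presumably list_add_leaf_cfs_rq() then
boils down to something like this (again just a sketch):

static inline void list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
{
	if (cfs_rq->on_list)
		return;

	/*
	 * list_add_tail_rcu() inserts immediately before the returned
	 * position, so cfs_rq ends up ahead of its first on_list
	 * ancestor, keeping children in front of parents.
	 */
	list_add_tail_rcu(&cfs_rq->leaf_cfs_rq_list,
			  find_leaf_cfs_rq_insertion(cfs_rq));
	cfs_rq->on_list = 1;
}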


And something like the following hack is needed to prevent the chosen
insertion point itself from being removed from the leaf list during
	enqueue_entity()
		update_cfs_load()
(Obviously not for production:)

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 2df33d4..947257d 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -832,7 +832,7 @@ static void update_cfs_load(struct cfs_rq *cfs_rq, int global_update)
        }

        /* consider updating load contribution on each fold or truncate */
-       if (global_update || cfs_rq->load_period > period
+       if (global_update == 1 || cfs_rq->load_period > period
            || !cfs_rq->load_period)
                update_cfs_rq_load_contribution(cfs_rq, global_update);

@@ -847,7 +847,7 @@ static void update_cfs_load(struct cfs_rq *cfs_rq, int global_update)
                cfs_rq->load_avg /= 2;
        }

-       if (!cfs_rq->curr && !cfs_rq->nr_running && !cfs_rq->load_avg)
+       if (!cfs_rq->curr && !cfs_rq->nr_running && !cfs_rq->load_avg && global_update != 2)
                list_del_leaf_cfs_rq(cfs_rq);
 }

@@ -1063,7 +1063,7 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
         * Update run-time statistics of the 'current'.
         */
        update_curr(cfs_rq);
-       update_cfs_load(cfs_rq, 0);
+       update_cfs_load(cfs_rq, 2);
        account_entity_enqueue(cfs_rq, se);
        update_cfs_shares(cfs_rq);
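
For reference, my reading of the overloaded global_update argument
after this hack (an interpretation, not part of the patch itself):

/*
 * global_update == 0: regular call; an idle cfs_rq may still be
 *                     removed from the leaf list, as before
 * global_update == 1: global update; forces the load-contribution
 *                     update, as before
 * global_update == 2: called from enqueue_entity(); roughly like 0,
 *                     except that list_del_leaf_cfs_rq() is skipped,
 *                     so a cfs_rq just chosen as insertion point
 *                     cannot vanish underneath us
 */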



