From mboxrd@z Thu Jan 1 00:00:00 1970
From: Oleg Nesterov
Subject: [PATCH] schedule: simplify the reacquire_kernel_lock() logic
Date: Wed, 19 May 2010 14:57:11 +0200
Message-ID: <20100519125711.GA30199@redhat.com>
References: <20100518164537.6194.73366.stgit@warthog.procyon.org.uk>
 <20100518164547.6194.94193.stgit@warthog.procyon.org.uk>
 <20100518212220.GA5092@redhat.com>
 <1274250079.5605.9967.camel@twins>
 <20100519102743.GA10040@redhat.com>
 <1274265465.5605.10564.camel@twins>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Return-path:
Content-Disposition: inline
In-Reply-To: <1274265465.5605.10564.camel@twins>
Sender: linux-kernel-owner@vger.kernel.org
To: Peter Zijlstra
Cc: David Howells, Ingo Molnar, Yong Zhang,
 linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org
List-Id: linux-arch.vger.kernel.org

- Contrary to what 6d558c3a says, there is no need to reload
  prev = rq->curr after the context switch. We always schedule back to
  where we came from, so prev must be equal to current even if the
  cpu/rq was changed.

- This also means reacquire_kernel_lock() can use prev instead of
  current.

- There is no need to reassign switch_count if reacquire_kernel_lock()
  reports need_resched(); we can just move the initial assignment down,
  under the "need_resched_nonpreemptible:" label.

- Try to update the comment after context_switch().
Signed-off-by: Oleg Nesterov
---
 kernel/sched.c |   13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

--- 34-rc1/kernel/sched.c~SCHEDULE_PREV_EQ_TO_CURRENT	2010-05-18 23:32:50.000000000 +0200
+++ 34-rc1/kernel/sched.c	2010-05-19 14:32:57.000000000 +0200
@@ -3679,7 +3679,6 @@ need_resched:
 	rq = cpu_rq(cpu);
 	rcu_sched_qs(cpu);
 	prev = rq->curr;
-	switch_count = &prev->nivcsw;
 
 	release_kernel_lock(prev);
 need_resched_nonpreemptible:
@@ -3693,6 +3692,7 @@ need_resched_nonpreemptible:
 	update_rq_clock(rq);
 	clear_tsk_need_resched(prev);
+	switch_count = &prev->nivcsw;
 
 	if (prev->state && !(preempt_count() & PREEMPT_ACTIVE)) {
 		if (unlikely(signal_pending_state(prev->state, prev)))
 			prev->state = TASK_RUNNING;
@@ -3719,8 +3719,10 @@ need_resched_nonpreemptible:
 		context_switch(rq, prev, next); /* unlocks the rq */
 
 		/*
-		 * the context switch might have flipped the stack from under
-		 * us, hence refresh the local variables.
+		 * The context switch has flipped the stack from under us
+		 * and restored the local variables which were saved when
+		 * this task called schedule() in the past. prev == current
+		 * is still correct, but the task may have moved to another cpu/rq.
 		 */
 		cpu = smp_processor_id();
 		rq = cpu_rq(cpu);
@@ -3729,11 +3731,8 @@ need_resched_nonpreemptible:
 
 	post_schedule(rq);
 
-	if (unlikely(reacquire_kernel_lock(current) < 0)) {
-		prev = rq->curr;
-		switch_count = &prev->nivcsw;
+	if (unlikely(reacquire_kernel_lock(prev)))
 		goto need_resched_nonpreemptible;
-	}
 
 	preempt_enable_no_resched();
 	if (need_resched())