From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1761129AbbEEQ30 (ORCPT );
	Tue, 5 May 2015 12:29:26 -0400
Received: from mail-wi0-f181.google.com ([209.85.212.181]:33296 "EHLO
	mail-wi0-f181.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1761097AbbEEQ3U (ORCPT );
	Tue, 5 May 2015 12:29:20 -0400
From: Ingo Molnar
To: linux-kernel@vger.kernel.org
Cc: Andy Lutomirski,
	Borislav Petkov,
	Dave Hansen,
	Fenghua Yu,
	"H. Peter Anvin",
	Linus Torvalds,
	Oleg Nesterov,
	Thomas Gleixner
Subject: [PATCH 060/208] x86/fpu: Use 'struct fpu' in switch_fpu_prepare()
Date: Tue, 5 May 2015 18:24:40 +0200
Message-Id: <1430843228-13749-61-git-send-email-mingo@kernel.org>
X-Mailer: git-send-email 2.1.0
In-Reply-To: <1430843228-13749-1-git-send-email-mingo@kernel.org>
References: <1430843228-13749-1-git-send-email-mingo@kernel.org>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Migrate this function to pure 'struct fpu' usage.

Reviewed-by: Borislav Petkov
Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: Fenghua Yu
Cc: H. Peter Anvin
Cc: Linus Torvalds
Cc: Oleg Nesterov
Cc: Thomas Gleixner
Signed-off-by: Ingo Molnar
---
 arch/x86/include/asm/fpu-internal.h | 27 +++++++++++++--------------
 arch/x86/kernel/process_32.c        |  2 +-
 arch/x86/kernel/process_64.c        |  2 +-
 3 files changed, 15 insertions(+), 16 deletions(-)

diff --git a/arch/x86/include/asm/fpu-internal.h b/arch/x86/include/asm/fpu-internal.h
index 579f7d0a399d..60d2c6f376f3 100644
--- a/arch/x86/include/asm/fpu-internal.h
+++ b/arch/x86/include/asm/fpu-internal.h
@@ -402,10 +402,9 @@ static inline void fpu_reset_state(struct fpu *fpu)
  */
 typedef struct { int preload; } fpu_switch_t;
 
-static inline fpu_switch_t switch_fpu_prepare(struct task_struct *old, struct task_struct *new, int cpu)
+static inline fpu_switch_t
+switch_fpu_prepare(struct fpu *old_fpu, struct fpu *new_fpu, int cpu)
 {
-	struct fpu *old_fpu = &old->thread.fpu;
-	struct fpu *new_fpu = &new->thread.fpu;
 	fpu_switch_t fpu;
 
 	/*
@@ -413,33 +412,33 @@ static inline fpu_switch_t switch_fpu_prepare(struct task_struct *old, struct ta
 	 * or if the past 5 consecutive context-switches used math.
 	 */
 	fpu.preload = new_fpu->fpstate_active &&
-		      (use_eager_fpu() || new->thread.fpu.counter > 5);
+		      (use_eager_fpu() || new_fpu->counter > 5);
 
 	if (old_fpu->has_fpu) {
-		if (!fpu_save_init(&old->thread.fpu))
-			old->thread.fpu.last_cpu = -1;
+		if (!fpu_save_init(old_fpu))
+			old_fpu->last_cpu = -1;
 		else
-			old->thread.fpu.last_cpu = cpu;
+			old_fpu->last_cpu = cpu;
 
 		/* But leave fpu_fpregs_owner_ctx! */
-		old->thread.fpu.has_fpu = 0;
+		old_fpu->has_fpu = 0;
 
 		/* Don't change CR0.TS if we just switch! */
 		if (fpu.preload) {
-			new->thread.fpu.counter++;
+			new_fpu->counter++;
 			__thread_set_has_fpu(new_fpu);
-			prefetch(new->thread.fpu.state);
+			prefetch(new_fpu->state);
 		} else if (!use_eager_fpu())
 			stts();
 	} else {
-		old->thread.fpu.counter = 0;
-		old->thread.fpu.last_cpu = -1;
+		old_fpu->counter = 0;
+		old_fpu->last_cpu = -1;
 		if (fpu.preload) {
-			new->thread.fpu.counter++;
+			new_fpu->counter++;
 			if (fpu_want_lazy_restore(new_fpu, cpu))
 				fpu.preload = 0;
 			else
-				prefetch(new->thread.fpu.state);
+				prefetch(new_fpu->state);
 			__thread_fpu_begin(new_fpu);
 		}
 	}
diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
index 1a0edce626b2..5b0ed71dde60 100644
--- a/arch/x86/kernel/process_32.c
+++ b/arch/x86/kernel/process_32.c
@@ -248,7 +248,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 
 	/* never put a printk in __switch_to... printk() calls wake_up*() indirectly */
 
-	fpu = switch_fpu_prepare(prev_p, next_p, cpu);
+	fpu = switch_fpu_prepare(&prev_p->thread.fpu, &next_p->thread.fpu, cpu);
 
 	/*
 	 * Save away %gs. No need to save %fs, as it was saved on the
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 99cc4b8589ad..fefe65efd9d6 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -278,7 +278,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 	unsigned fsindex, gsindex;
 	fpu_switch_t fpu;
 
-	fpu = switch_fpu_prepare(prev_p, next_p, cpu);
+	fpu = switch_fpu_prepare(&prev_p->thread.fpu, &next_p->thread.fpu, cpu);
 
 	/* We must save %fs and %gs before load_TLS() because
 	 * %fs and %gs may be cleared by load_TLS().
-- 
2.1.0