From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mail.linuxfoundation.org ([140.211.169.12]:48446 "EHLO
        mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
        with ESMTP id S1161872AbeBOPUu (ORCPT );
        Thu, 15 Feb 2018 10:20:50 -0500
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org,
        Benjamin Herrenschmidt, Michael Ellerman
Subject: [PATCH 4.4 003/108] powerpc: Fix VSX enabling/flushing to also test MSR_FP and MSR_VEC
Date: Thu, 15 Feb 2018 16:16:00 +0100
Message-Id: <20180215151222.757273093@linuxfoundation.org>
In-Reply-To: <20180215151222.267507937@linuxfoundation.org>
References: <20180215151222.267507937@linuxfoundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: stable-owner@vger.kernel.org
List-ID:

4.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Benjamin Herrenschmidt

commit 5a69aec945d27e78abac9fd032533d3aaebf7c1e upstream.

VSX uses a combination of the old vector registers, the old FP
registers and new "second halves" of the FP registers.

Thus when we need to see the VSX state in the thread struct
(flush_vsx_to_thread()) or when we'll use the VSX in the kernel
(enable_kernel_vsx()) we need to ensure they are all flushed into
the thread struct if either of them is individually enabled.

Unfortunately we only tested if the whole VSX was enabled, not if they
were individually enabled.

Fixes: 72cd7b44bc99 ("powerpc: Uncomment and make enable_kernel_vsx() routine available")
Signed-off-by: Benjamin Herrenschmidt
[mpe: Backported due to changed context]
Signed-off-by: Michael Ellerman
Signed-off-by: Greg Kroah-Hartman

---
 arch/powerpc/kernel/process.c |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -209,7 +209,8 @@ void enable_kernel_vsx(void)
 	WARN_ON(preemptible());
 
 #ifdef CONFIG_SMP
-	if (current->thread.regs && (current->thread.regs->msr & MSR_VSX))
+	if (current->thread.regs &&
+	    (current->thread.regs->msr & (MSR_VSX|MSR_VEC|MSR_FP)))
 		giveup_vsx(current);
 	else
 		giveup_vsx(NULL);	/* just enable vsx for kernel - force */
@@ -231,7 +232,7 @@ void flush_vsx_to_thread(struct task_str
 {
 	if (tsk->thread.regs) {
 		preempt_disable();
-		if (tsk->thread.regs->msr & MSR_VSX) {
+		if (tsk->thread.regs->msr & (MSR_VSX|MSR_VEC|MSR_FP)) {
 #ifdef CONFIG_SMP
 			BUG_ON(tsk != current);
 #endif
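
For reference, below is a minimal user-space sketch of the widened test. It
is not the kernel code: needs_vsx_flush() is a hypothetical helper and the
MSR_* bit values are illustrative placeholders, not the real powerpc MSR
layout. It only demonstrates why testing MSR_VSX alone misses a thread whose
FP or VMX state is live:

#include <stdbool.h>
#include <stdio.h>

/* Illustrative placeholder bits -- NOT the real powerpc MSR layout. */
#define MSR_FP  (1UL << 0)	/* FP unit enabled          */
#define MSR_VEC (1UL << 1)	/* VMX/Altivec unit enabled */
#define MSR_VSX (1UL << 2)	/* VSX unit enabled         */

/* Hypothetical helper mirroring the predicate the patch introduces:
 * flush if *any* of the three facilities is live, because VSX state
 * overlaps the FP and vector register files. */
static bool needs_vsx_flush(unsigned long msr)
{
	return (msr & (MSR_VSX | MSR_VEC | MSR_FP)) != 0;
}

int main(void)
{
	/* The old test (msr & MSR_VSX) would say "no flush needed" here,
	 * even though the thread has live FP state that shares storage
	 * with the VSX registers. */
	printf("FP-only thread needs flush: %d\n", needs_vsx_flush(MSR_FP));
	/* A VSX-enabled thread is flushed by both the old and new tests. */
	printf("VSX thread needs flush:     %d\n", needs_vsx_flush(MSR_VSX));
	return 0;
}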