Date: Mon, 2 Feb 2015 19:34:18 +0100
From: Oleg Nesterov
To: riel@redhat.com
Cc: dave.hansen@linux.intel.com, sbsiddha@gmail.com, luto@amacapital.net,
	tglx@linutronix.de, mingo@redhat.com, hpa@zytor.com,
	fenghua.yu@intel.com, x86@kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 6/6] x86,fpu: remove redundant increments of fpu_counter
Message-ID: <20150202183418.GA16547@redhat.com>
References: <20150129210723.GA31584@redhat.com>
	<1422900051-10778-1-git-send-email-riel@redhat.com>
	<1422900051-10778-7-git-send-email-riel@redhat.com>
In-Reply-To: <1422900051-10778-7-git-send-email-riel@redhat.com>

On 02/02, riel@redhat.com wrote:
>
> From: Rik van Riel <riel@redhat.com>
>
> fpu.preload only gets set if new->thread.fpu_counter is already
> larger than 5. Incrementing it further does absolutely nothing.
> Remove those lines.

I _think_ that we increment it further on purpose. Note that fpu_counter
is "char", so it seems that we want no more than 256 automatic preloads.

So I am not sure about this change. At least the changelog doesn't look
right.

Oleg.

> --- a/arch/x86/include/asm/fpu-internal.h
> +++ b/arch/x86/include/asm/fpu-internal.h
> @@ -447,7 +447,6 @@ static inline fpu_switch_t switch_fpu_prepare(struct task_struct *old, struct ta
>
>  	/* Don't change CR0.TS if we just switch! */
>  	if (fpu.preload) {
> -		new->thread.fpu_counter++;
>  		__thread_set_has_fpu(new);
>  		prefetch(new->thread.fpu.state);
>  	} else if (!use_eager_fpu())
> @@ -456,7 +455,6 @@ static inline fpu_switch_t switch_fpu_prepare(struct task_struct *old, struct ta
>
>  		old->thread.fpu_counter = 0;
>  		task_disable_lazy_fpu_restore(old);
>  		if (fpu.preload) {
> -			new->thread.fpu_counter++;
>  			if (fpu_lazy_restore(new, cpu))
>  				fpu.preload = 0;
>  			else
> --
> 1.9.3
>
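
As a rough illustration of the point Oleg raises (a user-space sketch,
not kernel code): fpu_counter is only one byte wide, so if
switch_fpu_prepare() keeps incrementing it on every preloading switch,
it eventually wraps back below the "> 5" threshold and the automatic
preloads stop. The "> 5" threshold and the increments mirror the quoted
code; the counter is modelled here as an unsigned char so the wraparound
is well-defined, and everything else is made up for the example.

#include <stdio.h>

int main(void)
{
	unsigned char fpu_counter = 6;	/* just past the "> 5" preload threshold */
	unsigned long preloads = 0;
	unsigned long sw;

	/*
	 * Pretend the task is context-switched in over and over while
	 * fpu.preload stays true; each such switch also increments the
	 * counter, as the lines being removed by this patch do.
	 */
	for (sw = 0; sw < 1000; sw++) {
		if (fpu_counter > 5) {		/* the fpu.preload test */
			preloads++;
			fpu_counter++;		/* one byte: wraps to 0 eventually */
		}
	}

	printf("automatic preloads before the counter wrapped: %lu\n", preloads);
	return 0;
}

The sketch prints 250 (the counter runs from 6 up through 255, then
wraps to 0 and the test goes false), which is in line with the "no more
than 256 automatic preloads" reading above.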