From mboxrd@z Thu Jan 1 00:00:00 1970
From: Mark Rutland
Subject: Re: [kernel-hardening] [PATCH v4 26/29] sched: Allow putting thread_info into task_struct
Date: Mon, 11 Jul 2016 17:31:11 +0100
Message-ID: <20160711163110.GD7691@leverpostej>
References: <20160711100808.GB31221@leverpostej>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Return-path:
Content-Disposition: inline
In-Reply-To:
Sender: linux-kernel-owner@vger.kernel.org
To: Linus Torvalds
Cc: Andy Lutomirski , linux-arch@vger.kernel.org, Nadav Amit ,
	Kees Cook , Borislav Petkov , Josh Poimboeuf , X86 ML ,
	Jann Horn , Heiko Carstens , Brian Gerst ,
	"kernel-hardening@lists.openwall.com" , "linux-kernel@vger.kernel.org"
List-Id: linux-arch.vger.kernel.org

On Mon, Jul 11, 2016 at 09:06:58AM -0700, Linus Torvalds wrote:
> On Jul 11, 2016 7:55 AM, "Andy Lutomirski" <luto@amacapital.net> wrote:
> >
> > How do you intend to find 'current' to get to the preempt count
> > without first disabling preemption?
>
> Actually, that is the classic case of "not a problem".
>
> The thing is, it doesn't matter if you schedule away while looking up
> current or the preempt count - because both values are idempotent wrt
> scheduling.
>
> So until you do the write that actually disables preemption you can
> schedule away as much as you want, and after that write you no longer
> will.

I was assuming a percpu pointer to current (or preempt count).
The percpu offset might be stale at the point you try to dereference
that, even though current itself hasn't changed, and you may access the
wrong CPU's value.

> This is different wrt a per-cpu area - which is clearly not idempotent wrt
> scheduling.
>
> The reason per-cpu works on x86 is that we have an atomic rmw operation
> that is *also* atomic wrt the CPU lookup (thanks to the segment base)

Sure, understood.

Mark.
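
A minimal sketch of the two access patterns being contrasted above (the
names pcpu_offset_of_this_cpu, __percpu_base and preempt_count_pcpu are
made up for illustration; this is not the actual kernel or arch code):

/*
 * Sketch only: all helper and variable names here are hypothetical,
 * not the real kernel or architecture implementation.
 */
extern unsigned long pcpu_offset_of_this_cpu(void);	/* hypothetical */
extern char __percpu_base[];				/* hypothetical per-cpu area */
extern int preempt_count_pcpu;				/* hypothetical per-cpu counter */

/*
 * Generic flavour: the CPU lookup (step 1) and the read-modify-write
 * (step 2) are separate operations.  If the task is preempted and
 * migrated between the two, 'off' is stale and the update lands in the
 * old CPU's per-cpu area - the concern raised above.
 */
static inline void preempt_count_add_generic(int val)
{
	unsigned long off = pcpu_offset_of_this_cpu();	/* step 1: find this CPU */

	*(int *)(__percpu_base + off) += val;		/* step 2: the rmw */
}

#ifdef __x86_64__
/*
 * x86 flavour: a single %gs-relative add, so the CPU lookup (the
 * segment base) and the read-modify-write cannot be split by
 * preemption.
 */
static inline void preempt_count_add_x86(int val)
{
	asm("addl %1, %%gs:%0" : "+m" (preempt_count_pcpu) : "ri" (val));
}
#endif

On x86 the real this_cpu_add()/this_cpu_inc() helpers boil down to one
such segment-relative instruction, which is why the two-step race in the
generic sketch does not arise there.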