From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 25 Feb 2020 12:11:11 -0800
From: Kees Cook
To: Yu-cheng Yu
Cc: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-mm@kvack.org, linux-arch@vger.kernel.org,
	linux-api@vger.kernel.org, Arnd Bergmann, Andy Lutomirski,
	Balbir Singh, Borislav Petkov, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, "H.J. Lu", Jann Horn,
	Jonathan Corbet, Mike Kravetz, Nadav Amit, Oleg Nesterov,
	Pavel Machek, Peter Zijlstra, Randy Dunlap, "Ravi V. Shankar",
	Vedvyas Shanbhogue, Dave Martin, x86-patch-review@intel.com
Subject: Re: [RFC PATCH v9 07/27] Add guard pages around a Shadow Stack.
Message-ID: <202002251209.A2079ECA4@keescook>
References: <20200205181935.3712-1-yu-cheng.yu@intel.com> <20200205181935.3712-8-yu-cheng.yu@intel.com>
In-Reply-To: <20200205181935.3712-8-yu-cheng.yu@intel.com>

On Wed, Feb 05, 2020 at 10:19:15AM -0800, Yu-cheng Yu wrote:
> The INCSSPD/INCSSPQ instruction is used to unwind a Shadow Stack
> (SHSTK). It performs a 'pop and discard' of the first and last element
> from the SHSTK in the range specified in the operand. The maximum value
> of the operand is 255, so the maximum moving distance of the SHSTK
> pointer is 255 * 4 bytes for INCSSPD and 255 * 8 bytes for INCSSPQ.
>
> Since the SHSTK has a fixed size, creating a guard page above it
> prevents INCSSP/RET from moving beyond the top. Likewise, creating a
> guard page below it prevents CALL from underflowing the SHSTK.

This commit log doesn't really explain why the code changes below are
needed. stack_guard_gap is configurable at boot, etc., and this appears
to be limiting it? I don't follow this change...
-Kees

> Signed-off-by: Yu-cheng Yu
> ---
>  include/linux/mm.h | 20 ++++++++++++++++----
>  1 file changed, 16 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index b5145fbe102e..75de07674649 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2464,9 +2464,15 @@ static inline struct vm_area_struct * find_vma_intersection(struct mm_struct * m
>  static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
>  {
>  	unsigned long vm_start = vma->vm_start;
> +	unsigned long gap = 0;
>
> -	if (vma->vm_flags & VM_GROWSDOWN) {
> -		vm_start -= stack_guard_gap;
> +	if (vma->vm_flags & VM_GROWSDOWN)
> +		gap = stack_guard_gap;
> +	else if (vma->vm_flags & VM_SHSTK)
> +		gap = PAGE_SIZE;
> +
> +	if (gap != 0) {
> +		vm_start -= gap;
>  		if (vm_start > vma->vm_start)
>  			vm_start = 0;
>  	}
> @@ -2476,9 +2482,15 @@ static inline unsigned long vm_end_gap(struct vm_area_struct *vma)
>  static inline unsigned long vm_end_gap(struct vm_area_struct *vma)
>  {
>  	unsigned long vm_end = vma->vm_end;
> +	unsigned long gap = 0;
> +
> +	if (vma->vm_flags & VM_GROWSUP)
> +		gap = stack_guard_gap;
> +	else if (vma->vm_flags & VM_SHSTK)
> +		gap = PAGE_SIZE;
>
> -	if (vma->vm_flags & VM_GROWSUP) {
> -		vm_end += stack_guard_gap;
> +	if (gap != 0) {
> +		vm_end += gap;
>  		if (vm_end < vma->vm_end)
>  			vm_end = -PAGE_SIZE;
>  	}
> --
> 2.21.0

--
Kees Cook