Date: Thu, 10 Aug 2023 12:21:28 -0700
Message-ID: <20230810192128.1855570-1-axelrasmussen@google.com>
Subject: [PATCH mm-unstable fix] mm: userfaultfd: check for start + len overflow in validate_range: fix
From: Axel Rasmussen <axelrasmussen@google.com>
To: Alexander Viro, Andrew Morton, Brian Geffon, Christian Brauner,
    David Hildenbrand, Gaosheng Cui, Huang Ying, Hugh Dickins,
    James Houghton, Jiaqi Yan, Jonathan Corbet, Kefeng Wang,
    "Liam R. Howlett", Miaohe Lin, Mike Kravetz, "Mike Rapoport (IBM)",
    Muchun Song, Nadav Amit, Naoya Horiguchi, Peter Xu, Ryan Roberts,
    Shuah Khan, Steven Barrett, Suleiman Souhlal, Suren Baghdasaryan,
Alumbaugh" , Yu Zhao , ZhangPeng Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, Axel Rasmussen Content-Type: text/plain; charset="UTF-8" Precedence: bulk List-ID: X-Mailing-List: linux-doc@vger.kernel.org A previous fixup to this commit fixed one issue, but introduced another: we're now overly strict when validating the src address for UFFDIO_COPY. Most of the validation in validate_range is useful to apply to src as well as dst, but page alignment is only a requirement for dst, not src. So, split the function up so src can use an "unaligned" variant, while still allowing us to share the majority of the code between the different cases. Reported-by: Ryan Roberts Closes: https://lore.kernel.org/linux-mm/8fbb5965-28f7-4e9a-ac04-1406ed8fc2d4@arm.com/T/#t Signed-off-by: Axel Rasmussen --- fs/userfaultfd.c | 18 +++++++++++++----- 1 file changed, 13 insertions(+), 5 deletions(-) diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c index bb5c474a0a77..1091cb461747 100644 --- a/fs/userfaultfd.c +++ b/fs/userfaultfd.c @@ -1287,13 +1287,11 @@ static __always_inline void wake_userfault(struct userfaultfd_ctx *ctx, __wake_userfault(ctx, range); } -static __always_inline int validate_range(struct mm_struct *mm, - __u64 start, __u64 len) +static __always_inline int validate_unaligned_range( + struct mm_struct *mm, __u64 start, __u64 len) { __u64 task_size = mm->task_size; - if (start & ~PAGE_MASK) - return -EINVAL; if (len & ~PAGE_MASK) return -EINVAL; if (!len) @@ -1309,6 +1307,15 @@ static __always_inline int validate_range(struct mm_struct *mm, return 0; } +static __always_inline int validate_range(struct mm_struct *mm, + __u64 start, __u64 len) +{ + if (start & ~PAGE_MASK) + return -EINVAL; + + return validate_unaligned_range(mm, start, len); +} + static int userfaultfd_register(struct userfaultfd_ctx *ctx, unsigned long arg) { @@ -1759,7 +1766,8 @@ static int userfaultfd_copy(struct userfaultfd_ctx *ctx, sizeof(uffdio_copy)-sizeof(__s64))) goto out; - ret = validate_range(ctx->mm, uffdio_copy.src, uffdio_copy.len); + ret = validate_unaligned_range(ctx->mm, uffdio_copy.src, + uffdio_copy.len); if (ret) goto out; ret = validate_range(ctx->mm, uffdio_copy.dst, uffdio_copy.len); -- 2.41.0.640.ga95def55d0-goog