From mboxrd@z Thu Jan  1 00:00:00 1970
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Jann Horn,
	"David S. Miller", Sasha Levin
Subject: [PATCH 4.19 061/280] mm: page_alloc: fix ref bias in page_frag_alloc() for 1-byte allocs
Date: Fri, 22 Mar 2019 12:13:34 +0100
Message-Id: <20190322111309.703910508@linuxfoundation.org>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190322111306.356185024@linuxfoundation.org>
References: <20190322111306.356185024@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
X-Patchwork-Hint: ignore
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailing-List: stable@vger.kernel.org

4.19-stable review patch.  If anyone has any objections, please let me know.

------------------

[ Upstream commit 2c2ade81741c66082f8211f0b96cf509cc4c0218 ]

The basic idea behind ->pagecnt_bias is: If we pre-allocate the maximum
number of references that we might need to create in the fastpath later,
the bump-allocation fastpath only has to modify the non-atomic bias value
that tracks the number of extra references we hold instead of the atomic
refcount. The maximum number of allocations we can serve (under the
assumption that no allocation is made with size 0) is nc->size, so that's
the bias used.
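
The scheme can be illustrated with a small, self-contained userspace
sketch. This is not the kernel code: struct frag_cache, frag_alloc() and
FRAG_CACHE_SIZE below are simplified stand-ins for struct page_frag_cache,
page_frag_alloc() and PAGE_SIZE, the page-reuse policy is reduced to its
essentials, and the accounting shown already includes the extra self-held
reference that this patch introduces.

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

#define FRAG_CACHE_SIZE 4096            /* stand-in for PAGE_SIZE */

/* Simplified stand-in for struct page_frag_cache, not the kernel layout. */
struct frag_cache {
	void *va;                       /* backing "page" */
	atomic_uint refcount;           /* stand-in for the page refcount */
	unsigned int pagecnt_bias;      /* refs not yet handed to callers */
	int offset;                     /* bump pointer, counts down */
};

static void frag_cache_refill(struct frag_cache *nc)
{
	nc->va = malloc(FRAG_CACHE_SIZE);
	/*
	 * One reference for the cache itself plus one per fragment the
	 * fastpath may hand out; the "+ 1" for the cache's own reference
	 * is exactly what the patch below adds to the kernel accounting.
	 */
	atomic_store(&nc->refcount, FRAG_CACHE_SIZE + 1);
	nc->pagecnt_bias = FRAG_CACHE_SIZE + 1;
	nc->offset = FRAG_CACHE_SIZE;
}

static void *frag_alloc(struct frag_cache *nc, unsigned int fragsz)
{
	int offset;

	if (!nc->va)
		frag_cache_refill(nc);

	offset = nc->offset - (int)fragsz;
	if (offset < 0) {
		/*
		 * Slowpath: return the unused part of the bias. If that
		 * drops the refcount to zero, no caller still holds a
		 * fragment and the page can be recycled in place;
		 * otherwise take a fresh page and let the remaining
		 * holders dispose of the old one.
		 */
		if (atomic_fetch_sub(&nc->refcount, nc->pagecnt_bias) ==
		    nc->pagecnt_bias) {
			atomic_store(&nc->refcount, FRAG_CACHE_SIZE + 1);
			nc->pagecnt_bias = FRAG_CACHE_SIZE + 1;
		} else {
			frag_cache_refill(nc);
		}
		offset = FRAG_CACHE_SIZE - (int)fragsz;
	}

	/* Fastpath bookkeeping: no atomics, only the bias and the offset. */
	nc->pagecnt_bias--;
	nc->offset = offset;
	return (char *)nc->va + offset;
}

/* Caller side: hand back the single reference received at allocation time. */
static void frag_free(struct frag_cache *nc, void *frag)
{
	(void)frag;
	atomic_fetch_sub(&nc->refcount, 1);
}

int main(void)
{
	struct frag_cache nc = { 0 };
	int i;

	/* Drain the page with one-byte fragments and free each of them. */
	for (i = 0; i < FRAG_CACHE_SIZE; i++)
		frag_free(&nc, frag_alloc(&nc, 1));

	printf("refcount=%u pagecnt_bias=%u\n",
	       atomic_load(&nc.refcount), nc.pagecnt_bias);
	return 0;
}

With that corrected accounting, handing out and freeing one-byte fragments
for the whole page still leaves the cache holding one reference of its own,
so the backing page cannot go away underneath nc->va.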
However, even when all memory in the allocation has been given away, a
reference to the page is still held; and in the `offset < 0` slowpath, the
page may be reused if everyone else has dropped their references.
This means that the necessary number of references is actually nc->size+1.

Luckily, from a quick grep, it looks like the only path that can call
page_frag_alloc(fragsz=1) is TAP with the IFF_NAPI_FRAGS flag, which
requires CAP_NET_ADMIN in the init namespace and is only intended to be
used for kernel testing and fuzzing.

To test for this issue, put a `WARN_ON(page_ref_count(page) == 0)` in the
`offset < 0` path, below the virt_to_page() call, and then repeatedly call
writev() on a TAP device with IFF_TAP|IFF_NO_PI|IFF_NAPI_FRAGS|IFF_NAPI,
with a vector consisting of 15 elements containing 1 byte each.

Signed-off-by: Jann Horn
Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin
---
 mm/page_alloc.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a9de1dbb9a6c..ef99971c13dd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4532,11 +4532,11 @@ refill:
 		/* Even if we own the page, we do not use atomic_set().
 		 * This would break get_page_unless_zero() users.
 		 */
-		page_ref_add(page, size - 1);
+		page_ref_add(page, size);
 
 		/* reset page count bias and offset to start of new frag */
 		nc->pfmemalloc = page_is_pfmemalloc(page);
-		nc->pagecnt_bias = size;
+		nc->pagecnt_bias = size + 1;
 		nc->offset = size;
 	}
 
@@ -4552,10 +4552,10 @@ refill:
 		size = nc->size;
 #endif
 		/* OK, page count is 0, we can safely set it */
-		set_page_count(page, size);
+		set_page_count(page, size + 1);
 
 		/* reset page count bias and offset to start of new frag */
-		nc->pagecnt_bias = size;
+		nc->pagecnt_bias = size + 1;
 		offset = size - fragsz;
 	}
 
-- 
2.19.1
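
For reference, the userspace half of the reproduction recipe from the
commit message could look roughly like the sketch below. This is an
illustrative reconstruction, not the original reproducer: it assumes
CAP_NET_ADMIN in the init namespace, a kernel whose <linux/if_tun.h>
exposes IFF_NAPI and IFF_NAPI_FRAGS, and the WARN_ON(page_ref_count(page)
== 0) instrumentation described above on the kernel side, which is what
actually makes the issue visible.

#include <fcntl.h>
#include <linux/if.h>
#include <linux/if_tun.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
	struct ifreq ifr;
	struct iovec iov[15];
	char byte = 0;
	int fd, i;

	fd = open("/dev/net/tun", O_RDWR);
	if (fd < 0) {
		perror("open /dev/net/tun");
		return 1;
	}

	/* Attach with the flag combination named in the commit message;
	 * the interface name is left empty so the kernel picks one. */
	memset(&ifr, 0, sizeof(ifr));
	ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_NAPI | IFF_NAPI_FRAGS;
	if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
		perror("TUNSETIFF");
		return 1;
	}

	/* A vector consisting of 15 elements containing 1 byte each. */
	for (i = 0; i < 15; i++) {
		iov[i].iov_base = &byte;
		iov[i].iov_len = 1;
	}

	/* Repeatedly submit one-byte fragments through the
	 * IFF_NAPI_FRAGS path, which ends up in page_frag_alloc(). */
	for (;;) {
		if (writev(fd, iov, 15) < 0)
			perror("writev");
	}
}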