From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: FAILED: patch "[PATCH] staging: erofs: compressed_pages should not
 be accessed again" failed to apply to 5.0-stable tree
To: gaoxiang25@huawei.com, gregkh@linuxfoundation.org,
 stable@vger.kernel.org, yuchao0@huawei.com
Cc:
From:
Date: Thu, 07 Mar 2019 18:28:58 +0100
Message-ID: <15519797383642@kroah.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=ANSI_X3.4-1968
Content-Transfer-Encoding: 8bit
Sender: stable-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: stable@vger.kernel.org

The patch below does not apply to the 5.0-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to .

thanks,

greg k-h

------------------ original commit in Linus's tree ------------------

>From af692e117cb8cd9d3d844d413095775abc1217f9 Mon Sep 17 00:00:00 2001
From: Gao Xiang
Date: Wed, 27 Feb 2019 13:33:30 +0800
Subject: [PATCH] staging: erofs: compressed_pages should not be accessed
 again after freed

This patch resolves the following page use-after-free issue,

z_erofs_vle_unzip:
    ...
    for (i = 0; i < nr_pages; ++i) {
        ...
        z_erofs_onlinepage_endio(page);  (1)
    }

    for (i = 0; i < clusterpages; ++i) {
        page = compressed_pages[i];

        if (page->mapping == mngda)      (2)
            continue;
        /* recycle all individual staging pages */
        (void)z_erofs_gather_if_stagingpage(page_pool, page);   (3)
        WRITE_ONCE(compressed_pages[i], NULL);
    }
    ...

After (1) is executed, the page is freed and could then be reused; if
compressed_pages is scanned after that, it could fall into (2) or (3)
by mistake and finally end up in a mess.

This patch aims to solve the above issue with as few changes as
possible in order to make the fix easier to backport.

Fixes: 3883a79abd02 ("staging: erofs: introduce VLE decompression support")
Cc:  # 4.19+
Signed-off-by: Gao Xiang
Reviewed-by: Chao Yu
Signed-off-by: Greg Kroah-Hartman

diff --git a/drivers/staging/erofs/unzip_vle.c b/drivers/staging/erofs/unzip_vle.c
index a127d8db76d8..416dde4e8ea1 100644
--- a/drivers/staging/erofs/unzip_vle.c
+++ b/drivers/staging/erofs/unzip_vle.c
@@ -986,11 +986,10 @@ static int z_erofs_vle_unzip(struct super_block *sb,
 	if (llen > grp->llen)
 		llen = grp->llen;
 
-	err = z_erofs_vle_unzip_fast_percpu(compressed_pages,
-		clusterpages, pages, llen, work->pageofs,
-		z_erofs_onlinepage_endio);
+	err = z_erofs_vle_unzip_fast_percpu(compressed_pages, clusterpages,
+					    pages, llen, work->pageofs);
 	if (err != -ENOTSUPP)
-		goto out_percpu;
+		goto out;
 
 	if (sparsemem_pages >= nr_pages)
 		goto skip_allocpage;
@@ -1011,8 +1010,25 @@ static int z_erofs_vle_unzip(struct super_block *sb,
 	erofs_vunmap(vout, nr_pages);
 
 out:
+	/* must handle all compressed pages before endding pages */
+	for (i = 0; i < clusterpages; ++i) {
+		page = compressed_pages[i];
+
+#ifdef EROFS_FS_HAS_MANAGED_CACHE
+		if (page->mapping == MNGD_MAPPING(sbi))
+			continue;
+#endif
+		/* recycle all individual staging pages */
+		(void)z_erofs_gather_if_stagingpage(page_pool, page);
+
+		WRITE_ONCE(compressed_pages[i], NULL);
+	}
+
 	for (i = 0; i < nr_pages; ++i) {
 		page = pages[i];
+		if (!page)
+			continue;
+
 		DBG_BUGON(!page->mapping);
 
 		/* recycle all individual staging pages */
@@ -1025,20 +1041,6 @@ static int z_erofs_vle_unzip(struct super_block *sb,
 		z_erofs_onlinepage_endio(page);
 	}
 
-out_percpu:
-	for (i = 0; i < clusterpages; ++i) {
-		page = compressed_pages[i];
-
-#ifdef EROFS_FS_HAS_MANAGED_CACHE
-		if (page->mapping == MNGD_MAPPING(sbi))
-			continue;
-#endif
-		/* recycle all individual staging pages */
-		(void)z_erofs_gather_if_stagingpage(page_pool, page);
-
-		WRITE_ONCE(compressed_pages[i], NULL);
-	}
-
 	if (pages == z_pagemap_global)
 		mutex_unlock(&z_pagemap_global_lock);
 	else if (unlikely(pages != pages_onstack))
diff --git a/drivers/staging/erofs/unzip_vle.h b/drivers/staging/erofs/unzip_vle.h
index 9dafa8dc712c..517e5ce8c5e9 100644
--- a/drivers/staging/erofs/unzip_vle.h
+++ b/drivers/staging/erofs/unzip_vle.h
@@ -218,8 +218,7 @@ int z_erofs_vle_plain_copy(struct page **compressed_pages,
 int z_erofs_vle_unzip_fast_percpu(struct page **compressed_pages,
 				  unsigned int clusterpages,
 				  struct page **pages, unsigned int outlen,
-				  unsigned short pageofs,
-				  void (*endio)(struct page *));
+				  unsigned short pageofs);
 
 int z_erofs_vle_unzip_vmap(struct page **compressed_pages,
 			   unsigned int clusterpages, void *vaddr,
 			   unsigned int llen,
diff --git a/drivers/staging/erofs/unzip_vle_lz4.c b/drivers/staging/erofs/unzip_vle_lz4.c
index 8e8d705a6861..48b263a2731a 100644
--- a/drivers/staging/erofs/unzip_vle_lz4.c
+++ b/drivers/staging/erofs/unzip_vle_lz4.c
@@ -125,8 +125,7 @@ int z_erofs_vle_unzip_fast_percpu(struct page **compressed_pages,
 				  unsigned int clusterpages,
 				  struct page **pages,
 				  unsigned int outlen,
-				  unsigned short pageofs,
-				  void (*endio)(struct page *))
+				  unsigned short pageofs)
 {
 	void *vin, *vout;
 	unsigned int nr_pages, i, j;
@@ -148,19 +147,16 @@ int z_erofs_vle_unzip_fast_percpu(struct page **compressed_pages,
 	ret = z_erofs_unzip_lz4(vin, vout + pageofs,
 				clusterpages * PAGE_SIZE, outlen);
 
-	if (ret >= 0) {
-		outlen = ret;
-		ret = 0;
-	}
+	if (ret < 0)
+		goto out;
+	ret = 0;
 
 	for (i = 0; i < nr_pages; ++i) {
 		j = min((unsigned int)PAGE_SIZE - pageofs, outlen);
 
 		if (pages[i]) {
-			if (ret < 0) {
-				SetPageError(pages[i]);
-			} else if (clusterpages == 1 &&
-				   pages[i] == compressed_pages[0]) {
+			if (clusterpages == 1 &&
+			    pages[i] == compressed_pages[0]) {
 				memcpy(vin + pageofs, vout + pageofs, j);
 			} else {
 				void *dst = kmap_atomic(pages[i]);
@@ -168,12 +164,13 @@ int z_erofs_vle_unzip_fast_percpu(struct page **compressed_pages,
 				memcpy(dst + pageofs, vout + pageofs, j);
 				kunmap_atomic(dst);
 			}
-			endio(pages[i]);
 		}
 		vout += PAGE_SIZE;
 		outlen -= j;
 		pageofs = 0;
 	}
+
+out:
 	preempt_enable();
 
 	if (clusterpages == 1)
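
For whoever picks up the backport: the ordering constraint described in the
commit message can be made concrete with a small userspace sketch. This is
not erofs code; struct fake_page, endio() and recycle_staging() below are
invented stand-ins for struct page, z_erofs_onlinepage_endio() and
z_erofs_gather_if_stagingpage(). The only point it demonstrates is why
compressed_pages[] has to be drained before completion is signalled on pages
that may alias its entries: once a page has been "ended" it can be freed and
reused, so a later page->mapping test reads a stale object.

/* Illustrative userspace sketch only -- not kernel or erofs code. */
#include <stdlib.h>

struct fake_page {
	void *mapping;	/* non-NULL while the page is still live */
};

/* stand-in for z_erofs_onlinepage_endio(): after this call the page
 * may be freed and reused at any time by somebody else */
static void endio(struct fake_page *pg)
{
	pg->mapping = NULL;
	free(pg);
}

/* stand-in for z_erofs_gather_if_stagingpage(): recycle a staging page */
static void recycle_staging(struct fake_page *pg)
{
	free(pg);
}

int main(void)
{
	enum { NR_PAGES = 4 };
	struct fake_page *pages[NR_PAGES];
	struct fake_page *compressed_pages[1];
	int i;

	for (i = 0; i < NR_PAGES; i++) {
		pages[i] = malloc(sizeof(*pages[i]));
		pages[i]->mapping = (void *)0x1;
	}

	/* in the failing scenario a compressed page aliases an output page */
	compressed_pages[0] = pages[0];

	/*
	 * Correct order (what the patch enforces): handle compressed_pages[]
	 * first, while every entry is still guaranteed to be alive, ...
	 */
	recycle_staging(compressed_pages[0]);
	compressed_pages[0] = NULL;
	pages[0] = NULL;	/* already freed through compressed_pages[] */

	/* ... and only then signal completion on the output pages. */
	for (i = 0; i < NR_PAGES; i++) {
		if (!pages[i])
			continue;
		endio(pages[i]);
	}

	/*
	 * The pre-patch order did the opposite: it called endio() on all
	 * output pages first and then tested compressed_pages[0]->mapping,
	 * i.e. it dereferenced memory that endio() may already have freed
	 * and that may have been reallocated for something else entirely.
	 */
	return 0;
}

The authoritative change remains the hunks above, which move the
compressed_pages loop in front of the z_erofs_onlinepage_endio() loop; the
sketch only restates that reasoning in a form that can be compiled and
stepped through.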