From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Jeff Moyer,
 Ingo Molnar, Christoph Hellwig, Al Viro, Thomas Gleixner,
 Matthew Wilcox, Kees Cook, Jan Kara, Dan Williams, Jeff Smits
Subject: [PATCH 4.14 012/193] libnvdimm/pmem: Bypass CONFIG_HARDENED_USERCOPY overhead
Date: Wed, 29 May 2019 20:04:26 -0700
Message-Id: <20190530030449.639898524@linuxfoundation.org>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190530030446.953835040@linuxfoundation.org>
References: <20190530030446.953835040@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: stable-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: stable@vger.kernel.org

From: Dan Williams

commit 52f476a323f9efc959be1c890d0cdcf12e1582e0 upstream.

Jeff discovered that performance improves from ~375K iops to ~519K iops
on a simple psync-write fio workload when moving the location of
'struct page' from the default PMEM location to DRAM. This result is
surprising because the expectation is that 'struct page' for dax is
only needed for third party references to dax mappings. For example, a
dax-mapped buffer passed to another system call for direct-I/O requires
'struct page' for sending the request down the driver stack and pinning
the page.
There is no usage of 'struct page' for first party access to a file via
read(2)/write(2) and friends.

However, this "no page needed" expectation is violated by
CONFIG_HARDENED_USERCOPY and the check_copy_size() performed in
copy_from_iter_full_nocache() and copy_to_iter_mcsafe(). The
check_heap_object() helper routine assumes the buffer is backed by a
slab allocator (DRAM) page and applies some checks. Those checks are
invalid (dax pages do not originate from the slab) and redundant
(dax_iomap_actor() has already validated that the I/O is within
bounds). Specifically, that routine validates that the logical file
offset is within bounds of the file, then it does a sector-to-pfn
translation which validates that the physical mapping is within bounds
of the block device.

Bypass additional hardened usercopy overhead and call the 'no check'
versions of the copy_{to,from}_iter operations directly.

Fixes: 0aed55af8834 ("x86, uaccess: introduce copy_from_iter_flushcache...")
Cc:
Cc: Jeff Moyer
Cc: Ingo Molnar
Cc: Christoph Hellwig
Cc: Al Viro
Cc: Thomas Gleixner
Cc: Matthew Wilcox
Reported-and-tested-by: Jeff Smits
Acked-by: Kees Cook
Acked-by: Jan Kara
Signed-off-by: Dan Williams
Signed-off-by: Greg Kroah-Hartman

---
 drivers/nvdimm/pmem.c |    8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -256,10 +256,16 @@ static long pmem_dax_direct_access(struc
 	return __pmem_direct_access(pmem, pgoff, nr_pages, kaddr, pfn);
 }
 
+/*
+ * Use the 'no check' versions of copy_from_iter_flushcache() and
+ * copy_to_iter_mcsafe() to bypass HARDENED_USERCOPY overhead. Bounds
+ * checking, both file offset and device offset, is handled by
+ * dax_iomap_actor()
+ */
 static size_t pmem_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff,
 		void *addr, size_t bytes, struct iov_iter *i)
 {
-	return copy_from_iter_flushcache(addr, bytes, i);
+	return _copy_from_iter_flushcache(addr, bytes, i);
 }
 
 static const struct dax_operations pmem_dax_ops = {
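
For background, the split between the checked and the 'no check' entry
points looks roughly like the sketch below of the
copy_from_iter_flushcache() wrapper in include/linux/uio.h from this
era (a simplified reconstruction for illustration, not part of the
patch). check_copy_size() is what pulls in the HARDENED_USERCOPY
check_heap_object() walk, so having pmem_copy_from_iter() call
_copy_from_iter_flushcache() directly sidesteps that walk while keeping
the flush-on-copy semantics unchanged:

static __always_inline __must_check
size_t copy_from_iter_flushcache(void *addr, size_t bytes, struct iov_iter *i)
{
	/* check_copy_size() -> check_object_size() -> check_heap_object() */
	if (unlikely(!check_copy_size(addr, bytes, false)))
		return 0;
	else
		return _copy_from_iter_flushcache(addr, bytes, i);
}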