From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Sun, 10 Jan 2021 16:20:08 +0000
From: Al Viro
To: Mikulas Patocka
Cc: Andrew Morton, Matthew Wilcox, Jan Kara, Steven Whitehouse,
	Eric Sandeen, Dave Chinner, Theodore Ts'o, Wang Jianchao,
	"Tadakamadla, Rajesh", linux-kernel@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, linux-nvdimm@lists.01.org
Subject: Re: [RFC v2] nvfs: a filesystem for persistent memory
Message-ID: <20210110162008.GV3579531@ZenIV.linux.org.uk>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
List-Id: "Linux-nvdimm developer list."

On Thu, Jan 07, 2021 at 08:15:41AM -0500, Mikulas Patocka wrote:
> Hi
>
> I announce a new version of NVFS - a filesystem for persistent memory.
> 	http://people.redhat.com/~mpatocka/nvfs/

Utilities, AFAICS

> 	git://leontynka.twibright.com/nvfs.git

Seems to hang on git pull at the moment...  Do you have it anywhere else?

> I found out that on NVFS, reading a file with the read method has 10%
> better performance than the read_iter method. The benchmark just reads the
> same 4k page over and over again - and the cost of creating and parsing
> the kiocb and iov_iter structures is just that high.

Apples and oranges...
What happens if you take

ssize_t read_iter_locked(struct file *file, struct iov_iter *to, loff_t *ppos)
{
	struct inode *inode = file_inode(file);
	struct nvfs_memory_inode *nmi = i_to_nmi(inode);
	struct nvfs_superblock *nvs = inode->i_sb->s_fs_info;
	ssize_t total = 0;
	loff_t pos = *ppos;
	int r;
	int shift = nvs->log2_page_size;
	size_t i_size;

	i_size = inode->i_size;
	if (pos >= i_size)
		return 0;
	iov_iter_truncate(to, i_size - pos);

	while (iov_iter_count(to)) {
		void *blk, *ptr;
		size_t page_mask = (1UL << shift) - 1;
		unsigned page_offset = pos & page_mask;
		unsigned prealloc = (iov_iter_count(to) + page_mask) >> shift;
		unsigned size;

		blk = nvfs_bmap(nmi, pos >> shift, &prealloc, NULL, NULL, NULL);
		if (unlikely(IS_ERR(blk))) {
			r = PTR_ERR(blk);
			goto ret_r;
		}
		size = ((size_t)prealloc << shift) - page_offset;
		ptr = blk + page_offset;
		if (unlikely(!blk)) {
			size = min(size, (unsigned)PAGE_SIZE);
			ptr = empty_zero_page;
		}
		size = copy_to_iter(ptr, size, to);
		if (unlikely(!size)) {
			r = -EFAULT;
			goto ret_r;
		}

		pos += size;
		total += size;
	}
	r = 0;

ret_r:
	*ppos = pos;
	if (file)
		file_accessed(file);
	return total ? total : r;
}

and use that instead of your nvfs_rw_iter_locked() in your ->read_iter()
for the DAX read case?  Then the same with s/copy_to_iter/_copy_to_iter/,
to see how much of that is "hardening" overhead.

Incidentally, what's the point of sharing nvfs_rw_iter() for the read and
write cases?  They have practically no overlap - count the lines common to
the wr and !wr cases.  And if you do the same in nvfs_rw_iter_locked(),
you'll see that the shared parts _there_ are bloody pointless on the read
side.

Not that it had been more useful on the write side, really, but that's
another story (nvfs_write_pages() handling of copyin is... interesting).
Let's figure out what's going on with the read overhead first...
lib/iov_iter.c primitives certainly could use some massaging for better
code generation, but let's find out how much of the PITA is due to those
and how much comes from you fighting the damn thing instead of using it
sanely...
_______________________________________________
Linux-nvdimm mailing list -- linux-nvdimm@lists.01.org
To unsubscribe send an email to linux-nvdimm-leave@lists.01.org