From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [REGRESSION] 998ef75ddb and aio-dio-invalidate-failure w/ data=journal
To: Linus Torvalds, "Theodore Ts'o", Andrew Morton,
	"linux-ext4@vger.kernel.org", Linux Kernel Mailing List, "H. Peter Anvin"
References: <20151005152236.GA8140@thunk.org>
From: Dave Hansen
Message-ID: <5612A3F3.2040609@linux.intel.com>
Date: Mon, 5 Oct 2015 09:23:15 -0700

On 10/05/2015 08:58 AM, Linus Torvalds wrote:
...
> Dave, mind sharing the micro-benchmark or perhaps even just a kernel
> profile of it? How is that "iov_iter_fault_in_readable()" so
> noticeable? It really shouldn't be a big deal.

The micro was just plugging this test:

	https://www.sr71.net/~dave/intel/write1byte.c

into will-it-scale:

	https://github.com/antonblanchard/will-it-scale

iov_iter_fault_in_readable() shows up as the third-most expensive
kernel function in the profile:

> 7.45%  write1byte_proc  [kernel.kallsyms]  [k] copy_user_enhanced_fast_string
> 6.51%  write1byte_proc  [kernel.kallsyms]  [k] unlock_page
> 6.04%  write1byte_proc  [kernel.kallsyms]  [k] iov_iter_fault_in_readable
> 5.23%  write1byte_proc  libc-2.20.so       [.] __GI___libc_write
> 4.86%  write1byte_proc  [kernel.kallsyms]  [k] entry_SYSCALL_64
> 4.48%  write1byte_proc  [kernel.kallsyms]  [k] iov_iter_copy_from_user_atomic
> 3.94%  write1byte_proc  [kernel.kallsyms]  [k] generic_perform_write
> 3.74%  write1byte_proc  [kernel.kallsyms]  [k] mutex_lock
> 3.59%  write1byte_proc  [kernel.kallsyms]  [k] entry_SYSCALL_64_after_swapgs
> 3.55%  write1byte_proc  [kernel.kallsyms]  [k] find_get_entry
> 3.53%  write1byte_proc  [kernel.kallsyms]  [k] vfs_write
> 3.17%  write1byte_proc  [kernel.kallsyms]  [k] find_lock_entry
> 3.17%  write1byte_proc  [kernel.kallsyms]  [k] put_page

The disassembly points at the stac/clac pair as the culprits inside the
function (copy/paste from 'perf top' disassembly here):

...
>        │      stac
> 24.57  │      mov    (%rcx),%sil
> 15.70  │      clac
> 28.77  │      test   %eax,%eax
>  2.15  │      mov    %sil,-0x1(%rbp)
>  8.93  │    ↓ jne    66
>  2.31  │      movslq %edx,%rdx

One thing I've been noticing on Skylake is that barriers (implicit and
explicit) are showing up more in profiles.  What we're seeing here
probably isn't the stac/clac instructions themselves, but the cost of
waiting for other outstanding operations to finish before execution can
proceed past them.
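
P.S. For anyone who doesn't want to chase the links: the real test is
the write1byte.c above, run under the will-it-scale harness.  The
standalone sketch below just illustrates the same idea (a tight 1-byte
buffered-overwrite loop); the filename, iteration count and error
handling here are mine, not copied from the actual test.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf = 'x';
	long i;
	int fd = open("write1byte.tmp", O_CREAT | O_RDWR, 0600);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/*
	 * Overwrite the same byte at offset 0 over and over.  Each
	 * pwrite() is a buffered write, so it goes through
	 * generic_perform_write(), which calls
	 * iov_iter_fault_in_readable() on the 1-byte user buffer
	 * before copying it -- that's the stac/clac-bracketed load in
	 * the profile above.
	 */
	for (i = 0; i < 100 * 1000 * 1000; i++) {
		if (pwrite(fd, &buf, 1, 0) != 1) {
			perror("pwrite");
			return 1;
		}
	}

	close(fd);
	return 0;
}

Running something like that under 'perf top' on a kernel from that era
should show roughly the sort of profile pasted above.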