From: "Zwisler, Ross" <ross.zwisler@intel.com>
To: "jmoyer@redhat.com" <jmoyer@redhat.com>
Cc: "linux-ext4@vger.kernel.org" <linux-ext4@vger.kernel.org>,
"willy@linux.intel.com" <willy@linux.intel.com>,
"linux-nvdimm@ml01.01.org" <linux-nvdimm@ml01.01.org>,
"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>
Subject: Re: dax pmd fault handler never returns to userspace
Date: Wed, 18 Nov 2015 15:56:46 +0000
Message-ID: <1447862206.12885.0.camel@intel.com>
In-Reply-To: <x49wptfnw2l.fsf@segfault.boston.devel.redhat.com>
On Wed, 2015-11-18 at 10:53 -0500, Jeff Moyer wrote:
> Hi,
>
> When running the nvml library's test suite against an ext4 file system
> mounted with -o dax, I ran into an issue where many of the tests would
> simply time out. The problem appears to be that the pmd fault handler
> never returns to userspace (the application is doing a memcpy of 512
> bytes into pmem). Here's the 'perf report -g' output:
>
> - 88.30%  0.01%  blk_non_zero.st  libc-2.17.so  [.] __memmove_ssse3_back
>    - 88.30% __memmove_ssse3_back
>       - 66.63% page_fault
>          - 66.47% do_page_fault
>             - 66.16% __do_page_fault
>                - 63.38% handle_mm_fault
>                   - 61.15% ext4_dax_pmd_fault
>                      - 45.04% __dax_pmd_fault
>                         - 37.05% vmf_insert_pfn_pmd
>                            - track_pfn_insert
>                               - 35.58% lookup_memtype
>                                  - 33.80% pat_pagerange_is_ram
>                                     - 33.40% walk_system_ram_range
>                                        - 31.63% find_next_iomem_res
>                                             21.78% strcmp
>
> And here's 'perf top':
>
> Samples: 2M of event 'cycles:pp', Event count (approx.): 56080150519
> Overhead  Shared Object  Symbol
>   22.55%  [kernel]       [k] strcmp
>   20.33%  [unknown]      [k] 0x00007f9f549ef3f3
>   10.01%  [kernel]       [k] native_irq_return_iret
>    9.54%  [kernel]       [k] find_next_iomem_res
>    3.00%  [jbd2]         [k] start_this_handle
>
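
The call chains above point at the PAT memtype lookup: in kernels of this
vintage, walk_system_ram_range() scans the flat iomem resource list and
find_next_iomem_res() matches each entry's name against "System RAM" with
strcmp(), and vmf_insert_pfn_pmd() triggers that scan on every PMD insert.
A simplified sketch of that kind of walk (illustration only, not the actual
kernel code; the resource layout below is invented) shows where the strcmp()
time comes from:

#include <stdbool.h>
#include <string.h>

/*
 * Illustration only: a linear walk over an iomem-style resource list,
 * comparing each entry's name to "System RAM".  When a fault handler
 * has to do this for every PMD insert, the per-entry strcmp() is what
 * floats to the top of the profile.
 */
struct res_entry {
	unsigned long start, end;
	const char *name;
	struct res_entry *next;
};

static bool range_is_ram(struct res_entry *list,
			 unsigned long start, unsigned long end)
{
	struct res_entry *p;

	for (p = list; p; p = p->next) {
		if (p->end < start || p->start > end)
			continue;			/* no overlap */
		if (!strcmp(p->name, "System RAM"))	/* the hot strcmp */
			return true;
	}
	return false;
}

int main(void)
{
	/* Invented resource layout, just to exercise the walk. */
	struct res_entry ram = { 0x100000, 0x7fffffff, "System RAM", NULL };
	struct res_entry rom = { 0xc0000, 0xfffff, "Video ROM", &ram };

	return range_is_ram(&rom, 0x200000, 0x3fffff) ? 0 : 1;
}
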
> This is easily reproduced by doing the following:
>
> git clone https://github.com/pmem/nvml.git
> cd nvml
> make
> make test
> cd src/test/blk_non_zero
> ./blk_non_zero.static-nondebug 512 /path/to/ext4/dax/fs/testfile1 c 1073741824 w:0
>
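
At the syscall level, the operation that hangs boils down to something like
the sketch below (a minimal illustration, not the nvml test itself; the file
path is the one from the repro above, and the 2 MiB size assumes the
filesystem hands back a PMD-aligned extent): mmap a file on the DAX-mounted
ext4 filesystem and memcpy 512 bytes into it, so the write fault arrives
through the pmd fault handler.

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* One PMD worth of file: 2 MiB, assuming the filesystem gives a
	 * suitably aligned extent so the fault is handled as a PMD fault. */
	const size_t len = 2UL << 20;
	char buf[512] = { 0 };
	char *dst;
	int fd;

	fd = open("/path/to/ext4/dax/fs/testfile1", O_RDWR);
	if (fd < 0)
		return 1;
	if (ftruncate(fd, len) < 0)
		return 1;

	dst = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (dst == MAP_FAILED)
		return 1;

	memcpy(dst, buf, sizeof(buf));	/* the 512-byte store that faults */

	munmap(dst, len);
	close(fd);
	return 0;
}
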
> I also ran the test suite against xfs, and the problem is not present
> there. However, I did not verify that the xfs tests were getting pmd
> faults.
>
> I'm happy to help diagnose the problem further, if necessary.
Thanks for the report, I'll take a look.
- Ross