* [PATCH v2 0/5] block, dax: updates for 4.4
@ 2015-10-22 17:10 Dan Williams
2015-10-22 17:10 ` [PATCH v2 1/5] pmem, dax: clean up clear_pmem() Dan Williams
` (4 more replies)
0 siblings, 5 replies; 18+ messages in thread
From: Dan Williams @ 2015-10-22 17:10 UTC (permalink / raw)
To: axboe
Cc: Jens Axboe, Boaz Harrosh, jack, linux-nvdimm, Dave Hansen, david,
linux-kernel, hch, Jeff Moyer, Al Viro, Jan Kara, willy, akpm,
ross.zwisler
Changes since v1: https://lists.01.org/pipermail/linux-nvdimm/2015-October/002538.html
1/ Rename file_bd_inode to bdev_file_inode (Jan Kara)
2/ Clarify sb_start_pagefault() comment (Jan Kara)
3/ Collect Reviewed-by's
---
As requested [1], break out the block-specific updates from the dax-gup
series [2], to merge via the block tree.
1/ Enable dax mappings for raw block devices. This addresses the review
comments (from Ross and Honza) from the RFC [3].
2/ Introduce dax_map_atomic() to fix races between device teardown and
new mapping requests. This depends on commit 2a9067a91825 "block:
generic request_queue reference counting" in the for-4.4/integrity branch
of the block tree.
3/ Clean up clear_pmem() and its usage in dax. This depends on commit
0f90cc6609c7 "mm, dax: fix DAX deadlocks" that was merged into v4.3-rc6.
These patches pass the nvdimm unit tests and have survived a 0day kbuild robot run.
[1]: https://lists.01.org/pipermail/linux-nvdimm/2015-October/002531.html
[2]: https://lists.01.org/pipermail/linux-nvdimm/2015-October/002387.html
[3]: https://lists.01.org/pipermail/linux-nvdimm/2015-October/002512.html
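For readers skimming the series, the core of the dax_map_atomic() change in
patch 3/5 is that every use of a ->direct_access() mapping is now bracketed by
a request_queue reference. A simplified caller-side sketch of the pattern (the
real helpers are defined in patch 3/5; error handling and the pfn plumbing are
trimmed here):

	void __pmem *addr;

	/* takes a request_queue reference, or fails if teardown has begun */
	addr = dax_map_atomic(bdev, sector, size);
	if (IS_ERR(addr))
		return PTR_ERR(addr);

	clear_pmem(addr, size);		/* the device cannot be unbound here */
	wmb_pmem();

	dax_unmap_atomic(bdev, addr);	/* drops the request_queue reference */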
---
Dan Williams (5):
pmem, dax: clean up clear_pmem()
dax: increase granularity of dax_clear_blocks() operations
block, dax: fix lifetime of in-kernel dax mappings with dax_map_atomic()
block: introduce bdev_file_inode()
block: enable dax for raw block devices
arch/x86/include/asm/pmem.h | 7 --
block/blk.h | 2
fs/block_dev.c | 79 ++++++++++++++++-
fs/dax.c | 196 +++++++++++++++++++++++++++----------------
include/linux/blkdev.h | 2
5 files changed, 197 insertions(+), 89 deletions(-)
^ permalink raw reply [flat|nested] 18+ messages in thread* [PATCH v2 1/5] pmem, dax: clean up clear_pmem() 2015-10-22 17:10 [PATCH v2 0/5] block, dax: updates for 4.4 Dan Williams @ 2015-10-22 17:10 ` Dan Williams 2015-10-22 20:48 ` Jeff Moyer 2015-10-27 17:31 ` Ross Zwisler 2015-10-22 17:10 ` [PATCH v2 2/5] dax: increase granularity of dax_clear_blocks() operations Dan Williams ` (3 subsequent siblings) 4 siblings, 2 replies; 18+ messages in thread From: Dan Williams @ 2015-10-22 17:10 UTC (permalink / raw) To: axboe Cc: jack, akpm, linux-nvdimm, Dave Hansen, david, linux-kernel, willy, ross.zwisler, hch Both, __dax_pmd_fault, and clear_pmem() were taking special steps to clear memory a page at a time to take advantage of non-temporal clear_page() implementations. However, x86_64 does not use non-temporal instructions for clear_page(), and arch_clear_pmem() was always incurring the cost of __arch_wb_cache_pmem(). Clean up the assumption that doing clear_pmem() a page at a time is more performant. Cc: Ross Zwisler <ross.zwisler@linux.intel.com> Reported-by: Dave Hansen <dave.hansen@linux.intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> --- arch/x86/include/asm/pmem.h | 7 +------ fs/dax.c | 4 +--- 2 files changed, 2 insertions(+), 9 deletions(-) diff --git a/arch/x86/include/asm/pmem.h b/arch/x86/include/asm/pmem.h index d8ce3ec816ab..1544fabcd7f9 100644 --- a/arch/x86/include/asm/pmem.h +++ b/arch/x86/include/asm/pmem.h @@ -132,12 +132,7 @@ static inline void arch_clear_pmem(void __pmem *addr, size_t size) { void *vaddr = (void __force *)addr; - /* TODO: implement the zeroing via non-temporal writes */ - if (size == PAGE_SIZE && ((unsigned long)vaddr & ~PAGE_MASK) == 0) - clear_page(vaddr); - else - memset(vaddr, 0, size); - + memset(vaddr, 0, size); __arch_wb_cache_pmem(vaddr, size); } diff --git a/fs/dax.c b/fs/dax.c index a86d3cc2b389..5dc33d788d50 100644 --- a/fs/dax.c +++ b/fs/dax.c @@ -623,9 +623,7 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address, goto fallback; if (buffer_unwritten(&bh) || buffer_new(&bh)) { - int i; - for (i = 0; i < PTRS_PER_PMD; i++) - clear_pmem(kaddr + i * PAGE_SIZE, PAGE_SIZE); + clear_pmem(kaddr, HPAGE_SIZE); wmb_pmem(); count_vm_event(PGMAJFAULT); mem_cgroup_count_vm_event(vma->vm_mm, PGMAJFAULT); ^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [PATCH v2 1/5] pmem, dax: clean up clear_pmem() 2015-10-22 17:10 ` [PATCH v2 1/5] pmem, dax: clean up clear_pmem() Dan Williams @ 2015-10-22 20:48 ` Jeff Moyer 2015-10-22 22:29 ` Dan Williams 2015-10-27 17:31 ` Ross Zwisler 1 sibling, 1 reply; 18+ messages in thread From: Jeff Moyer @ 2015-10-22 20:48 UTC (permalink / raw) To: Dan Williams Cc: axboe, jack, linux-nvdimm, Dave Hansen, david, linux-kernel, hch, akpm Dan Williams <dan.j.williams@intel.com> writes: > Both, __dax_pmd_fault, and clear_pmem() were taking special steps to > clear memory a page at a time to take advantage of non-temporal > clear_page() implementations. However, x86_64 does not use > non-temporal instructions for clear_page(), and arch_clear_pmem() was > always incurring the cost of __arch_wb_cache_pmem(). > > Clean up the assumption that doing clear_pmem() a page at a time is more > performant. Wouldn't another solution be to actually use non-temporal stores? Why did you choose to punt? Cheers, Jeff > > Cc: Ross Zwisler <ross.zwisler@linux.intel.com> > Reported-by: Dave Hansen <dave.hansen@linux.intel.com> > Signed-off-by: Dan Williams <dan.j.williams@intel.com> > --- > arch/x86/include/asm/pmem.h | 7 +------ > fs/dax.c | 4 +--- > 2 files changed, 2 insertions(+), 9 deletions(-) > > diff --git a/arch/x86/include/asm/pmem.h b/arch/x86/include/asm/pmem.h > index d8ce3ec816ab..1544fabcd7f9 100644 > --- a/arch/x86/include/asm/pmem.h > +++ b/arch/x86/include/asm/pmem.h > @@ -132,12 +132,7 @@ static inline void arch_clear_pmem(void __pmem *addr, size_t size) > { > void *vaddr = (void __force *)addr; > > - /* TODO: implement the zeroing via non-temporal writes */ > - if (size == PAGE_SIZE && ((unsigned long)vaddr & ~PAGE_MASK) == 0) > - clear_page(vaddr); > - else > - memset(vaddr, 0, size); > - > + memset(vaddr, 0, size); > __arch_wb_cache_pmem(vaddr, size); > } > > diff --git a/fs/dax.c b/fs/dax.c > index a86d3cc2b389..5dc33d788d50 100644 > --- a/fs/dax.c > +++ b/fs/dax.c > @@ -623,9 +623,7 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address, > goto fallback; > > if (buffer_unwritten(&bh) || buffer_new(&bh)) { > - int i; > - for (i = 0; i < PTRS_PER_PMD; i++) > - clear_pmem(kaddr + i * PAGE_SIZE, PAGE_SIZE); > + clear_pmem(kaddr, HPAGE_SIZE); > wmb_pmem(); > count_vm_event(PGMAJFAULT); > mem_cgroup_count_vm_event(vma->vm_mm, PGMAJFAULT); > > _______________________________________________ > Linux-nvdimm mailing list > Linux-nvdimm@lists.01.org > https://lists.01.org/mailman/listinfo/linux-nvdimm ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v2 1/5] pmem, dax: clean up clear_pmem() 2015-10-22 20:48 ` Jeff Moyer @ 2015-10-22 22:29 ` Dan Williams 2015-10-28 21:01 ` Jeff Moyer 0 siblings, 1 reply; 18+ messages in thread From: Dan Williams @ 2015-10-22 22:29 UTC (permalink / raw) To: Jeff Moyer Cc: Jens Axboe, Jan Kara, linux-nvdimm, Dave Hansen, david, linux-kernel@vger.kernel.org, Christoph Hellwig, Andrew Morton On Thu, Oct 22, 2015 at 1:48 PM, Jeff Moyer <jmoyer@redhat.com> wrote: > Dan Williams <dan.j.williams@intel.com> writes: > >> Both, __dax_pmd_fault, and clear_pmem() were taking special steps to >> clear memory a page at a time to take advantage of non-temporal >> clear_page() implementations. However, x86_64 does not use >> non-temporal instructions for clear_page(), and arch_clear_pmem() was >> always incurring the cost of __arch_wb_cache_pmem(). >> >> Clean up the assumption that doing clear_pmem() a page at a time is more >> performant. > > Wouldn't another solution be to actually use non-temporal stores? Sure. > Why did you choose to punt? Just a priority call at this point. Patches welcome of course ;-). ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v2 1/5] pmem, dax: clean up clear_pmem() 2015-10-22 22:29 ` Dan Williams @ 2015-10-28 21:01 ` Jeff Moyer 0 siblings, 0 replies; 18+ messages in thread From: Jeff Moyer @ 2015-10-28 21:01 UTC (permalink / raw) To: Dan Williams Cc: Jens Axboe, Jan Kara, linux-nvdimm, Dave Hansen, david, linux-kernel@vger.kernel.org, Christoph Hellwig, Andrew Morton Dan Williams <dan.j.williams@intel.com> writes: > On Thu, Oct 22, 2015 at 1:48 PM, Jeff Moyer <jmoyer@redhat.com> wrote: >> Dan Williams <dan.j.williams@intel.com> writes: >> >>> Both, __dax_pmd_fault, and clear_pmem() were taking special steps to >>> clear memory a page at a time to take advantage of non-temporal >>> clear_page() implementations. However, x86_64 does not use >>> non-temporal instructions for clear_page(), and arch_clear_pmem() was >>> always incurring the cost of __arch_wb_cache_pmem(). >>> >>> Clean up the assumption that doing clear_pmem() a page at a time is more >>> performant. >> >> Wouldn't another solution be to actually use non-temporal stores? > > Sure. > >> Why did you choose to punt? > > Just a priority call at this point. Patches welcome of course ;-). OK. Patch is harmless. Reviewed-by: Jeff Moyer <jmoyer@redhat.com> ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v2 1/5] pmem, dax: clean up clear_pmem() 2015-10-22 17:10 ` [PATCH v2 1/5] pmem, dax: clean up clear_pmem() Dan Williams 2015-10-22 20:48 ` Jeff Moyer @ 2015-10-27 17:31 ` Ross Zwisler 1 sibling, 0 replies; 18+ messages in thread From: Ross Zwisler @ 2015-10-27 17:31 UTC (permalink / raw) To: Dan Williams Cc: axboe, jack, akpm, linux-nvdimm, Dave Hansen, david, linux-kernel, willy, ross.zwisler, hch On Thu, Oct 22, 2015 at 01:10:21PM -0400, Dan Williams wrote: > Both, __dax_pmd_fault, and clear_pmem() were taking special steps to > clear memory a page at a time to take advantage of non-temporal > clear_page() implementations. However, x86_64 does not use > non-temporal instructions for clear_page(), and arch_clear_pmem() was > always incurring the cost of __arch_wb_cache_pmem(). > > Clean up the assumption that doing clear_pmem() a page at a time is more > performant. > > Cc: Ross Zwisler <ross.zwisler@linux.intel.com> > Reported-by: Dave Hansen <dave.hansen@linux.intel.com> > Signed-off-by: Dan Williams <dan.j.williams@intel.com> Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com> ^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH v2 2/5] dax: increase granularity of dax_clear_blocks() operations 2015-10-22 17:10 [PATCH v2 0/5] block, dax: updates for 4.4 Dan Williams 2015-10-22 17:10 ` [PATCH v2 1/5] pmem, dax: clean up clear_pmem() Dan Williams @ 2015-10-22 17:10 ` Dan Williams 2015-10-22 21:04 ` Jeff Moyer 2015-10-22 17:10 ` [PATCH v2 3/5] block, dax: fix lifetime of in-kernel dax mappings with dax_map_atomic() Dan Williams ` (2 subsequent siblings) 4 siblings, 1 reply; 18+ messages in thread From: Dan Williams @ 2015-10-22 17:10 UTC (permalink / raw) To: axboe Cc: jack, linux-nvdimm, david, linux-kernel, Jan Kara, willy, akpm, ross.zwisler, hch dax_clear_blocks is currently performing a cond_resched() after every PAGE_SIZE memset. We need not check so frequently, for example md-raid only calls cond_resched() at stripe granularity. Also, in preparation for introducing a dax_map_atomic() operation that temporarily pins a dax mapping move the call to cond_resched() to the outer loop. Reviewed-by: Jan Kara <jack@suse.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> --- fs/dax.c | 27 ++++++++++++--------------- 1 file changed, 12 insertions(+), 15 deletions(-) diff --git a/fs/dax.c b/fs/dax.c index 5dc33d788d50..f8e543839e5c 100644 --- a/fs/dax.c +++ b/fs/dax.c @@ -28,6 +28,7 @@ #include <linux/sched.h> #include <linux/uio.h> #include <linux/vmstat.h> +#include <linux/sizes.h> int dax_clear_blocks(struct inode *inode, sector_t block, long size) { @@ -38,24 +39,20 @@ int dax_clear_blocks(struct inode *inode, sector_t block, long size) do { void __pmem *addr; unsigned long pfn; - long count; + long count, sz; - count = bdev_direct_access(bdev, sector, &addr, &pfn, size); + sz = min_t(long, size, SZ_1M); + count = bdev_direct_access(bdev, sector, &addr, &pfn, sz); if (count < 0) return count; - BUG_ON(size < count); - while (count > 0) { - unsigned pgsz = PAGE_SIZE - offset_in_page(addr); - if (pgsz > count) - pgsz = count; - clear_pmem(addr, pgsz); - addr += pgsz; - size -= pgsz; - count -= pgsz; - BUG_ON(pgsz & 511); - sector += pgsz / 512; - cond_resched(); - } + if (count < sz) + sz = count; + clear_pmem(addr, sz); + addr += sz; + size -= sz; + BUG_ON(sz & 511); + sector += sz / 512; + cond_resched(); } while (size); wmb_pmem(); ^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [PATCH v2 2/5] dax: increase granularity of dax_clear_blocks() operations 2015-10-22 17:10 ` [PATCH v2 2/5] dax: increase granularity of dax_clear_blocks() operations Dan Williams @ 2015-10-22 21:04 ` Jeff Moyer 2015-10-22 22:57 ` Williams, Dan J 0 siblings, 1 reply; 18+ messages in thread From: Jeff Moyer @ 2015-10-22 21:04 UTC (permalink / raw) To: Dan Williams Cc: axboe, jack, linux-nvdimm, david, linux-kernel, hch, Jan Kara, akpm Dan Williams <dan.j.williams@intel.com> writes: > dax_clear_blocks is currently performing a cond_resched() after every > PAGE_SIZE memset. We need not check so frequently, for example md-raid > only calls cond_resched() at stripe granularity. Also, in preparation > for introducing a dax_map_atomic() operation that temporarily pins a dax > mapping move the call to cond_resched() to the outer loop. There's nothing wrong with the mechanics here, but why bother? I only see 1 caller in the kernel, and that caller passes in 1<<inode->i_blkbits for the size (so 1 page or less). Did you plan to add other callers? I don't see them in this particular patch set. Again, I'm not taking issue with the patch, I'm just wondering what motivated the change. Thanks! Jeff > > Reviewed-by: Jan Kara <jack@suse.com> > Signed-off-by: Dan Williams <dan.j.williams@intel.com> > --- > fs/dax.c | 27 ++++++++++++--------------- > 1 file changed, 12 insertions(+), 15 deletions(-) > > diff --git a/fs/dax.c b/fs/dax.c > index 5dc33d788d50..f8e543839e5c 100644 > --- a/fs/dax.c > +++ b/fs/dax.c > @@ -28,6 +28,7 @@ > #include <linux/sched.h> > #include <linux/uio.h> > #include <linux/vmstat.h> > +#include <linux/sizes.h> > > int dax_clear_blocks(struct inode *inode, sector_t block, long size) > { > @@ -38,24 +39,20 @@ int dax_clear_blocks(struct inode *inode, sector_t block, long size) > do { > void __pmem *addr; > unsigned long pfn; > - long count; > + long count, sz; > > - count = bdev_direct_access(bdev, sector, &addr, &pfn, size); > + sz = min_t(long, size, SZ_1M); > + count = bdev_direct_access(bdev, sector, &addr, &pfn, sz); > if (count < 0) > return count; > - BUG_ON(size < count); > - while (count > 0) { > - unsigned pgsz = PAGE_SIZE - offset_in_page(addr); > - if (pgsz > count) > - pgsz = count; > - clear_pmem(addr, pgsz); > - addr += pgsz; > - size -= pgsz; > - count -= pgsz; > - BUG_ON(pgsz & 511); > - sector += pgsz / 512; > - cond_resched(); > - } > + if (count < sz) > + sz = count; > + clear_pmem(addr, sz); > + addr += sz; > + size -= sz; > + BUG_ON(sz & 511); > + sector += sz / 512; > + cond_resched(); > } while (size); > > wmb_pmem(); > > _______________________________________________ > Linux-nvdimm mailing list > Linux-nvdimm@lists.01.org > https://lists.01.org/mailman/listinfo/linux-nvdimm ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v2 2/5] dax: increase granularity of dax_clear_blocks() operations 2015-10-22 21:04 ` Jeff Moyer @ 2015-10-22 22:57 ` Williams, Dan J 2015-10-28 21:02 ` Jeff Moyer 0 siblings, 1 reply; 18+ messages in thread From: Williams, Dan J @ 2015-10-22 22:57 UTC (permalink / raw) To: jmoyer@redhat.com Cc: linux-kernel@vger.kernel.org, linux-nvdimm@ml01.01.org, hch@lst.de, akpm@linux-foundation.org, axboe@fb.com, jack@suse.com, david@fromorbit.com, jack@suse.cz On Thu, 2015-10-22 at 17:04 -0400, Jeff Moyer wrote: > Dan Williams <dan.j.williams@intel.com> writes: > > > dax_clear_blocks is currently performing a cond_resched() after every > > PAGE_SIZE memset. We need not check so frequently, for example md-raid > > only calls cond_resched() at stripe granularity. Also, in preparation > > for introducing a dax_map_atomic() operation that temporarily pins a dax > > mapping move the call to cond_resched() to the outer loop. > > There's nothing wrong with the mechanics here, but why bother? I only > see 1 caller in the kernel, and that caller passes in > 1<<inode->i_blkbits for the size (so 1 page or less). Did you plan to > add other callers? I don't see them in this particular patch set. > > Again, I'm not taking issue with the patch, I'm just wondering what > motivated the change. The motivation is the subsequent patch to wrap all touches of pmem within a dax_map_atomic() / dax_unmap_atomic() pairing. If I just do the straightforward conversion of this function to dax_map_atomic() it looks something like this: > diff --git a/fs/dax.c b/fs/dax.c > index 5dc33d788d50..fa2a2a255d3a 100644 > --- a/fs/dax.c > +++ b/fs/dax.c > @@ -40,9 +40,9 @@ int dax_clear_blocks(struct inode *inode, sector_t block, long size) > unsigned long pfn; > long count; > > - count = bdev_direct_access(bdev, sector, &addr, &pfn, size); > - if (count < 0) > - return count; > + addr = __dax_map_atomic(bdev, sector, size, &pfn, &count); > + if (IS_ERR(addr)) > + return PTR_ERR(addr); > BUG_ON(size < count); > while (count > 0) { > unsigned pgsz = PAGE_SIZE - offset_in_page(addr); > @@ -56,6 +56,7 @@ int dax_clear_blocks(struct inode *inode, sector_t block, long size) > sector += pgsz / 512; > cond_resched(); > } > + dax_unmap_atomic(bdev, addr); > } while (size); > > wmb_pmem(); The problem is that intervening call to cond_resched(). I later want to inject an rcu_read_lock()/unlock() pair to allow flushing active dax_map_atomic() usages at driver teardown time [1]. But, I think the patch stands alone as a cleanup outside of that admittedly hidden motivation. [1]: "mm, pmem: devm_memunmap_pages(), truncate and unmap ZONE_DEVICE pages" https://lists.01.org/pipermail/linux-nvdimm/2015-October/002406.html ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v2 2/5] dax: increase granularity of dax_clear_blocks() operations 2015-10-22 22:57 ` Williams, Dan J @ 2015-10-28 21:02 ` Jeff Moyer 0 siblings, 0 replies; 18+ messages in thread From: Jeff Moyer @ 2015-10-28 21:02 UTC (permalink / raw) To: Williams, Dan J Cc: linux-kernel@vger.kernel.org, linux-nvdimm@ml01.01.org, hch@lst.de, akpm@linux-foundation.org, axboe@fb.com, jack@suse.com, david@fromorbit.com, jack@suse.cz "Williams, Dan J" <dan.j.williams@intel.com> writes: > The problem is that intervening call to cond_resched(). I later want to > inject an rcu_read_lock()/unlock() pair to allow flushing active > dax_map_atomic() usages at driver teardown time [1]. But, I think the > patch stands alone as a cleanup outside of that admittedly hidden > motivation. I'm not going to split hairs. Reviewed-by: Jeff Moyer <jmoyer@redhat.com> ^ permalink raw reply [flat|nested] 18+ messages in thread
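For context on the cond_resched() placement discussed above: cond_resched()
may schedule and report a quiescent state, neither of which is allowed inside
an RCU read-side critical section, and a future dax_map_atomic() /
dax_unmap_atomic() pairing would roughly open and close such a section around
the copy loop. A minimal illustration of the pattern the follow-on series has
to avoid (not code from this series):

	rcu_read_lock();	/* conceptually: dax_map_atomic() */
	clear_pmem(addr, sz);
	cond_resched();		/* bug: may schedule inside the read-side section */
	rcu_read_unlock();	/* conceptually: dax_unmap_atomic() */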
* [PATCH v2 3/5] block, dax: fix lifetime of in-kernel dax mappings with dax_map_atomic() 2015-10-22 17:10 [PATCH v2 0/5] block, dax: updates for 4.4 Dan Williams 2015-10-22 17:10 ` [PATCH v2 1/5] pmem, dax: clean up clear_pmem() Dan Williams 2015-10-22 17:10 ` [PATCH v2 2/5] dax: increase granularity of dax_clear_blocks() operations Dan Williams @ 2015-10-22 17:10 ` Dan Williams 2015-10-28 20:15 ` Jeff Moyer 2015-10-22 17:10 ` [PATCH v2 4/5] block: introduce bdev_file_inode() Dan Williams 2015-10-22 17:10 ` [PATCH v2 5/5] block: enable dax for raw block devices Dan Williams 4 siblings, 1 reply; 18+ messages in thread From: Dan Williams @ 2015-10-22 17:10 UTC (permalink / raw) To: axboe Cc: Jens Axboe, Boaz Harrosh, jack, akpm, linux-nvdimm, david, linux-kernel, willy, ross.zwisler, hch The DAX implementation needs to protect new calls to ->direct_access() and usage of its return value against unbind of the underlying block device. Use blk_queue_enter()/blk_queue_exit() to either prevent blk_cleanup_queue() from proceeding, or fail the dax_map_atomic() if the request_queue is being torn down. Cc: Jens Axboe <axboe@kernel.dk> Cc: Christoph Hellwig <hch@lst.de> Cc: Boaz Harrosh <boaz@plexistor.com> Cc: Dave Chinner <david@fromorbit.com> Cc: Ross Zwisler <ross.zwisler@linux.intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> --- block/blk.h | 2 - fs/dax.c | 165 ++++++++++++++++++++++++++++++++---------------- include/linux/blkdev.h | 2 + 3 files changed, 112 insertions(+), 57 deletions(-) diff --git a/block/blk.h b/block/blk.h index 157c93d54dc9..dc7d9411fa45 100644 --- a/block/blk.h +++ b/block/blk.h @@ -72,8 +72,6 @@ void blk_dequeue_request(struct request *rq); void __blk_queue_free_tags(struct request_queue *q); bool __blk_end_bidi_request(struct request *rq, int error, unsigned int nr_bytes, unsigned int bidi_bytes); -int blk_queue_enter(struct request_queue *q, gfp_t gfp); -void blk_queue_exit(struct request_queue *q); void blk_freeze_queue(struct request_queue *q); static inline void blk_queue_enter_live(struct request_queue *q) diff --git a/fs/dax.c b/fs/dax.c index f8e543839e5c..a480729c00ec 100644 --- a/fs/dax.c +++ b/fs/dax.c @@ -30,6 +30,40 @@ #include <linux/vmstat.h> #include <linux/sizes.h> +static void __pmem *__dax_map_atomic(struct block_device *bdev, sector_t sector, + long size, unsigned long *pfn, long *len) +{ + long rc; + void __pmem *addr; + struct request_queue *q = bdev->bd_queue; + + if (blk_queue_enter(q, GFP_NOWAIT) != 0) + return (void __pmem *) ERR_PTR(-EIO); + rc = bdev_direct_access(bdev, sector, &addr, pfn, size); + if (len) + *len = rc; + if (rc < 0) { + blk_queue_exit(q); + return (void __pmem *) ERR_PTR(rc); + } + return addr; +} + +static void __pmem *dax_map_atomic(struct block_device *bdev, sector_t sector, + long size) +{ + unsigned long pfn; + + return __dax_map_atomic(bdev, sector, size, &pfn, NULL); +} + +static void dax_unmap_atomic(struct block_device *bdev, void __pmem *addr) +{ + if (IS_ERR(addr)) + return; + blk_queue_exit(bdev->bd_queue); +} + int dax_clear_blocks(struct inode *inode, sector_t block, long size) { struct block_device *bdev = inode->i_sb->s_bdev; @@ -42,9 +76,9 @@ int dax_clear_blocks(struct inode *inode, sector_t block, long size) long count, sz; sz = min_t(long, size, SZ_1M); - count = bdev_direct_access(bdev, sector, &addr, &pfn, sz); - if (count < 0) - return count; + addr = __dax_map_atomic(bdev, sector, size, &pfn, &count); + if (IS_ERR(addr)) + return PTR_ERR(addr); if (count < sz) sz = count; 
clear_pmem(addr, sz); @@ -52,6 +86,7 @@ int dax_clear_blocks(struct inode *inode, sector_t block, long size) size -= sz; BUG_ON(sz & 511); sector += sz / 512; + dax_unmap_atomic(bdev, addr); cond_resched(); } while (size); @@ -60,14 +95,6 @@ int dax_clear_blocks(struct inode *inode, sector_t block, long size) } EXPORT_SYMBOL_GPL(dax_clear_blocks); -static long dax_get_addr(struct buffer_head *bh, void __pmem **addr, - unsigned blkbits) -{ - unsigned long pfn; - sector_t sector = bh->b_blocknr << (blkbits - 9); - return bdev_direct_access(bh->b_bdev, sector, addr, &pfn, bh->b_size); -} - /* the clear_pmem() calls are ordered by a wmb_pmem() in the caller */ static void dax_new_buf(void __pmem *addr, unsigned size, unsigned first, loff_t pos, loff_t end) @@ -97,19 +124,30 @@ static bool buffer_size_valid(struct buffer_head *bh) return bh->b_state != 0; } + +static sector_t to_sector(const struct buffer_head *bh, + const struct inode *inode) +{ + sector_t sector = bh->b_blocknr << (inode->i_blkbits - 9); + + return sector; +} + static ssize_t dax_io(struct inode *inode, struct iov_iter *iter, loff_t start, loff_t end, get_block_t get_block, struct buffer_head *bh) { - ssize_t retval = 0; - loff_t pos = start; - loff_t max = start; - loff_t bh_max = start; - void __pmem *addr; + loff_t pos = start, max = start, bh_max = start; + struct block_device *bdev = NULL; + int rw = iov_iter_rw(iter), rc; + long map_len = 0; + unsigned long pfn; + void __pmem *addr = NULL; + void __pmem *kmap = (void __pmem *) ERR_PTR(-EIO); bool hole = false; bool need_wmb = false; - if (iov_iter_rw(iter) != WRITE) + if (rw == READ) end = min(end, i_size_read(inode)); while (pos < end) { @@ -124,13 +162,13 @@ static ssize_t dax_io(struct inode *inode, struct iov_iter *iter, if (pos == bh_max) { bh->b_size = PAGE_ALIGN(end - pos); bh->b_state = 0; - retval = get_block(inode, block, bh, - iov_iter_rw(iter) == WRITE); - if (retval) + rc = get_block(inode, block, bh, rw == WRITE); + if (rc) break; if (!buffer_size_valid(bh)) bh->b_size = 1 << blkbits; bh_max = pos - first + bh->b_size; + bdev = bh->b_bdev; } else { unsigned done = bh->b_size - (bh_max - (pos - first)); @@ -138,21 +176,27 @@ static ssize_t dax_io(struct inode *inode, struct iov_iter *iter, bh->b_size -= done; } - hole = iov_iter_rw(iter) != WRITE && !buffer_written(bh); + hole = rw == READ && !buffer_written(bh); if (hole) { addr = NULL; size = bh->b_size - first; } else { - retval = dax_get_addr(bh, &addr, blkbits); - if (retval < 0) + dax_unmap_atomic(bdev, kmap); + kmap = __dax_map_atomic(bdev, + to_sector(bh, inode), + bh->b_size, &pfn, &map_len); + if (IS_ERR(kmap)) { + rc = PTR_ERR(kmap); break; + } + addr = kmap; if (buffer_unwritten(bh) || buffer_new(bh)) { - dax_new_buf(addr, retval, first, pos, - end); + dax_new_buf(addr, map_len, first, pos, + end); need_wmb = true; } addr += first; - size = retval - first; + size = map_len - first; } max = min(pos + size, end); } @@ -175,8 +219,9 @@ static ssize_t dax_io(struct inode *inode, struct iov_iter *iter, if (need_wmb) wmb_pmem(); + dax_unmap_atomic(bdev, kmap); - return (pos == start) ? retval : pos - start; + return (pos == start) ? 
rc : pos - start; } /** @@ -265,28 +310,31 @@ static int dax_load_hole(struct address_space *mapping, struct page *page, return VM_FAULT_LOCKED; } -static int copy_user_bh(struct page *to, struct buffer_head *bh, - unsigned blkbits, unsigned long vaddr) +static int copy_user_bh(struct page *to, struct inode *inode, + struct buffer_head *bh, unsigned long vaddr) { + struct block_device *bdev = bh->b_bdev; void __pmem *vfrom; void *vto; - if (dax_get_addr(bh, &vfrom, blkbits) < 0) - return -EIO; + vfrom = dax_map_atomic(bdev, to_sector(bh, inode), bh->b_size); + if (IS_ERR(vfrom)) + return PTR_ERR(vfrom); vto = kmap_atomic(to); copy_user_page(vto, (void __force *)vfrom, vaddr, to); kunmap_atomic(vto); + dax_unmap_atomic(bdev, vfrom); return 0; } static int dax_insert_mapping(struct inode *inode, struct buffer_head *bh, struct vm_area_struct *vma, struct vm_fault *vmf) { - struct address_space *mapping = inode->i_mapping; - sector_t sector = bh->b_blocknr << (inode->i_blkbits - 9); unsigned long vaddr = (unsigned long)vmf->virtual_address; - void __pmem *addr; + struct address_space *mapping = inode->i_mapping; + struct block_device *bdev = bh->b_bdev; unsigned long pfn; + void __pmem *addr; pgoff_t size; int error; @@ -305,11 +353,10 @@ static int dax_insert_mapping(struct inode *inode, struct buffer_head *bh, goto out; } - error = bdev_direct_access(bh->b_bdev, sector, &addr, &pfn, bh->b_size); - if (error < 0) - goto out; - if (error < PAGE_SIZE) { - error = -EIO; + addr = __dax_map_atomic(bdev, to_sector(bh, inode), bh->b_size, + &pfn, NULL); + if (IS_ERR(addr)) { + error = PTR_ERR(addr); goto out; } @@ -317,6 +364,7 @@ static int dax_insert_mapping(struct inode *inode, struct buffer_head *bh, clear_pmem(addr, PAGE_SIZE); wmb_pmem(); } + dax_unmap_atomic(bdev, addr); error = vm_insert_mixed(vma, vaddr, pfn); @@ -412,7 +460,7 @@ int __dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf, if (vmf->cow_page) { struct page *new_page = vmf->cow_page; if (buffer_written(&bh)) - error = copy_user_bh(new_page, &bh, blkbits, vaddr); + error = copy_user_bh(new_page, inode, &bh, vaddr); else clear_user_highpage(new_page, vaddr); if (error) @@ -524,11 +572,9 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address, unsigned blkbits = inode->i_blkbits; unsigned long pmd_addr = address & PMD_MASK; bool write = flags & FAULT_FLAG_WRITE; - long length; - void __pmem *kaddr; + struct block_device *bdev; pgoff_t size, pgoff; - sector_t block, sector; - unsigned long pfn; + sector_t block; int result = 0; /* Fall back to PTEs if we're going to COW */ @@ -552,9 +598,9 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address, block = (sector_t)pgoff << (PAGE_SHIFT - blkbits); bh.b_size = PMD_SIZE; - length = get_block(inode, block, &bh, write); - if (length) + if (get_block(inode, block, &bh, write) != 0) return VM_FAULT_SIGBUS; + bdev = bh.b_bdev; i_mmap_lock_read(mapping); /* @@ -609,15 +655,20 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address, result = VM_FAULT_NOPAGE; spin_unlock(ptl); } else { - sector = bh.b_blocknr << (blkbits - 9); - length = bdev_direct_access(bh.b_bdev, sector, &kaddr, &pfn, - bh.b_size); - if (length < 0) { + long length; + unsigned long pfn; + void __pmem *kaddr = __dax_map_atomic(bdev, + to_sector(&bh, inode), HPAGE_SIZE, &pfn, + &length); + + if (IS_ERR(kaddr)) { result = VM_FAULT_SIGBUS; goto out; } - if ((length < PMD_SIZE) || (pfn & PG_PMD_COLOUR)) + if ((length < PMD_SIZE) || (pfn & PG_PMD_COLOUR)) { + 
dax_unmap_atomic(bdev, kaddr); goto fallback; + } if (buffer_unwritten(&bh) || buffer_new(&bh)) { clear_pmem(kaddr, HPAGE_SIZE); @@ -626,6 +677,7 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address, mem_cgroup_count_vm_event(vma->vm_mm, PGMAJFAULT); result |= VM_FAULT_MAJOR; } + dax_unmap_atomic(bdev, kaddr); result |= vmf_insert_pfn_pmd(vma, address, pmd, pfn, write); } @@ -729,12 +781,15 @@ int dax_zero_page_range(struct inode *inode, loff_t from, unsigned length, if (err < 0) return err; if (buffer_written(&bh)) { - void __pmem *addr; - err = dax_get_addr(&bh, &addr, inode->i_blkbits); - if (err < 0) - return err; + struct block_device *bdev = bh.b_bdev; + void __pmem *addr = dax_map_atomic(bdev, to_sector(&bh, inode), + PAGE_CACHE_SIZE); + + if (IS_ERR(addr)) + return PTR_ERR(addr); clear_pmem(addr + offset, length); wmb_pmem(); + dax_unmap_atomic(bdev, addr); } return 0; diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index cf57884db4b7..59a770dad804 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -792,6 +792,8 @@ extern int scsi_cmd_ioctl(struct request_queue *, struct gendisk *, fmode_t, extern int sg_scsi_ioctl(struct request_queue *, struct gendisk *, fmode_t, struct scsi_ioctl_command __user *); +extern int blk_queue_enter(struct request_queue *q, gfp_t gfp); +extern void blk_queue_exit(struct request_queue *q); extern void blk_start_queue(struct request_queue *q); extern void blk_stop_queue(struct request_queue *q); extern void blk_sync_queue(struct request_queue *q); ^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [PATCH v2 3/5] block, dax: fix lifetime of in-kernel dax mappings with dax_map_atomic() 2015-10-22 17:10 ` [PATCH v2 3/5] block, dax: fix lifetime of in-kernel dax mappings with dax_map_atomic() Dan Williams @ 2015-10-28 20:15 ` Jeff Moyer 0 siblings, 0 replies; 18+ messages in thread From: Jeff Moyer @ 2015-10-28 20:15 UTC (permalink / raw) To: Dan Williams Cc: axboe, Jens Axboe, jack, linux-nvdimm, david, linux-kernel, hch, akpm Dan Williams <dan.j.williams@intel.com> writes: > The DAX implementation needs to protect new calls to ->direct_access() > and usage of its return value against unbind of the underlying block > device. Use blk_queue_enter()/blk_queue_exit() to either prevent > blk_cleanup_queue() from proceeding, or fail the dax_map_atomic() if the > request_queue is being torn down. I don't see any problems with your changes here. Reviewed-by: Jeff Moyer <jmoyer@redhat.com> > Cc: Jens Axboe <axboe@kernel.dk> > Cc: Christoph Hellwig <hch@lst.de> > Cc: Boaz Harrosh <boaz@plexistor.com> > Cc: Dave Chinner <david@fromorbit.com> > Cc: Ross Zwisler <ross.zwisler@linux.intel.com> > Signed-off-by: Dan Williams <dan.j.williams@intel.com> > --- > block/blk.h | 2 - > fs/dax.c | 165 ++++++++++++++++++++++++++++++++---------------- > include/linux/blkdev.h | 2 + > 3 files changed, 112 insertions(+), 57 deletions(-) > > diff --git a/block/blk.h b/block/blk.h > index 157c93d54dc9..dc7d9411fa45 100644 > --- a/block/blk.h > +++ b/block/blk.h > @@ -72,8 +72,6 @@ void blk_dequeue_request(struct request *rq); > void __blk_queue_free_tags(struct request_queue *q); > bool __blk_end_bidi_request(struct request *rq, int error, > unsigned int nr_bytes, unsigned int bidi_bytes); > -int blk_queue_enter(struct request_queue *q, gfp_t gfp); > -void blk_queue_exit(struct request_queue *q); > void blk_freeze_queue(struct request_queue *q); > > static inline void blk_queue_enter_live(struct request_queue *q) > diff --git a/fs/dax.c b/fs/dax.c > index f8e543839e5c..a480729c00ec 100644 > --- a/fs/dax.c > +++ b/fs/dax.c > @@ -30,6 +30,40 @@ > #include <linux/vmstat.h> > #include <linux/sizes.h> > > +static void __pmem *__dax_map_atomic(struct block_device *bdev, sector_t sector, > + long size, unsigned long *pfn, long *len) > +{ > + long rc; > + void __pmem *addr; > + struct request_queue *q = bdev->bd_queue; > + > + if (blk_queue_enter(q, GFP_NOWAIT) != 0) > + return (void __pmem *) ERR_PTR(-EIO); > + rc = bdev_direct_access(bdev, sector, &addr, pfn, size); > + if (len) > + *len = rc; > + if (rc < 0) { > + blk_queue_exit(q); > + return (void __pmem *) ERR_PTR(rc); > + } > + return addr; > +} > + > +static void __pmem *dax_map_atomic(struct block_device *bdev, sector_t sector, > + long size) > +{ > + unsigned long pfn; > + > + return __dax_map_atomic(bdev, sector, size, &pfn, NULL); > +} > + > +static void dax_unmap_atomic(struct block_device *bdev, void __pmem *addr) > +{ > + if (IS_ERR(addr)) > + return; > + blk_queue_exit(bdev->bd_queue); > +} > + > int dax_clear_blocks(struct inode *inode, sector_t block, long size) > { > struct block_device *bdev = inode->i_sb->s_bdev; > @@ -42,9 +76,9 @@ int dax_clear_blocks(struct inode *inode, sector_t block, long size) > long count, sz; > > sz = min_t(long, size, SZ_1M); > - count = bdev_direct_access(bdev, sector, &addr, &pfn, sz); > - if (count < 0) > - return count; > + addr = __dax_map_atomic(bdev, sector, size, &pfn, &count); > + if (IS_ERR(addr)) > + return PTR_ERR(addr); > if (count < sz) > sz = count; > clear_pmem(addr, sz); > @@ -52,6 +86,7 @@ int 
dax_clear_blocks(struct inode *inode, sector_t block, long size) > size -= sz; > BUG_ON(sz & 511); > sector += sz / 512; > + dax_unmap_atomic(bdev, addr); > cond_resched(); > } while (size); > > @@ -60,14 +95,6 @@ int dax_clear_blocks(struct inode *inode, sector_t block, long size) > } > EXPORT_SYMBOL_GPL(dax_clear_blocks); > > -static long dax_get_addr(struct buffer_head *bh, void __pmem **addr, > - unsigned blkbits) > -{ > - unsigned long pfn; > - sector_t sector = bh->b_blocknr << (blkbits - 9); > - return bdev_direct_access(bh->b_bdev, sector, addr, &pfn, bh->b_size); > -} > - > /* the clear_pmem() calls are ordered by a wmb_pmem() in the caller */ > static void dax_new_buf(void __pmem *addr, unsigned size, unsigned first, > loff_t pos, loff_t end) > @@ -97,19 +124,30 @@ static bool buffer_size_valid(struct buffer_head *bh) > return bh->b_state != 0; > } > > + > +static sector_t to_sector(const struct buffer_head *bh, > + const struct inode *inode) > +{ > + sector_t sector = bh->b_blocknr << (inode->i_blkbits - 9); > + > + return sector; > +} > + > static ssize_t dax_io(struct inode *inode, struct iov_iter *iter, > loff_t start, loff_t end, get_block_t get_block, > struct buffer_head *bh) > { > - ssize_t retval = 0; > - loff_t pos = start; > - loff_t max = start; > - loff_t bh_max = start; > - void __pmem *addr; > + loff_t pos = start, max = start, bh_max = start; > + struct block_device *bdev = NULL; > + int rw = iov_iter_rw(iter), rc; > + long map_len = 0; > + unsigned long pfn; > + void __pmem *addr = NULL; > + void __pmem *kmap = (void __pmem *) ERR_PTR(-EIO); > bool hole = false; > bool need_wmb = false; > > - if (iov_iter_rw(iter) != WRITE) > + if (rw == READ) > end = min(end, i_size_read(inode)); > > while (pos < end) { > @@ -124,13 +162,13 @@ static ssize_t dax_io(struct inode *inode, struct iov_iter *iter, > if (pos == bh_max) { > bh->b_size = PAGE_ALIGN(end - pos); > bh->b_state = 0; > - retval = get_block(inode, block, bh, > - iov_iter_rw(iter) == WRITE); > - if (retval) > + rc = get_block(inode, block, bh, rw == WRITE); > + if (rc) > break; > if (!buffer_size_valid(bh)) > bh->b_size = 1 << blkbits; > bh_max = pos - first + bh->b_size; > + bdev = bh->b_bdev; > } else { > unsigned done = bh->b_size - > (bh_max - (pos - first)); > @@ -138,21 +176,27 @@ static ssize_t dax_io(struct inode *inode, struct iov_iter *iter, > bh->b_size -= done; > } > > - hole = iov_iter_rw(iter) != WRITE && !buffer_written(bh); > + hole = rw == READ && !buffer_written(bh); > if (hole) { > addr = NULL; > size = bh->b_size - first; > } else { > - retval = dax_get_addr(bh, &addr, blkbits); > - if (retval < 0) > + dax_unmap_atomic(bdev, kmap); > + kmap = __dax_map_atomic(bdev, > + to_sector(bh, inode), > + bh->b_size, &pfn, &map_len); > + if (IS_ERR(kmap)) { > + rc = PTR_ERR(kmap); > break; > + } > + addr = kmap; > if (buffer_unwritten(bh) || buffer_new(bh)) { > - dax_new_buf(addr, retval, first, pos, > - end); > + dax_new_buf(addr, map_len, first, pos, > + end); > need_wmb = true; > } > addr += first; > - size = retval - first; > + size = map_len - first; > } > max = min(pos + size, end); > } > @@ -175,8 +219,9 @@ static ssize_t dax_io(struct inode *inode, struct iov_iter *iter, > > if (need_wmb) > wmb_pmem(); > + dax_unmap_atomic(bdev, kmap); > > - return (pos == start) ? retval : pos - start; > + return (pos == start) ? 
rc : pos - start; > } > > /** > @@ -265,28 +310,31 @@ static int dax_load_hole(struct address_space *mapping, struct page *page, > return VM_FAULT_LOCKED; > } > > -static int copy_user_bh(struct page *to, struct buffer_head *bh, > - unsigned blkbits, unsigned long vaddr) > +static int copy_user_bh(struct page *to, struct inode *inode, > + struct buffer_head *bh, unsigned long vaddr) > { > + struct block_device *bdev = bh->b_bdev; > void __pmem *vfrom; > void *vto; > > - if (dax_get_addr(bh, &vfrom, blkbits) < 0) > - return -EIO; > + vfrom = dax_map_atomic(bdev, to_sector(bh, inode), bh->b_size); > + if (IS_ERR(vfrom)) > + return PTR_ERR(vfrom); > vto = kmap_atomic(to); > copy_user_page(vto, (void __force *)vfrom, vaddr, to); > kunmap_atomic(vto); > + dax_unmap_atomic(bdev, vfrom); > return 0; > } > > static int dax_insert_mapping(struct inode *inode, struct buffer_head *bh, > struct vm_area_struct *vma, struct vm_fault *vmf) > { > - struct address_space *mapping = inode->i_mapping; > - sector_t sector = bh->b_blocknr << (inode->i_blkbits - 9); > unsigned long vaddr = (unsigned long)vmf->virtual_address; > - void __pmem *addr; > + struct address_space *mapping = inode->i_mapping; > + struct block_device *bdev = bh->b_bdev; > unsigned long pfn; > + void __pmem *addr; > pgoff_t size; > int error; > > @@ -305,11 +353,10 @@ static int dax_insert_mapping(struct inode *inode, struct buffer_head *bh, > goto out; > } > > - error = bdev_direct_access(bh->b_bdev, sector, &addr, &pfn, bh->b_size); > - if (error < 0) > - goto out; > - if (error < PAGE_SIZE) { > - error = -EIO; > + addr = __dax_map_atomic(bdev, to_sector(bh, inode), bh->b_size, > + &pfn, NULL); > + if (IS_ERR(addr)) { > + error = PTR_ERR(addr); > goto out; > } > > @@ -317,6 +364,7 @@ static int dax_insert_mapping(struct inode *inode, struct buffer_head *bh, > clear_pmem(addr, PAGE_SIZE); > wmb_pmem(); > } > + dax_unmap_atomic(bdev, addr); > > error = vm_insert_mixed(vma, vaddr, pfn); > > @@ -412,7 +460,7 @@ int __dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf, > if (vmf->cow_page) { > struct page *new_page = vmf->cow_page; > if (buffer_written(&bh)) > - error = copy_user_bh(new_page, &bh, blkbits, vaddr); > + error = copy_user_bh(new_page, inode, &bh, vaddr); > else > clear_user_highpage(new_page, vaddr); > if (error) > @@ -524,11 +572,9 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address, > unsigned blkbits = inode->i_blkbits; > unsigned long pmd_addr = address & PMD_MASK; > bool write = flags & FAULT_FLAG_WRITE; > - long length; > - void __pmem *kaddr; > + struct block_device *bdev; > pgoff_t size, pgoff; > - sector_t block, sector; > - unsigned long pfn; > + sector_t block; > int result = 0; > > /* Fall back to PTEs if we're going to COW */ > @@ -552,9 +598,9 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address, > block = (sector_t)pgoff << (PAGE_SHIFT - blkbits); > > bh.b_size = PMD_SIZE; > - length = get_block(inode, block, &bh, write); > - if (length) > + if (get_block(inode, block, &bh, write) != 0) > return VM_FAULT_SIGBUS; > + bdev = bh.b_bdev; > i_mmap_lock_read(mapping); > > /* > @@ -609,15 +655,20 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address, > result = VM_FAULT_NOPAGE; > spin_unlock(ptl); > } else { > - sector = bh.b_blocknr << (blkbits - 9); > - length = bdev_direct_access(bh.b_bdev, sector, &kaddr, &pfn, > - bh.b_size); > - if (length < 0) { > + long length; > + unsigned long pfn; > + void __pmem *kaddr = __dax_map_atomic(bdev, > + 
to_sector(&bh, inode), HPAGE_SIZE, &pfn, > + &length); > + > + if (IS_ERR(kaddr)) { > result = VM_FAULT_SIGBUS; > goto out; > } > - if ((length < PMD_SIZE) || (pfn & PG_PMD_COLOUR)) > + if ((length < PMD_SIZE) || (pfn & PG_PMD_COLOUR)) { > + dax_unmap_atomic(bdev, kaddr); > goto fallback; > + } > > if (buffer_unwritten(&bh) || buffer_new(&bh)) { > clear_pmem(kaddr, HPAGE_SIZE); > @@ -626,6 +677,7 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address, > mem_cgroup_count_vm_event(vma->vm_mm, PGMAJFAULT); > result |= VM_FAULT_MAJOR; > } > + dax_unmap_atomic(bdev, kaddr); > > result |= vmf_insert_pfn_pmd(vma, address, pmd, pfn, write); > } > @@ -729,12 +781,15 @@ int dax_zero_page_range(struct inode *inode, loff_t from, unsigned length, > if (err < 0) > return err; > if (buffer_written(&bh)) { > - void __pmem *addr; > - err = dax_get_addr(&bh, &addr, inode->i_blkbits); > - if (err < 0) > - return err; > + struct block_device *bdev = bh.b_bdev; > + void __pmem *addr = dax_map_atomic(bdev, to_sector(&bh, inode), > + PAGE_CACHE_SIZE); > + > + if (IS_ERR(addr)) > + return PTR_ERR(addr); > clear_pmem(addr + offset, length); > wmb_pmem(); > + dax_unmap_atomic(bdev, addr); > } > > return 0; > diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h > index cf57884db4b7..59a770dad804 100644 > --- a/include/linux/blkdev.h > +++ b/include/linux/blkdev.h > @@ -792,6 +792,8 @@ extern int scsi_cmd_ioctl(struct request_queue *, struct gendisk *, fmode_t, > extern int sg_scsi_ioctl(struct request_queue *, struct gendisk *, fmode_t, > struct scsi_ioctl_command __user *); > > +extern int blk_queue_enter(struct request_queue *q, gfp_t gfp); > +extern void blk_queue_exit(struct request_queue *q); > extern void blk_start_queue(struct request_queue *q); > extern void blk_stop_queue(struct request_queue *q); > extern void blk_sync_queue(struct request_queue *q); > > _______________________________________________ > Linux-nvdimm mailing list > Linux-nvdimm@lists.01.org > https://lists.01.org/mailman/listinfo/linux-nvdimm ^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH v2 4/5] block: introduce bdev_file_inode() 2015-10-22 17:10 [PATCH v2 0/5] block, dax: updates for 4.4 Dan Williams ` (2 preceding siblings ...) 2015-10-22 17:10 ` [PATCH v2 3/5] block, dax: fix lifetime of in-kernel dax mappings with dax_map_atomic() Dan Williams @ 2015-10-22 17:10 ` Dan Williams 2015-10-22 20:37 ` Jan Kara 2015-10-28 20:16 ` Jeff Moyer 2015-10-22 17:10 ` [PATCH v2 5/5] block: enable dax for raw block devices Dan Williams 4 siblings, 2 replies; 18+ messages in thread From: Dan Williams @ 2015-10-22 17:10 UTC (permalink / raw) To: axboe Cc: jack, linux-nvdimm, david, linux-kernel, Al Viro, ross.zwisler, willy, akpm, hch Similar to the file_inode() helper, provide a helper to lookup the inode for a raw block device itself. Cc: Al Viro <viro@zeniv.linux.org.uk> Suggested-by: Jan Kara <jack@suse.cz> Signed-off-by: Dan Williams <dan.j.williams@intel.com> --- fs/block_dev.c | 19 ++++++++++++------- 1 file changed, 12 insertions(+), 7 deletions(-) diff --git a/fs/block_dev.c b/fs/block_dev.c index 0a793c7930eb..c1f691859a56 100644 --- a/fs/block_dev.c +++ b/fs/block_dev.c @@ -147,11 +147,16 @@ blkdev_get_block(struct inode *inode, sector_t iblock, return 0; } +static struct inode *bdev_file_inode(struct file *file) +{ + return file->f_mapping->host; +} + static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, loff_t offset) { struct file *file = iocb->ki_filp; - struct inode *inode = file->f_mapping->host; + struct inode *inode = bdev_file_inode(file); if (IS_DAX(inode)) return dax_do_io(iocb, inode, iter, offset, blkdev_get_block, @@ -329,7 +334,7 @@ static int blkdev_write_end(struct file *file, struct address_space *mapping, */ static loff_t block_llseek(struct file *file, loff_t offset, int whence) { - struct inode *bd_inode = file->f_mapping->host; + struct inode *bd_inode = bdev_file_inode(file); loff_t retval; mutex_lock(&bd_inode->i_mutex); @@ -340,7 +345,7 @@ static loff_t block_llseek(struct file *file, loff_t offset, int whence) int blkdev_fsync(struct file *filp, loff_t start, loff_t end, int datasync) { - struct inode *bd_inode = filp->f_mapping->host; + struct inode *bd_inode = bdev_file_inode(filp); struct block_device *bdev = I_BDEV(bd_inode); int error; @@ -1579,14 +1584,14 @@ EXPORT_SYMBOL(blkdev_put); static int blkdev_close(struct inode * inode, struct file * filp) { - struct block_device *bdev = I_BDEV(filp->f_mapping->host); + struct block_device *bdev = I_BDEV(bdev_file_inode(filp)); blkdev_put(bdev, filp->f_mode); return 0; } static long block_ioctl(struct file *file, unsigned cmd, unsigned long arg) { - struct block_device *bdev = I_BDEV(file->f_mapping->host); + struct block_device *bdev = I_BDEV(bdev_file_inode(file)); fmode_t mode = file->f_mode; /* @@ -1611,7 +1616,7 @@ static long block_ioctl(struct file *file, unsigned cmd, unsigned long arg) ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from) { struct file *file = iocb->ki_filp; - struct inode *bd_inode = file->f_mapping->host; + struct inode *bd_inode = bdev_file_inode(file); loff_t size = i_size_read(bd_inode); struct blk_plug plug; ssize_t ret; @@ -1643,7 +1648,7 @@ EXPORT_SYMBOL_GPL(blkdev_write_iter); ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to) { struct file *file = iocb->ki_filp; - struct inode *bd_inode = file->f_mapping->host; + struct inode *bd_inode = bdev_file_inode(file); loff_t size = i_size_read(bd_inode); loff_t pos = iocb->ki_pos; ^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [PATCH v2 4/5] block: introduce bdev_file_inode() 2015-10-22 17:10 ` [PATCH v2 4/5] block: introduce bdev_file_inode() Dan Williams @ 2015-10-22 20:37 ` Jan Kara 2015-10-28 20:16 ` Jeff Moyer 1 sibling, 0 replies; 18+ messages in thread From: Jan Kara @ 2015-10-22 20:37 UTC (permalink / raw) To: Dan Williams Cc: axboe, jack, linux-nvdimm, david, linux-kernel, Al Viro, ross.zwisler, willy, akpm, hch On Thu 22-10-15 13:10:38, Dan Williams wrote: > Similar to the file_inode() helper, provide a helper to lookup the inode for a > raw block device itself. > > Cc: Al Viro <viro@zeniv.linux.org.uk> > Suggested-by: Jan Kara <jack@suse.cz> > Signed-off-by: Dan Williams <dan.j.williams@intel.com> Looks good. You can add: Reviewed-by: Jan Kara <jack@suse.com> Honza > --- > fs/block_dev.c | 19 ++++++++++++------- > 1 file changed, 12 insertions(+), 7 deletions(-) > > diff --git a/fs/block_dev.c b/fs/block_dev.c > index 0a793c7930eb..c1f691859a56 100644 > --- a/fs/block_dev.c > +++ b/fs/block_dev.c > @@ -147,11 +147,16 @@ blkdev_get_block(struct inode *inode, sector_t iblock, > return 0; > } > > +static struct inode *bdev_file_inode(struct file *file) > +{ > + return file->f_mapping->host; > +} > + > static ssize_t > blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, loff_t offset) > { > struct file *file = iocb->ki_filp; > - struct inode *inode = file->f_mapping->host; > + struct inode *inode = bdev_file_inode(file); > > if (IS_DAX(inode)) > return dax_do_io(iocb, inode, iter, offset, blkdev_get_block, > @@ -329,7 +334,7 @@ static int blkdev_write_end(struct file *file, struct address_space *mapping, > */ > static loff_t block_llseek(struct file *file, loff_t offset, int whence) > { > - struct inode *bd_inode = file->f_mapping->host; > + struct inode *bd_inode = bdev_file_inode(file); > loff_t retval; > > mutex_lock(&bd_inode->i_mutex); > @@ -340,7 +345,7 @@ static loff_t block_llseek(struct file *file, loff_t offset, int whence) > > int blkdev_fsync(struct file *filp, loff_t start, loff_t end, int datasync) > { > - struct inode *bd_inode = filp->f_mapping->host; > + struct inode *bd_inode = bdev_file_inode(filp); > struct block_device *bdev = I_BDEV(bd_inode); > int error; > > @@ -1579,14 +1584,14 @@ EXPORT_SYMBOL(blkdev_put); > > static int blkdev_close(struct inode * inode, struct file * filp) > { > - struct block_device *bdev = I_BDEV(filp->f_mapping->host); > + struct block_device *bdev = I_BDEV(bdev_file_inode(filp)); > blkdev_put(bdev, filp->f_mode); > return 0; > } > > static long block_ioctl(struct file *file, unsigned cmd, unsigned long arg) > { > - struct block_device *bdev = I_BDEV(file->f_mapping->host); > + struct block_device *bdev = I_BDEV(bdev_file_inode(file)); > fmode_t mode = file->f_mode; > > /* > @@ -1611,7 +1616,7 @@ static long block_ioctl(struct file *file, unsigned cmd, unsigned long arg) > ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from) > { > struct file *file = iocb->ki_filp; > - struct inode *bd_inode = file->f_mapping->host; > + struct inode *bd_inode = bdev_file_inode(file); > loff_t size = i_size_read(bd_inode); > struct blk_plug plug; > ssize_t ret; > @@ -1643,7 +1648,7 @@ EXPORT_SYMBOL_GPL(blkdev_write_iter); > ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to) > { > struct file *file = iocb->ki_filp; > - struct inode *bd_inode = file->f_mapping->host; > + struct inode *bd_inode = bdev_file_inode(file); > loff_t size = i_size_read(bd_inode); > loff_t pos = iocb->ki_pos; > > -- Jan Kara <jack@suse.com> SUSE Labs, CR ^ 
permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v2 4/5] block: introduce bdev_file_inode() 2015-10-22 17:10 ` [PATCH v2 4/5] block: introduce bdev_file_inode() Dan Williams 2015-10-22 20:37 ` Jan Kara @ 2015-10-28 20:16 ` Jeff Moyer 1 sibling, 0 replies; 18+ messages in thread From: Jeff Moyer @ 2015-10-28 20:16 UTC (permalink / raw) To: Dan Williams Cc: axboe, jack, linux-nvdimm, david, linux-kernel, hch, Al Viro, akpm Dan Williams <dan.j.williams@intel.com> writes: > Similar to the file_inode() helper, provide a helper to lookup the inode for a > raw block device itself. > > Cc: Al Viro <viro@zeniv.linux.org.uk> > Suggested-by: Jan Kara <jack@suse.cz> > Signed-off-by: Dan Williams <dan.j.williams@intel.com> Reviewed-by: Jeff Moyer <jmoyer@redhat.com> > --- > fs/block_dev.c | 19 ++++++++++++------- > 1 file changed, 12 insertions(+), 7 deletions(-) > > diff --git a/fs/block_dev.c b/fs/block_dev.c > index 0a793c7930eb..c1f691859a56 100644 > --- a/fs/block_dev.c > +++ b/fs/block_dev.c > @@ -147,11 +147,16 @@ blkdev_get_block(struct inode *inode, sector_t iblock, > return 0; > } > > +static struct inode *bdev_file_inode(struct file *file) > +{ > + return file->f_mapping->host; > +} > + > static ssize_t > blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, loff_t offset) > { > struct file *file = iocb->ki_filp; > - struct inode *inode = file->f_mapping->host; > + struct inode *inode = bdev_file_inode(file); > > if (IS_DAX(inode)) > return dax_do_io(iocb, inode, iter, offset, blkdev_get_block, > @@ -329,7 +334,7 @@ static int blkdev_write_end(struct file *file, struct address_space *mapping, > */ > static loff_t block_llseek(struct file *file, loff_t offset, int whence) > { > - struct inode *bd_inode = file->f_mapping->host; > + struct inode *bd_inode = bdev_file_inode(file); > loff_t retval; > > mutex_lock(&bd_inode->i_mutex); > @@ -340,7 +345,7 @@ static loff_t block_llseek(struct file *file, loff_t offset, int whence) > > int blkdev_fsync(struct file *filp, loff_t start, loff_t end, int datasync) > { > - struct inode *bd_inode = filp->f_mapping->host; > + struct inode *bd_inode = bdev_file_inode(filp); > struct block_device *bdev = I_BDEV(bd_inode); > int error; > > @@ -1579,14 +1584,14 @@ EXPORT_SYMBOL(blkdev_put); > > static int blkdev_close(struct inode * inode, struct file * filp) > { > - struct block_device *bdev = I_BDEV(filp->f_mapping->host); > + struct block_device *bdev = I_BDEV(bdev_file_inode(filp)); > blkdev_put(bdev, filp->f_mode); > return 0; > } > > static long block_ioctl(struct file *file, unsigned cmd, unsigned long arg) > { > - struct block_device *bdev = I_BDEV(file->f_mapping->host); > + struct block_device *bdev = I_BDEV(bdev_file_inode(file)); > fmode_t mode = file->f_mode; > > /* > @@ -1611,7 +1616,7 @@ static long block_ioctl(struct file *file, unsigned cmd, unsigned long arg) > ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from) > { > struct file *file = iocb->ki_filp; > - struct inode *bd_inode = file->f_mapping->host; > + struct inode *bd_inode = bdev_file_inode(file); > loff_t size = i_size_read(bd_inode); > struct blk_plug plug; > ssize_t ret; > @@ -1643,7 +1648,7 @@ EXPORT_SYMBOL_GPL(blkdev_write_iter); > ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to) > { > struct file *file = iocb->ki_filp; > - struct inode *bd_inode = file->f_mapping->host; > + struct inode *bd_inode = bdev_file_inode(file); > loff_t size = i_size_read(bd_inode); > loff_t pos = iocb->ki_pos; > > > _______________________________________________ > Linux-nvdimm mailing list > 
Linux-nvdimm@lists.01.org > https://lists.01.org/mailman/listinfo/linux-nvdimm ^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH v2 5/5] block: enable dax for raw block devices
  2015-10-22 17:10 [PATCH v2 0/5] block, dax: updates for 4.4 Dan Williams
                   ` (3 preceding siblings ...)
  2015-10-22 17:10 ` [PATCH v2 4/5] block: introduce bdev_file_inode() Dan Williams
@ 2015-10-22 17:10 ` Dan Williams
  2015-10-28 20:50   ` Jeff Moyer
  4 siblings, 1 reply; 18+ messages in thread
From: Dan Williams @ 2015-10-22 17:10 UTC (permalink / raw)
To: axboe
Cc: jack, linux-nvdimm, Jan Kara, david, linux-kernel, Jeff Moyer,
	ross.zwisler, willy, akpm, hch

If an application wants exclusive access to all of the persistent memory
provided by an NVDIMM namespace it can use this raw-block-dax facility
to forgo establishing a filesystem. This capability is targeted
primarily to hypervisors wanting to provision persistent memory for
guests.

Cc: Jeff Moyer <jmoyer@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Reviewed-by: Jan Kara <jack@suse.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 fs/block_dev.c | 60 +++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 59 insertions(+), 1 deletion(-)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index c1f691859a56..210d05103657 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -1687,13 +1687,71 @@ static const struct address_space_operations def_blk_aops = {
 	.is_dirty_writeback = buffer_check_dirty_writeback,
 };

+#ifdef CONFIG_FS_DAX
+/*
+ * In the raw block case we do not need to contend with truncation nor
+ * unwritten file extents. Without those concerns there is no need for
+ * additional locking beyond the mmap_sem context that these routines
+ * are already executing under.
+ *
+ * Note, there is no protection if the block device is dynamically
+ * resized (partition grow/shrink) during a fault. A stable block device
+ * size is already not enforced in the blkdev_direct_IO path.
+ *
+ * For DAX, it is the responsibility of the block device driver to
+ * ensure the whole-disk device size is stable while requests are in
+ * flight.
+ *
+ * Finally, in contrast to the generic_file_mmap() path, there are no
+ * calls to sb_start_pagefault(). That is meant to synchronize write
+ * faults against requests to freeze the contents of the filesystem
+ * hosting vma->vm_file. However, in the case of a block device special
+ * file, it is a 0-sized device node usually hosted on devtmpfs, i.e.
+ * nothing to do with the super_block for bdev_file_inode(vma->vm_file).
+ * We could call get_super() in this path to retrieve the right
+ * super_block, but the generic_file_mmap() path does not do this for
+ * the CONFIG_FS_DAX=n case.
+ */
+static int blkdev_dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
+	return __dax_fault(vma, vmf, blkdev_get_block, NULL);
+}
+
+static int blkdev_dax_pmd_fault(struct vm_area_struct *vma, unsigned long addr,
+		pmd_t *pmd, unsigned int flags)
+{
+	return __dax_pmd_fault(vma, addr, pmd, flags, blkdev_get_block, NULL);
+}
+
+static const struct vm_operations_struct blkdev_dax_vm_ops = {
+	.page_mkwrite	= blkdev_dax_fault,
+	.fault		= blkdev_dax_fault,
+	.pmd_fault	= blkdev_dax_pmd_fault,
+};
+
+static int blkdev_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	struct inode *bd_inode = bdev_file_inode(file);
+
+	if (!IS_DAX(bd_inode))
+		return generic_file_mmap(file, vma);
+
+	file_accessed(file);
+	vma->vm_ops = &blkdev_dax_vm_ops;
+	vma->vm_flags |= VM_MIXEDMAP | VM_HUGEPAGE;
+	return 0;
+}
+#else
+#define blkdev_mmap generic_file_mmap
+#endif
+
 const struct file_operations def_blk_fops = {
 	.open		= blkdev_open,
 	.release	= blkdev_close,
 	.llseek		= block_llseek,
 	.read_iter	= blkdev_read_iter,
 	.write_iter	= blkdev_write_iter,
-	.mmap		= generic_file_mmap,
+	.mmap		= blkdev_mmap,
 	.fsync		= blkdev_fsync,
 	.unlocked_ioctl	= block_ioctl,
 #ifdef CONFIG_COMPAT

^ permalink raw reply related	[flat|nested] 18+ messages in thread
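To make the intended consumer concrete, the following is a hypothetical
userspace sketch of what a hypervisor or management tool could do once this
lands: open the whole-disk device, map it MAP_SHARED, and get loads and stores
that go straight to persistent memory with no filesystem in the picture. The
device path /dev/pmem0 is an assumption, error handling is minimal, and the
flushing needed for durability is only noted, not implemented.

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/fs.h>
#include <unistd.h>

int main(void)
{
	unsigned long long size = 0;
	int fd = open("/dev/pmem0", O_RDWR);	/* assumed device name */

	if (fd < 0 || ioctl(fd, BLKGETSIZE64, &size) < 0)
		return 1;

	/* with blkdev_mmap() above, this maps persistent memory directly */
	char *pmem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (pmem == MAP_FAILED)
		return 1;

	pmem[0] = 0x42;		/* a store that lands in pmem, no page cache copy */

	/* durability still needs cache flushes and/or a sync, omitted here */
	munmap(pmem, size);
	close(fd);
	return 0;
}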
* Re: [PATCH v2 5/5] block: enable dax for raw block devices
  2015-10-22 17:10 ` [PATCH v2 5/5] block: enable dax for raw block devices Dan Williams
@ 2015-10-28 20:50   ` Jeff Moyer
  2015-10-29  7:14     ` Dan Williams
  0 siblings, 1 reply; 18+ messages in thread
From: Jeff Moyer @ 2015-10-28 20:50 UTC (permalink / raw)
To: Dan Williams
Cc: axboe, jack, linux-nvdimm, Jan Kara, david, linux-kernel,
	ross.zwisler, willy, akpm, hch

Dan Williams <dan.j.williams@intel.com> writes:

> If an application wants exclusive access to all of the persistent memory
> provided by an NVDIMM namespace it can use this raw-block-dax facility
> to forgo establishing a filesystem. This capability is targeted
> primarily to hypervisors wanting to provision persistent memory for
> guests.

OK, I'm going to expose my ignorance here. :)  Why does the block device
need a page_mkwrite handler?

-Jeff

> Cc: Jeff Moyer <jmoyer@redhat.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Dave Chinner <david@fromorbit.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
> Reviewed-by: Jan Kara <jack@suse.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> ---
>  fs/block_dev.c | 60 +++++++++++++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 59 insertions(+), 1 deletion(-)
>
> diff --git a/fs/block_dev.c b/fs/block_dev.c
> index c1f691859a56..210d05103657 100644
> --- a/fs/block_dev.c
> +++ b/fs/block_dev.c
> @@ -1687,13 +1687,71 @@ static const struct address_space_operations def_blk_aops = {
>  	.is_dirty_writeback = buffer_check_dirty_writeback,
>  };
>
> +#ifdef CONFIG_FS_DAX
> +/*
> + * In the raw block case we do not need to contend with truncation nor
> + * unwritten file extents. Without those concerns there is no need for
> + * additional locking beyond the mmap_sem context that these routines
> + * are already executing under.
> + *
> + * Note, there is no protection if the block device is dynamically
> + * resized (partition grow/shrink) during a fault. A stable block device
> + * size is already not enforced in the blkdev_direct_IO path.
> + *
> + * For DAX, it is the responsibility of the block device driver to
> + * ensure the whole-disk device size is stable while requests are in
> + * flight.
> + *
> + * Finally, in contrast to the generic_file_mmap() path, there are no
> + * calls to sb_start_pagefault(). That is meant to synchronize write
> + * faults against requests to freeze the contents of the filesystem
> + * hosting vma->vm_file. However, in the case of a block device special
> + * file, it is a 0-sized device node usually hosted on devtmpfs, i.e.
> + * nothing to do with the super_block for bdev_file_inode(vma->vm_file).
> + * We could call get_super() in this path to retrieve the right
> + * super_block, but the generic_file_mmap() path does not do this for
> + * the CONFIG_FS_DAX=n case.
> + */
> +static int blkdev_dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
> +{
> +	return __dax_fault(vma, vmf, blkdev_get_block, NULL);
> +}
> +
> +static int blkdev_dax_pmd_fault(struct vm_area_struct *vma, unsigned long addr,
> +		pmd_t *pmd, unsigned int flags)
> +{
> +	return __dax_pmd_fault(vma, addr, pmd, flags, blkdev_get_block, NULL);
> +}
> +
> +static const struct vm_operations_struct blkdev_dax_vm_ops = {
> +	.page_mkwrite	= blkdev_dax_fault,
> +	.fault		= blkdev_dax_fault,
> +	.pmd_fault	= blkdev_dax_pmd_fault,
> +};
> +
> +static int blkdev_mmap(struct file *file, struct vm_area_struct *vma)
> +{
> +	struct inode *bd_inode = bdev_file_inode(file);
> +
> +	if (!IS_DAX(bd_inode))
> +		return generic_file_mmap(file, vma);
> +
> +	file_accessed(file);
> +	vma->vm_ops = &blkdev_dax_vm_ops;
> +	vma->vm_flags |= VM_MIXEDMAP | VM_HUGEPAGE;
> +	return 0;
> +}
> +#else
> +#define blkdev_mmap generic_file_mmap
> +#endif
> +
>  const struct file_operations def_blk_fops = {
>  	.open		= blkdev_open,
>  	.release	= blkdev_close,
>  	.llseek		= block_llseek,
>  	.read_iter	= blkdev_read_iter,
>  	.write_iter	= blkdev_write_iter,
> -	.mmap		= generic_file_mmap,
> +	.mmap		= blkdev_mmap,
>  	.fsync		= blkdev_fsync,
>  	.unlocked_ioctl	= block_ioctl,
>  #ifdef CONFIG_COMPAT

^ permalink raw reply	[flat|nested] 18+ messages in thread
* Re: [PATCH v2 5/5] block: enable dax for raw block devices
  2015-10-28 20:50 ` Jeff Moyer
@ 2015-10-29  7:14   ` Dan Williams
  0 siblings, 0 replies; 18+ messages in thread
From: Dan Williams @ 2015-10-29  7:14 UTC (permalink / raw)
To: Jeff Moyer
Cc: Jens Axboe, Jan Kara, linux-nvdimm, Jan Kara, david,
	linux-kernel@vger.kernel.org, Ross Zwisler, Matthew Wilcox,
	Andrew Morton, Christoph Hellwig

On Thu, Oct 29, 2015 at 5:50 AM, Jeff Moyer <jmoyer@redhat.com> wrote:
> Dan Williams <dan.j.williams@intel.com> writes:
>
>> If an application wants exclusive access to all of the persistent memory
>> provided by an NVDIMM namespace it can use this raw-block-dax facility
>> to forgo establishing a filesystem. This capability is targeted
>> primarily to hypervisors wanting to provision persistent memory for
>> guests.
>
> OK, I'm going to expose my ignorance here. :)  Why does the block device
> need a page_mkwrite handler?
>

You're right, it buys us nothing, and deleting it saves having to
comment on why this page_mkwrite instance is not calling
sb_start_pagefault.

^ permalink raw reply	[flat|nested] 18+ messages in thread
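Concretely, the simplification agreed to here would amount to something like
the sketch below. It is not the posted follow-up, just the shape of it: the
DAX vm_ops for the block device lose .page_mkwrite, and with it the need for
the comment explaining the absent sb_start_pagefault() call.

/* Sketch of the simplification discussed above, not an actual posted patch */
static const struct vm_operations_struct blkdev_dax_vm_ops = {
	.fault		= blkdev_dax_fault,
	.pmd_fault	= blkdev_dax_pmd_fault,
};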
end of thread, other threads:[~2015-10-29  7:14 UTC | newest]

Thread overview: 18+ messages -- links below jump to the message on this page:
2015-10-22 17:10 [PATCH v2 0/5] block, dax: updates for 4.4 Dan Williams
2015-10-22 17:10 ` [PATCH v2 1/5] pmem, dax: clean up clear_pmem() Dan Williams
2015-10-22 20:48   ` Jeff Moyer
2015-10-22 22:29     ` Dan Williams
2015-10-28 21:01       ` Jeff Moyer
2015-10-27 17:31   ` Ross Zwisler
2015-10-22 17:10 ` [PATCH v2 2/5] dax: increase granularity of dax_clear_blocks() operations Dan Williams
2015-10-22 21:04   ` Jeff Moyer
2015-10-22 22:57     ` Williams, Dan J
2015-10-28 21:02       ` Jeff Moyer
2015-10-22 17:10 ` [PATCH v2 3/5] block, dax: fix lifetime of in-kernel dax mappings with dax_map_atomic() Dan Williams
2015-10-28 20:15   ` Jeff Moyer
2015-10-22 17:10 ` [PATCH v2 4/5] block: introduce bdev_file_inode() Dan Williams
2015-10-22 20:37   ` Jan Kara
2015-10-28 20:16   ` Jeff Moyer
2015-10-22 17:10 ` [PATCH v2 5/5] block: enable dax for raw block devices Dan Williams
2015-10-28 20:50   ` Jeff Moyer
2015-10-29  7:14     ` Dan Williams