linux-mm.kvack.org archive mirror
From: Boaz Harrosh <boaz@plexistor.com>
To: Dave Chinner <david@fromorbit.com>,
	Dan Williams <dan.j.williams@intel.com>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>,
	linux-nvdimm <linux-nvdimm@ml01.01.org>,
	Matthew Wilcox <willy@linux.intel.com>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	Oleg Nesterov <oleg@redhat.com>, Mel Gorman <mgorman@suse.de>,
	Johannes Weiner <hannes@cmpxchg.org>,
	linux-mm <linux-mm@kvack.org>, Arnd Bergmann <arnd@arndb.de>
Subject: Re: [RFC 0/2] New MAP_PMEM_AWARE mmap flag
Date: Mon, 22 Feb 2016 11:57:41 +0200	[thread overview]
Message-ID: <56CADB95.4080701@plexistor.com> (raw)
In-Reply-To: <20160221223157.GC25832@dastard>

On 02/22/2016 12:31 AM, Dave Chinner wrote:
> On Sun, Feb 21, 2016 at 02:03:43PM -0800, Dan Williams wrote:
>> On Sun, Feb 21, 2016 at 1:23 PM, Boaz Harrosh <boaz@plexistor.com> wrote:
>>> On 02/21/2016 10:57 PM, Dan Williams wrote:
>>>> On Sun, Feb 21, 2016 at 12:24 PM, Boaz Harrosh <boaz@plexistor.com> wrote:
>>>>> On 02/21/2016 09:51 PM, Dan Williams wrote:
>>> Sure. please have a look. What happens is that the legacy app
>>> will add the page to the radix tree, come the fsync it will be
>>> flushed. Even though a "new-type" app might fault on the same page
>>> before or after, which did not add it to the radix tree.
>>> So yes, all pages faulted by legacy apps will be flushed.
>>>
>>> I have manually tested all this and it seems to work. Can you see
>>> a theoretical scenario where it would not?
>>
>> I'm worried about the scenario where the pmem aware app assumes that
>> none of the cachelines in its mapping are dirty when it goes to issue
>> pcommit.  We'll have two applications with different perceptions of
>> when writes are durable.  Maybe it's not a problem in practice, at
>> least current generation x86 cpus flush existing dirty cachelines when
>> performing non-temporal stores.  However, it bothers me that there are
>> cpus where a pmem-unaware app could prevent a pmem-aware app from
>> making writes durable.  It seems if one app has established a
>> MAP_PMEM_AWARE mapping it needs guarantees that all apps participating
>> in that shared mapping have the same awareness.
> 
> Which, in practice, cannot work. Think cp, rsync, or any other
> program a user can run that can read the file the MAP_PMEM_AWARE
> application is using.
> 

Yes, and what of it? Nothing will happen; it all just works.

Perhaps you did not understand: we are talking about a DAX-mapped
file, not a combination of DAX vs a page-cached system.

One thread stores a value X to memory movnt-style, another thread reads
the same value X from memory; CPUs do this all the time. What of it?

>> Another potential issue is that MAP_PMEM_AWARE is not enough on its
>> own.  If the filesystem or inode does not support DAX the application
>> needs to assume page cache semantics.  At a minimum MAP_PMEM_AWARE
>> requests would need to fail if DAX is not available.

Dan, this is a good idea, I will add it. From a system perspective it
is not strictly needed. In fact, what happens today if you load nvml on a
non-DAX-mounted fs? Nothing works at all, even though at first
all the data seems to be there, right?
But I think this gives us a chance here to let nvml unload
gracefully before any destructive changes are made.

> 
> They will always still need to call msync()/fsync() to guarantee
> data integrity, because the filesystem metadata that indexes the
> data still needs to be committed before data integrity can be
> guaranteed. i.e. MAP_PMEM_AWARE by itself it not sufficient for data
> integrity, and so the app will have to be written like any other app
> that uses page cache based mmap().
> 

Sure, yes, I agree completely: msync()/fsync() will need to be called.

I apologize; you missed the motivation of this patch because I
did not explain it very well. Our motivation is speed.

One can have durable data by:
1. Doing movnt - done, and even faster than memcpy.
2. radix-tree-add; memcpy; cl_flush;
   Surely this one is much slower, lock-heavy, and resource consuming.
   Our micro-benchmarks show a 3-8x slowdown. (These are memory speeds, remember.)

So sure, a MAP_PMEM_AWARE app *must* call m/fsync() for data integrity, but
it will not pay the "slow" price at all; it will all be very fast because
the O(n) radix-tree management+traversal+cl_flush will not be there, only
the metadata bits will sync.

> Indeed, the application cannot even assume that a fully allocated
> file does not require msync/fsync because the filesystem may be
> doing things like dedupe, defrag, copy on write, etc behind the back
> of the application and so file metadata changes may still be in
> volatile RAM even though the application has flushed it's data.
> Applications have no idea what the underlying filesystem and storage
> is doing and so they cannot assume that complete data integrity is
> provided by userspace driven CPU cache flush instructions on their
> file data.
> 

Exactly. m/fsync() is needed, it will just be much *faster*.

> This "pmem aware applications only need to commit their data"
> thinking is what got us into this mess in the first place. It's
> wrong, and we need to stop trying to make pmem work this way because
> it's a fundamentally broken concept.
> 

Hey, Dave, please hold your horses. What mess are you talking about?
There is no mess. All we are trying to do is enable model [1] above vs
the current model [2], which costs a lot.

Every bit of data integrity, and the FS's freedom to manage data behind
the scenes, is kept intact.
	YES, apps need to fsync!

Thank you, I will add this warning in the next submission, to explain
better.

> Cheers,
> Dave.
> 

Cheers
Boaz

