public inbox for linux-arch@vger.kernel.org
From: Andrew Morton <akpm@linux-foundation.org>
To: Milosz Tanski <milosz@adfin.com>
Cc: LKML <linux-kernel@vger.kernel.org>,
	Christoph Hellwig <hch@infradead.org>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	"linux-aio@kvack.org" <linux-aio@kvack.org>,
	Mel Gorman <mgorman@suse.de>,
	Volker Lendecke <Volker.Lendecke@sernet.de>,
	Tejun Heo <tj@kernel.org>, Jeff Moyer <jmoyer@redhat.com>,
	Theodore Ts'o <tytso@mit.edu>, Al Viro <viro@zeniv.linux.org.uk>,
	Linux API <linux-api@vger.kernel.org>,
	Michael Kerrisk <mtk.manpages@gmail.com>,
	linux-arch@vger.kernel.org
Subject: Re: [PATCH v6 0/7] vfs: Non-blockling buffered fs read (page cache only)
Date: Thu, 4 Dec 2014 15:11:02 -0800	[thread overview]
Message-ID: <20141204151102.2d7e11dca39f130c2dff2294@linux-foundation.org> (raw)
In-Reply-To: <CANP1eJGVyBOt1rQ8jA4tMrNGX5X61-UWbVy6kKj_ByeTqAEOBQ@mail.gmail.com>

On Wed, 3 Dec 2014 11:48:28 -0500 Milosz Tanski <milosz@adfin.com> wrote:

> On Tue, Dec 2, 2014 at 5:42 PM, Andrew Morton <akpm@linux-foundation.org> wrote:
> >
> > On Tue, 2 Dec 2014 17:17:42 -0500 Milosz Tanski <milosz@adfin.com> wrote:
> >
> > > > There have been several incomplete attempts to implement fincore().  If
> > > > we were to complete those attempts, preadv2() could be implemented
> > > > using fincore()+pread().  Plus we get fincore(), which is useful for
> > > > other (but probably similar) reasons.  Probably fincore()+pwrite() could
> > > > be used to implement pwritev2(), but I don't know what pwritev2() does
> > > > yet.
> > > >
> > > > Implementing fincore() is more flexible, requires less code and is less
> > > > likely to have bugs.  So why not go that way?  Yes, it's more CPU
> > > > intensive, but how much?  Is the difference sufficient to justify the
> > > > preadv2()/pwritev2() approach?
> > >
> > > I would like to see fincore() functionality (for other reasons), but I
> > > don't think it does the job here. fincore() + preadv() is inherently
> > > racy as there's no guarantee that the data won't become uncached
> > > between the two calls.
> >
> > There will always be holes.  For example find_get_page() could block on
> > lock_page() while some other process is doing IO.
> > page_cache_async_readahead() does lots of memory allocation which can
> > get blocked for long periods in the page allocator.
> > page_cache_async_readahead() can block on synchronous metadata reads,
> > etc.
> 
> Andrew, I think it would be helpful if you read through the patches.
> The first 3 are somewhat uninteresting as they just wire up the new
> syscalls and plumb the flag argument through. The core of
> RWF_NONBLOCK is patch 4: https://lkml.org/lkml/2014/11/10/463, and if
> you strip away the fs-specific changes the core of it is very simple.
> 
> The core is mostly contained in do_generic_file_read() in filemap.c,
> and is very short and easy to understand. It boils down to this: we
> read as much data as we can from what's already in the page cache.
> There is no fallback to disk IO via readpage() for missing pages, and
> we bail before any calls to page_cache_async_readahead(). And to the
> best of my knowledge we never wait on the page lock: the code calls
> pagecache_get_page() without the FGP_LOCK flag.
> 
> I've spent a decent amount of time looking at this to make sure we
> cover all our major bases. It's possible I missed something, but the
> biggest offenders should be covered, and if I missed something I'd
> love to cover that as well.

OK.

> >
> >
> > > There's no overlap between pwritev2 and fincore() functionality.
> >
> > Do we actually need pwritev2()?  What's the justification for that?
> 
> 
> I'm okay with splitting up the pwritev2 and preadv2 into two
> independent patchsets to be considered on their own merits.

Well, we can do both together if both are wanted.  The changelogs are
very skimpy on pwritev2().  A full description and careful
justification in the [0/n] changelog would be useful - something that
tells us "what's wrong with O_DSYNC+pwrite".


> >
> >
> >
> > Please let's examine the alternative(s) seriously.  It would be a mistake
> > to add preadv2/pwritev2 if fincore+pread would have sufficed.
> 
> 
> The motivation for my change, and for this approach, is a very common
> pattern for async buffered disk IO in userspace server applications. It
> comes down to having one thread handle the network and a thread
> pool perform IO requests. Why a threadpool and not something like
> sendfile() for reads? Many non-trivial applications perform additional
> processing (ssl, checksumming, transformation). Unfortunately this has
> an inherent increase in average latency due to synchronization
> penalties (enqueue and notify), but primarily due to fast requests
> (already in cache) getting stuck behind slow requests.
> 
> Here's an illustration of the common architecture:
> http://i.imgur.com/f8Pla7j.png. In fact, most apps are even simpler:
> they replace the request queue and task workers with a single thread
> doing network IO using epoll or the like.
> 
> preadv2 with RWF_NONBLOCK is analogous to the kernel recvmsg() with
> the MSG_DONTWAIT flag. It's really frustrating that such a capability
> doesn't exist today. As with the userspace application design, we can
> skip the IO threadpool and decrease average request latency in many
> common workloads (linear reads or zipf data accesses).
> 
> preadv2 with RWF_NONBLOCK as implemented does not suffer the same
> eviction races as fincore + pread because it's not implemented as two
> syscalls. It also has a much smaller surface of possible blocking /
> locking than fincore + pread because it cannot fall back to reading
> from disk, it does not trigger read-ahead, and it does not wait for
> the page lock.

I can see all that, but it's handwaving.  Yes, preadv2() will perform
better in some circumstances than fincore+pread.  But how much better? 
Enough to justify this approach, or not?

Alas, the only way to really settle that is to implement fincore() and
to subject it to a decent amount of realistic quantitative testing.

Ho hum.

Could you please hunt down some libuv developers and see if we can
solicit some quality input from them?  As I said, we really don't want
to merge this and then find that people don't use it for some reason,
or that it needs changes.


Thread overview: 63+ messages
2014-11-10 16:40 [PATCH v6 0/7] vfs: Non-blockling buffered fs read (page cache only) Milosz Tanski
2014-11-10 16:40 ` [PATCH v6 1/7] vfs: Prepare for adding a new preadv/pwritev with user flags Milosz Tanski
2014-11-10 16:40 ` [PATCH v6 2/7] vfs: Define new syscalls preadv2,pwritev2 Milosz Tanski
2014-11-11 21:09   ` Jeff Moyer
2014-11-12 13:18   ` mohanty bhagaban
2014-11-10 16:40 ` [PATCH v6 3/7] x86: wire up preadv2 and pwritev2 Milosz Tanski
2014-11-10 16:40 ` [PATCH v6 4/7] vfs: RWF_NONBLOCK flag for preadv2 Milosz Tanski
2014-11-10 16:40 ` [PATCH v6 5/7] xfs: add RWF_NONBLOCK support Milosz Tanski
2014-11-10 16:40 ` [PATCH v6 6/7] fs: pass iocb to generic_write_sync Milosz Tanski
2014-11-10 16:40 ` [PATCH v6 7/7] fs: add a flag for per-operation O_DSYNC semantics Milosz Tanski
2014-11-11  6:44 ` [PATCH v6 0/7] vfs: Non-blockling buffered fs read (page cache only) Dave Chinner
2014-11-11 16:02   ` Milosz Tanski
2014-11-11 17:03     ` Jeff Moyer
2014-11-11 21:42       ` Dave Chinner
2014-11-11 23:21         ` Jeff Moyer
2014-11-11 22:49       ` Theodore Ts'o
2014-11-11 23:27         ` Thomas Gleixner
2014-11-11 21:40     ` Dave Chinner
2014-11-14 16:32   ` Jeff Moyer
2014-11-14 16:39     ` Dave Jones
2014-11-14 16:51       ` Jeff Moyer
2014-11-14 18:46         ` Milosz Tanski
2014-11-14 18:45       ` Milosz Tanski
2014-11-14 18:52         ` Jeff Moyer
2014-11-24  9:53 ` Christoph Hellwig
2014-11-25 23:01 ` Andrew Morton
2014-12-02 22:17   ` Milosz Tanski
2014-12-02 22:42     ` Andrew Morton
2014-12-03  9:10       ` Volker Lendecke
2014-12-03 16:48       ` Milosz Tanski
2014-12-04 23:11         ` Andrew Morton [this message]
2014-12-05  8:17           ` Volker Lendecke
2015-01-21 14:55             ` Milosz Tanski
