From: Dave Chinner <david@fromorbit.com>
To: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Jeff Moyer <jmoyer@redhat.com>, Mel Gorman <mgorman@suse.de>,
Rob van der Heij <rvdheij@gmail.com>,
Andrew Morton <akpm@linux-foundation.org>,
Yannick Brosseau <yannick.brosseau@gmail.com>,
stable@vger.kernel.org, LKML <linux-kernel@vger.kernel.org>,
"lttng-dev@lists.lttng.org" <lttng-dev@lists.lttng.org>
Subject: Re: [-stable 3.8.1 performance regression] madvise POSIX_FADV_DONTNEED
Date: Fri, 5 Jul 2013 11:42:14 +1000
Message-ID: <20130705014214.GT14996@dastard>
In-Reply-To: <20130704003103.GA13899@Krystal>

On Wed, Jul 03, 2013 at 08:31:03PM -0400, Mathieu Desnoyers wrote:
> * Dave Chinner (david@fromorbit.com) wrote:
> > On Wed, Jul 03, 2013 at 10:53:08AM -0400, Jeff Moyer wrote:
> > > Mel Gorman <mgorman@suse.de> writes:
> > >
> > > >> > I just tried replacing my sync_file_range()+fadvise() calls with
> > > >> > passing the O_DIRECT flag to open(). Unfortunately, I must be doing
> > > >> > something very wrong, because I get only 1/3rd of the throughput, and
> > > >> > the page cache fills up. Any idea why?
> > > >>
> > > >> Since O_DIRECT does not seem to provide acceptable throughput, it may be
> > > >> interesting to investigate other ways to lessen the latency impact of
> > > >> the fadvise DONTNEED hint.
> > > >>
> > > >
> > > > There are cases where O_DIRECT falls back to buffered IO, which is why
> > > > you might have found that the page cache was still filling up. There are
> > > > a few reasons why this can happen, but I would guess the common cause is
> > > > that the range of pages being written was in the page cache already and
> > > > could not be invalidated for some reason. I'm guessing this is the common
> > > > case for page cache filling even with O_DIRECT, but I would not bet money
> > > > on it as it's not a problem I've investigated before.
> > >
> > > Even when O_DIRECT falls back to buffered I/O for writes, it will
> > > invalidate the page cache range described by the buffered I/O once it
> > > completes. For reads, the range is written out synchronously before the
> > > direct I/O is issued. Either way, you shouldn't see the page cache
> > > filling up.
> >
> > <sigh>
> >
> > I keep forgetting that filesystems other than XFS have sub-optimal
> > direct IO implementations. I wish that "silent fallback to buffered
> > IO" idea had never seen the light of day, and that filesystems
> > implemented direct IO properly.
> >
> > > Switching to O_DIRECT often incurs a performance hit, especially if the
> > > application does not submit more than one I/O at a time. Remember,
> > > you're not getting readahead, and you're not getting the benefit of the
> > > writeback code submitting batches of I/O.
> >
> > With the way IO is being done, there won't be any readahead (it's a
> > write-only workload) and they are directly controlling writeback one
> > chunk at a time, so there's no writeback caching to do batching,
> > either. There's no obvious reason that direct IO should be any
> > slower, assuming the application is actually doing 1MB sized and
> > aligned IOs as was mentioned, because both methods are directly
> > dispatching and then waiting for IO completion.
>
> As a clarification, I use 256kB "chunks" (sub-buffers) in my tests, not
> 1MB. Also, please note that since I'm using splice(), each individual
> splice call is internally limited to 16 pages worth of data transfer
> (64kB).

So you are doing 4 write() calls per sync_file_range() call, not 1:1?
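
Just so we're talking about the same thing, here's a minimal sketch of
the flush pattern I'm picturing: four 64kB splice() calls per 256kB
sub-buffer, followed by sync_file_range() and POSIX_FADV_DONTNEED on
that range. The fd names and the exact sync_file_range() flag
combination are my assumptions, not the actual LTTng consumer code.

/*
 * Minimal sketch of the flush path as I understand it: one 256kB
 * sub-buffer is drained from a pipe in four 64kB splice() calls (the
 * 16-page splice limit), then writeback of just that range is started
 * and the pages are dropped.  The fd names and the exact flag
 * combination are assumptions, not the actual LTTng consumer code.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

#define SUBBUF_SIZE     (256 * 1024)
#define SPLICE_CHUNK    (64 * 1024)     /* 16 pages with 4kB pages */

static int flush_subbuf(int pipe_fd, int out_fd, off_t offset)
{
        size_t done = 0;

        while (done < SUBBUF_SIZE) {
                size_t want = SUBBUF_SIZE - done;
                ssize_t ret;

                if (want > SPLICE_CHUNK)
                        want = SPLICE_CHUNK;
                /* Move up to 64kB from the pipe into the trace file. */
                ret = splice(pipe_fd, NULL, out_fd, NULL, want,
                             SPLICE_F_MOVE);
                if (ret <= 0)
                        return -1;
                done += ret;
        }

        /* Start asynchronous writeback of just this sub-buffer... */
        sync_file_range(out_fd, offset, SUBBUF_SIZE, SYNC_FILE_RANGE_WRITE);
        /* ...wait for it to complete, then drop it from the page cache. */
        sync_file_range(out_fd, offset, SUBBUF_SIZE,
                        SYNC_FILE_RANGE_WAIT_BEFORE | SYNC_FILE_RANGE_WRITE |
                        SYNC_FILE_RANGE_WAIT_AFTER);
        posix_fadvise(out_fd, offset, SUBBUF_SIZE, POSIX_FADV_DONTNEED);
        return 0;
}
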
> > What filesystem is in use here?
>
> My test was performed on an ext3 filesystem, which was itself sitting
> on RAID-1 software RAID.

Yup, ext3 has limitations on direct IO that requires block allocation
and will often fall back to buffered IO in that case. ext4 and XFS
should not have the same problems...
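
If you want to retry the O_DIRECT comparison on ext4 or XFS, the sketch
below shows roughly what direct IO needs in order to stay direct:
O_DIRECT at open() time, plus a buffer, file offset and IO size that
are all aligned to the logical block size. The filename, the 4096 byte
alignment and the 256kB IO size are illustrative assumptions only.

/*
 * Hedged sketch of a well-formed O_DIRECT write: the file is opened
 * with O_DIRECT and the user buffer, file offset and IO size are all
 * aligned to (at least) the logical block size.  The filename, the
 * 4096-byte alignment and the 256kB IO size are assumptions for
 * illustration only.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define ALIGNMENT       4096            /* >= logical block size */
#define CHUNK_SIZE      (256 * 1024)    /* matches the 256kB sub-buffers */

int main(void)
{
        void *buf;
        int fd;

        fd = open("trace.out", O_WRONLY | O_CREAT | O_DIRECT, 0644);
        if (fd < 0)
                return 1;

        /* O_DIRECT needs an aligned buffer; plain malloc() is not enough. */
        if (posix_memalign(&buf, ALIGNMENT, CHUNK_SIZE))
                return 1;
        memset(buf, 0, CHUNK_SIZE);

        /* Every write is a full, aligned chunk at an aligned file offset. */
        if (write(fd, buf, CHUNK_SIZE) != CHUNK_SIZE)
                return 1;

        free(buf);
        close(fd);
        return 0;
}
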
Cheers,

Dave.
--
Dave Chinner
david@fromorbit.com