linux-raid.vger.kernel.org archive mirror
From: Phil Turmel <philip@turmel.org>
To: Robert Kierski <rkierski@cray.com>,
	Dallas Clement <dallas.a.clement@gmail.com>
Cc: "linux-raid@vger.kernel.org" <linux-raid@vger.kernel.org>
Subject: Re: RAID 5,6 sequential writing seems slower in newer kernels
Date: Thu, 3 Dec 2015 09:37:52 -0500
Message-ID: <566053C0.4060706@turmel.org>
In-Reply-To: <F7761B9B1D11B64BBB666019E9378117FDDF2F@CFWEX01.americas.cray.com>

On 12/03/2015 08:43 AM, Robert Kierski wrote:
> This is why I use Direct-IO to the bare-metal block device instead of
> going through the FS.  Rather than discussing the real problem, we're
> off in the weeds talking about whether the tests should be using
> O_SYNC and whether there is a problem introduced in the latest
> version of the FS.

It's not off in the weeds for Dallas, the OP.
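
(For reference, the pattern Robert describes, direct I/O straight to
the bare block device so the filesystem and page cache never touch the
data, looks roughly like the C sketch below.  The /dev/md0 path, the
1 MiB write size, and the 4096-byte alignment are illustrative
assumptions, not details taken from this thread.)

  #define _GNU_SOURCE            /* for O_DIRECT */
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
      const size_t iosize = 1 << 20;   /* 1 MiB per write() */
      void *buf;
      int i, fd;

      /* O_SYNC could be OR'd in to test the behavior debated above. */
      fd = open("/dev/md0", O_WRONLY | O_DIRECT);
      if (fd < 0) { perror("open"); return 1; }

      /* O_DIRECT needs the buffer, offset, and length aligned to the
       * device's logical block size; 4096 covers typical devices. */
      if (posix_memalign(&buf, 4096, iosize) != 0) {
          fprintf(stderr, "posix_memalign failed\n");
          return 1;
      }
      memset(buf, 0xA5, iosize);

      for (i = 0; i < 64; i++)         /* 64 MiB, sequential */
          if (write(fd, buf, iosize) != (ssize_t)iosize) {
              perror("write");
              return 1;
          }

      free(buf);
      close(fd);
      return 0;
  }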

> FSes and caches are very good at hiding problems in the layers below
> them, and they prevent you from exercising the code you're interested
> in debugging.

Yep, you seem to have a real problem.

Phil


Thread overview: 35+ messages
2015-12-01 23:02 RAID 5,6 sequential writing seems slower in newer kernels Dallas Clement
2015-12-02  1:07 ` keld
2015-12-02 14:18   ` Robert Kierski
2015-12-02 14:45     ` Phil Turmel
2015-12-02 15:28       ` Robert Kierski
2015-12-02 15:37         ` Phil Turmel
2015-12-02 15:44           ` Robert Kierski
2015-12-02 15:51             ` Phil Turmel
2015-12-02 19:50               ` Dallas Clement
2015-12-03  0:12                 ` Dallas Clement
2015-12-03  2:18                   ` Phil Turmel
2015-12-03  2:24                     ` Dallas Clement
2015-12-03  2:33                       ` Dallas Clement
2015-12-03  2:38                         ` Phil Turmel
2015-12-03  2:51                           ` Dallas Clement
2015-12-03  4:30                             ` Phil Turmel
2015-12-03  4:49                               ` Dallas Clement
2015-12-03 13:43                               ` Robert Kierski
2015-12-03 14:37                                 ` Phil Turmel [this message]
2015-12-03  2:34                       ` Phil Turmel
2015-12-03 14:19                 ` Robert Kierski
2015-12-03 14:39                   ` Dallas Clement
2015-12-03 15:04                   ` Phil Turmel
2015-12-03 22:21                     ` Weedy
2015-12-04 13:40                     ` Robert Kierski
2015-12-04 16:08                       ` Dallas Clement
2015-12-07 14:29                         ` Robert Kierski
2015-12-08 19:38                           ` Dallas Clement
2015-12-08 21:24                             ` Robert Kierski
2015-12-04 18:51                       ` Shaohua Li
2015-12-05  1:38                         ` Dallas Clement
2015-12-07 14:18                         ` Robert Kierski
2015-12-02 15:37       ` Robert Kierski
2015-12-02  5:22 ` Roman Mamedov
2015-12-02 14:15 ` Robert Kierski

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=566053C0.4060706@turmel.org \
    --to=philip@turmel.org \
    --cc=dallas.a.clement@gmail.com \
    --cc=linux-raid@vger.kernel.org \
    --cc=rkierski@cray.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
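
  For example (illustrative only; the address and Message-ID are taken
  from the headers above, and mail clients vary in which mailto:
  header fields they honor):

  mailto:philip@turmel.org?In-Reply-To=%3C566053C0.4060706@turmel.org%3E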
Be sure your reply has a Subject: header at the top and a blank line
before the message body.