linux-raid.vger.kernel.org archive mirror
From: Shaohua Li <shli@kernel.org>
To: Les Stroud <les@lesstroud.com>
Cc: linux-raid@vger.kernel.org
Subject: Re: Process stuck in md_flush_request (state: D)
Date: Mon, 27 Feb 2017 10:28:06 -0800	[thread overview]
Message-ID: <20170227182806.jntxzhyw3nkohl5r@kernel.org> (raw)
In-Reply-To: <829563C6-A2AF-4E5F-B5AF-D33D2E5A734E@lesstroud.com>

On Mon, Feb 27, 2017 at 09:49:59AM -0500, Les Stroud wrote:
> After a period of a couple of weeks with one of our test instances having this problem every other day, they were all nice enough to operate without an issue for 9 days.  It finally reoccurred last night on one of the machines.  
> 
> It exhibits the same symptoms and the call traces look as they did previously.  This particular instance is configured with a deadline scheduler.  I was able to capture the inflight you requested:
> 
> $ cat /sys/block/xvd[abcde]/inflight
>        0        0
>        0        0
>        0        0
>        0        0
>        0        0
> 
> I’ve had this happen on instances with the deadline scheduler and the noop scheduler.  At this point, I have not had this happen on an instance that is noop and the raid filesystem (ext4) is mounted with nobarrier.  The instances with noop/nobarrier have not been running long enough for me to make any sort of conclusion that it works around the problem.  Frankly, I’m not sure I understand the interaction between ext4 barriers and raid0 block flushes well enough to theorize whether it should or shouldn’t make a difference.
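
[Archive editor's note: each `inflight` file above holds two counters, reads and writes still outstanding against the device. A small sketch for totalling them across the member disks; the helper name is ours, and the xvd[abcde] device names are taken from the output above:]

```shell
# sum_inflight: total the "reads writes" counter pairs from the given
# /sys/block/<dev>/inflight files (helper name is illustrative).
sum_inflight() {
    total=0
    for f in "$@"; do
        read -r r w < "$f"
        total=$((total + r + w))
    done
    echo "$total"
}

# On the system in the report above this would be:
#   sum_inflight /sys/block/xvd[abcde]/inflight
```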

With nobarrier, ext4 doesn't send flush requests.
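
[Archive editor's note: a quick way to confirm which mode a given ext4 mount is in is to inspect its option string from /proc/mounts. The helper below is hypothetical; `barrier=0` is the alternate spelling of `nobarrier`:]

```shell
# has_nobarrier: given an ext4 mount-option string, report whether write
# barriers (and hence flush requests to the md device) are disabled.
# Helper name is ours, for illustration only.
has_nobarrier() {
    case ",$1," in
        *,nobarrier,*|*,barrier=0,*) echo "yes" ;;
        *) echo "no" ;;
    esac
}

# On a live system, e.g. for the first ext4 mount listed:
#   has_nobarrier "$(awk '$3 == "ext4" {print $4; exit}' /proc/mounts)"
```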
 
> Does any of this help with identifying the bug?  Is there any more information I can get that would be useful?  


Unfortunately I can't find anything fishy. Does the xvdX disk correctly
handle flush requests? For example, you could run the same test with a
single such disk and check whether anything goes wrong.
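
[Archive editor's note: one way to sketch such a single-disk test is a write that ends in an fsync, wrapped in a timeout so a stuck flush fails visibly instead of leaving a D-state process. The helper and the device name are assumptions; the target is overwritten, so it must be a scratch disk:]

```shell
# flush_write_test: write <mib> MiB of zeros to <target>, with a flush
# forced at the end of the write (conv=fsync). If the device mishandles
# flush requests, the final fsync is where a hang would show up, and
# timeout(1) turns that hang into a visible failure.
flush_write_test() {
    target=$1; mib=$2
    if timeout 60 dd if=/dev/zero of="$target" bs=1M count="$mib" \
            conv=fsync 2>/dev/null; then
        echo "flush completed"
    else
        echo "flush timed out or failed"
    fi
}

# Against a scratch disk of the same type as the array members, e.g.:
#   flush_write_test /dev/xvdf 64
```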

Thanks,
Shaohua


Thread overview: 8+ messages
     [not found] <36A8825E-F387-4ED8-8672-976094B3BEBB@lesstroud.com>
2017-02-17 19:05 ` Process stuck in md_flush_request (state: D) Les Stroud
2017-02-17 20:06   ` Shaohua Li
2017-02-17 20:40     ` Les Stroud
2017-02-27 14:49       ` Les Stroud
2017-02-27 18:28         ` Shaohua Li [this message]
2017-02-27 18:48           ` Les Stroud
2017-02-28  0:44             ` Shaohua Li
     [not found]             ` <1224510038.17134.1488242683070@vsaw28.prod.google.com>
2017-02-28  2:58               ` Les Stroud
