From: Josef Bacik <jbacik@fusionio.com>
To: Stefan Priebe <s.priebe@profihost.ag>
Cc: Josef Bacik <JBacik@fusionio.com>,
	"linux-btrfs@vger.kernel.org" <linux-btrfs@vger.kernel.org>
Subject: Re: btrfs deadlock in 3.5-rc3
Date: Mon, 25 Jun 2012 14:02:52 -0400	[thread overview]
Message-ID: <20120625180252.GD7404@localhost.localdomain> (raw)
In-Reply-To: <4FE8A21E.7050104@profihost.ag>

On Mon, Jun 25, 2012 at 11:38:38AM -0600, Stefan Priebe wrote:
> 
> Am 25.06.2012 16:48, schrieb Josef Bacik:
> > On Mon, Jun 25, 2012 at 08:45:02AM -0600, Stefan Priebe - Profihost AG wrote:
> >>>
> >>> That's weird, sysrq+w should have a bunch of stack traces but it's empty, so
> >>> unless there's a bug there's nothing blocked.  Is the box actually hung or is it
> >>> just taking forever?  Maybe try sysrq+w again to see if the one you pasted was
> >>> just a fluke?  Thanks,
> >>
> >> This one looks better:
> >> http://pastebin.com/raw.php?i=R4pztDRt
> >>
> >
> > Ok looks like you have discard turned on.
> Yes
> 
>  >  Can you turn that off and see if you
> > can still reproduce the deadlock?  If so sysrq+w again, if not then I know where
> > to look ;).  Thanks,
> Without discard I can't reproduce it, but random write speed with ceph
> without discard is a LOT slower (around 8000 iops/s instead of
> 13000 iops/s). So I don't know whether it is discard or whether I'm just
> not able to trigger it.
> 

Ouch, what kind of drive goes faster with discard _on_?  Anyway it looks like
we're waiting for the discard to come back, so either it's your drive or there's a
bug in the block layer.  Maybe try an older kernel and see if it's broken there,
and then bisect it down?  Thanks,
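
In case it helps, a rough sketch of both steps (untested here, adjust paths and
tags to your setup).  Grabbing the blocked-task traces from a terminal is the
same as sysrq+w on the console:

  echo w > /proc/sysrq-trigger   # needs sysrq enabled (kernel.sysrq sysctl)
  dmesg                          # the stack traces land in the kernel log

And for the bisect, assuming a clone of Linus's tree and assuming v3.4 was a
known-good kernel for you:

  git bisect start
  git bisect bad v3.5-rc3
  git bisect good v3.4
  # build, boot and retest the kernel git checks out at each step, then mark it:
  git bisect good                # or: git bisect bad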

Josef

Thread overview: 23+ messages
2012-06-23  8:50 btrfs deadlock in 3.5-rc3 Stefan Priebe
2012-06-23 13:46 ` Michael
2012-06-23 14:55   ` Stefan Priebe
2012-06-25 13:08 ` Josef Bacik
2012-06-25 14:08   ` Stefan Priebe - Profihost AG
2012-06-25 14:20     ` Josef Bacik
2012-06-25 14:45       ` Stefan Priebe - Profihost AG
2012-06-25 14:48         ` Josef Bacik
2012-06-25 17:38           ` Stefan Priebe
2012-06-25 18:02             ` Josef Bacik [this message]
2012-06-25 18:28               ` Stefan Priebe
2012-06-25 19:33                 ` Stefan Priebe
2012-06-25 20:11                   ` Josef Bacik
2012-06-25 20:20                     ` Stefan Priebe
2012-06-25 20:23                       ` Josef Bacik
2012-06-25 20:33                         ` Stefan Priebe
2012-06-26 16:47                         ` Stefan Priebe
2012-06-26 20:14                           ` Josef Bacik
2012-06-26 20:19                             ` Stefan Priebe
2012-06-26 20:48                               ` Josef Bacik
2012-06-27  5:47                                 ` Stefan Priebe - Profihost AG
2012-06-27 13:30                                   ` Josef Bacik
2012-06-27 21:17                                   ` Josef Bacik
