linux-btrfs.vger.kernel.org archive mirror
From: Olivier Bonvalet <btrfs.list@daevel.fr>
To: linux-btrfs@vger.kernel.org
Subject: [solved] Re: Frozen transaction
Date: Tue, 09 Oct 2012 15:49:01 +0200
Message-ID: <50742B4D.5060600@daevel.fr>
In-Reply-To: <20121009123249.GO4405@twin.jikos.cz>

On 09/10/2012 14:32, David Sterba wrote:
> On Tue, Oct 09, 2012 at 12:07:20PM +0200, Olivier Bonvalet wrote:
>> I didn't see any "stack" entry in /proc/$PID/; I will try to find which
>> kernel option exports that.
>
> CONFIG_STACKTRACE

CONFIG_STACKTRACE_SUPPORT=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
CONFIG_CC_STACKPROTECTOR=y
# CONFIG_DEBUG_STACK_USAGE is not set
CONFIG_USER_STACKTRACE_SUPPORT=y
# CONFIG_DEBUG_STACKOVERFLOW is not set

I suppose it's CONFIG_DEBUG_STACK_USAGE?
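
For what it's worth, a quick way to check that on the running kernel
(assuming /proc/config.gz is available, or that the distro installed a
config file under /boot; $PID stands for the pid of the stuck cleaner):

   # is the option set in the running kernel's config?
   zgrep CONFIG_STACKTRACE /proc/config.gz
   # or: grep CONFIG_STACKTRACE /boot/config-$(uname -r)

   # if it is, /proc/$PID/stack dumps the kernel stack of the stuck task
   cat /proc/$PID/stack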

>
>>> If the problem persists across reboots, how long after mount does it
>>> take to get to this state? Cleaner usually kicks in after the 30 second
>>> transaction commit period, so this should be easy to verify if it's
>>> immediate or if it requires some load to get into the dead state.
>>
>> The cleaner process gets into state D between 30 and 60 seconds after the
>> reboot. But shouldn't that cleaner process generate a lot of write access?
>
> It needs to update the references, so it does both reads and writes.
>
>> This time I tried to remount with the space cache enabled; there is a lot
>> of read access now. Will that space cache help to find "free locations"?
>
> Yes.
>
> As for the reads, the free space information needs to fill the in-memory
> structures; if the disk is almost full there is also quite some data to
> read before it's complete. But reads are not the problem.

Well... I don't know whether it is related to that space cache, but the
cleanup process is now working: it is making a lot of write requests, and I
now have 30 GB of free space. So it will be solved soon.

Any chance that it could be related to the space cache feature?
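
For the record, roughly the commands involved in turning it on and watching
the space come back (the mount point /mnt/data is just a placeholder here):

   # remount with the free space cache enabled (it gets built on first use)
   mount -o remount,space_cache /mnt/data

   # watch allocated vs. used space while the cleaner runs
   btrfs filesystem df /mnt/data
   btrfs filesystem show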


>> I suppose I can't resize the FS without solving that cleanup deadlock first?
>
> Probably not, although if you're fast enough and add another device before
> the cleaner starts, it could work :)

Oh, that's possible: it's a virtualized system, so the device can easily be grown.
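
In that case the grow path would look roughly like this (device and mount
point names are placeholders, and it assumes the guest already sees the
enlarged disk):

   # grow the filesystem to fill the enlarged virtual device
   btrfs filesystem resize max /mnt/data

   # or, as suggested above, add a second virtual device instead
   btrfs device add /dev/vdc /mnt/data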

> Other than that, these are the patches that should fix the deadlock:
>
> https://patchwork.kernel.org/patch/1383951/
> https://patchwork.kernel.org/patch/1383941/
>
> (they touch the VFS and require recompiling the whole kernel, not just btrfs)
>

I was starting to patch my kernel before seeing that it's now solved.
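
(For anyone who does need them: roughly how I would have applied them,
assuming the two patches are saved locally from the patchwork pages above;
the file names are placeholders.)

   # in the kernel source tree
   git am fix-deadlock-1.patch fix-deadlock-2.patch
   # or: patch -p1 < fix-deadlock-1.patch ; patch -p1 < fix-deadlock-2.patch

   # rebuild the whole kernel, not just the btrfs module
   make oldconfig && make -j4 && make modules_install install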

Thanks for your answers!

Olivier

Thread overview: 8+ messages
2012-10-09  7:37 Frozen transaction Olivier Bonvalet
2012-10-09  9:52 ` David Sterba
2012-10-09 10:07   ` Olivier Bonvalet
2012-10-09 12:32     ` David Sterba
2012-10-09 13:49       ` Olivier Bonvalet [this message]
2012-10-09 14:07         ` [solved] " David Sterba
2012-10-09 14:12           ` Olivier Bonvalet
2012-10-09 14:55             ` David Sterba
