From: Martin <m_btrfs@ml1.co.uk>
To: linux-btrfs@vger.kernel.org
Subject: Re: btrfs raid1 on 16TB goes read-only after "btrfs: block rsv returned -28"
Date: Wed, 05 Jun 2013 16:59:57 +0100 [thread overview]
Message-ID: <konn9n$7hf$1@ger.gmane.org> (raw)
In-Reply-To: <20130605154329.GQ20133@carfax.org.uk>
On 05/06/13 16:43, Hugo Mills wrote:
> On Wed, Jun 05, 2013 at 04:28:33PM +0100, Martin wrote:
>> On 05/06/13 16:05, Hugo Mills wrote:
>>> On Wed, Jun 05, 2013 at 03:57:42PM +0100, Martin wrote:
>>>> Dear Devs,
>>>>
>>>> I have x4 4TB HDDs formatted with:
>>>>
>>>> mkfs.btrfs -L bu-16TB_0 -d raid1 -m raid1 /dev/sd[cdef]
>>>>
>>>>
>>>> /etc/fstab mounts with the options:
>>>>
>>>> noatime,noauto,space_cache,inode_cache
>>>>
>>>>
>>>> All on kernel 3.8.13.
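(For reference, the matching /etc/fstab line would look something like the following; the mount point is a placeholder, and mounting by the label given to mkfs.btrfs is just one option:)

```
# /etc/fstab — hypothetical mount point for the array described above
LABEL=bu-16TB_0  /mnt/bu-16TB_0  btrfs  noatime,noauto,space_cache,inode_cache  0  0
```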
>>>>
>>>>
>>>> Upon using rsync to copy some heavily hardlinked backups
>>>> from ReiserFS, I've seen:
>>>>
>>>>
>>>> The following "block rsv returned -28" is repeated 7 times
>>>> until there is a call trace for:
>>>
>>> This is ENOSPC. Can you post the output of "btrfs fi df
>>> /mountpoint" and "btrfs fi show", please?
>>
>>
>> btrfs fi df:
>>
>> Data, RAID1: total=2.85TB, used=2.84TB
>> Data: total=8.00MB, used=0.00
>> System, RAID1: total=8.00MB, used=412.00KB
>> System: total=4.00MB, used=0.00
>> Metadata, RAID1: total=27.00GB, used=25.82GB
>> Metadata: total=8.00MB, used=0.00
>>
>>
>> btrfs fi show:
>>
>> Label: 'bu-16TB_0'  uuid: 8fd9a0a8-9109-46db-8da0-396d9c6bc8e9
>> 	Total devices 4 FS bytes used 2.87TB
>> 	devid    4 size 3.64TB used 1.44TB path /dev/sdf
>> 	devid    3 size 3.64TB used 1.44TB path /dev/sde
>> 	devid    1 size 3.64TB used 1.44TB path /dev/sdc
>> 	devid    2 size 3.64TB used 1.44TB path /dev/sdd
>
> OK, so you've got plenty of space to allocate. There were some
> issues in this area (block reserves and ENOSPC, and I think
> specifically addressing the issue of ENOSPC when there's space
> available to allocate) that were fixed between 3.8 and 3.9 (and
> probably some between 3.9 and 3.10-rc as well), so upgrading your
> kernel _may_ help here.
>
> Something else that may possibly help as a sticking-plaster is to
> write metadata more slowly, so that you don't have quite so much of
> it waiting to be written out for the next transaction. Practically,
> this may involve things like running "sync" in a loop. It's
> definitely a horrible hack, but it may help if you're desperate for
> a quick fix until you can stop creating metadata so quickly and
> upgrade your kernel...
>
> Hugo.
Thanks for that. I can give kernel 3.9.4 a try. For a giggle, I'll try
first with "nice 19" and syncs in a loop...
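(Sketched out, that plan might look like the following. The paths are placeholders using temp directories, the sync interval is arbitrary, and -H is the rsync flag that carries hard links across:)

```shell
# Rough sketch: low-priority "rsync -aH" plus a background sync loop.
# Temp dirs stand in for the real ReiserFS source and btrfs destination.
src=$(mktemp -d); dst=$(mktemp -d)
echo data > "$src/a"
ln "$src/a" "$src/b"                  # hard-linked pair, like the backups

( while sleep 2; do sync; done ) &    # crude flusher, as suggested above
flusher=$!

nice -n 19 rsync -aH "$src/" "$dst/"  # -H preserves the hard links

kill "$flusher" 2>/dev/null
ia=$(stat -c %i "$dst/a"); ib=$(stat -c %i "$dst/b")
[ "$ia" = "$ib" ] && echo "hardlink preserved"
rm -rf "$src" "$dst"
```

Without -H, rsync would copy a and b as two independent files, which is exactly what makes heavily hardlinked backup trees balloon.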
One confusing bit: why does "btrfs fi df" report "Data, RAID1:
total=2.85TB"?
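(For what it's worth, the figures above are self-consistent once RAID1's two copies are accounted for: "total" is the space allocated to data chunks, and doubling data + metadata chunk totals matches the per-device "used" sums from fi show. A quick check with the quoted numbers:)

```shell
awk 'BEGIN {
    data = 2.85               # TB, "Data, RAID1: total" from btrfs fi df
    meta = 27.00 / 1024       # TB, "Metadata, RAID1: total" (27GB)
    raw  = 2 * (data + meta)  # RAID1 stores two copies of every chunk
    dev  = 4 * 1.44           # TB, sum of per-device "used" from fi show
    printf "raw allocated ~ %.2fTB, devices report %.2fTB\n", raw, dev
}'
```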
Thanks,
Martin
Thread overview: 9+ messages
2013-06-05 14:57 btrfs raid1 on 16TB goes read-only after "btrfs: block rsv returned -28" Martin
2013-06-05 15:05 ` Hugo Mills
2013-06-05 15:28 ` Martin
2013-06-05 15:43 ` Hugo Mills
2013-06-05 15:59 ` Martin [this message]
2013-06-05 16:04 ` Hugo Mills
2013-06-05 16:24 ` David Sterba
2013-06-05 21:12 ` Martin
2013-06-07 23:14 ` Martin