From: "Helmut Hullen" <Hullen@t-online.de>
To: linux-btrfs@vger.kernel.org
Subject: Re: "not enough space" with "data raid0"
Date: 17 Mar 2012 14:46:00 +0100
Message-ID: <C52qGigT1uB@helmut.hullen.de>
In-Reply-To: <20120317122522.GD3172@carfax.org.uk>
Hello, Hugo,
You wrote on 17.03.12:
[no space left on device ...]
>>> Where is the problem, how can I use the full space?
>> Effectively it's missing the trigger to rebalance when the 'primary'
>> device starts to get full, or just to randomly spread the data
>> between the devices.
> No, a balance isn't going to help here. RAID-0 requires a minimum
> of 2 chunks in a block group. With two disks, you're only going to be
> able to fill the smallest one before you run out of space.
Ok - so it happens only with 2 disks/partitions?
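If I understand that correctly, every raid0 data block group needs a stripe on at least two different devices, so allocation stops as soon as only one device still has unallocated space. A rough sketch of how I picture that rule (plain Python, a toy model rather than the real allocator; the 1 GiB stripe size is only my assumption):

def raid0_capacity(sizes, stripe=1.0):
    # Toy model: each block group takes one stripe from every device
    # that still has unallocated space, and needs at least two stripes.
    free = list(sizes)
    allocated = 0.0
    while True:
        usable = [i for i, f in enumerate(free) if f >= stripe]
        if len(usable) < 2:            # raid0 needs >= 2 stripes per block group
            return allocated
        for i in usable:               # one stripe per device with room
            free[i] -= stripe
        allocated += stripe * len(usable)

print(raid0_capacity([68.37, 136.73]))          # ~136 GiB: stops when sdb1 is full
print(raid0_capacity([68.37, 136.73, 136.73]))  # ~340 GiB with a third device

With two devices the model runs out of space exactly when the smaller one is full, which matches what I saw.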
I've continued playing; added a 3rd partition/device and then balanced:
# btrfs device add /dev/sdd1 /mnt/btr
## 73 + 146 + 146 GByte
# df
Filesystem      1K-blocks      Used Available Use% Mounted on
/dev/sdc1       358432300 225400136  59842800  80% /mnt/btr
# added about 70 GByte (all together more than 3 times the smallest partition)
# btrfs fi show
Label: 'Scsi' uuid: e30586e9-a903-4d17-8ec0-1781457212c6
Total devices 3 FS bytes used 214.67GB
devid 1 size 68.37GB used 68.37GB path /dev/sdb1
devid 3 size 136.73GB used 41.01GB path /dev/sdd1
devid 2 size 136.73GB used 109.36GB path /dev/sdc1
Btrfs Btrfs v0.19
# btrfs fi df /mnt/btr
Data, RAID0: total=216.71GB, used=214.38GB
System, RAID1: total=8.00MB, used=24.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=1.00GB, used=295.13MB
=================================================================
# btrfs fi balance /mnt/btr
# btrfs fi show
Label: 'Scsi' uuid: e30586e9-a903-4d17-8ec0-1781457212c6
Total devices 3 FS bytes used 214.67GB
devid 1 size 68.37GB used 67.34GB path /dev/sdb1
devid 3 size 136.73GB used 40.85GB path /dev/sdd1
devid 2 size 136.73GB used 107.85GB path /dev/sdc1
Btrfs Btrfs v0.19
# btrfs fi df /mnt/btr
Data, RAID0: total=215.01GB, used=214.38GB
System, RAID1: total=8.00MB, used=24.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=512.00MB, used=294.88MB
----------------------------------------------------------------------
Looks as desired: the 3-disk system now holds more data than 3 times the
smallest disk.
Balancing hasn't redistributed the contents - no problem.
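Just to check that against the numbers from "btrfs fi show" above (my own arithmetic, using the binary-prefix values the tool prints):

# Values from the output above: smallest device and total bytes used.
smallest = 68.37
used     = 214.67
print(used / smallest)   # ~3.14, so indeed more than 3 times the smallest device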
By the way: you should use the NIST/IEC binary prefixes for powers of
two (KiB, MiB, GiB), or switch to decimal prefixes.
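For illustration, the conversion that explains why my "73 GByte" partition shows up as "68.37GB" in the output (round numbers, my own arithmetic):

# btrfs-progs prints binary-prefix values but labels them "GB";
# 68.37 of those units is roughly 73 decimal gigabytes.
print(68.37 * 2**30 / 1e9)   # ~73.4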
Kind regards!
Helmut