linux-btrfs.vger.kernel.org archive mirror
From: "Helmut Hullen" <Hullen@t-online.de>
To: linux-btrfs@vger.kernel.org
Subject: Re: 800 GByte free, but "no space left"
Date: 05 Dec 2010 12:46:00 +0100	[thread overview]
Message-ID: <BbJOcqST1uB@helmut.hullen.de> (raw)
In-Reply-To: <AANLkTiktNawUi95WjtceJ6RfW_FG1S=brWtMY9ut=pJ9@mail.gmail.com>

Hello cwillu,

On 05.12.10 you wrote:

>> Maybe you're right. But if you're right, then I have got the worst of
>> two worlds. I want neither RAID0 nor RAID1; I want a bundle of
>> different disks (or at least partitions) which appears to be one large
>> disk. And I had hoped btrfs does this job.

> That's what raid0 is, and it's actually the best of both worlds:
> your metadata (which will be less than 5% of the data) is safely
> duplicated, such that you always have the checksums even with a disk
> gone, so you can verify that the data that you still have is good,
> while not wasting space duplicating every little bit of file data,
> which you may not care about that much, and which you have backed up
> anyway (right? right?).

Is it really RAID0, or is the btrfs way only similar to RAID0? I don't
like RAID0 because I never know on which physical disk the files are.
That makes changing disks very risky.
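For what it's worth, btrfs can at least show how the space is spread
across the devices. As a sketch (the mount point is the one from later
in this thread, and the exact output depends on the btrfs-progs
version):

        # List the devices belonging to the filesystem and how much
        # of each device is allocated (run as root).
        btrfs filesystem show /srv/MM

        # Show how the allocated space is split into data/metadata
        # block groups and which RAID profile each uses.
        btrfs filesystem df /srv/MM

Note that with data in RAID0 the extents of a single file are striped
across devices, so there is no per-file "this file lives on disk X"
mapping to inspect.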

[...]

> This will almost certainly become much more tunable in the future,
> but not every feature that people want is done yet.  In fact, most of
> the really cool user-visible features aren't done yet.  Btrfs is
> still pretty young.

But I still hope btrfs is smarter than RAID0 or LVM ...

[...]

>> 1.5 TByte disk:
>>         btrfs device delete /dev/sdc3 /srv/MM
>>         btrfs filesystem balance /srv/MM
>>
>> and then disconnect the 1.5 TByte disk (and hope that now the 2
>> TByte disk sets the limits).
>> No nice way ...

> No, just run the balance without adding another disk.  That will
> probably work (although it _will_ take a while on a large
> filesystem).

I'll try - perhaps it helps for some (few) weeks.
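If I understand the advice correctly, the rebalance alone would look
like this (a sketch only; the mount point is the one from above, and a
balance on a multi-terabyte filesystem can run for hours):

        # Rewrite and redistribute the existing block groups across
        # the current devices, without deleting or adding a device.
        btrfs filesystem balance /srv/MM

        # Afterwards, check the per-device allocation again.
        btrfs filesystem show /srv/MM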

> I'm not sure that you understand how this all works though;  you
> might want to re-read the wiki articles (which I believe have been
> freshened up recently).

I beg your pardon - my main interest is that the system works. I'm glad
when I believe I understand what happens, but that understanding is an
add-on, not a must.

>> Is there a way to avoid this (presumably) RAID mismatch?

> Yes, you can specify the raid level for each when you make the
> filesystem (and will eventually be able to do it with existing
> filesystems).  However, as I described above, you really want
> metadata to be duplicated.  Your problem is more of an unfortunate
> case of everything not being tuned quite right yet.

Maybe - I thought avoiding the "RAID0" definition was a good idea.
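As a sketch of what choosing the profiles at creation time might look
like (the device names are examples, mkfs.btrfs destroys any existing
data on the given devices, and the available profile options depend on
the btrfs-progs version):

        # Duplicated (raid1) metadata, but unstriped data: each file
        # extent lives on a single device.
        mkfs.btrfs -m raid1 -d single /dev/sdb1 /dev/sdc1

        # Or with striped (raid0) data, as discussed above:
        mkfs.btrfs -m raid1 -d raid0 /dev/sdb1 /dev/sdc1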

>> By the way: working with TByte disks includes (for home users) that
>> there's no backup ...

> Not sure why you'd think that.  It can't be the bandwidth, and if you
> can't afford a second drive, there's a good case to be made that you
> can't afford the data you can't afford to lose.

The data isn't really valuable - DVB videos. Most of them are copied to
disks in a machine about 250 km away. And the TV stations repeat them
on and on.

Best regards!
Helmut
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Thread overview: 33+ messages
     [not found] <AANLkTimJ3dDdOFiQb8G=rrjCk2h68Y59oWdKEGH-80jN@mail.gmail.com>
2010-12-05  7:48 ` 800 GByte free, but "no space left" Helmut Hullen
2010-12-05  8:59   ` cwillu
2010-12-05  9:51     ` Helmut Hullen
2010-12-05 10:36       ` cwillu
2010-12-05 11:46         ` Helmut Hullen [this message]
2010-12-05 11:08   ` Evert Vorster
2010-12-05 11:22     ` Hugo Mills
2010-12-05 12:21       ` Helmut Hullen
2010-12-05 13:49       ` Evert Vorster
2010-12-05 14:33         ` Helmut Hullen
2010-12-05 18:00           ` Evert Vorster
2010-12-05 18:26             ` Helmut Hullen
2010-12-06  9:56               ` Brian Rogers
2010-12-06 11:41                 ` Hugo Mills
2010-12-05 20:28       ` Helmut Hullen
2010-12-06  7:43         ` Helmut Hullen
2010-12-06 11:43           ` Hugo Mills
2010-12-06 12:42             ` Helmut Hullen
2010-12-06 12:48               ` Hugo Mills
2010-12-06 13:13                 ` Helmut Hullen
2010-12-06 13:28                   ` Hugo Mills
2010-12-06 14:45                     ` Helmut Hullen
2010-12-06 15:18                       ` Hugo Mills
2010-12-06 17:13                         ` Helmut Hullen
2010-12-06 18:29                           ` Hugo Mills
2010-12-07 17:05                             ` Helmut Hullen
2010-12-07 17:25                               ` Hugo Mills
2010-12-07 17:44                                 ` Helmut Hullen
2010-12-05 11:35     ` Helmut Hullen
2010-12-02 18:23 Helmut Hullen
2010-12-03  3:28 ` Mike Fedyk
2010-12-03  6:47   ` Helmut Hullen
2010-12-04 17:17 ` Helmut Hullen
