* fail to mount after first reboot
From: Daniel Pocock @ 2012-08-19 14:08 UTC (permalink / raw)
To: linux-btrfs
I created a 1TB RAID1. So far it is just for testing, no important data
on there.
After a reboot, I tried to mount it again
# mount /dev/mapper/vg00-btrfsvol0_0 /mnt/btrfs0
mount: wrong fs type, bad option, bad superblock on
/dev/mapper/vg00-btrfsvol0_0,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
I checked dmesg:
[17216.145092] device fsid c959d4a5-0713-4685-b572-8a679ec37e20 devid 1
transid 34 /dev/mapper/vg00-btrfsvol0_0
[17216.145639] btrfs: disk space caching is enabled
[17216.146987] btrfs: failed to read the system array on dm-100
[17216.147556] btrfs: open_ctree failed
Then I ran btrfsck - it reported no errors, and the filesystem mounted OK afterwards:
# btrfsck /dev/mapper/vg00-btrfsvol0_0
checking extents
checking fs roots
checking root refs
found 26848493568 bytes used err is 0
total csum bytes: 26170252
total tree bytes: 48517120
total fs tree bytes: 5492736
btree space waste bytes: 14307930
file data blocks allocated: 26799976448
referenced 26799976448
Btrfs Btrfs v0.19
# mount /dev/mapper/vg00-btrfsvol0_0 /mnt/btrfs0
#
I checked dmesg again, these are the messages from the second mount:
[17299.180600] device fsid 928b939f-7f9d-4095-b1ba-e35c5f1277bf devid 1
transid 37928 /dev/dm-96
[17299.204475] device fsid c959d4a5-0713-4685-b572-8a679ec37e20 devid 2
transid 34 /dev/dm-99
[17299.204658] device fsid c959d4a5-0713-4685-b572-8a679ec37e20 devid 1
transid 34 /dev/dm-100
[17299.288317] device fsid 928b939f-7f9d-4095-b1ba-e35c5f1277bf devid 1
transid 37928 /dev/dm-96
[17299.289024] device fsid c959d4a5-0713-4685-b572-8a679ec37e20 devid 2
transid 34 /dev/dm-99
[17299.289150] device fsid c959d4a5-0713-4685-b572-8a679ec37e20 devid 1
transid 34 /dev/dm-100
[17310.978518] device fsid c959d4a5-0713-4685-b572-8a679ec37e20 devid 1
transid 34 /dev/mapper/vg00-btrfsvol0_0
[17310.993882] btrfs: disk space caching is enabled
Can anyone comment on this?
Also, df is reporting double the actual RAID1 volume size, and double
the amount of data stored in this filesystem:
# df -lh .
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg00-btrfsvol0_0 1.9T 51G 1.8T 3% /mnt/btrfs0
I would expect to see Size=1T, Used=25G
# strace -v -e trace=statfs df -lh /mnt/btrfs0
statfs("/mnt/btrfs0", {f_type=0x9123683e, f_bsize=4096,
f_blocks=488374272, f_bfree=475264720, f_bavail=474749786, f_files=0,
f_ffree=0, f_fsid={2083217090, -1714407264}, f_namelen=255,
f_frsize=4096}) = 0
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg00-btrfsvol0_0 1.9T 51G 1.8T 3% /mnt/btrfs0
* Re: fail to mount after first reboot
From: Hugo Mills @ 2012-08-19 14:15 UTC (permalink / raw)
To: Daniel Pocock; +Cc: linux-btrfs
On Sun, Aug 19, 2012 at 02:08:17PM +0000, Daniel Pocock wrote:
>
>
> I created a 1TB RAID1. So far it is just for testing, no important data
> on there.
>
>
> After a reboot, I tried to mount it again
>
> # mount /dev/mapper/vg00-btrfsvol0_0 /mnt/btrfs0
> mount: wrong fs type, bad option, bad superblock on
> /dev/mapper/vg00-btrfsvol0_0,
> missing codepage or helper program, or other error
> In some cases useful info is found in syslog - try
> dmesg | tail or so
With multi-volume btrfs filesystems, you have to run "btrfs dev
scan" before trying to mount it. Usually, the distribution will do
this in the initrd (if you've installed its btrfs-progs package).
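As a manual workaround (untested here, device path copied from your
mail), running the scan by hand before mounting should be enough:

# btrfs device scan
# mount /dev/mapper/vg00-btrfsvol0_0 /mnt/btrfs0

With no arguments, "btrfs device scan" probes the block devices for
btrfs superblocks and registers them with the kernel module, which is
what a multi-device mount needs in order to find both halves of your
RAID1.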
> Then I did btrfsck - it reported no errors, but mounted OK:
>
> # btrfsck /dev/mapper/vg00-btrfsvol0_0
[...]
The first thing that btrfsck does is to do a device scan.
[...]
> Can anyone comment on this?
See above.
> Also, df is reporting double the actual RAID1 volume size, and double
> the amount of data stored in this filesystem:
>
> # df -lh .
> Filesystem Size Used Avail Use% Mounted on
> /dev/mapper/vg00-btrfsvol0_0 1.9T 51G 1.8T 3% /mnt/btrfs0
>
> I would expect to see Size=1T, Used=25G
>
> # strace -v -e trace=statfs df -lh /mnt/btrfs0
> statfs("/mnt/btrfs0", {f_type=0x9123683e, f_bsize=4096,
> f_blocks=488374272, f_bfree=475264720, f_bavail=474749786, f_files=0,
> f_ffree=0, f_fsid={2083217090, -1714407264}, f_namelen=255,
> f_frsize=4096}) = 0
> Filesystem Size Used Avail Use% Mounted on
> /dev/mapper/vg00-btrfsvol0_0 1.9T 51G 1.8T 3% /mnt/btrfs0
This is an FAQ:
https://btrfs.wiki.kernel.org/index.php/FAQ#Why_is_free_space_so_complicated.3F
tl;dr: It's reporting the total number of raw storage bytes,
because it's impossible to compute actual usable space in the general
case.
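Assuming your btrfs-progs build includes the subcommand, the per-chunk
view makes the double-counting easier to see:

# btrfs filesystem df /mnt/btrfs0

That breaks the raw space down into Data/Metadata/System chunks with
their RAID profiles, so ~25G of file data shows up as roughly 50G of
raw RAID1 allocation, which is what plain df is adding up.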
Hugo.
--
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
--- In one respect at least, the Martians are a happy people: ---
they have no lawyers.
* Re: fail to mount after first reboot
From: Daniel Pocock @ 2012-08-19 14:33 UTC (permalink / raw)
To: Hugo Mills, linux-btrfs
On 19/08/12 14:15, Hugo Mills wrote:
> On Sun, Aug 19, 2012 at 02:08:17PM +0000, Daniel Pocock wrote:
>>
>>
>> I created a 1TB RAID1. So far it is just for testing, no important data
>> on there.
>>
>>
>> After a reboot, I tried to mount it again
>>
>> # mount /dev/mapper/vg00-btrfsvol0_0 /mnt/btrfs0
>> mount: wrong fs type, bad option, bad superblock on
>> /dev/mapper/vg00-btrfsvol0_0,
>> missing codepage or helper program, or other error
>> In some cases useful info is found in syslog - try
>> dmesg | tail or so
>
> With multi-volume btrfs filesystems, you have to run "btrfs dev
> scan" before trying to mount it. Usually, the distribution will do
> this in the initrd (if you've installed its btrfs-progs package).
>
I'm running Debian; I've just updated the system from squeeze to wheezy
(with a 3.2 kernel) so I could try btrfs and do other QA testing on
wheezy (as it is in the beta phase now).
I already had the btrfs-tools package installed before creating the
filesystem, so it appears Debian doesn't have an init script that runs
the scan.
It does have /lib/udev/rules.d/60-btrfs.rules:
SUBSYSTEM!="block", GOTO="btrfs_end"
ACTION!="add|change", GOTO="btrfs_end"
ENV{ID_FS_TYPE}!="btrfs", GOTO="btrfs_end"
RUN+="/sbin/modprobe btrfs"
RUN+="/sbin/btrfs device scan $env{DEVNAME}"
LABEL="btrfs_end"
but I'm guessing that isn't any use for my logical volumes, which are
activated early in the boot sequence?
Could I be having this problem because I put my btrfs on logical volumes?
Here is the package version I have:
# dpkg --list | grep btrfs
ii btrfs-tools 0.19+20120328-7
Checksumming Copy on Write Filesystem utilities
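To double-check which boot hooks the package actually ships, something
like this should list any udev or initramfs pieces (the grep pattern is
just a guess):

# dpkg -L btrfs-tools | grep -iE 'udev|init'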
Here is a more thorough dmesg since boot - does this suggest the scan
was invoked? I remember seeing some message about checking for btrfs
filesystems just after selecting the kernel in GRUB (root is ext3).
# dmesg | grep btrfs
[ 40.677505] btrfs: setting nodatacow
[ 40.677514] btrfs: turning off barriers
[17216.145092] device fsid c959d4a5-0713-4685-b572-8a679ec37e20 devid 1
transid 34 /dev/mapper/vg00-btrfsvol0_0
[17216.145639] btrfs: disk space caching is enabled
[17216.146987] btrfs: failed to read the system array on dm-100
[17216.147556] btrfs: open_ctree failed
[17310.978518] device fsid c959d4a5-0713-4685-b572-8a679ec37e20 devid 1
transid 34 /dev/mapper/vg00-btrfsvol0_0
[17310.993882] btrfs: disk space caching is enabled
[17598.736657] device fsid c959d4a5-0713-4685-b572-8a679ec37e20 devid 1
transid 37 /dev/mapper/vg00-btrfsvol0_0
[17598.750849] btrfs: disk space caching is enabled
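I suppose one way to check whether udev even tagged the LV as btrfs
(assuming udevadm on wheezy supports these options) would be:

# udevadm info --query=property --name=/dev/mapper/vg00-btrfsvol0_0 | grep ID_FS_TYPE

If ID_FS_TYPE=btrfs is missing there, the RUN lines in 60-btrfs.rules
would never have fired for that device.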
>> Then I did btrfsck - it reported no errors, but mounted OK:
>>
>> # btrfsck /dev/mapper/vg00-btrfsvol0_0
> [...]
>
> The first thing that btrfsck does is to do a device scan.
>
> [...]
Ok, that is most likely why my next mount attempt succeeded
* Re: fail to mount after first reboot
From: Hugo Mills @ 2012-08-19 14:51 UTC (permalink / raw)
To: Daniel Pocock; +Cc: linux-btrfs
On Sun, Aug 19, 2012 at 02:33:14PM +0000, Daniel Pocock wrote:
> On 19/08/12 14:15, Hugo Mills wrote:
> > On Sun, Aug 19, 2012 at 02:08:17PM +0000, Daniel Pocock wrote:
> >> I created a 1TB RAID1. So far it is just for testing, no important data
> >> on there.
> >>
> >> After a reboot, I tried to mount it again
> >>
> >> # mount /dev/mapper/vg00-btrfsvol0_0 /mnt/btrfs0
> >> mount: wrong fs type, bad option, bad superblock on
> >> /dev/mapper/vg00-btrfsvol0_0,
> >> missing codepage or helper program, or other error
> >> In some cases useful info is found in syslog - try
> >> dmesg | tail or so
> >
> > With multi-volume btrfs filesystems, you have to run "btrfs dev
> > scan" before trying to mount it. Usually, the distribution will do
> > this in the initrd (if you've installed its btrfs-progs package).
>
> I'm running Debian, I've just updated the system from squeeze to wheezy
> (with 3.2 kernel) so I could try btrfs and do other QA testing on wheezy
> (as it is in the beta phase now)
>
> I already had the btrfs-tools package installed, before creating the
> filesystem. So it appears Debian doesn't have an init script
>
> It does have /lib/udev/rules.d/60-btrfs.rules:
> SUBSYSTEM!="block", GOTO="btrfs_end"
> ACTION!="add|change", GOTO="btrfs_end"
> ENV{ID_FS_TYPE}!="btrfs", GOTO="btrfs_end"
> RUN+="/sbin/modprobe btrfs"
> RUN+="/sbin/btrfs device scan $env{DEVNAME}"
>
> LABEL="btrfs_end"
>
> but I'm guessing that isn't any use to my logical volumes that are
> activated early in the boot sequence?
>
> Could I be having this problem because I put my btrfs on logical volumes?
Possibly. You may need the "Device mapper uevents" option in the
kernel (CONFIG_DM_UEVENT) to trigger that udev rule when you enable
your VG(s). Not sure if it's available/enabled in your kernel.
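You can check the running kernel's config for it (Debian ships the
config under /boot, if I remember rightly):

# grep DM_UEVENT /boot/config-$(uname -r)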
> Here is the package version I have:
>
> # dpkg --list | grep btrfs
> ii btrfs-tools 0.19+20120328-7
> Checksumming Copy on Write Filesystem utilities
That should be fine.
> Here is a more thorough dmesg, since boot, does this suggest the scan
> was invoked? I remember seeing some message about checking for btrfs
> filesystems just after selecting the kernel in grub (root is ext3)
That message was probably grub checking the FS.
> # dmesg | grep btrfs
> [ 40.677505] btrfs: setting nodatacow
> [ 40.677514] btrfs: turning off barriers
> [17216.145092] device fsid c959d4a5-0713-4685-b572-8a679ec37e20 devid 1
> transid 34 /dev/mapper/vg00-btrfsvol0_0
> [17216.145639] btrfs: disk space caching is enabled
> [17216.146987] btrfs: failed to read the system array on dm-100
> [17216.147556] btrfs: open_ctree failed
> [17310.978518] device fsid c959d4a5-0713-4685-b572-8a679ec37e20 devid 1
> transid 34 /dev/mapper/vg00-btrfsvol0_0
> [17310.993882] btrfs: disk space caching is enabled
> [17598.736657] device fsid c959d4a5-0713-4685-b572-8a679ec37e20 devid 1
> transid 37 /dev/mapper/vg00-btrfsvol0_0
> [17598.750849] btrfs: disk space caching is enabled
No, doesn't look like there were any scan results coming in before
17216.
Hugo.
--
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
--- In one respect at least, the Martians are a happy people: ---
they have no lawyers.
* Re: fail to mount after first reboot
From: Daniel Pocock @ 2012-08-19 16:02 UTC (permalink / raw)
To: Hugo Mills, linux-btrfs
On 19/08/12 16:51, Hugo Mills wrote:
> On Sun, Aug 19, 2012 at 02:33:14PM +0000, Daniel Pocock wrote:
>> On 19/08/12 14:15, Hugo Mills wrote:
>>> On Sun, Aug 19, 2012 at 02:08:17PM +0000, Daniel Pocock wrote:
>>>> I created a 1TB RAID1. So far it is just for testing, no important data
>>>> on there.
>>>>
>>>> After a reboot, I tried to mount it again
>>>>
>>>> # mount /dev/mapper/vg00-btrfsvol0_0 /mnt/btrfs0
>>>> mount: wrong fs type, bad option, bad superblock on
>>>> /dev/mapper/vg00-btrfsvol0_0,
>>>> missing codepage or helper program, or other error
>>>> In some cases useful info is found in syslog - try
>>>> dmesg | tail or so
>>>
>>> With multi-volume btrfs filesystems, you have to run "btrfs dev
>>> scan" before trying to mount it. Usually, the distribution will do
>>> this in the initrd (if you've installed its btrfs-progs package).
>>
>> I'm running Debian, I've just updated the system from squeeze to wheezy
>> (with 3.2 kernel) so I could try btrfs and do other QA testing on wheezy
>> (as it is in the beta phase now)
>>
>> I already had the btrfs-tools package installed, before creating the
>> filesystem. So it appears Debian doesn't have an init script
>>
>> It does have /lib/udev/rules.d/60-btrfs.rules:
>> SUBSYSTEM!="block", GOTO="btrfs_end"
>> ACTION!="add|change", GOTO="btrfs_end"
>> ENV{ID_FS_TYPE}!="btrfs", GOTO="btrfs_end"
>> RUN+="/sbin/modprobe btrfs"
>> RUN+="/sbin/btrfs device scan $env{DEVNAME}"
>>
>> LABEL="btrfs_end"
>>
>> but I'm guessing that isn't any use to my logical volumes that are
>> activated early in the boot sequence?
>>
>> Could I be having this problem because I put my btrfs on logical volumes?
>
> Possibly. You may need the "Device mapper uevents" option in the
> kernel (CONFIG_DM_UEVENT) to trigger that udev rule when you enable
> your VG(s). Not sure if it's available/enabled in your kernel.
>
I've created a Debian bug report for the issue:
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=685311
Thanks for the quick feedback about this
* Re: fail to mount after first reboot
From: Daniel Pocock @ 2012-08-20 18:47 UTC (permalink / raw)
To: Hugo Mills, linux-btrfs
On 19/08/12 18:02, Daniel Pocock wrote:
>
>
> On 19/08/12 16:51, Hugo Mills wrote:
>> On Sun, Aug 19, 2012 at 02:33:14PM +0000, Daniel Pocock wrote:
>>> On 19/08/12 14:15, Hugo Mills wrote:
>>>> On Sun, Aug 19, 2012 at 02:08:17PM +0000, Daniel Pocock wrote:
>>>>> I created a 1TB RAID1. So far it is just for testing, no important data
>>>>> on there.
>>>>>
>>>>> After a reboot, I tried to mount it again
>>>>>
>>>>> # mount /dev/mapper/vg00-btrfsvol0_0 /mnt/btrfs0
>>>>> mount: wrong fs type, bad option, bad superblock on
>>>>> /dev/mapper/vg00-btrfsvol0_0,
>>>>> missing codepage or helper program, or other error
>>>>> In some cases useful info is found in syslog - try
>>>>> dmesg | tail or so
>>>>
>>>> With multi-volume btrfs filesystems, you have to run "btrfs dev
>>>> scan" before trying to mount it. Usually, the distribution will do
>>>> this in the initrd (if you've installed its btrfs-progs package).
>>>
>>> I'm running Debian, I've just updated the system from squeeze to wheezy
>>> (with 3.2 kernel) so I could try btrfs and do other QA testing on wheezy
>>> (as it is in the beta phase now)
>>>
>>> I already had the btrfs-tools package installed, before creating the
>>> filesystem. So it appears Debian doesn't have an init script
>>>
>>> It does have /lib/udev/rules.d/60-btrfs.rules:
>>> SUBSYSTEM!="block", GOTO="btrfs_end"
>>> ACTION!="add|change", GOTO="btrfs_end"
>>> ENV{ID_FS_TYPE}!="btrfs", GOTO="btrfs_end"
>>> RUN+="/sbin/modprobe btrfs"
>>> RUN+="/sbin/btrfs device scan $env{DEVNAME}"
>>>
>>> LABEL="btrfs_end"
>>>
>>> but I'm guessing that isn't any use to my logical volumes that are
>>> activated early in the boot sequence?
>>>
>>> Could I be having this problem because I put my btrfs on logical volumes?
>>
>> Possibly. You may need the "Device mapper uevents" option in the
>> kernel (CONFIG_DM_UEVENT) to trigger that udev rule when you enable
>> your VG(s). Not sure if it's available/enabled in your kernel.
>>
>
> I've created a Debian bug report for the issue:
>
> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=685311
>
> Thanks for the quick feedback about this
Just a quick update: the Debian bug report now includes a udev rule that
makes the scan happen automatically. If anyone can suggest an even
better way of doing this, it would be really helpful:
cat > /lib/udev/rules.d/99-btrfs.rules << 'EOF'
SUBSYSTEM!="block", GOTO="btrfs_lvm_end"
ENV{DM_UUID}!="LVM-?*", GOTO="btrfs_lvm_end"
RUN+="/sbin/modprobe btrfs"
RUN+="/sbin/btrfs device scan $env{DEVNAME}"
LABEL="btrfs_lvm_end"
EOF
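To test it without a full reboot, reloading the rules and re-triggering
the block devices should be enough, I think:

# udevadm control --reload-rules
# udevadm trigger --subsystem-match=block
# mount /dev/mapper/vg00-btrfsvol0_0 /mnt/btrfs0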