linux-btrfs.vger.kernel.org archive mirror
* Re:
@ 2010-07-17  2:41 SINOPEC OIL AND GAS COMPANY
  0 siblings, 0 replies; 26+ messages in thread
From: SINOPEC OIL AND GAS COMPANY @ 2010-07-17  2:41 UTC (permalink / raw)


Dear winner,
We, the SINOPEC OIL AND GAS COMPANY board of directors, would like to
officially congratulate you on the draw just held by our company, which
featured you as the second-place winner. Prizes won: a brand new 2010
Lamborghini car (new model) and the sum of $570,000.00 USD
(United States Dollars) in cash.
FILL IN THE DETAILS BELOW:
Your Full Name : Address : Country : Phone number : Age : Gender : Occupation :
Yours,
Sinopec Oil And Gas Corp.





* Re:
  2010-08-30  2:32 (unknown) Bret Palsson
@ 2010-08-30  3:11 ` Sebastian 'gonX' Jensen
  0 siblings, 0 replies; 26+ messages in thread
From: Sebastian 'gonX' Jensen @ 2010-08-30  3:11 UTC (permalink / raw)
  To: Bret Palsson; +Cc: linux-btrfs

On 30 August 2010 04:32, Bret Palsson <bretep@gmail.com> wrote:
> subscribe linux-btrfs

Send it to majordomo@vger.kernel.org and you'll be on your way ;-)

Regards,
Sebastian J.


* Re:
  2011-02-20 12:22 (unknown) Christian Brunner
@ 2011-02-20 13:10 ` Maria Wikström
  0 siblings, 0 replies; 26+ messages in thread
From: Maria Wikström @ 2011-02-20 13:10 UTC (permalink / raw)
  To: Christian Brunner; +Cc: linux-btrfs

On Sun, 2011-02-20 at 13:22 +0100, Christian Brunner wrote:
> subscribe

You probably want to send

subscribe linux-btrfs

to majordomo@vger.kernel.org instead :)
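
For example, assuming a working local MTA and a BSD-style mail(1) client
(any client works; majordomo only reads the message body):

  echo "subscribe linux-btrfs" | mail majordomo@vger.kernel.org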

// Maria




* Re:
  2011-12-22  9:43   ` Malwina Bartoszynska
@ 2012-01-31 15:53     ` Max
  0 siblings, 0 replies; 26+ messages in thread
From: Max @ 2012-01-31 15:53 UTC (permalink / raw)
  To: linux-btrfs

Malwina Bartoszynska <m.bartoszynska <at> rootbox.com> writes:

> 
> W dniu 2011-12-21 20:06, Chris Mason pisze:
> > On Wed, Dec 21, 2011 at 01:54:06PM +0000, Malwina Bartoszynska wrote:
> >> Hello,
> >> after unmounting btrfs partition, I can't mount it again.
> >>
> >> root <at> xxx:~# btrfs device scan
> >> Scanning for Btrfs filesystems
> >> root <at> xxx:~# mount /dev/sdb /data/osd.0/
> >> mount: wrong fs type, bad option, bad superblock on /dev/sdb,
> >>         missing codepage or helper program, or other error
> >>         In some cases useful info is found in syslog - try
> >>         dmesg | tail  or so
> >>
> >> root <at> xxxx:~# dmesg|tail
> >> [57192.607912] device fsid ed25c604-3e11-4459-85b5-e4090c4d22d0 devid
> >> 2 transid 14429 /dev/sda
> >> [57204.796573] end_request: I/O error, dev fd0, sector 0
> >> [57231.660913] device fsid ed25c604-3e11-4459-85b5-e4090c4d22d0 devid 1
> >>   transid 14429 /dev/sdb
> >> [57231.680387] parent transid verify failed on 424308420608 wanted 6970
> >>   found 8959
> >> [57231.680546] parent transid verify failed on 424308420608 wanted 6970
> >> found 8959
> >> [57231.680705] parent transid verify failed on 424308420608 wanted 6970
> >> found 8959
> >> [57231.680861] parent transid verify failed on 424308420608 wanted 6970
> >> found 8959
> >> [57231.680869] parent transid verify failed on 424308420608 wanted 6970
> >> found 8959
> >> [57231.680875] Failed to read block groups: -5
> >> [57231.704165] btrfs: open_ctree failed
> > Can you tell us more about this filesystem?  Was there an unclean
> > shutdown or did you just unmount, mount again?
> >
> > The confusing thing is that all of your disks seem to have the same copy
> > of the block, so it looks like things were written properly.
> >
> > -chris
> There was no shutdown before this; the filesystem was just unmounted (which
> looked properly done - no errors). Then I tried to mount it again.
> Is there a way of fixing it?
> --
> Malwina Bartoszynska

I have the same problem. In my case the failure happens while writing
several files in parallel: I ran wget from several sources at once.

'ls /srv/shared/Downloads/xxx/xxx/' blocked, and dmesg gave:
[112920.940110] INFO: task btrfs-transacti:719 blocked for more than 120 seconds.
[112920.965833] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[112920.988255] btrfs-transacti D ffffffff81805120     0   719      2 0x00000000
[112920.988266]  ffff880b857e3d10 0000000000000046 ffff880b80b08198 0000000000000000
[112920.988273]  ffff880b857e3fd8 ffff880b857e3fd8 ffff880b857e3fd8 0000000000012a40
[112920.988279]  ffffffff81c0b020 ffff880b8c0c0000 ffff880b857e3d10 ffff880945187a40
[112920.988298] Call Trace:
[112920.988315]  [<ffffffff8160492f>] schedule+0x3f/0x60
[112920.988326]  [<ffffffff81604f75>] schedule_timeout+0x2a5/0x320
[112920.988338]  [<ffffffff810329a9>] ? default_spin_lock_flags+0x9/0x10
[112920.988371]  [<ffffffffa003ba15>] btrfs_commit_transaction+0x245/0x860 [btrfs]
[112920.988384]  [<ffffffff81081660>] ? add_wait_queue+0x60/0x60
[112920.988414]  [<ffffffffa00347b5>] transaction_kthread+0x275/0x290 [btrfs]
[112920.988437]  [<ffffffffa0034540>] ? btrfs_congested_fn+0xb0/0xb0 [btrfs]
[112920.988448]  [<ffffffff81080bbc>] kthread+0x8c/0xa0
[112920.988458]  [<ffffffff8160fca4>] kernel_thread_helper+0x4/0x10
[112920.988469]  [<ffffffff81080b30>] ? flush_kthread_worker+0xa0/0xa0
[112920.988479]  [<ffffffff8160fca0>] ? gs_change+0x13/0x13

After the reboot the disk was not mounted at all.

I tried to fix it.
The original btrfsck didn't work at all:
~$ btrfsck /dev/vdc
Could not open /dev/vdc

After a manual update to btrfs-tools_0.19+20111105-2_amd64.deb
it gave me:

~$ sudo btrfsck /dev/vdc
parent transid verify failed on 20971520 wanted 1347 found 3121
parent transid verify failed on 20971520 wanted 1347 found 3121
parent transid verify failed on 20971520 wanted 1347 found 3121
parent transid verify failed on 20971520 wanted 1347 found 3121
Ignoring transid failure
parent transid verify failed on 29470720 wanted 1357 found 3231
parent transid verify failed on 29470720 wanted 1357 found 3231
parent transid verify failed on 29470720 wanted 1357 found 3231
parent transid verify failed on 29470720 wanted 1357 found 3231
Ignoring transid failure
parent transid verify failed on 29470720 wanted 1357 found 3231
Ignoring transid failure
parent transid verify failed on 29487104 wanted 1357 found 3235
parent transid verify failed on 29487104 wanted 1357 found 3235
parent transid verify failed on 29487104 wanted 1357 found 3235
parent transid verify failed on 29487104 wanted 1357 found 3235
Ignoring transid failure
leaf 29487104 items 1 free space 3454 generation 3235 owner 7
fs uuid c5ce4702-2dbf-4b57-8067-bd6129fc124b
chunk uuid 0ffa84fe-33a3-4b8e-95a4-de5f93e88163
	item 0 key (EXTENT_CSUM EXTENT_CSUM 64343257088) itemoff 3479 itemsize 516
		extent csum item
failed to find block number 150802432

Is it possible to fix it?
I don't want to download 500 GB of data again.
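
(A hedged sketch of salvage options worth trying before re-downloading,
using the device name from above; availability depends heavily on the
kernel and btrfs-progs versions of that era:)

  # try an older tree root, read-only, so nothing is made worse
  mount -o ro,recovery /dev/vdc /mnt
  # or copy files out without mounting at all
  btrfs restore /dev/vdc /path/to/rescue/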

Regards, 
    Max




* Re:
  2012-08-17 14:59   ` David Sterba
@ 2012-08-17 15:30     ` Liu Bo
  0 siblings, 0 replies; 26+ messages in thread
From: Liu Bo @ 2012-08-17 15:30 UTC (permalink / raw)
  To: David Sterba; +Cc: Lluís Batlle i Rossell, Btrfs mailing list, andrei.popa

On 08/17/2012 10:59 PM, David Sterba wrote:
> On Fri, Aug 17, 2012 at 09:45:20AM +0800, Liu Bo wrote:
>> On 08/15/2012 06:12 PM, Lluís Batlle i Rossell wrote:
>>> some time ago we discussed on #btrfs that the nocow attribute for files wasn't
>>> working (around 3.3 or 3.4 kernels). That was evident by files fragmenting even
>>> with the attribute set.
>>>
>>> Chris mentioned to find a fix quickly for that, and posted some lines of change
>>> into irc. But recently someone mentioned that 3.6-rc looks like still not
>>> respecting nocow for files.
>>>
>>> Is there really a fix upstream for that? Do nocow attribute on files work for
>>> anyone already?
>>>
>>
>> Dave had posted a patch to fix it, but it only enables NOCOW for zero-sized files.
>>
>> FYI, the patch is http://article.gmane.org/gmane.comp.file-systems.btrfs/17351
>>
>> With the patch, you don't need to mount with nodatacow any more :)
>>
>> And here is why it is only for zero-sized files:
>> http://permalink.gmane.org/gmane.comp.file-systems.btrfs/18046
> 
> the original patch http://permalink.gmane.org/gmane.comp.file-systems.btrfs/18031
> did two things; the reasoning why it is not allowed to set nodatasum in
> general applies only to the second hunk, but this
> 
> @@ -139,7 +139,7 @@ void btrfs_inherit_iflags(struct inode *inode, struct inode *dir)
>  	}
> 
>  	if (flags & BTRFS_INODE_NODATACOW)
> -		BTRFS_I(inode)->flags |= BTRFS_INODE_NODATACOW;
> +		BTRFS_I(inode)->flags |= BTRFS_INODE_NODATACOW | BTRFS_INODE_NODATASUM;
> 
>  	btrfs_update_iflags(inode);
>  }
> ---
> 
> is sufficient to create nocow files via a directory with NOCOW attribute
> set, and all new files will inherit it (they are automatically
> zero-sized so it's safe). This usecase is similar to setting the
> COMPRESS attribute on a directory and all new files will inherit the
> flag.
> 
> If Andrei wants to resend just this particular hunk, I'm giving it my ACK.
> 

IMO the following is better; it just makes use of the original check.  If you
agree with this, I'll send it as a patch :)

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 6e8f416..d4e58df 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -4721,8 +4721,10 @@ static struct inode *btrfs_new_inode(struct btrfs_trans_handle *trans,
 		if (btrfs_test_opt(root, NODATASUM))
 			BTRFS_I(inode)->flags |= BTRFS_INODE_NODATASUM;
 		if (btrfs_test_opt(root, NODATACOW) ||
-		    (BTRFS_I(dir)->flags & BTRFS_INODE_NODATACOW))
+		    (BTRFS_I(dir)->flags & BTRFS_INODE_NODATACOW)) {
 			BTRFS_I(inode)->flags |= BTRFS_INODE_NODATACOW;
+			BTRFS_I(inode)->flags |= BTRFS_INODE_NODATASUM;
+		}
 	}
 
 	insert_inode_hash(inode);


> 
> david
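
(For context, a sketch of the usecase David describes - marking a directory
NOCOW so files created inside it inherit the flag while still zero-sized;
the paths here are made up:)

  mkdir /data/vm-images
  chattr +C /data/vm-images          # +C sets NODATACOW on the directory
  touch /data/vm-images/disk.img     # a new file inherits NOCOW at creation
  lsattr /data/vm-images/disk.img    # should show the 'C' attribute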



* Re:
  2013-04-27  9:42 Peter Würtz
@ 2013-05-02  3:00 ` Lin Ming
  0 siblings, 0 replies; 26+ messages in thread
From: Lin Ming @ 2013-05-02  3:00 UTC (permalink / raw)
  To: Peter Würtz; +Cc: linux-btrfs

On Sat, Apr 27, 2013 at 5:42 PM, Peter Würtz <pwuertz@gmail.com> wrote:
> Hi!
>
> I recently had some trouble with my root and home btrfs filesystems.
> My system (Ubuntu 13.04, Kernel 3.8) started freezing when copying
> larger numbers of files around (hard freeze, no logs about what
> happened).
>
> At some point booting up wasn't possible anymore due to a kernel bug
> while mounting the home fs. Btrfsck built from git wasn't able to
> repair the fs and segfaulted. Btrfs-zero-log was able to make home

Hi,

Here is the patch to fix the segfault.
https://patchwork.kernel.org/patch/2509881/

Could you also report the bug on bugzilla.kernel.org?
http://marc.info/?l=linux-btrfs&m=136733749808576&w=2

Lin Ming


* Re:
  2014-05-02 10:20 ` Duncan
@ 2014-05-02 17:48   ` Jaap Pieroen
  2014-05-03 13:31     ` Re: Frank Holton
  0 siblings, 1 reply; 26+ messages in thread
From: Jaap Pieroen @ 2014-05-02 17:48 UTC (permalink / raw)
  To: linux-btrfs

Duncan <1i5t5.duncan <at> cox.net> writes:

> 
> To those that know the details, this tells the story.
> 
> Btrfs raid5/6 modes are not yet code-complete, and scrub is one of the 
> incomplete bits.  btrfs scrub doesn't know how to deal with raid5/6 
> properly just yet.
> 
> While the operational bits of raid5/6 support are there, parity is 
> calculated and written, scrub, and recovery from a lost device, are not 
> yet code complete.  Thus, it's effectively a slower, lower capacity raid0 
> without scrub support at this point, except that when the code is 
> complete, you'll get an automatic "free" upgrade to full raid5 or raid6, 
> because the operational bits have been working since they were 
> introduced, just the recovery and scrub bits were bad, making it 
> effectively a raid0 in reliability terms, lose one and you've lost them 
> all.
> 
> That's the big picture anyway.  Marc Merlin recently did quite a bit of 
> raid5/6 testing and there's a page on the wiki now with what he found.  
> Additionally, I saw a scrub support for raid5/6 modes patch on the list 
> recently, but while it may be in integration, I believe it's too new to 
> have reached release yet.
> 
> Wiki, for memory or bookmark: https://btrfs.wiki.kernel.org
> 
> Direct user documentation link for bookmark:
> 
> https://btrfs.wiki.kernel.org/index.php/Main_Page#Guides_and_usage_information
> 
> The raid5/6 page (which I didn't otherwise see conveniently linked, I dug 
> it out of the recent changes list since I knew it was there from on-list 
> discussion):
> 
> https://btrfs.wiki.kernel.org/index.php/RAID56
> 
>  <at>  Marc or Hugo or someone with a wiki account:  Can this be more visibly 
> linked from the user-docs contents, added to the user docs category list, 
> and probably linked from at least the multiple devices and (for now) the 
> gotchas pages?
> 

So raid5 is much more useless than I assumed. I read Marc's blog and
figured that btrfs was ready enough.

I'm really in trouble now. I tried to get rid of raid5 by doing a convert
balance to raid1. But of course this triggered the same issue. And now
I have a dead system, because the first thing btrfs does after mounting
is continue the balance, which will crash the system and send me into
a vicious loop.

- How can I stop btrfs from continuing the balance?
- How can I salvage this situation and convert to raid1?

Unfortunately I have few spare drives left. Not enough to contain
4.7TiB of data.. :(
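
(For anyone hitting the same loop: assuming a kernel with the skip_balance
mount option, a sketch of how to break out of it - not verified on this
exact setup:)

  mount -o skip_balance /dev/sdX /mnt   # mount without resuming the balance
  btrfs balance cancel /mnt             # then cancel the paused operation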






* Re:
  2014-05-02 17:48   ` Jaap Pieroen
@ 2014-05-03 13:31     ` Frank Holton
  0 siblings, 0 replies; 26+ messages in thread
From: Frank Holton @ 2014-05-03 13:31 UTC (permalink / raw)
  To: Jaap Pieroen; +Cc: linux-btrfs

Hi Jaap,

This patch http://www.spinics.net/lists/linux-btrfs/msg33025.html made
it into 3.15 RC2 so if you're willing to build your own RC kernel you
may have better luck with scrub in 3.15. The patch only scrubs the
data blocks in RAID5/6 so hopefully your parity blocks are intact. I'm
not sure if it would help any but it may be worth a try.
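
(If you try it, a minimal sketch - the -B/-d flags are from btrfs-progs of
that era and keep the scrub in the foreground with per-device stats:)

  btrfs scrub start -Bd /mnt
  btrfs scrub status /mnt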

On Fri, May 2, 2014 at 1:48 PM, Jaap Pieroen <jaap@pieroen.nl> wrote:
> Duncan <1i5t5.duncan <at> cox.net> writes:
>
>>
>> To those that know the details, this tells the story.
>>
>> Btrfs raid5/6 modes are not yet code-complete, and scrub is one of the
>> incomplete bits.  btrfs scrub doesn't know how to deal with raid5/6
>> properly just yet.
>>
>> While the operational bits of raid5/6 support are there, parity is
>> calculated and written, scrub, and recovery from a lost device, are not
>> yet code complete.  Thus, it's effectively a slower, lower capacity raid0
>> without scrub support at this point, except that when the code is
>> complete, you'll get an automatic "free" upgrade to full raid5 or raid6,
>> because the operational bits have been working since they were
>> introduced, just the recovery and scrub bits were bad, making it
>> effectively a raid0 in reliability terms, lose one and you've lost them
>> all.
>>
>> That's the big picture anyway.  Marc Merlin recently did quite a bit of
>> raid5/6 testing and there's a page on the wiki now with what he found.
>> Additionally, I saw a scrub support for raid5/6 modes patch on the list
>> recently, but while it may be in integration, I believe it's too new to
>> have reached release yet.
>>
>> Wiki, for memory or bookmark: https://btrfs.wiki.kernel.org
>>
>> Direct user documentation link for bookmark:
>>
>> https://btrfs.wiki.kernel.org/index.php/Main_Page#Guides_and_usage_information
>>
>> The raid5/6 page (which I didn't otherwise see conveniently linked, I dug
>> it out of the recent changes list since I knew it was there from on-list
>> discussion):
>>
>> https://btrfs.wiki.kernel.org/index.php/RAID56
>>
>>  <at>  Marc or Hugo or someone with a wiki account:  Can this be more visibly
>> linked from the user-docs contents, added to the user docs category list,
>> and probably linked from at least the multiple devices and (for now) the
>> gotchas pages?
>>
>
> So raid5 is much more useless than I assumed. I read Marc's blog and
> figured that btrfs was ready enough.
>
> I'm really in trouble now. I tried to get rid of raid5 by doing a convert
> balance to raid1. But of course this triggered the same issue. And now
> I have a dead system, because the first thing btrfs does after mounting
> is continue the balance, which will crash the system and send me into
> a vicious loop.
>
> - How can I stop btrfs from continuing the balance?
> - How can I salvage this situation and convert to raid1?
>
> Unfortunately I have few spare drives left. Not enough to contain
> 4.7TiB of data.. :(
>
>
>
>


* (no subject)
@ 2016-09-01  2:02 Fennec Fox
  2016-09-01  3:10 ` Jeff Mahoney
  2016-09-01  7:44 ` your mail M G Berberich
  0 siblings, 2 replies; 26+ messages in thread
From: Fennec Fox @ 2016-09-01  2:02 UTC (permalink / raw)
  To: linux-btrfs

Linux Titanium 4.7.2-1-MANJARO #1 SMP PREEMPT Sun Aug 21 15:04:37 UTC
2016 x86_64 GNU/Linux
btrfs-progs v4.7

Data, single: total=30.01GiB, used=18.95GiB
System, single: total=4.00MiB, used=16.00KiB
Metadata, single: total=1.01GiB, used=422.17MiB
GlobalReserve, single: total=144.00MiB, used=0.00B

{02:50} Wed Aug 31
[fennectech@Titanium ~]$  sudo fstrim -v /
[sudo] password for fennectech:
Sorry, try again.
[sudo] password for fennectech:
/: 99.8 GiB (107167244288 bytes) trimmed

{03:08} Wed Aug 31
[fennectech@Titanium ~]$  sudo fstrim -v /
[sudo] password for fennectech:
/: 99.9 GiB (107262181376 bytes) trimmed

  I ran these commands minutes after each other, and each time it is
trimming the entire free space

Anyone else seen this?   the filesystem is the root FS and is compressed

-- 
Fennec


* Re:
  2016-09-01  2:02 Fennec Fox
@ 2016-09-01  3:10 ` Jeff Mahoney
  2016-09-01 19:32   ` Re: Kai Krakow
  2016-09-01  7:44 ` your mail M G Berberich
  1 sibling, 1 reply; 26+ messages in thread
From: Jeff Mahoney @ 2016-09-01  3:10 UTC (permalink / raw)
  To: Fennec Fox, linux-btrfs



On 8/31/16 10:02 PM, Fennec Fox wrote:
> Linux Titanium 4.7.2-1-MANJARO #1 SMP PREEMPT Sun Aug 21 15:04:37 UTC
> 2016 x86_64 GNU/Linux
> btrfs-progs v4.7
> 
> Data, single: total=30.01GiB, used=18.95GiB
> System, single: total=4.00MiB, used=16.00KiB
> Metadata, single: total=1.01GiB, used=422.17MiB
> GlobalReserve, single: total=144.00MiB, used=0.00B
> 
> {02:50} Wed Aug 31
> [fennectech@Titanium ~]$  sudo fstrim -v /
> [sudo] password for fennectech:
> Sorry, try again.
> [sudo] password for fennectech:
> /: 99.8 GiB (107167244288 bytes) trimmed
> 
> {03:08} Wed Aug 31
> [fennectech@Titanium ~]$  sudo fstrim -v /
> [sudo] password for fennectech:
> /: 99.9 GiB (107262181376 bytes) trimmed
> 
>   I ran these commands minutes after each other, and each time it is
> trimming the entire free space
> 
> Anyone else seen this?   the filesystem is the root FS and is compressed
> 

Yes.  It's working as intended.  We don't track what space has already
been trimmed anywhere, so it trims all unallocated space.
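
(A quick way to see the two numbers side by side, assuming a btrfs-progs new
enough to have "filesystem usage":)

  btrfs filesystem usage / | grep -i unallocated   # space fstrim will target
  fstrim -v /                                      # should report roughly that much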

-Jeff

-- 
Jeff Mahoney
SUSE Labs




* Re: your mail
  2016-09-01  2:02 Fennec Fox
  2016-09-01  3:10 ` Jeff Mahoney
@ 2016-09-01  7:44 ` M G Berberich
  2016-09-01 11:17   ` Austin S. Hemmelgarn
  1 sibling, 1 reply; 26+ messages in thread
From: M G Berberich @ 2016-09-01  7:44 UTC (permalink / raw)
  To: linux-btrfs

On Wednesday, 31 August, Fennec Fox wrote:
> Linux Titanium 4.7.2-1-MANJARO #1 SMP PREEMPT Sun Aug 21 15:04:37 UTC
> 2016 x86_64 GNU/Linux
> btrfs-progs v4.7
> 
> Data, single: total=30.01GiB, used=18.95GiB
> System, single: total=4.00MiB, used=16.00KiB
> Metadata, single: total=1.01GiB, used=422.17MiB
> GlobalReserve, single: total=144.00MiB, used=0.00B
> 
> {02:50} Wed Aug 31
> [fennectech@Titanium ~]$  sudo fstrim -v /
> [sudo] password for fennectech:
> Sorry, try again.
> [sudo] password for fennectech:
> /: 99.8 GiB (107167244288 bytes) trimmed
> 
> {03:08} Wed Aug 31
> [fennectech@Titanium ~]$  sudo fstrim -v /
> [sudo] password for fennectech:
> /: 99.9 GiB (107262181376 bytes) trimmed
> 
>   I ran these commands minutes after each other, and each time it is
> trimming the entire free space
> 
> Anyone else seen this?   the filesystem is the root FS and is compressed

You should be very happy that it is trimming at all. Typical situation
on a used btrfs is

  # fstrim -v /
  /: 0 B (0 bytes) trimmed

even if there is 33G of unused space on the fs:

  # df -h /
  Filesystem      Size  Used Avail Use% Mounted on
  /dev/sda2        96G   61G   33G  66% /


	Regards,
	bmg

-- 
"It makes no difference at all what gets    | M G Berberich
 decided today: I'm against it anyway!"     | mail@m-berberich.de
(SPD city councillor Kurt Schindler; Regensburg)


* Re: your mail
  2016-09-01  7:44 ` your mail M G Berberich
@ 2016-09-01 11:17   ` Austin S. Hemmelgarn
  2016-09-01 16:44     ` Kyle Gates
  2016-09-01 21:15     ` M G Berberich
  0 siblings, 2 replies; 26+ messages in thread
From: Austin S. Hemmelgarn @ 2016-09-01 11:17 UTC (permalink / raw)
  To: linux-btrfs

On 2016-09-01 03:44, M G Berberich wrote:
> On Wednesday, 31 August, Fennec Fox wrote:
>> Linux Titanium 4.7.2-1-MANJARO #1 SMP PREEMPT Sun Aug 21 15:04:37 UTC
>> 2016 x86_64 GNU/Linux
>> btrfs-progs v4.7
>>
>> Data, single: total=30.01GiB, used=18.95GiB
>> System, single: total=4.00MiB, used=16.00KiB
>> Metadata, single: total=1.01GiB, used=422.17MiB
>> GlobalReserve, single: total=144.00MiB, used=0.00B
>>
>> {02:50} Wed Aug 31
>> [fennectech@Titanium ~]$  sudo fstrim -v /
>> [sudo] password for fennectech:
>> Sorry, try again.
>> [sudo] password for fennectech:
>> /: 99.8 GiB (107167244288 bytes) trimmed
>>
>> {03:08} Wed Aug 31
>> [fennectech@Titanium ~]$  sudo fstrim -v /
>> [sudo] password for fennectech:
>> /: 99.9 GiB (107262181376 bytes) trimmed
>>
>>   I ran these commands minutes after each other, and each time it is
>> trimming the entire free space
>>
>> Anyone else seen this?   the filesystem is the root FS and is compressed
>
> You should be very happy that it is trimming at all. Typical situation
> on a used btrfs is
>
>   # fstrim -v /
>   /: 0 B (0 bytes) trimmed
>
> even if there is 33G of unused space on the fs:
>
>   # df -h /
>   Filesystem      Size  Used Avail Use% Mounted on
>   /dev/sda2        96G   61G   33G  66% /
>
I think you're using an old kernel; this has been working since at least 
4.5, but was broken in some older releases.



* Re: your mail
  2016-09-01 11:17   ` Austin S. Hemmelgarn
@ 2016-09-01 16:44     ` Kyle Gates
  2016-09-01 17:06       ` Austin S. Hemmelgarn
  2016-09-02  1:51       ` Jeff Mahoney
  2016-09-01 21:15     ` M G Berberich
  1 sibling, 2 replies; 26+ messages in thread
From: Kyle Gates @ 2016-09-01 16:44 UTC (permalink / raw)
  To: Austin S. Hemmelgarn, linux-btrfs@vger.kernel.org

> -----Original Message-----
> From: linux-btrfs-owner@vger.kernel.org [mailto:linux-btrfs-
> owner@vger.kernel.org] On Behalf Of Austin S. Hemmelgarn
> Sent: Thursday, September 01, 2016 6:18 AM
> To: linux-btrfs@vger.kernel.org
> Subject: Re: your mail
> 
> On 2016-09-01 03:44, M G Berberich wrote:
> > On Wednesday, 31 August, Fennec Fox wrote:
> >> Linux Titanium 4.7.2-1-MANJARO #1 SMP PREEMPT Sun Aug 21 15:04:37 UTC
> >> 2016 x86_64 GNU/Linux
> >> btrfs-progs v4.7
> >>
> >> Data, single: total=30.01GiB, used=18.95GiB
> >> System, single: total=4.00MiB, used=16.00KiB
> >> Metadata, single: total=1.01GiB, used=422.17MiB
> >> GlobalReserve, single: total=144.00MiB, used=0.00B
> >>
> >> {02:50} Wed Aug 31
> >> [fennectech@Titanium ~]$  sudo fstrim -v /
> >> [sudo] password for fennectech:
> >> Sorry, try again.
> >> [sudo] password for fennectech:
> >> /: 99.8 GiB (107167244288 bytes) trimmed
> >>
> >> {03:08} Wed Aug 31
> >> [fennectech@Titanium ~]$  sudo fstrim -v /
> >> [sudo] password for fennectech:
> >> /: 99.9 GiB (107262181376 bytes) trimmed
> >>
> >>   I ran these commands minutes after each other, and each time it is
> >> trimming the entire free space
> >>
> >> Anyone else seen this?   the filesystem is the root FS and is compressed
> >
> > You should be very happy that it is trimming at all. Typical situation
> > on a used btrfs is
> >
> >   # fstrim -v /
> >   /: 0 B (0 bytes) trimmed
> >
> > even if there is 33G of unused space on the fs:
> >
> >   # df -h /
> >   Filesystem      Size  Used Avail Use% Mounted on
> >   /dev/sda2        96G   61G   33G  66% /
> >
> I think you're using an old kernel; this has been working since at least 4.5, but
> was broken in some older releases.

M G is running 4.7.2
The problem is that all space has been allocated by block groups and fstrim will only work on unallocated space.

On my system all space has been allocated on my root filesystem so 0 B are trimmed:
kyle@home:~$  uname -a
Linux home 4.7.2-040702-generic #201608201334 SMP Sat Aug 20 17:37:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
kyle@home:~$  sudo btrfs fi show /
Label: 'root'  uuid: 6af4ebde-81ef-428a-a45f-0e8480ad969a
        Total devices 2 FS bytes used 13.44GiB
        devid   14 size 20.00GiB used 20.00GiB path /dev/sde2
        devid   15 size 20.00GiB used 20.00GiB path /dev/sdb2
kyle@home:~$  btrfs fi df /
Data, RAID1: total=18.97GiB, used=12.98GiB
System, RAID1: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=1.00GiB, used=473.83MiB
GlobalReserve, single: total=160.00MiB, used=0.00B
kyle@home:~$  sudo fstrim -v /
[sudo] password for kyle:
/: 0 B (0 bytes) trimmed

But I do have space trimmed on my home filesystem:
kyle@home:~$  sudo btrfs fi show /home/
Label: 'home'  uuid: b75fb450-4a28-434a-a483-e784940d463a
        Total devices 2 FS bytes used 18.63GiB
        devid   11 size 64.00GiB used 29.03GiB path /dev/sde3
        devid   12 size 64.00GiB used 29.03GiB path /dev/sdb3
kyle@home:~$  btrfs fi df /home/
Data, RAID1: total=27.00GiB, used=18.46GiB
System, RAID1: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=2.00GiB, used=168.62MiB
GlobalReserve, single: total=64.00MiB, used=0.00B
kyle@home:~$  sudo fstrim -v /home
/home: 70 GiB (75092721664 bytes) trimmed
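
(A common way to give fstrim something to trim on a fully-allocated
filesystem is to compact nearly-empty block groups back into unallocated
space - hedged, since a balance on a busy filesystem takes time:)

  btrfs balance start -dusage=10 /   # rewrite data block groups <=10% full
  btrfs balance start -musage=10 /   # same for metadata
  fstrim -v /                        # now there is unallocated space to trim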


* Re: your mail
  2016-09-01 16:44     ` Kyle Gates
@ 2016-09-01 17:06       ` Austin S. Hemmelgarn
  2016-09-02  1:51       ` Jeff Mahoney
  1 sibling, 0 replies; 26+ messages in thread
From: Austin S. Hemmelgarn @ 2016-09-01 17:06 UTC (permalink / raw)
  To: Kyle Gates, linux-btrfs@vger.kernel.org

On 2016-09-01 12:44, Kyle Gates wrote:
>> -----Original Message-----
>> From: linux-btrfs-owner@vger.kernel.org [mailto:linux-btrfs-
>> owner@vger.kernel.org] On Behalf Of Austin S. Hemmelgarn
>> Sent: Thursday, September 01, 2016 6:18 AM
>> To: linux-btrfs@vger.kernel.org
>> Subject: Re: your mail
>>
>> On 2016-09-01 03:44, M G Berberich wrote:
>>> On Wednesday, 31 August, Fennec Fox wrote:
>>>> Linux Titanium 4.7.2-1-MANJARO #1 SMP PREEMPT Sun Aug 21 15:04:37 UTC
>>>> 2016 x86_64 GNU/Linux
>>>> btrfs-progs v4.7
>>>>
>>>> Data, single: total=30.01GiB, used=18.95GiB
>>>> System, single: total=4.00MiB, used=16.00KiB
>>>> Metadata, single: total=1.01GiB, used=422.17MiB
>>>> GlobalReserve, single: total=144.00MiB, used=0.00B
>>>>
>>>> {02:50} Wed Aug 31
>>>> [fennectech@Titanium ~]$  sudo fstrim -v /
>>>> [sudo] password for fennectech:
>>>> Sorry, try again.
>>>> [sudo] password for fennectech:
>>>> /: 99.8 GiB (107167244288 bytes) trimmed
>>>>
>>>> {03:08} Wed Aug 31
>>>> [fennectech@Titanium ~]$  sudo fstrim -v /
>>>> [sudo] password for fennectech:
>>>> /: 99.9 GiB (107262181376 bytes) trimmed
>>>>
>>>>   I ran these commands minutes after each other, and each time it is
>>>> trimming the entire free space
>>>>
>>>> Anyone else seen this?   the filesystem is the root FS and is compressed
>>>
>>> You should be very happy that it is trimming at all. Typical situation
>>> on a used btrfs is
>>>
>>>   # fstrim -v /
>>>   /: 0 B (0 bytes) trimmed
>>>
>>> even if there is 33G of unused space on the fs:
>>>
>>>   # df -h /
>>>   Filesystem      Size  Used Avail Use% Mounted on
>>>   /dev/sda2        96G   61G   33G  66% /
>>>
>> I think you're using an old kernel; this has been working since at least 4.5, but
>> was broken in some older releases.
>
> M G is running 4.7.2
> The problem is that all space has been allocated by block groups and fstrim will only work on unallocated space.
Yep, that would do so also, and this behavior really could be much 
better documented.


* Re:
  2016-09-01  3:10 ` Jeff Mahoney
@ 2016-09-01 19:32   ` Kai Krakow
  0 siblings, 0 replies; 26+ messages in thread
From: Kai Krakow @ 2016-09-01 19:32 UTC (permalink / raw)
  To: linux-btrfs


On Wed, 31 Aug 2016 23:10:13 -0400,
Jeff Mahoney <jeffm@suse.com> wrote:

> On 8/31/16 10:02 PM, Fennec Fox wrote:
> > Linux Titanium 4.7.2-1-MANJARO #1 SMP PREEMPT Sun Aug 21 15:04:37
> > UTC 2016 x86_64 GNU/Linux
> > btrfs-progs v4.7
> > 
> > Data, single: total=30.01GiB, used=18.95GiB
> > System, single: total=4.00MiB, used=16.00KiB
> > Metadata, single: total=1.01GiB, used=422.17MiB
> > GlobalReserve, single: total=144.00MiB, used=0.00B
> > 
> > {02:50} Wed Aug 31
> > [fennectech@Titanium ~]$  sudo fstrim -v /
> > [sudo] password for fennectech:
> > Sorry, try again.
> > [sudo] password for fennectech:
> > /: 99.8 GiB (107167244288 bytes) trimmed
> > 
> > {03:08} Wed Aug 31
> > [fennectech@Titanium ~]$  sudo fstrim -v /
> > [sudo] password for fennectech:
> > /: 99.9 GiB (107262181376 bytes) trimmed
> > 
> >   I ran these commands minutes after each other, and each time it is
> > trimming the entire free space
> > 
> > Anyone else seen this?   the filesystem is the root FS and is
> > compressed 
> 
> Yes.  It's working as intended.  We don't track what space has already
> been trimmed anywhere, so it trims all unallocated space.

I wonder, does it work in a multi-device scenario, when btrfs pools
multiple devices together?

I ask because fstrim seems to always report the estimated free space,
not the raw free space, as trimmed.

OTOH, this may simply be because btrfs reports 1.08 TiB unallocated
while fstrim reports 1.2 TB trimmed (TB, not TiB) - which, when
"converted" (1.08 * 1024^4 / 1000^4 ~= 1.19), rounds neatly to 1.2.
Coincidentally, the estimated free space is 1.19 TiB for me (which would
also round to 1.2), and these numbers, being in the TB range, won't
change very fast for me.
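
(The conversion, spelled out with bc:)

  echo '1.08 * 1024^4 / 1000^4' | bc -l   # -> 1.1874..., reported as "1.2 TB"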


-- 
Regards,
Kai

Replies to list-only preferred.



* Re: your mail
  2016-09-01 11:17   ` Austin S. Hemmelgarn
  2016-09-01 16:44     ` Kyle Gates
@ 2016-09-01 21:15     ` M G Berberich
  1 sibling, 0 replies; 26+ messages in thread
From: M G Berberich @ 2016-09-01 21:15 UTC (permalink / raw)
  To: linux-btrfs

On Thursday, 1 September, Austin S. Hemmelgarn wrote:
> On 2016-09-01 03:44, M G Berberich wrote:
> > On Wednesday, 31 August, Fennec Fox wrote:
> > > Linux Titanium 4.7.2-1-MANJARO #1 SMP PREEMPT Sun Aug 21 15:04:37 UTC
> > > 2016 x86_64 GNU/Linux
> > > btrfs-progs v4.7
> > > 
> > > Data, single: total=30.01GiB, used=18.95GiB
> > > System, single: total=4.00MiB, used=16.00KiB
> > > Metadata, single: total=1.01GiB, used=422.17MiB
> > > GlobalReserve, single: total=144.00MiB, used=0.00B
> > > 
> > > {02:50} Wed Aug 31
> > > [fennectech@Titanium ~]$  sudo fstrim -v /
> > > [sudo] password for fennectech:
> > > Sorry, try again.
> > > [sudo] password for fennectech:
> > > /: 99.8 GiB (107167244288 bytes) trimmed
> > > 
> > > {03:08} Wed Aug 31
> > > [fennectech@Titanium ~]$  sudo fstrim -v /
> > > [sudo] password for fennectech:
> > > /: 99.9 GiB (107262181376 bytes) trimmed
> > > 
> > >   I ran these commands minutes after each other, and each time it is
> > > trimming the entire free space
> > > 
> > > Anyone else seen this?   the filesystem is the root FS and is compressed
> > 
> > You should be very happy that it is trimming at all. Typical situation
> > on a used btrfs is
> > 
> >   # fstrim -v /
> >   /: 0 B (0 bytes) trimmed
> > 
> > even if there is 33G of unused space on the fs:
> > 
> >   # df -h /
> >   Filesystem      Size  Used Avail Use% Mounted on
> >   /dev/sda2        96G   61G   33G  66% /
> > 
> I think you're using an old kernel; this has been working since at least
> 4.5, but was broken in some older releases.

No, I’m always running a fairly up-to-date vanilla kernel on this
system. At the moment it’s:

  Linux hermione 4.7.2 #4 SMP PREEMPT Wed Aug 24 17:12:03 CEST 2016 x86_64 GNU/Linux

I’m running kernels ≥ 4.5.0 since about April and I first reported this
problem at 7 Jul 2016 (Subject: fstrim problem/bug) probably with a
4.6.3 kernel.

	Regards,
	bmg

-- 
"It makes no difference at all what gets    | M G Berberich
 decided today: I'm against it anyway!"     | mail@m-berberich.de
(SPD city councillor Kurt Schindler; Regensburg)


* Re: your mail
  2016-09-01 16:44     ` Kyle Gates
  2016-09-01 17:06       ` Austin S. Hemmelgarn
@ 2016-09-02  1:51       ` Jeff Mahoney
  1 sibling, 0 replies; 26+ messages in thread
From: Jeff Mahoney @ 2016-09-02  1:51 UTC (permalink / raw)
  To: Kyle Gates, Austin S. Hemmelgarn, linux-btrfs@vger.kernel.org



On 9/1/16 12:44 PM, Kyle Gates wrote:
>> -----Original Message-----
>> From: linux-btrfs-owner@vger.kernel.org [mailto:linux-btrfs-
>> owner@vger.kernel.org] On Behalf Of Austin S. Hemmelgarn
>> Sent: Thursday, September 01, 2016 6:18 AM
>> To: linux-btrfs@vger.kernel.org
>> Subject: Re: your mail
>>
>> On 2016-09-01 03:44, M G Berberich wrote:
>>> On Wednesday, 31 August, Fennec Fox wrote:
>>>> Linux Titanium 4.7.2-1-MANJARO #1 SMP PREEMPT Sun Aug 21 15:04:37 UTC
>>>> 2016 x86_64 GNU/Linux
>>>> btrfs-progs v4.7
>>>>
>>>> Data, single: total=30.01GiB, used=18.95GiB
>>>> System, single: total=4.00MiB, used=16.00KiB
>>>> Metadata, single: total=1.01GiB, used=422.17MiB
>>>> GlobalReserve, single: total=144.00MiB, used=0.00B
>>>>
>>>> {02:50} Wed Aug 31
>>>> [fennectech@Titanium ~]$  sudo fstrim -v /
>>>> [sudo] password for fennectech:
>>>> Sorry, try again.
>>>> [sudo] password for fennectech:
>>>> /: 99.8 GiB (107167244288 bytes) trimmed
>>>>
>>>> {03:08} Wed Aug 31
>>>> [fennectech@Titanium ~]$  sudo fstrim -v /
>>>> [sudo] password for fennectech:
>>>> /: 99.9 GiB (107262181376 bytes) trimmed
>>>>
>>>>   I ran these commands minutes after each other, and each time it is
>>>> trimming the entire free space
>>>>
>>>> Anyone else seen this?   the filesystem is the root FS and is compressed
>>>
>>> You should be very happy that it is trimming at all. Typical situation
>>> on a used btrfs is
>>>
>>>   # fstrim -v /
>>>   /: 0 B (0 bytes) trimmed
>>>
>>> even if there is 33G of unused space on the fs:
>>>
>>>   # df -h /
>>>   Filesystem      Size  Used Avail Use% Mounted on
>>>   /dev/sda2        96G   61G   33G  66% /
>>>
>> I think you're using an old kernel; this has been working since at least 4.5, but
>> was broken in some older releases.
> 
> M G is running 4.7.2
> The problem is that all space has been allocated by block groups and fstrim will only work on unallocated space.

Historically it was the opposite problem.  My fixes made it so it would
work on unallocated space.  We probably need some debugging to see why
it's not discarding extents that are allocated as block groups but
unallocated within them.

-Jeff

> On my system all space has been allocated on my root filesystem so 0 B are trimmed:
> kyle@home:~$  uname -a
> Linux home 4.7.2-040702-generic #201608201334 SMP Sat Aug 20 17:37:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
> kyle@home:~$  sudo btrfs fi show /
> Label: 'root'  uuid: 6af4ebde-81ef-428a-a45f-0e8480ad969a
>         Total devices 2 FS bytes used 13.44GiB
>         devid   14 size 20.00GiB used 20.00GiB path /dev/sde2
>         devid   15 size 20.00GiB used 20.00GiB path /dev/sdb2
> kyle@home:~$  btrfs fi df /
> Data, RAID1: total=18.97GiB, used=12.98GiB
> System, RAID1: total=32.00MiB, used=16.00KiB
> Metadata, RAID1: total=1.00GiB, used=473.83MiB
> GlobalReserve, single: total=160.00MiB, used=0.00B
> kyle@home:~$  sudo fstrim -v /
> [sudo] password for kyle:
> /: 0 B (0 bytes) trimmed
> 
> But I do have space trimmed on my home filesystem:
> kyle@home:~$  sudo btrfs fi show /home/
> Label: 'home'  uuid: b75fb450-4a28-434a-a483-e784940d463a
>         Total devices 2 FS bytes used 18.63GiB
>         devid   11 size 64.00GiB used 29.03GiB path /dev/sde3
>         devid   12 size 64.00GiB used 29.03GiB path /dev/sdb3
> kyle@home:~$  btrfs fi df /home/
> Data, RAID1: total=27.00GiB, used=18.46GiB
> System, RAID1: total=32.00MiB, used=16.00KiB
> Metadata, RAID1: total=2.00GiB, used=168.62MiB
> GlobalReserve, single: total=64.00MiB, used=0.00B
> kyle@home:~$  sudo fstrim -v /home
> /home: 70 GiB (75092721664 bytes) trimmed
> 


-- 
Jeff Mahoney
SUSE Labs




* Re:
  2016-11-09 17:55 bepi
@ 2016-11-10  6:57 ` Alex Powell
  2016-11-10 13:00   ` Re: bepi
  0 siblings, 1 reply; 26+ messages in thread
From: Alex Powell @ 2016-11-10  6:57 UTC (permalink / raw)
  To: bepi; +Cc: linux-btrfs

Hi,
It would be good, but perhaps each task should be run via its own cron job,
instead of having a script running all the time or one script via a single
cron job.

Working in an enterprise environment for a major bank, we quickly
learned that these sorts of daily tasks should be split up.

Kind Regards,
Alex

On Thu, Nov 10, 2016 at 4:25 AM,  <bepi@adria.it> wrote:
> Hi.
>
> I'm making a script for managing btrfs.
>
> It performs scrubs, and creates and sends backup snapshots (or a copy of
> the current state of the data), even to a remote system.
>
> The script is designed to:
> - Be easy to use:
>   - The preparation is carried out automatically.
>   - Mounted subvolumes are autodetected.
> - Be safe and robust:
>   - It checks that another btrfs management run hasn't already started.
>   - Subvolumes for created and received snapshots are mounted and accessible
>     only for the time necessary to perform the requested operation.
>   - It verifies that snapshot creation and sending have completed fully.
>   - Snapshots are numbered progressively, to identify the latest snapshot
>     with certainty.
>
> Commands are also available to view the list of snapshots present and to
> delete snapshots.
>
> For example:
>
> btrsfManage SCRUB /
> btrsfManage SNAPSHOT /
> btrsfManage SEND / /dev/sda1
> btrsfManage SEND / root@gdb.exnet.it/dev/sda1
> btrsfManage SNAPLIST /dev/sda1
> btrsfManage SNAPDEL /dev/sda1 "root-2016-11*"
>
> Are you interested?
>
> Gdb
>
>


* Re:
  2016-11-10  6:57 ` Alex Powell
@ 2016-11-10 13:00   ` bepi
  0 siblings, 0 replies; 26+ messages in thread
From: bepi @ 2016-11-10 13:00 UTC (permalink / raw)
  To: Alex Powell; +Cc: linux-btrfs

Hi.

P.S. Sorry for sending this twice and for the blank email subject.


Yes.
The various commands are designed to be used separately, and to be launched
both as cron jobs and manually.

For example, you can create a series of snapshots

  btrsfManage SNAPSHOT /

and send the new snapshots (as an incremental stream)

  btrsfManage SEND / /dev/sda1

from cron jobs or manually; it makes no difference.
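
For example, hypothetical crontab entries (the schedule and install path
here are made up):

  0 3 * * 0  /usr/local/bin/btrsfManage SCRUB /
  0 4 * * *  /usr/local/bin/btrsfManage SNAPSHOT /
  30 4 * * * /usr/local/bin/btrsfManage SEND / /dev/sda1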


Best regards.

Gdb

Alex Powell <alexj.powellalt@googlemail.com> writes:

> Hi,
> It would be good, but perhaps each task should be run via its own cron job,
> instead of having a script running all the time or one script via a single
> cron job.
> 
> Working in an enterprise environment for a major bank, we quickly
> learned that these sorts of daily tasks should be split up.
> 
> Kind Regards,
> Alex
> 
> On Thu, Nov 10, 2016 at 4:25 AM,  <bepi@adria.it> wrote:
> > Hi.
> >
> > I'm making a script for managing btrfs.
> >
> > It performs scrubs, and creates and sends backup snapshots (or a copy of
> > the current state of the data), even to a remote system.
> >
> > The script is designed to:
> > - Be easy to use:
> >   - The preparation is carried out automatically.
> >   - Mounted subvolumes are autodetected.
> > - Be safe and robust:
> >   - It checks that another btrfs management run hasn't already started.
> >   - Subvolumes for created and received snapshots are mounted and accessible
> >     only for the time necessary to perform the requested operation.
> >   - It verifies that snapshot creation and sending have completed fully.
> >   - Snapshots are numbered progressively, to identify the latest snapshot
> >     with certainty.
> >
> > Commands are also available to view the list of snapshots present and to
> > delete snapshots.
> >
> > For example:
> >
> > btrsfManage SCRUB /
> > btrsfManage SNAPSHOT /
> > btrsfManage SEND / /dev/sda1
> > btrsfManage SEND / root@gdb.exnet.it/dev/sda1
> > btrsfManage SNAPLIST /dev/sda1
> > btrsfManage SNAPDEL /dev/sda1 "root-2016-11*"
> >
> > Are you interested?
> >
> > Gdb
> >
> >
> 







* RE:
@ 2017-02-23 15:09 Qin's Yanjun
  0 siblings, 0 replies; 26+ messages in thread
From: Qin's Yanjun @ 2017-02-23 15:09 UTC (permalink / raw)



How are you today, and your family? I require your attention and honest
co-operation about some issues which I really want to discuss with you.
Looking forward to reading from you soon.

Qin's


______________________________

Sky Silk, http://aknet.kz



* Re:
  2017-03-19 15:00 Ilan Schwarts
@ 2017-03-23 17:12 ` Jeff Mahoney
  0 siblings, 0 replies; 26+ messages in thread
From: Jeff Mahoney @ 2017-03-23 17:12 UTC (permalink / raw)
  To: Ilan Schwarts, linux-btrfs



On 3/19/17 11:00 AM, Ilan Schwarts wrote:
> Hi,
> sorry if this is a newbie question. I am a newbie.
> 
> In my kernel driver, I get the device id by converting a struct inode
> to a struct btrfs_inode, using the code:
> struct btrfs_inode *btrfsInode;
> btrfsInode = BTRFS_I(inode);
> 
> I usually download the kernel-headers rpm package, but this is not enough:
> it fails to find the btrfs header files.
> 
> I had to download them outside of any rpm package and declare:
> #include "/data/kernel/linux-4.1.21-x86_64/fs/btrfs/ctree.h"
> #include "/data/kernel/linux-4.1.21-x86_64/fs/btrfs/btrfs_inode.h"
> 
> This is not good. Why are ctree.h and btrfs_inode.h not in the kernel headers?
> Is there another package I need to download in order to get them, in
> addition to kernel-headers?
> 
> 
> I see they are not provided in kernel-header package, e.g:
> https://rpmfind.net/linux/RPM/fedora/23/x86_64/k/kernel-headers-4.2.3-300.fc23.x86_64.html

I don't know what Fedora package you'd use, but the core problem is that
you're trying to use internal structures in an external module.  We've
properly exported the constants and structures required for userspace to
interact with btrfs, but there are no plans to export internal structures.
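
(If you must experiment anyway, the usual workaround - sketched here for the
4.1.21 stable tag, and still unsupported, since internal structures can
change at any release - is to build against a full kernel source tree
instead of the kernel-headers package:)

  git clone --depth 1 --branch v4.1.21 \
      https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
  ls linux/fs/btrfs/ctree.h linux/fs/btrfs/btrfs_inode.h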

-Jeff

-- 
Jeff Mahoney
SUSE Labs




* Re:
@ 2017-11-13 14:55 Amos Kalonzo
  0 siblings, 0 replies; 26+ messages in thread
From: Amos Kalonzo @ 2017-11-13 14:55 UTC (permalink / raw)


Attn:

I am wondering why you haven't responded to my email for some days now,
in reference to my client's contract balance payment of (11.7M USD).
Kindly get back to me for more details.

Best Regards

Amos Kalonzo


* Re:
       [not found] <CAJUWh6qyHerKg=-oaFN+USa10_Aag5+SYjBOeLCX1qM+WcDUwA@mail.gmail.com>
@ 2018-11-23  7:52 ` Chris Murphy
  2018-11-23  9:34   ` Re: Andy Leadbetter
  0 siblings, 1 reply; 26+ messages in thread
From: Chris Murphy @ 2018-11-23  7:52 UTC (permalink / raw)
  To: andy.leadbetter, Btrfs BTRFS

On Thu, Nov 22, 2018 at 11:41 PM Andy Leadbetter
<andy.leadbetter@theleadbetters.com> wrote:
>
> I have a failing 2TB disk that is part of a 4 disk RAID 6 system.  I
> have added a new 2TB disk to the computer, and started a BTRFS replace
> for the old and new disk.  The process starts correctly; however, some
> hours into the job, there is an error and a kernel oops. Relevant log
> below.

The relevant log is the entire dmesg, not a snippet. It's decently
likely there's more than one thing going on here. We also need full
output of 'smartctl -x' for all four drives, and also 'smartctl -l
scterc' for all four drives, and also 'cat
/sys/block/sda/device/timeout' for all four drives. And which bcache
mode you're using.
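
(Something like the following collects all of that in one pass, assuming the
drives really are sda through sdd:)

  for d in sda sdb sdc sdd; do
      echo "=== /dev/$d ==="
      smartctl -x /dev/$d
      smartctl -l scterc /dev/$d
      cat /sys/block/$d/device/timeout
  done > drive-report.txt 2>&1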

The call trace provided is from kernel 4.15, which is long enough ago
that I think any dev working on raid56 would want to see where it's
getting tripped up on something a lot newer, and this is why:

https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/diff/fs/btrfs/raid56.c?id=v4.19.3&id2=v4.15.1

That's a lot of changes in just the raid56 code between 4.15 and 4.19.
And then in your call trace, btrfs_dev_replace_start is found in
dev-replace.c, which likewise has a lot of changes. But then also, I
think 4.15 might still be in the era where it was not recommended to
use 'btrfs dev replace' for raid56, only non-raid56. I'm not sure
whether the problems with device replace were fixed, and if they were,
whether kernel- or progs-side. Anyway, the latest I recall, it was
recommended on raid56 to 'btrfs dev add' then 'btrfs dev remove'.

https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/diff/fs/btrfs/dev-replace.c?id=v4.19.3&id2=v4.15.1

And that's only a few hundred changes for each. Check out inode.c -
there are over 2000 changes.
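
(For reference, the add-then-remove sequence mentioned above - a sketch only,
with placeholder device names, since on a degraded raid56 every extra write
carries risk:)

  btrfs device add /dev/sdnew /mnt         # grow the array onto the new disk
  btrfs device remove /dev/sdfailing /mnt  # then migrate data off the old one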


> The disks are configured on top of bcache, in 5 arrays with a small
> 128GB SSD cache shared.  The system in this configuration has worked
> perfectly for 3 years, until 2 weeks ago when csum errors started
> appearing.  I have a crashplan backup of all files on the disk, so I
> am not concerned about data loss, but I would like to avoid rebuild
> the system.

btrfs-progs 4.17 still considers raid56 experimental, not for
production use. And three years ago, the current upstream kernel
release was 4.3, so I'm gonna guess the kernel history of this file
system goes back even further, very close to the birth of the raid56
code. And then adding bcache to this mix just makes it all the more
complicated.



>
> btrfs dev stats shows
> [/dev/bcache0].write_io_errs    0
> [/dev/bcache0].read_io_errs     0
> [/dev/bcache0].flush_io_errs    0
> [/dev/bcache0].corruption_errs  0
> [/dev/bcache0].generation_errs  0
> [/dev/bcache1].write_io_errs    0
> [/dev/bcache1].read_io_errs     20
> [/dev/bcache1].flush_io_errs    0
> [/dev/bcache1].corruption_errs  0
> [/dev/bcache1].generation_errs  14
> [/dev/bcache3].write_io_errs    0
> [/dev/bcache3].read_io_errs     0
> [/dev/bcache3].flush_io_errs    0
> [/dev/bcache3].corruption_errs  0
> [/dev/bcache3].generation_errs  19
> [/dev/bcache2].write_io_errs    0
> [/dev/bcache2].read_io_errs     0
> [/dev/bcache2].flush_io_errs    0
> [/dev/bcache2].corruption_errs  0
> [/dev/bcache2].generation_errs  2


3 of 4 drives have at least one generation error. While there are no
corruptions reported, generation errors can be really tricky to
recover from. If only one device had only read errors, this
would be a lot less difficult.


> I've tried the latest kernel, and the latest tools, but nothing will
> allow me to replace, or delete the failed disk.

If the file system is mounted, I would try to make a local backup ASAP
before you lose the whole volume. Whether it's LVM pool of two drives
(linear/concat) with XFS, or if you go with Btrfs -dsingle -mraid1
(also basically a concat) doesn't really matter, but I'd get whatever
you can off the drive. I expect avoiding a rebuild in some form or
another is very wishful thinking and not very likely.

Every change made to the file system, whether repair attempts or
other writes, decreases the chance of recovery.

-- 
Chris Murphy


* Re:
  2018-11-23  7:52 ` Re: Chris Murphy
@ 2018-11-23  9:34   ` Andy Leadbetter
  0 siblings, 0 replies; 26+ messages in thread
From: Andy Leadbetter @ 2018-11-23  9:34 UTC (permalink / raw)
  To: lists; +Cc: linux-btrfs

Will capture all of that this evening, and try it with the latest
kernel and tools.  Thanks for the input on what info is relevant; will
gather it asap.
On Fri, 23 Nov 2018 at 07:53, Chris Murphy <lists@colorremedies.com> wrote:
>
> On Thu, Nov 22, 2018 at 11:41 PM Andy Leadbetter
> <andy.leadbetter@theleadbetters.com> wrote:
> >
> > I have a failing 2TB disk that is part of a 4 disk RAID 6 system.  I
> > have added a new 2TB disk to the computer, and started a BTRFS replace
> > for the old and new disk.  The process starts correctly; however, some
> > hours into the job, there is an error and a kernel oops. Relevant log
> > below.
>
> The relevant log is the entire dmesg, not a snippet. It's decently
> likely there's more than one thing going on here. We also need full
> output of 'smartctl -x' for all four drives, and also 'smartctl -l
> scterc' for all four drives, and also 'cat
> /sys/block/sda/device/timeout' for all four drives. And which bcache
> mode you're using.
>
> The call trace provided is from kernel 4.15, which is long enough ago
> that I think any dev working on raid56 would want to see where it's
> getting tripped up on something a lot newer, and this is why:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/diff/fs/btrfs/raid56.c?id=v4.19.3&id2=v4.15.1
>
> That's a lot of changes in just the raid56 code between 4.15 and 4.19.
> And then in your call trace, btrfs_dev_replace_start is found in
> dev-replace.c, which likewise has a lot of changes. But then also, I
> think 4.15 might still be in the era where it was not recommended to
> use 'btrfs dev replace' for raid56, only non-raid56. I'm not sure
> whether the problems with device replace were fixed, and if they were,
> whether kernel- or progs-side. Anyway, the latest I recall, it was
> recommended on raid56 to 'btrfs dev add' then 'btrfs dev remove'.
>
> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/diff/fs/btrfs/dev-replace.c?id=v4.19.3&id2=v4.15.1
>
> And that's only a few hundred changes for each. Check out inode.c -
> there are over 2000 changes.
>
>
> > The disks are configured on top of bcache, in 5 arrays with a small
> > 128GB SSD cache shared.  The system in this configuration has worked
> > perfectly for 3 years, until 2 weeks ago when csum errors started
> > appearing.  I have a crashplan backup of all files on the disk, so I
> > am not concerned about data loss, but I would like to avoid rebuild
> > the system.
>
> btrfs-progs 4.17 still considers raid56 experimental, not for
> production use. And three years ago, the current upstream kernel
> release was 4.3, so I'm gonna guess the kernel history of this file
> system goes back even further, very close to the birth of the raid56
> code. And then adding bcache to this mix just makes it all the more
> complicated.
>
>
>
> >
> > btrfs dev stats shows
> > [/dev/bcache0].write_io_errs    0
> > [/dev/bcache0].read_io_errs     0
> > [/dev/bcache0].flush_io_errs    0
> > [/dev/bcache0].corruption_errs  0
> > [/dev/bcache0].generation_errs  0
> > [/dev/bcache1].write_io_errs    0
> > [/dev/bcache1].read_io_errs     20
> > [/dev/bcache1].flush_io_errs    0
> > [/dev/bcache1].corruption_errs  0
> > [/dev/bcache1].generation_errs  14
> > [/dev/bcache3].write_io_errs    0
> > [/dev/bcache3].read_io_errs     0
> > [/dev/bcache3].flush_io_errs    0
> > [/dev/bcache3].corruption_errs  0
> > [/dev/bcache3].generation_errs  19
> > [/dev/bcache2].write_io_errs    0
> > [/dev/bcache2].read_io_errs     0
> > [/dev/bcache2].flush_io_errs    0
> > [/dev/bcache2].corruption_errs  0
> > [/dev/bcache2].generation_errs  2
>
>
> 3 of 4 drives have at least one generation error. While there are no
> corruptions reported, generation errors can be really tricky to
> recover from. If only one device had only read errors, this
> would be a lot less difficult.
>
>
> > I've tried the latest kernel, and the latest tools, but nothing will
> > allow me to replace, or delete the failed disk.
>
> If the file system is mounted, I would try to make a local backup ASAP
> before you lose the whole volume. Whether it's LVM pool of two drives
> (linear/concat) with XFS, or if you go with Btrfs -dsingle -mraid1
> (also basically a concat) doesn't really matter, but I'd get whatever
> you can off the drive. I expect avoiding a rebuild in some form or
> another is very wishful thinking and not very likely.
>
> Every change made to the file system, whether repair attempts or
> other writes, decreases the chance of recovery.
>
> --
> Chris Murphy


* Re:
       [not found] <CAGGnn3JZdc3ETS_AijasaFUqLY9e5Q1ZHK3+806rtsEBnAo5Og@mail.gmail.com>
@ 2021-11-23 17:20 ` Christian COMMARMOND
  0 siblings, 0 replies; 26+ messages in thread
From: Christian COMMARMOND @ 2021-11-23 17:20 UTC (permalink / raw)
  To: linux-btrfs

Hi,

I use a TERRAMASTER F5-422 with a 14TB RAID 5 array (five 4TB disks; see the
lsblk output below) holding 3 btrfs volumes. After repeated power outages,
the 3rd volume mounts, but no data is visible other than the top-level root
directory.

I tried to repair the disk and got this:
[root@TNAS-00E1FD ~]# btrfsck --repair /dev/mapper/vg0-lv2
enabling repair mode
...
Starting repair.
Opening filesystem to check...
Checking filesystem on /dev/mapper/vg0-lv2
UUID: a7b536f5-1827-479c-9170-eccbbc624370
[1/7] checking root items
Error: could not find btree root extent for root 257
ERROR: failed to repair root items: No such file or directory

(I put the full /var/log/messages at the end of this mail).

What can I do to get my data back?
This is a backup disk, and I am supposed to have a copy of it in
another place, but there too, Murphy's law, I had some disk failures
and lost some of my data.
So it would be very good to be able to recover some data from these disks.
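
Would read-only extraction with 'btrfs restore' be a reasonable first
step before any further --repair attempts? A minimal sketch of what I
would try, assuming /mnt/recovery is an empty directory on a healthy
disk (a hypothetical path):

  # dry run: walk the trees and list what would be restored
  btrfs restore -D -v /dev/mapper/vg0-lv2 /mnt/recovery
  # real run: -i ignores errors on individual files and keeps going
  btrfs restore -i -v /dev/mapper/vg0-lv2 /mnt/recovery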

Other information:
lsblk:
NAME          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda             8:0    0   3.7T  0 disk
|-sda1          8:1    0   285M  0 part
|-sda2          8:2    0   1.9G  0 part
| `-md9         9:9    0   1.9G  0 raid1 /
|-sda3          8:3    0   977M  0 part
| `-md8         9:8    0 976.4M  0 raid1 [SWAP]
`-sda4          8:4    0   3.7T  0 part
  `-md0         9:0    0  14.6T  0 raid5
    |-vg0-lv0 251:0    0     2T  0 lvm   /mnt/md0
    |-vg0-lv1 251:1    0   3.9T  0 lvm   /mnt/md1
    `-vg0-lv2 251:2    0   8.7T  0 lvm   /mnt/md2
sdb             8:16   0   3.7T  0 disk
|-sdb1          8:17   0   285M  0 part
|-sdb2          8:18   0   1.9G  0 part
| `-md9         9:9    0   1.9G  0 raid1 /
|-sdb3          8:19   0   977M  0 part
| `-md8         9:8    0 976.4M  0 raid1 [SWAP]
`-sdb4          8:20   0   3.7T  0 part
  `-md0         9:0    0  14.6T  0 raid5
    |-vg0-lv0 251:0    0     2T  0 lvm   /mnt/md0
    |-vg0-lv1 251:1    0   3.9T  0 lvm   /mnt/md1
    `-vg0-lv2 251:2    0   8.7T  0 lvm   /mnt/md2
sdc             8:32   0   3.7T  0 disk
|-sdc1          8:33   0   285M  0 part
|-sdc2          8:34   0   1.9G  0 part
| `-md9         9:9    0   1.9G  0 raid1 /
|-sdc3          8:35   0   977M  0 part
| `-md8         9:8    0 976.4M  0 raid1 [SWAP]
`-sdc4          8:36   0   3.7T  0 part
  `-md0         9:0    0  14.6T  0 raid5
    |-vg0-lv0 251:0    0     2T  0 lvm   /mnt/md0
    |-vg0-lv1 251:1    0   3.9T  0 lvm   /mnt/md1
    `-vg0-lv2 251:2    0   8.7T  0 lvm   /mnt/md2
sdd             8:48   0   3.7T  0 disk
|-sdd1          8:49   0   285M  0 part
|-sdd2          8:50   0   1.9G  0 part
| `-md9         9:9    0   1.9G  0 raid1 /
|-sdd3          8:51   0   977M  0 part
| `-md8         9:8    0 976.4M  0 raid1 [SWAP]
`-sdd4          8:52   0   3.7T  0 part
  `-md0         9:0    0  14.6T  0 raid5
    |-vg0-lv0 251:0    0     2T  0 lvm   /mnt/md0
    |-vg0-lv1 251:1    0   3.9T  0 lvm   /mnt/md1
    `-vg0-lv2 251:2    0   8.7T  0 lvm   /mnt/md2
sde             8:64   0   3.7T  0 disk
|-sde1          8:65   0   285M  0 part
|-sde2          8:66   0   1.9G  0 part
| `-md9         9:9    0   1.9G  0 raid1 /
|-sde3          8:67   0   977M  0 part
| `-md8         9:8    0 976.4M  0 raid1 [SWAP]
`-sde4          8:68   0   3.7T  0 part
  `-md0         9:0    0  14.6T  0 raid5
    |-vg0-lv0 251:0    0     2T  0 lvm   /mnt/md0
    |-vg0-lv1 251:1    0   3.9T  0 lvm   /mnt/md1
    `-vg0-lv2 251:2    0   8.7T  0 lvm   /mnt/md2


df -h:
Filesystem                Size      Used Available Use% Mounted on
/dev/md9                  1.8G    576.8M      1.2G  32% /
devtmpfs                  1.8G         0      1.8G   0% /dev
tmpfs                     1.8G         0      1.8G   0% /dev/shm
tmpfs                     1.8G      1.1M      1.8G   0% /tmp
tmpfs                     1.8G    236.0K      1.8G   0% /run
tmpfs                     1.8G      6.3M      1.8G   0% /opt/var
/dev/mapper/vg0-lv0       2.0T     34.5M      2.0T   0% /mnt/md0
/dev/mapper/vg0-lv1       3.9T     16.3M      3.9T   0% /mnt/md1
/dev/mapper/vg0-lv2       8.7T      2.9T      5.8T  33% /mnt/md2

These physical disks are new (a few months old) and do not show errors.

I hope there is a way to fix this.

regards,

Christian COMMARMOND


Here is the full /var/log/messages (restricted to 'kernel' lines),
starting from where I begin to see errors:
Nov 23 17:00:46 TNAS-00E1FD kernel: [   34.540572] Detached from
scsi7, channel 0, id 0, lun 0, type 0
Nov 23 17:00:48 TNAS-00E1FD kernel: [   37.148169] md: md8 stopped.
Nov 23 17:00:48 TNAS-00E1FD kernel: [   37.154395] md/raid1:md8:
active with 1 out of 72 mirrors
Nov 23 17:00:48 TNAS-00E1FD kernel: [   37.155564] md8: detected
capacity change from 0 to 1023868928
Nov 23 17:00:49 TNAS-00E1FD kernel: [   38.240910] md: recovery of
RAID array md8
Nov 23 17:00:49 TNAS-00E1FD kernel: [   38.276712] md: md8: recovery
interrupted.
Nov 23 17:00:50 TNAS-00E1FD kernel: [   38.346552] md: recovery of
RAID array md8
Nov 23 17:00:50 TNAS-00E1FD kernel: [   38.392148] md: md8: recovery
interrupted.
Nov 23 17:00:50 TNAS-00E1FD kernel: [   38.458126] md: recovery of
RAID array md8
Nov 23 17:00:50 TNAS-00E1FD kernel: [   38.494025] md: md8: recovery
interrupted.
Nov 23 17:00:50 TNAS-00E1FD kernel: [   38.576871] md: recovery of
RAID array md8
Nov 23 17:00:50 TNAS-00E1FD kernel: [   38.837269] Adding 999868k swap
on /dev/md8.  Priority:-1 extents:1 across:999868k
Nov 23 17:00:51 TNAS-00E1FD kernel: [   39.801285] md: md0 stopped.
Nov 23 17:00:51 TNAS-00E1FD kernel: [   39.859798] md/raid:md0: device
sda4 operational as raid disk 0
Nov 23 17:00:51 TNAS-00E1FD kernel: [   39.861417] md/raid:md0: device
sde4 operational as raid disk 4
Nov 23 17:00:51 TNAS-00E1FD kernel: [   39.863675] md/raid:md0: device
sdd4 operational as raid disk 3
Nov 23 17:00:51 TNAS-00E1FD kernel: [   39.865059] md/raid:md0: device
sdc4 operational as raid disk 2
Nov 23 17:00:51 TNAS-00E1FD kernel: [   39.866373] md/raid:md0: device
sdb4 operational as raid disk 1
Nov 23 17:00:51 TNAS-00E1FD kernel: [   39.869300] md/raid:md0: raid
level 5 active with 5 out of 5 devices, algorithm 2
Nov 23 17:00:51 TNAS-00E1FD kernel: [   39.926721] md0: detected
capacity change from 0 to 15989118861312
Nov 23 17:00:57 TNAS-00E1FD kernel: [   46.111539] md: md8: recovery done.
Nov 23 17:00:57 TNAS-00E1FD kernel: [   46.269349] flashcache:
flashcache-3.1.1 initialized
Nov 23 17:00:58 TNAS-00E1FD kernel: [   46.394510] BTRFS: device fsid
bdc3dbee-00a3-4541-99b4-096cd27939f2 devid 1 transid 679
/dev/mapper/vg0-lv0
Nov 23 17:00:58 TNAS-00E1FD kernel: [   46.397072] BTRFS info (device
dm-0): metadata ratio 50
Nov 23 17:00:58 TNAS-00E1FD kernel: [   46.399122] BTRFS info (device
dm-0): using free space tree
Nov 23 17:00:58 TNAS-00E1FD kernel: [   46.400380] BTRFS info (device
dm-0): has skinny extents
Nov 23 17:00:58 TNAS-00E1FD kernel: [   46.471236] BTRFS info (device
dm-0): new size for /dev/mapper/vg0-lv0 is 2147483648000
Nov 23 17:00:58 TNAS-00E1FD kernel: [   47.087622] BTRFS: device fsid
a5828e5a-1b11-4743-891c-11d0d8aeb1ae devid 1 transid 107
/dev/mapper/vg0-lv1
Nov 23 17:00:58 TNAS-00E1FD kernel: [   47.089943] BTRFS info (device
dm-1): metadata ratio 50
Nov 23 17:00:58 TNAS-00E1FD kernel: [   47.091505] BTRFS info (device
dm-1): using free space tree
Nov 23 17:00:58 TNAS-00E1FD kernel: [   47.093062] BTRFS info (device
dm-1): has skinny extents
Nov 23 17:00:58 TNAS-00E1FD kernel: [   47.150713] BTRFS info (device
dm-1): new size for /dev/mapper/vg0-lv1 is 4294967296000
Nov 23 17:00:59 TNAS-00E1FD kernel: [   47.737119] BTRFS: device fsid
a7b536f5-1827-479c-9170-eccbbc624370 devid 1 transid 142633
/dev/mapper/vg0-lv2
Nov 23 17:00:59 TNAS-00E1FD kernel: [   47.739313] BTRFS info (device
dm-2): metadata ratio 50
Nov 23 17:00:59 TNAS-00E1FD kernel: [   47.740630] BTRFS info (device
dm-2): using free space tree
Nov 23 17:00:59 TNAS-00E1FD kernel: [   47.741892] BTRFS info (device
dm-2): has skinny extents
Nov 23 17:00:59 TNAS-00E1FD kernel: [   47.946451] BTRFS info (device
dm-2): bdev /dev/mapper/vg0-lv2 errs: wr 0, rd 0, flush 0, corrupt 0,
gen 8
Nov 23 17:01:01 TNAS-00E1FD kernel: [   49.693394] BTRFS info (device
dm-2): checking UUID tree
Nov 23 17:01:01 TNAS-00E1FD kernel: [   49.700560] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:01:01 TNAS-00E1FD kernel: [   49.707394] BTRFS info (device
dm-2): new size for /dev/mapper/vg0-lv2 is 9546663723008
Nov 23 17:01:01 TNAS-00E1FD kernel: [   49.713109] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:01:01 TNAS-00E1FD kernel: [   49.715107] BTRFS warning
(device dm-2): iterating uuid_tree failed -5
Nov 23 17:01:01 TNAS-00E1FD kernel: [   49.795716] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:01:01 TNAS-00E1FD kernel: [   49.798231] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:01:03 TNAS-00E1FD kernel: [   52.272802] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:01:03 TNAS-00E1FD kernel: [   52.275264] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:01:03 TNAS-00E1FD kernel: [   52.277208] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:01:03 TNAS-00E1FD kernel: [   52.278483] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:01:04 TNAS-00E1FD kernel: [   52.570033] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:01:04 TNAS-00E1FD kernel: [   52.571487] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:01:05 TNAS-00E1FD kernel: [   54.250527] nf_conntrack:
default automatic helper assignment has been turned off for security
reasons and CT-based  firewall rule not found. Use the iptables CT
target to attach helpers instead.
Nov 23 17:01:07 TNAS-00E1FD kernel: [   56.050418]
verify_parent_transid: 2 callbacks suppressed
Nov 23 17:01:07 TNAS-00E1FD kernel: [   56.050424] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:01:07 TNAS-00E1FD kernel: [   56.063012] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:01:07 TNAS-00E1FD kernel: [   56.166746] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:01:07 TNAS-00E1FD kernel: [   56.167903] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:01:07 TNAS-00E1FD kernel: [   56.274188] NFSD: starting
90-second grace period (net ffffffff9db5abc0)
Nov 23 17:01:09 TNAS-00E1FD kernel: [   57.524631] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:01:09 TNAS-00E1FD kernel: [   57.525878] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:01:09 TNAS-00E1FD kernel: [   57.589706] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:01:09 TNAS-00E1FD kernel: [   57.590882] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:01:10 TNAS-00E1FD kernel: [   58.315852] warning: `smbd'
uses legacy ethtool link settings API, link modes are only partially
reported
Nov 23 17:01:31 TNAS-00E1FD kernel: [   79.882060] BTRFS error (device
dm-2): incorrect extent count for 29360128; counted 740, expected 677
Nov 23 17:01:31 TNAS-00E1FD kernel: [   79.883000] BTRFS: error
(device dm-2) in convert_free_space_to_extents:457: errno=-5 IO
failure
Nov 23 17:01:31 TNAS-00E1FD kernel: [   79.883946] BTRFS info (device
dm-2): forced readonly
Nov 23 17:01:31 TNAS-00E1FD kernel: [   79.884896] BTRFS: error
(device dm-2) in add_to_free_space_tree:1052: errno=-5 IO failure
Nov 23 17:01:31 TNAS-00E1FD kernel: [   79.885863] BTRFS: error
(device dm-2) in __btrfs_free_extent:7106: errno=-5 IO failure
Nov 23 17:01:31 TNAS-00E1FD kernel: [   79.886825] BTRFS: error
(device dm-2) in btrfs_run_delayed_refs:3009: errno=-5 IO failure
Nov 23 17:01:31 TNAS-00E1FD kernel: [   79.887803] BTRFS warning
(device dm-2): Skipping commit of aborted transaction.
Nov 23 17:01:31 TNAS-00E1FD kernel: [   79.888807] BTRFS: error
(device dm-2) in cleanup_transaction:1873: errno=-5 IO failure
Nov 23 17:01:31 TNAS-00E1FD kernel: [   79.892906] BTRFS error (device
dm-2): incorrect extent count for 29360128; counted 739, expected 676
Nov 23 17:02:55 TNAS-00E1FD kernel: [  164.199509] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:02:55 TNAS-00E1FD kernel: [  164.212280] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:02:55 TNAS-00E1FD kernel: [  164.214362] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:02:55 TNAS-00E1FD kernel: [  164.216331] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:02:55 TNAS-00E1FD kernel: [  164.224184] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:02:55 TNAS-00E1FD kernel: [  164.225500] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:02:55 TNAS-00E1FD kernel: [  164.227338] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:02:55 TNAS-00E1FD kernel: [  164.228636] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:03:37 TNAS-00E1FD kernel: [  205.915492] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:03:37 TNAS-00E1FD kernel: [  205.936745] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:03:37 TNAS-00E1FD kernel: [  205.938543] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:03:37 TNAS-00E1FD kernel: [  205.940375] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:03:37 TNAS-00E1FD kernel: [  205.951375] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:03:37 TNAS-00E1FD kernel: [  205.952810] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:03:37 TNAS-00E1FD kernel: [  205.972430] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:03:37 TNAS-00E1FD kernel: [  205.973548] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:03:37 TNAS-00E1FD kernel: [  205.974819] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:03:37 TNAS-00E1FD kernel: [  205.975984] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:03:54 TNAS-00E1FD kernel: [  222.807122]
verify_parent_transid: 6 callbacks suppressed
Nov 23 17:03:54 TNAS-00E1FD kernel: [  222.807127] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:03:54 TNAS-00E1FD kernel: [  222.819996] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:03:54 TNAS-00E1FD kernel: [  222.923926] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:03:54 TNAS-00E1FD kernel: [  222.925434] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:03:54 TNAS-00E1FD kernel: [  223.061241] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:03:54 TNAS-00E1FD kernel: [  223.062463] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:03:59 TNAS-00E1FD kernel: [  227.554549] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:03:59 TNAS-00E1FD kernel: [  227.556100] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:04:13 TNAS-00E1FD kernel: [  242.190152] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:04:13 TNAS-00E1FD kernel: [  242.202843] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:04:13 TNAS-00E1FD kernel: [  242.215390] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:04:13 TNAS-00E1FD kernel: [  242.217241] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:05:15 TNAS-00E1FD kernel: [  303.772878] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:05:15 TNAS-00E1FD kernel: [  303.785862] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:06:14 TNAS-00E1FD kernel: [  362.480763] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:06:14 TNAS-00E1FD kernel: [  362.493848] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:06:43 TNAS-00E1FD kernel: [  392.055419] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:06:43 TNAS-00E1FD kernel: [  392.068306] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:06:43 TNAS-00E1FD kernel: [  392.069074] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:06:43 TNAS-00E1FD kernel: [  392.069862] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:06:43 TNAS-00E1FD kernel: [  392.076040] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:06:43 TNAS-00E1FD kernel: [  392.076821] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:06:43 TNAS-00E1FD kernel: [  392.077643] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:06:43 TNAS-00E1FD kernel: [  392.078360] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:14:02 TNAS-00E1FD kernel: [  830.643054] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:14:02 TNAS-00E1FD kernel: [  830.664937] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:14:11 TNAS-00E1FD kernel: [  839.988330] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:14:11 TNAS-00E1FD kernel: [  839.989850] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:14:11 TNAS-00E1FD kernel: [  839.991371] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:14:11 TNAS-00E1FD kernel: [  839.992867] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:14:12 TNAS-00E1FD kernel: [  840.488126] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:14:12 TNAS-00E1FD kernel: [  840.488998] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:16:36 TNAS-00E1FD kernel: [  985.266877] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:16:36 TNAS-00E1FD kernel: [  985.288688] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:16:36 TNAS-00E1FD kernel: [  985.289624] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:16:36 TNAS-00E1FD kernel: [  985.290454] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:16:36 TNAS-00E1FD kernel: [  985.300198] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:16:36 TNAS-00E1FD kernel: [  985.300917] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:16:36 TNAS-00E1FD kernel: [  985.301704] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:16:36 TNAS-00E1FD kernel: [  985.302318] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:34:24 TNAS-00E1FD kernel: [ 2052.815271] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:34:24 TNAS-00E1FD kernel: [ 2052.838506] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:34:52 TNAS-00E1FD kernel: [ 2081.273231] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:34:52 TNAS-00E1FD kernel: [ 2081.296585] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:39:26 TNAS-00E1FD kernel: [ 2354.866442] BTRFS error (device
dm-2): cleaner transaction attach returned -30
Nov 23 17:56:30 TNAS-00E1FD kernel: [ 3378.825461] BTRFS info (device
dm-2): using free space tree
Nov 23 17:56:30 TNAS-00E1FD kernel: [ 3378.825891] BTRFS info (device
dm-2): has skinny extents
Nov 23 17:56:30 TNAS-00E1FD kernel: [ 3378.968533] BTRFS info (device
dm-2): bdev /dev/mapper/vg0-lv2 errs: wr 0, rd 0, flush 0, corrupt 0,
gen 8
Nov 23 17:56:32 TNAS-00E1FD kernel: [ 3380.525294] BTRFS info (device
dm-2): checking UUID tree
Nov 23 17:56:32 TNAS-00E1FD kernel: [ 3380.535839] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:56:32 TNAS-00E1FD kernel: [ 3380.544791] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:56:32 TNAS-00E1FD kernel: [ 3380.545579] BTRFS warning
(device dm-2): iterating uuid_tree failed -5
Nov 23 17:56:42 TNAS-00E1FD kernel: [ 3391.302453] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:56:43 TNAS-00E1FD kernel: [ 3391.328368] BTRFS error (device
dm-2): parent transid verify failed on 174735360 wanted 37018 found
37023
Nov 23 17:57:01 TNAS-00E1FD kernel: [ 3409.806326] BTRFS error (device
dm-2): incorrect extent count for 29360128; counted 740, expected 677
Nov 23 17:57:01 TNAS-00E1FD kernel: [ 3409.806836] BTRFS: error
(device dm-2) in convert_free_space_to_extents:457: errno=-5 IO
failure
Nov 23 17:57:01 TNAS-00E1FD kernel: [ 3409.807367] BTRFS info (device
dm-2): forced readonly
Nov 23 17:57:01 TNAS-00E1FD kernel: [ 3409.807904] BTRFS: error
(device dm-2) in add_to_free_space_tree:1052: errno=-5 IO failure
Nov 23 17:57:01 TNAS-00E1FD kernel: [ 3409.808493] BTRFS: error
(device dm-2) in __btrfs_free_extent:7106: errno=-5 IO failure
Nov 23 17:57:01 TNAS-00E1FD kernel: [ 3409.809160] BTRFS: error
(device dm-2) in btrfs_run_delayed_refs:3009: errno=-5 IO failure
Nov 23 17:57:01 TNAS-00E1FD kernel: [ 3409.809785] BTRFS warning
(device dm-2): Skipping commit of aborted transaction.
Nov 23 17:57:01 TNAS-00E1FD kernel: [ 3409.810444] BTRFS: error
(device dm-2) in cleanup_transaction:1873: errno=-5 IO failure
Nov 23 17:57:01 TNAS-00E1FD kernel: [ 3409.814113] BTRFS error (device
dm-2): incorrect extent count for 29360128; counted 739, expected 676

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re:
       [not found]   ` <CAEhhANom-MGPCqEk5LXufMkxvnoY0YRUrr0r07s0_7F=eCQH5Q@mail.gmail.com>
@ 2023-06-08 10:51     ` Daniel Little
  0 siblings, 0 replies; 26+ messages in thread
From: Daniel Little @ 2023-06-08 10:51 UTC (permalink / raw)
  To: linux-btrfs, support

[-- Attachment #1: Type: text/plain, Size: 2581 bytes --]

>>
>> Good Day,
>>
>> I’m sorry to message the developers this way. I’m sure this is not the purpose of being able to contact developers, but I am pretty desperate here.
>>
>> I’m desperately seeking some "hands-on" assistance with my broken Rockstor setup. I have a lot of photos and videos on my drives that I cannot reproduce and would really like to retrieve.
>>
>> With my limited knowledge and skill I have tried as best I can to follow the suggestions made by Philip on my forum post (Disk Pool mounted, shared missing. many errors - #2 by phillxnet), but I’m no closer to success than when I started. I’m sure it’s because I’m not doing things right. If someone smarter than me is willing to offer their precious time to assist, I am happy to set up remote access to the system for them to work/diagnose/troubleshoot directly. I will fit into your schedule, whenever and whatever that may be. I’m willing to put on the dunce hat and be tarred and feathered and publicly mocked, so long as some kind souls help me to recover the data.
>>
>> I eagerly await and appreciate any assistance offered. I respectfully understand too if this is not something anyone wants to take on.
>>
>> SITREP:
>>
>> Rockstor 4.1.0-0 installed on an ESXi VM. I tried to get vmware-tools installed and followed a guide blindly; the VM rebooted, and all hell broke loose.
>>
>> “Parent transid verify failed… wanted 32616 found 32441”
>> Pool remounts automatically as read-only.
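>>
>> Would it be reasonable to first try mounting read-only from an older
>> tree root? A minimal sketch of what I had in mind, assuming /mnt/data
>> is the mount point (a hypothetical path; /dev/sdb is the DATA device
>> shown below):
>>
>>   # usebackuproot tries older tree roots; ro ensures nothing is written
>>   mount -o ro,usebackuproot /dev/sdb /mnt/data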
>>
>> OUTPUTS:
>>
>>
>>
>> uname -a
>>
>> Linux RocStor 5.3.18-150300.59.106-default #1 SMP Mon Dec 12 13:16:24 UTC 2022 (774239c) x86_64 x86_64 x86_64 GNU/Linux
>>
>>
>>
>> btrfs --version
>>
>> btrfs-progs v4.19.1
>>
>>
>>
>> btrfs fi show
>>
>> Label: ‘ROOT’     uuid: 4ac1b0f-afeb-4946-aad1-975a2a26c941
>>
>>                              Total devices 1 FS bytes used 4.65GiB
>>
>>                              devid 1 size 47.93GiB used 5.80GiB path /dev/sda4
>>
>>
>>
>> Label: ‘DATA’      uuid: 8d3ee597-bddc-4de8-8fc0-23fde00e27f1
>>
>>                              Total devices 1 FS bytes used 768.00KiB
>>
>>                              devid 1 size 16.37TiB used 11.72TiB path /dev/sdb
>>
>>
>>
>> Inside DATA there are only two folders: DATASTORE and SyncThing. All the required data is in DATASTORE.
>>
>>
>>
>> btrfs fi df /home
>>
>> Data, single: total=5.54GiB, used=4.55GiB
>>
>> System, single: total=32.00MiB, used=16.00KiB
>>
>> Metadata, single: total=232.00MiB, used=110.05MiB
>>
>> GlobalReserve, single: total=11.55MiB, used=0.00B

[-- Attachment #2: requested_logs.tgz --]
[-- Type: application/x-compressed, Size: 27311 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2023-06-08 10:52 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-09-01  2:02 Fennec Fox
2016-09-01  3:10 ` Jeff Mahoney
2016-09-01 19:32   ` Re: Kai Krakow
2016-09-01  7:44 ` your mail M G Berberich
2016-09-01 11:17   ` Austin S. Hemmelgarn
2016-09-01 16:44     ` Kyle Gates
2016-09-01 17:06       ` Austin S. Hemmelgarn
2016-09-02  1:51       ` Jeff Mahoney
2016-09-01 21:15     ` M G Berberich
     [not found] <010d01d999f4$257ae020$7070a060$@mirroredgenetworks.com>
     [not found] ` <CAEhhANphwWt5iOMc5Yqp1tT1HGoG_GsCuUWBWeVX4zxL6JwUiw@mail.gmail.com>
     [not found]   ` <CAEhhANom-MGPCqEk5LXufMkxvnoY0YRUrr0r07s0_7F=eCQH5Q@mail.gmail.com>
2023-06-08 10:51     ` Daniel Little
     [not found] <CAGGnn3JZdc3ETS_AijasaFUqLY9e5Q1ZHK3+806rtsEBnAo5Og@mail.gmail.com>
2021-11-23 17:20 ` Re: Christian COMMARMOND
     [not found] <CAJUWh6qyHerKg=-oaFN+USa10_Aag5+SYjBOeLCX1qM+WcDUwA@mail.gmail.com>
2018-11-23  7:52 ` Re: Chris Murphy
2018-11-23  9:34   ` Re: Andy Leadbetter
  -- strict thread matches above, loose matches on Subject: below --
2017-11-13 14:55 Re: Amos Kalonzo
2017-03-19 15:00 Ilan Schwarts
2017-03-23 17:12 ` Jeff Mahoney
2017-02-23 15:09 Qin's Yanjun
2016-11-09 17:55 bepi
2016-11-10  6:57 ` Alex Powell
2016-11-10 13:00   ` Re: bepi
2014-05-02  9:42 "csum failed" that was not detected by scrub Jaap Pieroen
2014-05-02 10:20 ` Duncan
2014-05-02 17:48   ` Jaap Pieroen
2014-05-03 13:31     ` Re: Frank Holton
2013-04-27  9:42 Peter Würtz
2013-05-02  3:00 ` Lin Ming
2012-08-15 10:12 State of nocow file attribute Lluís Batlle i Rossell
2012-08-17  1:45 ` Liu Bo
2012-08-17 14:59   ` David Sterba
2012-08-17 15:30     ` Liu Bo
2011-12-21 13:54 "btrfs: open_ctree failed" error Malwina Bartoszynska
2011-12-21 19:06 ` Chris Mason
2011-12-22  9:43   ` Malwina Bartoszynska
2012-01-31 15:53     ` Max
2011-02-20 12:22 (unknown) Christian Brunner
2011-02-20 13:10 ` Maria Wikström
2010-08-30  2:32 (unknown) Bret Palsson
2010-08-30  3:11 ` Sebastian 'gonX' Jensen
2010-07-17  2:41 Re: SINOPEC OIL AND GAS COMPANY

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).