* discard on SSDs quickly causes backup trees to vanish
@ 2017-11-10 23:52 Chris Murphy
2017-11-11 1:54 ` Hans van Kranenburg
0 siblings, 1 reply; 10+ messages in thread
From: Chris Murphy @ 2017-11-10 23:52 UTC (permalink / raw)
To: Btrfs BTRFS; +Cc: David Sterba, Austin S Hemmelgarn
Hardware:
An HP Spectre containing a SAMSUNG MZVLV256HCHP-000H1, an NVMe drive.
Kernels:
various but definitely 4.12.0 through 4.13.10
Problem:
Within seconds of the super being updated to point to a new root tree,
the old root tree cannot be read with btrfs-debug-tree. Example:
$ sudo btrfs-debug-tree -b 84258750464 /dev/nvme0n1p8
btrfs-progs v4.13.3
node 84258750464 level 1 items 2 free 491 generation 200994 owner 1
fs uuid 2662057f-e6c7-47fa-8af9-ad933a22f6ec
chunk uuid 1df72dcf-f515-404a-894a-f7345f988793
key (EXTENT_TREE ROOT_ITEM 0) block 84258783232 (5142748) gen 200994
key (452 INODE_ITEM 0) block 84258881536 (5142754) gen 200994
(wait 10-40 seconds while file system is in use)
$ sudo btrfs-debug-tree -b 84258750464 /dev/nvme0n1p8
btrfs-progs v4.13
checksum verify failed on 84258750464 found E4E3BDB6 wanted 00000000
checksum verify failed on 84258750464 found E4E3BDB6 wanted 00000000
bytenr mismatch, want=84258750464, have=0
ERROR: failed to read 84258750464
[chris@f26h ~]$
This suggests a problem for any kind of automatic recovery, should it
be needed at the next mount following a crash or power failure, and it
also renders the usebackuproot mount option useless.
I think that until the discard mount option has some kind of delay
(generation based, perhaps), so that at least the various backup trees,
and in particular the root tree, are not immediately subject to
discard, the Btrfs wiki needs to say that the discard mount option is
unsafe on SSDs.
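A generation-based delay could look something like this toy sketch, purely illustrative and not btrfs code: the class, its API, and the choice of four generations (matching the number of backup root slots in the superblock) are all my assumptions. Freed extents are queued, and the discard is only issued once enough later transactions have committed that no backup root can still point at them.

```python
# Hypothetical sketch of a generation-delayed discard queue (not btrfs code).
from collections import deque

class DelayedDiscardQueue:
    def __init__(self, delay_generations=4):
        # 4 is chosen to match the four backup root slots in the superblock
        self.delay = delay_generations
        self.pending = deque()  # (generation_freed, extent) pairs, oldest first

    def free_extent(self, generation, extent):
        """Record an extent freed in `generation`; do not discard it yet."""
        self.pending.append((generation, extent))

    def commit(self, current_generation):
        """On each transaction commit, discard only extents freed more than
        `delay` generations ago; a real implementation would issue TRIM here."""
        issued = []
        while self.pending and self.pending[0][0] <= current_generation - self.delay:
            _, extent = self.pending.popleft()
            issued.append(extent)
        return issued

q = DelayedDiscardQueue(delay_generations=4)
q.free_extent(100, "extent-A")
q.free_extent(101, "extent-B")
assert q.commit(103) == []            # too recent: backup roots may still point here
assert q.commit(104) == ["extent-A"]  # generation 100 is now 4 generations old
```

The trade-off is that the SSD learns about freed space a few transactions late, which is exactly the point: the window in which usebackuproot can work is preserved.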
While I have not experienced any other problems in roughly a year of
using discard and Btrfs on this hardware, if I had needed a rollback
via one of the backup trees, they simply wouldn't have been available,
and I'd have been hosed.
(Needs testing on LVM thinp to see if discard causes a similar problem
with Btrfs on LVM thinly provisioned volumes, even with hard drives.)
--
Chris Murphy
^ permalink raw reply	[flat|nested] 10+ messages in thread
* Re: discard on SSDs quickly causes backup trees to vanish
2017-11-10 23:52 discard on SSDs quickly causes backup trees to vanish Chris Murphy
@ 2017-11-11 1:54 ` Hans van Kranenburg
2017-11-11 2:30 ` Qu Wenruo
0 siblings, 1 reply; 10+ messages in thread
From: Hans van Kranenburg @ 2017-11-11 1:54 UTC (permalink / raw)
To: Chris Murphy, Btrfs BTRFS; +Cc: David Sterba, Austin S Hemmelgarn
On 11/11/2017 12:52 AM, Chris Murphy wrote:
> Hardware:
> HP Spectre which contains a SAMSUNG MZVLV256HCHP-000H1, which is an NVMe drive.
>
> Kernels:
> various but definitely 4.12.0 through 4.13.10
>
> Problem:
> Within seconds of the super being updated to point to a new root tree,
> the old root tree cannot be read with btrfs-debug-tree.
Is this a problem, or is this just expected, by design?
> Example:
>
>
> $ sudo btrfs-debug-tree -b 84258750464 /dev/nvme0n1p8
> btrfs-progs v4.13.3
> node 84258750464 level 1 items 2 free 491 generation 200994 owner 1
> fs uuid 2662057f-e6c7-47fa-8af9-ad933a22f6ec
> chunk uuid 1df72dcf-f515-404a-894a-f7345f988793
> key (EXTENT_TREE ROOT_ITEM 0) block 84258783232 (5142748) gen 200994
> key (452 INODE_ITEM 0) block 84258881536 (5142754) gen 200994
>
> (wait 10-40 seconds while file system is in use)
After the superblock is written, this space is freed up to be
overwritten by new writes immediately...
> $ sudo btrfs-debug-tree -b 84258750464 /dev/nvme0n1p8
> btrfs-progs v4.13
> checksum verify failed on 84258750464 found E4E3BDB6 wanted 00000000
> checksum verify failed on 84258750464 found E4E3BDB6 wanted 00000000
> bytenr mismatch, want=84258750464, have=0
> ERROR: failed to read 84258750464
> [chris@f26h ~]$
Even when not using discard, there might be new data in that place now,
when the file system is in use...
> This suggests a problem for any kind of automatic recovery, should it
> be needed at next mount time, following a crash or power failure, as
> well as rendering the usebackuproot useless.
Maybe the name backuproot is misleading, because it's not a backup at
all. Only the most recent previous one is, and only if you have to
mount the filesystem directly after some bug hosed your tree of trees
during the final commit of the unmount just before?
> I think until discard mount option has some kind of delay (generation
> based perhaps), so that at least the various backup trees, in
> particular the root tree, is not immediately subject to discard, that
> the Btrfs wiki needs to suggest the discard mount option is unsafe on
> SSD.
>
> While I have not experienced any other problems in roughly a year of
> using discard and Btrfs on this hardware, if I had needed a rollback
> offered by use of a backup tree, they simply wouldn't have been
> available, and I'd have been hosed.
> (Needs testing on LVM thinp to see if discard causes a similar problem
> with Btrfs on LVM thinly provisioned volumes, even with hard drives.)
I actually start wondering why this option exists at all. I mean, even
when it seems you get a working filesystem back with it, there can be
metadata and data in all corners of the filesystem that have already
been overwritten?
It was introduced in commit af31f5e5b "Btrfs: add a log of past tree
roots" and the only information we get is "just in case we somehow lose
the roots", which is an explanation for adding this feature that does
not really tell me much about it.
--
Hans van Kranenburg
* Re: discard on SSDs quickly causes backup trees to vanish
2017-11-11 1:54 ` Hans van Kranenburg
@ 2017-11-11 2:30 ` Qu Wenruo
2017-11-11 3:13 ` Hans van Kranenburg
0 siblings, 1 reply; 10+ messages in thread
From: Qu Wenruo @ 2017-11-11 2:30 UTC (permalink / raw)
To: Hans van Kranenburg, Chris Murphy, Btrfs BTRFS
Cc: David Sterba, Austin S Hemmelgarn
On 2017年11月11日 09:54, Hans van Kranenburg wrote:
> On 11/11/2017 12:52 AM, Chris Murphy wrote:
>> Hardware:
>> HP Spectre which contains a SAMSUNG MZVLV256HCHP-000H1, which is an NVMe drive.
>>
>> Kernels:
>> various but definitely 4.12.0 through 4.13.10
>>
>> Problem:
>> Within seconds of the super being updated to point to a new root tree,
>> the old root tree cannot be read with btrfs-debug-tree.
>
> Is this a problem, or is this just expected, by design?
It is a problem.
By design, a tree block should only be discarded after no one refers
to it. Note: the commit root (the last committed transaction) also counts.
That is to say, the last committed transaction must stay alive until
the current transaction is committed.
If a tree block is discarded during an uncommitted transaction and a
power loss happens, the fs can no longer be mounted (if a vital tree is
corrupted).
Even after the transaction commits, discarding a tree block of the last
transaction can lead to recovery problems (e.g. rolling the fs back to
the previous transaction using backup roots).
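The invariant Qu describes can be shown with a tiny model (hypothetical Python, not btrfs internals; in this toy every commit rewrites the whole tree, whereas real btrfs only frees blocks the new tree no longer references): blocks of the last committed tree must stay intact until the current transaction commits, because a power loss rolls back to them.

```python
# Toy model of the commit-root protection invariant (not btrfs code).
class Filesystem:
    def __init__(self):
        self.commit_root_blocks = set()  # blocks of the last committed transaction
        self.current_blocks = set()      # blocks written by the open transaction
        self.discardable = set()

    def write_block(self, block):
        # CoW: never reuse a block the commit root still references,
        # since that is the tree a crash would roll back to.
        assert block not in self.commit_root_blocks, \
            "would corrupt the tree a power loss rolls back to"
        self.current_blocks.add(block)

    def commit(self):
        # Only now are the *old* commit root's blocks safe to discard:
        # the superblock no longer points at them.  (In this toy model
        # every commit rewrites the whole tree, so all old blocks free up.)
        freed = self.commit_root_blocks - self.current_blocks
        self.discardable |= freed
        self.commit_root_blocks = self.current_blocks
        self.current_blocks = set()
        return freed

fs = Filesystem()
fs.write_block(1); fs.write_block(2)
fs.commit()                       # blocks {1, 2} are now the commit root
fs.write_block(3)                 # CoW writes go to fresh space
try:
    fs.write_block(1)             # reuse of a committed block must be refused
except AssertionError:
    pass
assert fs.commit() == {1, 2}      # only after the next commit are they discardable
```

The bug the thread is about is, in effect, discards arriving before that final `commit()` boundary.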
>
>> Example:
>>
>>
>> $ sudo btrfs-debug-tree -b 84258750464 /dev/nvme0n1p8
>> btrfs-progs v4.13.3
>> node 84258750464 level 1 items 2 free 491 generation 200994 owner 1
>> fs uuid 2662057f-e6c7-47fa-8af9-ad933a22f6ec
>> chunk uuid 1df72dcf-f515-404a-894a-f7345f988793
>> key (EXTENT_TREE ROOT_ITEM 0) block 84258783232 (5142748) gen 200994
>> key (452 INODE_ITEM 0) block 84258881536 (5142754) gen 200994
>>
>> (wait 10-40 seconds while file system is in use)
>
> After the superblock is written, this space is freed up to be
> overwritten by new writes immediately...
Nope, this should not be the case.
The correct behavior is that new writes should *never* overwrite tree
blocks used by the commit_root (the last committed transaction).
Otherwise there is nothing to protect btrfs from power loss.
(Btrfs doesn't use a journal, but relies completely on metadata CoW to
handle power loss.)
>
>> $ sudo btrfs-debug-tree -b 84258750464 /dev/nvme0n1p8
>> btrfs-progs v4.13
>> checksum verify failed on 84258750464 found E4E3BDB6 wanted 00000000
>> checksum verify failed on 84258750464 found E4E3BDB6 wanted 00000000
>> bytenr mismatch, want=84258750464, have=0
>> ERROR: failed to read 84258750464
>> [chris@f26h ~]$
>
> Even when not using discard, there might be new data in that place now,
> when the file system is in use...
As explained above, btrfs uses CoW to survive power loss, especially for
metadata, so tree blocks of the last committed transaction should not be
touched at all.
So, this is a problem, and I think that's why discard is not recommended
for btrfs (yet).
And we should have more runtime checks to ensure we won't allocate any
space used by the committed transaction.
>
>> This suggests a problem for any kind of automatic recovery, should it
>> be needed at next mount time, following a crash or power failure, as
>> well as rendering the usebackuproot useless.
>
> Maybe the name backuproot is useless, because it's not a backup at all.
Most of the backup root is more or less useless, if and only if metadata
CoW is done completely as design.
One more chance to recover is never a bad idea.
As you can see, if metadata CoW were completely implemented as designed,
there would be no reports of transid mismatch at all. And btrfs would
have been bulletproof from the very beginning. But neither of these is
true.
Thanks,
Qu
>
> Only the most recent previous one is if you have to mount a filesystem
> directly after some bug hosed your tree of trees during a final commit
> when umounting just before?
>
>> I think until discard mount option has some kind of delay (generation
>> based perhaps), so that at least the various backup trees, in
>> particular the root tree, is not immediately subject to discard, that
>> the Btrfs wiki needs to suggest the discard mount option is unsafe on
>> SSD.
>>
>> While I have not experienced any other problems in roughly a year of
>> using discard and Btrfs on this hardware, if I had needed a rollback
>> offered by use of a backup tree, they simply wouldn't have been
>> available, and I'd have been hosed.
>> (Needs testing on LVM thinp to see if discard causes a similar problem
>> with Btrfs on LVM thinly provisioned volumes, even with hard drives.)
>
> I actually start wondering why this option exists at all. I mean, even
> when it seems you get a working filesystem back with it, there can be
> metadata and data in all corners of the filesystem that already has been
> overwritten?
>
> It was introduced in commit af31f5e5b "Btrfs: add a log of past tree
> roots" and the only information we get is "just in case we somehow lose
> the roots", which is an explanation for adding this feature that does
> not really tell me much about it.
>
* Re: discard on SSDs quickly causes backup trees to vanish
2017-11-11 2:30 ` Qu Wenruo
@ 2017-11-11 3:13 ` Hans van Kranenburg
2017-11-11 3:48 ` Qu Wenruo
0 siblings, 1 reply; 10+ messages in thread
From: Hans van Kranenburg @ 2017-11-11 3:13 UTC (permalink / raw)
To: Qu Wenruo, Chris Murphy, Btrfs BTRFS; +Cc: David Sterba, Austin S Hemmelgarn
On 11/11/2017 03:30 AM, Qu Wenruo wrote:
>
> On 2017年11月11日 09:54, Hans van Kranenburg wrote:
>> On 11/11/2017 12:52 AM, Chris Murphy wrote:
>>> Hardware:
>>> HP Spectre which contains a SAMSUNG MZVLV256HCHP-000H1, which is an NVMe drive.
>>>
>>> Kernels:
>>> various but definitely 4.12.0 through 4.13.10
>>>
>>> Problem:
>>> Within seconds of the super being updated to point to a new root tree,
>>> the old root tree cannot be read with btrfs-debug-tree.
>>
>> Is this a problem, or is this just expected, by design?
>
> It is a problem.
>
> By design, tree block should only be discarded after no one is referring
> to it. Note: commit root (last committed transaction) also counts.
>
> So that's to say, last committed transaction must be alive, until
> current transaction is committed.
Yes, well, that's no different than what I was saying.
> If tree block is discarded during a uncommitted transaction, and power
> loss happened, the fs can not be mounted any more (if a vital tree is
> corrupted).
>
> Even after transaction commitment, discard a tree block of last
> transaction can lead to recovery problem (e.g. rollback the fs to
> previous trans using backup roots)
>>
>>> Example:
>>>
>>>
>>> $ sudo btrfs-debug-tree -b 84258750464 /dev/nvme0n1p8
>>> btrfs-progs v4.13.3
>>> node 84258750464 level 1 items 2 free 491 generation 200994 owner 1
>>> fs uuid 2662057f-e6c7-47fa-8af9-ad933a22f6ec
>>> chunk uuid 1df72dcf-f515-404a-894a-f7345f988793
>>> key (EXTENT_TREE ROOT_ITEM 0) block 84258783232 (5142748) gen 200994
>>> key (452 INODE_ITEM 0) block 84258881536 (5142754) gen 200994
>>>
>>> (wait 10-40 seconds while file system is in use)
>>
>> After the superblock is written, this space is freed up to be
>> overwritten by new writes immediately...
>
> Nope, this should not be the case.
The example is a block from a "backup root". It's ok to be overwritten.
> The correct behavior is, new write should *never* overwrite tree blocks
> used by commit_root (last committed transaction).
>
> Or there is nothing to protect btrfs from power loss.
> (Btrfs doesn't use journal, but completely rely metadata CoW to handle
> power loss)
>
>>
>>> $ sudo btrfs-debug-tree -b 84258750464 /dev/nvme0n1p8
>>> btrfs-progs v4.13
>>> checksum verify failed on 84258750464 found E4E3BDB6 wanted 00000000
>>> checksum verify failed on 84258750464 found E4E3BDB6 wanted 00000000
>>> bytenr mismatch, want=84258750464, have=0
>>> ERROR: failed to read 84258750464
>>> [chris@f26h ~]$
>>
>> Even when not using discard, there might be new data in that place now,
>> when the file system is in use...
>
> As explained above, btrfs use CoW to survive power loss, especially for
> metadata, so tree blocks of last committed transaction should not be
> touched at all.
>
> So, this is a problem, and I think that's why discard is not recommended
> for btrfs (yet).
> And we should have more runtime checks to ensure we won't allocate any
> space used by committed transaction.
>
>>
>>> This suggests a problem for any kind of automatic recovery, should it
>>> be needed at next mount time, following a crash or power failure, as
>>> well as rendering the usebackuproot useless.
>>
>> Maybe the name backuproot is useless, because it's not a backup at all.
>
> Most of the backup root is more or less useless, if and only if metadata
> CoW is done completely as design.
Is there an 'unless' missing here?
> One more chance to recover is never a bad idea.
It is a bad idea. The *only* case you can recover from is when you
freeze the filesystem *directly* after writing the superblock. Only in
that case do you have both a consistent last committed and a consistent
previous transaction on disk.
If you do new writes and are then again able to mount with -o
usebackuproot, and if any of the
transaction-before-the-last-committed-transaction blocks have been
overwritten, you're in a field of land mines and time bombs. Being able
to mount gives the user a false sense of recovery in that case, because
either you're going to crash into transid problems for metadata, or
there are files in the filesystem in which different data shows up than
should be there, potentially allowing users to see data from other
users, etc... It's just dangerous.
> As you can see, if metadata CoW is completely implemented as designed,
> there will be no report of transid mismatch at all.
> And btrfs should be bullet proof from the very beginning, but none of
> these is true.
It is, it's not a bug. This is about the backup roots thingie, not about
the data from the last transaction.
>
> Thanks,
> Qu
>
>>
>> Only the most recent previous one is if you have to mount a filesystem
>> directly after some bug hosed your tree of trees during a final commit
>> when umounting just before?
>>
>>> I think until discard mount option has some kind of delay (generation
>>> based perhaps), so that at least the various backup trees, in
>>> particular the root tree, is not immediately subject to discard, that
>>> the Btrfs wiki needs to suggest the discard mount option is unsafe on
>>> SSD.
>>>
>>> While I have not experienced any other problems in roughly a year of
>>> using discard and Btrfs on this hardware, if I had needed a rollback
>>> offered by use of a backup tree, they simply wouldn't have been
>>> available, and I'd have been hosed.
>>> (Needs testing on LVM thinp to see if discard causes a similar problem
>>> with Btrfs on LVM thinly provisioned volumes, even with hard drives.)
>>
>> I actually start wondering why this option exists at all. I mean, even
>> when it seems you get a working filesystem back with it, there can be
>> metadata and data in all corners of the filesystem that already has been
>> overwritten?
>>
>> It was introduced in commit af31f5e5b "Btrfs: add a log of past tree
>> roots" and the only information we get is "just in case we somehow lose
>> the roots", which is an explanation for adding this feature that does
>> not really tell me much about it.
>>
--
Hans van Kranenburg
* Re: discard on SSDs quickly causes backup trees to vanish
2017-11-11 3:13 ` Hans van Kranenburg
@ 2017-11-11 3:48 ` Qu Wenruo
2017-11-11 4:24 ` Chris Murphy
2017-11-11 20:12 ` Hans van Kranenburg
0 siblings, 2 replies; 10+ messages in thread
From: Qu Wenruo @ 2017-11-11 3:48 UTC (permalink / raw)
To: Hans van Kranenburg, Chris Murphy, Btrfs BTRFS
Cc: David Sterba, Austin S Hemmelgarn
On 2017年11月11日 11:13, Hans van Kranenburg wrote:
> On 11/11/2017 03:30 AM, Qu Wenruo wrote:
>>
>> On 2017年11月11日 09:54, Hans van Kranenburg wrote:
>>> On 11/11/2017 12:52 AM, Chris Murphy wrote:
>>>> Hardware:
>>>> HP Spectre which contains a SAMSUNG MZVLV256HCHP-000H1, which is an NVMe drive.
>>>>
>>>> Kernels:
>>>> various but definitely 4.12.0 through 4.13.10
>>>>
>>>> Problem:
>>>> Within seconds of the super being updated to point to a new root tree,
>>>> the old root tree cannot be read with btrfs-debug-tree.
>>>
>>> Is this a problem, or is this just expected, by design?
>>
>> It is a problem.
>>
>> By design, tree block should only be discarded after no one is referring
>> to it. Note: commit root (last committed transaction) also counts.
>>
>> So that's to say, last committed transaction must be alive, until
>> current transaction is committed.
>
> Yes, well, that's no different than what I was saying.
>
>> If tree block is discarded during a uncommitted transaction, and power
>> loss happened, the fs can not be mounted any more (if a vital tree is
>> corrupted).
>>
>> Even after transaction commitment, discard a tree block of last
>> transaction can lead to recovery problem (e.g. rollback the fs to
>> previous trans using backup roots)
>>>
>>>> Example:
>>>>
>>>>
>>>> $ sudo btrfs-debug-tree -b 84258750464 /dev/nvme0n1p8
>>>> btrfs-progs v4.13.3
>>>> node 84258750464 level 1 items 2 free 491 generation 200994 owner 1
>>>> fs uuid 2662057f-e6c7-47fa-8af9-ad933a22f6ec
>>>> chunk uuid 1df72dcf-f515-404a-894a-f7345f988793
>>>> key (EXTENT_TREE ROOT_ITEM 0) block 84258783232 (5142748) gen 200994
>>>> key (452 INODE_ITEM 0) block 84258881536 (5142754) gen 200994
>>>>
>>>> (wait 10-40 seconds while file system is in use)
>>>
>>> After the superblock is written, this space is freed up to be
>>> overwritten by new writes immediately...
>>
>> Nope, this should not be the case.
>
> The example is a block from a "backup root". It's ok to be overwritten.
>
>> The correct behavior is, new write should *never* overwrite tree blocks
>> used by commit_root (last committed transaction).
>>
>> Or there is nothing to protect btrfs from power loss.
>> (Btrfs doesn't use journal, but completely rely metadata CoW to handle
>> power loss)
>>
>>>
>>>> $ sudo btrfs-debug-tree -b 84258750464 /dev/nvme0n1p8
>>>> btrfs-progs v4.13
>>>> checksum verify failed on 84258750464 found E4E3BDB6 wanted 00000000
>>>> checksum verify failed on 84258750464 found E4E3BDB6 wanted 00000000
>>>> bytenr mismatch, want=84258750464, have=0
>>>> ERROR: failed to read 84258750464
>>>> [chris@f26h ~]$
>>>
>>> Even when not using discard, there might be new data in that place now,
>>> when the file system is in use...
>>
>> As explained above, btrfs use CoW to survive power loss, especially for
>> metadata, so tree blocks of last committed transaction should not be
>> touched at all.
>>
>> So, this is a problem, and I think that's why discard is not recommended
>> for btrfs (yet).
>> And we should have more runtime checks to ensure we won't allocate any
>> space used by committed transaction.
>>
>>>
>>>> This suggests a problem for any kind of automatic recovery, should it
>>>> be needed at next mount time, following a crash or power failure, as
>>>> well as rendering the usebackuproot useless.
>>>
>>> Maybe the name backuproot is useless, because it's not a backup at all.
>>
>> Most of the backup root is more or less useless, if and only if metadata
>> CoW is done completely as design.
>
> Is there an 'unless' missing here?
>
>> One more chance to recover is never a bad idea.
>
> It is a bad idea. The *only* case you can recover from is when you
> freeze the filesystem *directly* after writing the superblock. Only in
> that case you have both a consistent last committed and previous
> transaction on disk.
You're talking about the ideal case.
The truth is, we're living in a real world where all software has bugs.
And that's why we sometimes get transid errors.
So keeping the backup roots still makes sense.
And furthermore, different trees have different update frequencies.
The root and extent trees get updated every transaction, while the
chunk tree is seldom updated.
And the backup roots are updated per transaction, which means we have a
good chance to recover at least the chunk root, learn the chunk map,
and it may still be possible to grab some data.
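The rotation of the backup root slots can be sketched like this (a simplified model; the real superblock keeps four backup entries, each recording several tree roots, and the field layout here is invented for illustration):

```python
# Simplified model of the rotating backup root slots in the superblock.
NUM_BACKUP_ROOTS = 4  # the btrfs superblock keeps 4 backup root entries

class Superblock:
    def __init__(self):
        self.backup_roots = [None] * NUM_BACKUP_ROOTS
        self.next_slot = 0
        self.root = None

    def commit(self, new_root, generation):
        # The outgoing root is saved into the next backup slot before
        # the superblock starts pointing at the new root.
        if self.root is not None:
            self.backup_roots[self.next_slot] = (generation - 1, self.root)
            self.next_slot = (self.next_slot + 1) % NUM_BACKUP_ROOTS
        self.root = new_root

sb = Superblock()
for gen in range(100, 107):
    sb.commit(f"root@{gen}", gen)
# Only the 4 most recent previous roots survive; older slots were overwritten.
assert sorted(g for g, _ in sb.backup_roots) == [102, 103, 104, 105]
```

Which is why, with discard enabled, a slot can still *name* an old root while the blocks it points at are already gone: the superblock entry rotates out slowly, the discarded blocks vanish immediately.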
>
> If you do new writes and then again are able to mount with -o
> usebackuproot and if any of the
> transaction-before-the-last-committed-transaction blocks are overwritten
> you're in a field of land mines and time bombs. Being able to mount
> gives a false sense of recovery to the user in that case, because either
> you're gonna crash into transid problems for metadata, or there are
> files in the filesystem in which different data shows up than should,
> potentially allowing users to see data from other users etc... It's just
> dangerous.
>
>> As you can see, if metadata CoW is completely implemented as designed,
>> there will be no report of transid mismatch at all.
>> And btrfs should be bullet proof from the very beginning, but none of
>> these is true.
>
> It is, it's not a bug. This is about the backup roots thingie, not about
> the data from the last transaction.
Check the original post.
It only gives the raw block number; it doesn't say whether it's from a
backup root.
If it was dumped from a running fs (which is completely possible), then
it's the problem I described.
Anyway, no matter whether you think it's a bug or not, I'll enhance the
tree allocator to do an extra check on whether the result would
overwrite the commit root.
And I strongly suspect the transid-related problems reported on the
mailing list have something to do with it.
Thanks,
Qu
>
>>
>> Thanks,
>> Qu
>>
>>>
>>> Only the most recent previous one is if you have to mount a filesystem
>>> directly after some bug hosed your tree of trees during a final commit
>>> when umounting just before?
>>>
>>>> I think until discard mount option has some kind of delay (generation
>>>> based perhaps), so that at least the various backup trees, in
>>>> particular the root tree, is not immediately subject to discard, that
>>>> the Btrfs wiki needs to suggest the discard mount option is unsafe on
>>>> SSD.
>>>>
>>>> While I have not experienced any other problems in roughly a year of
>>>> using discard and Btrfs on this hardware, if I had needed a rollback
>>>> offered by use of a backup tree, they simply wouldn't have been
>>>> available, and I'd have been hosed.
>>>> (Needs testing on LVM thinp to see if discard causes a similar problem
>>>> with Btrfs on LVM thinly provisioned volumes, even with hard drives.)
>>>
>>> I actually start wondering why this option exists at all. I mean, even
>>> when it seems you get a working filesystem back with it, there can be
>>> metadata and data in all corners of the filesystem that already has been
>>> overwritten?
>>>
>>> It was introduced in commit af31f5e5b "Btrfs: add a log of past tree
>>> roots" and the only information we get is "just in case we somehow lose
>>> the roots", which is an explanation for adding this feature that does
>>> not really tell me much about it.
>>>
>
>
* Re: discard on SSDs quickly causes backup trees to vanish
2017-11-11 3:48 ` Qu Wenruo
@ 2017-11-11 4:24 ` Chris Murphy
2017-11-11 20:12 ` Hans van Kranenburg
1 sibling, 0 replies; 10+ messages in thread
From: Chris Murphy @ 2017-11-11 4:24 UTC (permalink / raw)
To: Qu Wenruo
Cc: Hans van Kranenburg, Btrfs BTRFS, David Sterba,
Austin S Hemmelgarn
On Fri, Nov 10, 2017 at 8:48 PM, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>
> Check the original post.
> It only gives the magic number, it's not saying if it's from backup root.
>
> If it's dumped from running fs (it's completely possible) then it's the
> problem I described.
There are two methods:
1. mounted filesystem
btrfs insp dump-s /dev/sda1
copy the root tree address (line 11)
btrfs-debug-tree -b [[paste root tree address]] /dev/sda1
Then repeat the btrfs-debug-tree command until it fails with the
reported checksum error message. It takes less than 30 seconds to fail
on an active file system (e.g. a rootfs where files such as logs are
being written most of the time).
2. mounted or not mounted
btrfs insp dump-s -f /dev/sda1
copy any of the root tree addresses other than the current one (the
three most recent backup root trees)
btrfs-debug-tree -b [[paste tree address]]
Result: all of those backup (non-current) root addresses fail. I think
this particular SSD is one that immediately reports zeros for discarded
blocks. Other SSDs report back the original data, even after discard,
until the SSD actually does garbage collection. So there's more than
one possible discard strategy that can confuse the results people are
seeing.
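On such a zeros-after-trim drive, the "wanted 00000000" in the error above is consistent with the whole block reading back as zeros: the stored checksum field becomes zero, while the checksum computed over the zeroed payload is not. A minimal illustration, with the caveats that zlib.crc32 stands in for btrfs's crc32c and the node layout is simplified to "checksum first, payload after":

```python
# Why a fully-zeroed (discarded) block fails checksum verification.
import zlib

NODESIZE = 16384  # a common btrfs node size
CSUM_LEN = 4      # simplified: first bytes of the block hold the checksum

def verify(block: bytes):
    """Return (ok, stored, computed) for a block under this toy layout."""
    stored = int.from_bytes(block[:CSUM_LEN], "little")
    computed = zlib.crc32(block[CSUM_LEN:]) & 0xFFFFFFFF
    return stored == computed, stored, computed

discarded = bytes(NODESIZE)      # the drive returns zeros for the whole block
ok, stored, computed = verify(discarded)
assert not ok
assert stored == 0               # hence "wanted 00000000"
assert computed != 0             # the nonzero csum of an all-zero payload
```

Drives that keep returning the old data until garbage collection would instead pass this check for a while, which is exactly why results differ between SSD models.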
>
> Anyway, no matter what you think if it's a bug or not, I'll enhance tree
> allocator to do extra check if the result overwrites the commit root.
>
> And I strongly suspect transid related problems reported from mail list
> has something to do with it.
I think so. Sometimes we forget to ask the user for all the important
information, like whether it's an SSD or what the mount options are.
But we've definitely seen cases where the user was using discard and
also had a crash or power failure, and then -o recovery/usebackuproot
would give similar messages. And it's very confusing how a working file
system suddenly seems to fail with transid errors just because of a
crash or power failure.
I think maybe I've just been lucky with this NVMe drive; it seems to
have a fast commit-to-stable-media time and to honor the expected write
order. I've definitely had crashes and forced power-offs while using
the discard mount option, but no Btrfs problems or corruption at all in
about a year of continuous usage. But what if for some reason the
current root tree were corrupt? With no backup roots, the whole file
system probably fails and can't be repaired?
--
Chris Murphy
* Re: discard on SSDs quickly causes backup trees to vanish
2017-11-11 3:48 ` Qu Wenruo
2017-11-11 4:24 ` Chris Murphy
@ 2017-11-11 20:12 ` Hans van Kranenburg
2017-11-12 0:28 ` Qu Wenruo
1 sibling, 1 reply; 10+ messages in thread
From: Hans van Kranenburg @ 2017-11-11 20:12 UTC (permalink / raw)
To: Qu Wenruo, Chris Murphy, Btrfs BTRFS; +Cc: David Sterba, Austin S Hemmelgarn
Hi,
On 11/11/2017 04:48 AM, Qu Wenruo wrote:
>
> On 2017年11月11日 11:13, Hans van Kranenburg wrote:
>> On 11/11/2017 03:30 AM, Qu Wenruo wrote:
>>>
>>
>>> One more chance to recover is never a bad idea.
>>
>> It is a bad idea. The *only* case you can recover from is when you
>> freeze the filesystem *directly* after writing the superblock. Only in
>> that case you have both a consistent last committed and previous
>> transaction on disk.
>
> You're talking about the ideal case.
>
> The truth is, we're living in a real world where every software has
> bugs. And that's why sometimes we get transid error.
>
> So keeps the backup root still makes sense.
>
> And further more, different trees have different update frequency.
> For root and extent tree, they get updated every transaction, while for
> chunk tree it's seldom updated.
>
> And backup roots are updated per transaction, which means we may have a
> high chance to recover at least chunk root and to know the chunk map and
> possible to grab some data.
That's entirely right, yes. But "possible to grab some data" is a
whole different thing from "getting the filesystem back into a fully
functional, consistent state"...
So it's about expectation management for end users. If the user thinks
"Ha! A backup! That's nice of btrfs, it keeps them so I can go back!",
then the user will be disappointed when the backups turn out to be
unusable.
The design of btrfs is that all metadata tree blocks and data extent
space not used by the last completed transaction are freed for reuse as
soon as possible. For cow-only roots (e.g. the root tree and extent
tree) this is done immediately in the transaction code after writing
the super block (btrfs_finish_extent_commit; discard is also triggered
immediately), and for reference-counted roots (subvolume roots) the
cleaner will do it asap.
So the design gives zero guarantee that following a backup root will
work. But it's better than nothing when trying to scrape some data off
of the broken filesystem.
Maybe it's enough to add a warning for the usebackuproot option to the
mount options section of man 5 btrfs, letting the user know that it
might result in a mountable filesystem, but that even when it does, the
result should only be used to get as much data as possible off of it
before running mkfs again. Or, if mounting succeeds, and if unmounting
again and running a full btrfsck and scrub to check all metadata and
data also succeeds, the user can be fairly confident that nothing
referenced by the previous backup root was already overwritten with new
data, in which case the filesystem can continue to be used.
But it puts usebackuproot very much in the same department where tools
like btrfs-restore live.
>> If you do new writes and then again are able to mount with -o
>> usebackuproot and if any of the
>> transaction-before-the-last-committed-transaction blocks are overwritten
>> you're in a field of land mines and time bombs. Being able to mount
>> gives a false sense of recovery to the user in that case, because either
>> you're gonna crash into transid problems for metadata, or there are
>> files in the filesystem in which different data shows up than should,
>> potentially allowing users to see data from other users etc... It's just
>> dangerous.
>>
>>> As you can see, if metadata CoW is completely implemented as designed,
>>> there will be no report of transid mismatch at all.
>>> And btrfs should be bullet proof from the very beginning, but none of
>>> these is true.
>>
>> It is, it's not a bug. This is about the backup roots thingie, not about
>> the data from the last transaction.
>
> Check the original post.
> It only gives the magic number, it's not saying if it's from backup root.
>
> If it's dumped from running fs (it's completely possible) then it's the
> problem I described.
>
> Anyway, no matter what you think if it's a bug or not, I'll enhance tree
> allocator to do extra check if the result overwrites the commit root.
>
> And I strongly suspect transid related problems reported from mail list
> has something to do with it.
--
Hans van Kranenburg
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: discard on SSDs quickly causes backup trees to vanish
2017-11-11 20:12 ` Hans van Kranenburg
@ 2017-11-12 0:28 ` Qu Wenruo
2017-11-13 12:57 ` Austin S. Hemmelgarn
0 siblings, 1 reply; 10+ messages in thread
From: Qu Wenruo @ 2017-11-12 0:28 UTC (permalink / raw)
To: Hans van Kranenburg, Chris Murphy, Btrfs BTRFS
Cc: David Sterba, Austin S Hemmelgarn
On 2017-11-12 04:12, Hans van Kranenburg wrote:
> Hi,
>
> On 11/11/2017 04:48 AM, Qu Wenruo wrote:
>>
>> On 2017-11-11 11:13, Hans van Kranenburg wrote:
>>> On 11/11/2017 03:30 AM, Qu Wenruo wrote:
>>>>
>>>
>>>> One more chance to recover is never a bad idea.
>>>
>>> It is a bad idea. The *only* case you can recover from is when you
>>> freeze the filesystem *directly* after writing the superblock. Only in
>>> that case you have both a consistent last committed and previous
>>> transaction on disk.
>>
>> You're talking about the ideal case.
>>
>> The truth is, we're living in a real world where every software has
>> bugs. And that's why sometimes we get transid error.
>>
>> So keeping the backup root still makes sense.
>>
>> And furthermore, different trees have different update frequencies.
>> For root and extent tree, they get updated every transaction, while for
>> chunk tree it's seldom updated.
>>
>> And backup roots are updated per transaction, which means we may have a
>> high chance to recover at least chunk root and to know the chunk map and
>> possible to grab some data.
>
> That's entirely right yes. But "possible to grab some data" is a
> whole different thing than "getting the filesystem back into a fully
> functional consistent state..."
>
> So it's about expectation management for end users. If the user
> thinks "Ha! A backup! That's nice of btrfs, it keeps them so I can go
> back!.", then the user will get disappointed when the backups are unusable.
Without discard, the user should be able to roll back to the previous
transaction (backup_root[0]).
The last transaction is committed with commit_root and root->node
switched, and as I stated in a previous mail, until this switch,
commit_root must be fully available.
And after the last transaction there is no modification (since the last
trans is for unmount), so backup_root[0] should be fully accessible.
Discard can break this unless we have a method to track tree block space
usage for at least 2 transactions.
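To illustrate the kind of tracking meant here, a toy model of a discard
delayed by N transactions (pure illustration, nothing like the real
kernel code; the DelayedDiscard name and structure are made up):

```python
from collections import deque

class DelayedDiscard:
    """Toy model: hold freed extents for `delay` committed transactions
    before issuing the discard, so trees referenced by the last few
    backup roots stay readable on disk."""

    def __init__(self, delay=2):
        self.delay = delay
        self.pending = deque()   # (generation_freed, extent) in FIFO order
        self.discarded = []      # extents for which discard was issued

    def free_extent(self, generation, extent):
        # The extent is free for reuse accounting, but not discarded yet.
        self.pending.append((generation, extent))

    def commit_transaction(self, generation):
        # Only discard extents freed at least `delay` generations ago.
        while self.pending and self.pending[0][0] <= generation - self.delay:
            _, extent = self.pending.popleft()
            self.discarded.append(extent)

q = DelayedDiscard(delay=2)
q.free_extent(100, "old_root_tree_block")
q.commit_transaction(100)   # freed this generation: kept
q.commit_transaction(101)   # still inside the 2-generation window: kept
q.commit_transaction(102)   # now old enough: discard is issued
print(q.discarded)          # ['old_root_tree_block']
```

With delay=2, the tree blocks behind backup_root[0] survive at least one
extra transaction instead of being trimmed right after the super block
is written.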
>
> The design of btrfs is that all metadata tree blocks and data extent
> space that is not used by the last completed transaction are freed to be
> reused, as soon as possible. For cow-only roots (e.g. root tree, extent
> tree) this is already done immediately in the transaction code after
> writing the super block (btrfs_finish_extent_commit, discard is also
> immediately triggered), and for reference counted roots (subvolume
> roots) the cleaner will asap do it.
>
> So, the design gives zero guarantee that following a backup root will
> work. But, it's better than nothing when trying to scrape some data off
>> of the broken filesystem.
Again, that's only the case with discard.
>
> Maybe it's enough to change man 5 btrfs with the mount options with
> a warning for the usebackuproot option to let the user know that doing
> this might result in a mountable filesystem, but that even in case it
> does, the result should only be used to get as much data as possible off
> of it before doing mkfs again. Or, if it succeeds, and if also umounting
> again and running a full btrfsck and scrub to check all metadata and
> data succeeds, the user might be pretty confident that nothing
> referenced by the previous backuproot was already overwritten with new
> data, in which case the filesystem can be continued to be used.
>
> But it puts usebackuproot very much in the same department where tools
> like btrfs-restore live.
Isn't that the original design?
No one sane would use it for daily usage, and it was originally called
"recovery". I don't see any problem here.
Thanks,
Qu
>
>>> If you do new writes and then again are able to mount with -o
>>> usebackuproot and if any of the
>>> transaction-before-the-last-committed-transaction blocks are overwritten
>>> you're in a field of land mines and time bombs. Being able to mount
>>> gives a false sense of recovery to the user in that case, because either
>>> you're gonna crash into transid problems for metadata, or there are
>>> files in the filesystem in which different data shows up than should,
>>> potentially allowing users to see data from other users etc... It's just
>>> dangerous.
>>>
>>>> As you can see, if metadata CoW is completely implemented as designed,
>>>> there will be no report of transid mismatch at all.
>>>> And btrfs should be bullet proof from the very beginning, but none of
>>>> these is true.
>>>
>>> It is, it's not a bug. This is about the backup roots thingie, not about
>>> the data from the last transaction.
>>
>> Check the original post.
>> It only gives the magic number, it's not saying if it's from backup root.
>>
>> If it's dumped from running fs (it's completely possible) then it's the
>> problem I described.
>>
>> Anyway, no matter what you think if it's a bug or not, I'll enhance tree
>> allocator to do extra check if the result overwrites the commit root.
>>
>> And I strongly suspect transid related problems reported from mail list
>> has something to do with it.
>
* Re: discard on SSDs quickly causes backup trees to vanish
2017-11-12 0:28 ` Qu Wenruo
@ 2017-11-13 12:57 ` Austin S. Hemmelgarn
2017-11-13 13:13 ` Qu Wenruo
0 siblings, 1 reply; 10+ messages in thread
From: Austin S. Hemmelgarn @ 2017-11-13 12:57 UTC (permalink / raw)
To: Qu Wenruo, Hans van Kranenburg, Chris Murphy, Btrfs BTRFS; +Cc: David Sterba
On 2017-11-11 19:28, Qu Wenruo wrote:
>
>
>> On 2017-11-12 04:12, Hans van Kranenburg wrote:
>> Hi,
>>
>> On 11/11/2017 04:48 AM, Qu Wenruo wrote:
>>>
>>> On 2017-11-11 11:13, Hans van Kranenburg wrote:
>>>> On 11/11/2017 03:30 AM, Qu Wenruo wrote:
>>>>>
>>>>
>>>>> One more chance to recover is never a bad idea.
>>>>
>>>> It is a bad idea. The *only* case you can recover from is when you
>>>> freeze the filesystem *directly* after writing the superblock. Only in
>>>> that case you have both a consistent last committed and previous
>>>> transaction on disk.
>>>
>>> You're talking about the ideal case.
>>>
>>> The truth is, we're living in a real world where every software has
>>> bugs. And that's why sometimes we get transid error.
>>>
>>> So keeping the backup root still makes sense.
>>>
>>> And furthermore, different trees have different update frequencies.
>>> For root and extent tree, they get updated every transaction, while for
>>> chunk tree it's seldom updated.
>>>
>>> And backup roots are updated per transaction, which means we may have a
>>> high chance to recover at least chunk root and to know the chunk map and
>>> possible to grab some data.
>>
>> That's entirely right yes. But "possible to grab some data" is a
>> whole different thing than "getting the filesystem back into a fully
>> functional consistent state..."
>>
>> So it's about expectation management for end users. If the user
>> thinks "Ha! A backup! That's nice of btrfs, it keeps them so I can go
>> back!.", then the user will get disappointed when the backups are unusable.
>
> Without discard, user should be able to rollback to previous transaction
> (backup_root[0])
Unless BTRFS is going out of its way to ensure this, that's not
necessarily true. I'm fairly certain that we try to reuse empty space
in already allocated chunks before allocating new ones. On a filesystem
that has the proper ratio of metadata to data chunks and very little
slack space in the metadata chunks, that means there's a reasonable
chance the old transactions will get overwritten pretty quickly
(possibly immediately).
>
> The last transaction committed with commit_root and root->node switched,
>> and as I stated in a previous mail, until this switch, commit_root must be
> fully available.
>
> And after the last transaction there is no modification (since the last
> trans is for unmount), so backuproot[0] should be fully accessible.
>
> Discard can break it unless we have method to trace tree block space
> usage for at least 2 transactions.
>
>>
>> The design of btrfs is that all metadata tree blocks and data extent
>> space that is not used by the last completed transaction are freed to be
>> reused, as soon as possible. For cow-only roots (e.g. root tree, extent
>> tree) this is already done immediately in the transaction code after
>> writing the super block (btrfs_finish_extent_commit, discard is also
>> immediately triggered), and for reference counted roots (subvolume
>> roots) the cleaner will asap do it.
>>
>> So, the design gives zero guarantee that following a backup root will
>> work. But, it's better than nothing when trying to scrape some data off
>> of the broken filesystem.
>
> Again, only for discard.
>
>>
>> Maybe it's enough to change man 5 btrfs with the mount options with
>> a warning for the usebackuproot option to let the user know that doing
>> this might result in a mountable filesystem, but that even in case it
>> does, the result should only be used to get as much data as possible off
>> of it before doing mkfs again. Or, if it succeeds, and if also umounting
>> again and running a full btrfsck and scrub to check all metadata and
>> data succeeds, the user might be pretty confident that nothing
>> referenced by the previous backuproot was already overwritten with new
>> data, in which case the filesystem can be continued to be used.
>>
>> But it puts usebackuproot very much in the same department where tools
>> like btrfs-restore live.
>
> Isn't it the original design?
> No one sane would use it for daily usage, and it was originally called
> "recovery". I don't see any problem here.
I agree on this point: it's not something regular users should be using,
but we don't really need to tell most people that. The only ones I can
see being a potential issue are those who actually read the
documentation but don't really have a good understanding of computers,
which in my experience is less than 1% of users.
>>
>>>> If you do new writes and then again are able to mount with -o
>>>> usebackuproot and if any of the
>>>> transaction-before-the-last-committed-transaction blocks are overwritten
>>>> you're in a field of land mines and time bombs. Being able to mount
>>>> gives a false sense of recovery to the user in that case, because either
>>>> you're gonna crash into transid problems for metadata, or there are
>>>> files in the filesystem in which different data shows up than should,
>>>> potentially allowing users to see data from other users etc... It's just
>>>> dangerous.
>>>>
>>>>> As you can see, if metadata CoW is completely implemented as designed,
>>>>> there will be no report of transid mismatch at all.
>>>>> And btrfs should be bullet proof from the very beginning, but none of
>>>>> these is true.
>>>>
>>>> It is, it's not a bug. This is about the backup roots thingie, not about
>>>> the data from the last transaction.
>>>
>>> Check the original post.
>>> It only gives the magic number, it's not saying if it's from backup root.
>>>
>>> If it's dumped from running fs (it's completely possible) then it's the
>>> problem I described.
>>>
>>> Anyway, no matter what you think if it's a bug or not, I'll enhance tree
>>> allocator to do extra check if the result overwrites the commit root.
>>>
>>> And I strongly suspect transid related problems reported from mail list
>>> has something to do with it.
>>
>
* Re: discard on SSDs quickly causes backup trees to vanish
2017-11-13 12:57 ` Austin S. Hemmelgarn
@ 2017-11-13 13:13 ` Qu Wenruo
0 siblings, 0 replies; 10+ messages in thread
From: Qu Wenruo @ 2017-11-13 13:13 UTC (permalink / raw)
To: Austin S. Hemmelgarn, Hans van Kranenburg, Chris Murphy,
Btrfs BTRFS
Cc: David Sterba
On 2017-11-13 20:57, Austin S. Hemmelgarn wrote:
> On 2017-11-11 19:28, Qu Wenruo wrote:
>>
>>
>> On 2017-11-12 04:12, Hans van Kranenburg wrote:
>>> Hi,
>>>
>>> On 11/11/2017 04:48 AM, Qu Wenruo wrote:
>>>>
>>>> On 2017-11-11 11:13, Hans van Kranenburg wrote:
>>>>> On 11/11/2017 03:30 AM, Qu Wenruo wrote:
>>>>>>
>>>>>
>>>>>> One more chance to recover is never a bad idea.
>>>>>
>>>>> It is a bad idea. The *only* case you can recover from is when you
>>>>> freeze the filesystem *directly* after writing the superblock. Only in
>>>>> that case you have both a consistent last committed and previous
>>>>> transaction on disk.
>>>>
>>>> You're talking about the ideal case.
>>>>
>>>> The truth is, we're living in a real world where every software has
>>>> bugs. And that's why sometimes we get transid error.
>>>>
>>>> So keeping the backup root still makes sense.
>>>>
>>>> And furthermore, different trees have different update frequencies.
>>>> For root and extent tree, they get updated every transaction, while for
>>>> chunk tree it's seldom updated.
>>>>
>>>> And backup roots are updated per transaction, which means we may have a
>>>> high chance to recover at least chunk root and to know the chunk map
>>>> and
>>>> possible to grab some data.
>>>
>>> That's entirely right yes. But "possible to grab some data" is a
>>> whole different thing than "getting the filesystem back into a fully
>>> functional consistent state..."
>>>
>>> So it's about expectation management for end users. If the user
>>> thinks "Ha! A backup! That's nice of btrfs, it keeps them so I can go
>>> back!.", then the user will get disappointed when the backups are
>>> unusable.
>>
>> Without discard, user should be able to rollback to previous transaction
>> (backup_root[0])
> Unless BTRFS is going out of its way to ensure this, that's not
> necessarily true. I'm fairly certain that we try to reuse empty space
> in already allocated chunks before allocating new ones, which would mean
> that there's a reasonable chance on a filesystem that's got the proper
> ratio of metadata and data chunks and has very little slack space in the
> metadata chunks that the old transactions will get overwritten pretty
> quickly (possibly immediately).
Then btrfs will make metadata just like butter.
The only thing that lets btrfs survive a power loss is its metadata CoW.
If the previous (committed) transaction gets modified before the current
trans is fully committed, power loss = death of data.
I'll add a new sanity check to see if this is true.
If it turns out btrfs already has such protection, then it's just
another sanity test.
If not, at least we will have found something to fix, and we'll know why
btrfs is not bullet proof against power loss.
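As a simplified illustration of what such a check could assert (plain
Python with invented names; the real check would live in the kernel's
tree block allocator, and the lengths below are made up while the bytenr
is borrowed from the original report):

```python
def overlaps(a_start, a_len, b_start, b_len):
    """True if the two byte ranges intersect."""
    return a_start < b_start + b_len and b_start < a_start + a_len

def check_allocation(new_start, new_len, commit_root_extents):
    """Refuse an allocation that would overwrite any extent still
    referenced by the previously committed transaction."""
    for start, length in commit_root_extents:
        if overlaps(new_start, new_len, start, length):
            raise AssertionError(
                f"allocation {new_start}+{new_len} would overwrite "
                f"committed extent {start}+{length}")
    return True

# (start, length) pairs still referenced by the commit root.
committed = [(84258750464, 16384), (84258783232, 16384)]

check_allocation(84258816000, 16384, committed)      # no overlap: fine
try:
    check_allocation(84258750464, 16384, committed)  # clobbers the old root
except AssertionError as e:
    print("rejected:", e)
```

If a check like this ever fires on a running filesystem, that would be
direct evidence of the CoW violation suspected above.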
Thanks,
Qu
>>
>> The last transaction committed with commit_root and root->node switched,
>> and as I stated in a previous mail, until this switch, commit_root must be
>> fully available.
>>
>> And after the last transaction there is no modification (since the last
>> trans is for unmount), so backuproot[0] should be fully accessible.
>>
>> Discard can break it unless we have method to trace tree block space
>> usage for at least 2 transactions.
>>
>>>
>>> The design of btrfs is that all metadata tree blocks and data extent
>>> space that is not used by the last completed transaction are freed to be
>>> reused, as soon as possible. For cow-only roots (e.g. root tree, extent
>>> tree) this is already done immediately in the transaction code after
>>> writing the super block (btrfs_finish_extent_commit, discard is also
>>> immediately triggered), and for reference counted roots (subvolume
>>> roots) the cleaner will asap do it.
>>>
>>> So, the design gives zero guarantee that following a backup root will
>>> work. But, it's better than nothing when trying to scrape some data off
>>> of the broken filesystem.
>>
>> Again, only for discard.
>>
>>>
>>> Maybe it's enough to change man 5 btrfs with the mount options with
>>> a warning for the usebackuproot option to let the user know that doing
>>> this might result in a mountable filesystem, but that even in case it
>>> does, the result should only be used to get as much data as possible off
>>> of it before doing mkfs again. Or, if it succeeds, and if also umounting
>>> again and running a full btrfsck and scrub to check all metadata and
>>> data succeeds, the user might be pretty confident that nothing
>>> referenced by the previous backuproot was already overwritten with new
>>> data, in which case the filesystem can be continued to be used.
>>>
>>> But it puts usebackuproot very much in the same department where tools
>>> like btrfs-restore live.
>>
>> Isn't it the original design?
>> No one sane would use it for daily usage, and it was originally called
>> "recovery". I don't see any problem here.
> I agree on this point, it's not something regular users should be using,
> but we don't really need to tell most people that. The only ones I can
> see being a potential issue are those who actually read the
> documentation but don't really have a good understanding of computers,
> which in my experience is usually less than 1% of users in most cases.
>>>
>>>>> If you do new writes and then again are able to mount with -o
>>>>> usebackuproot and if any of the
>>>>> transaction-before-the-last-committed-transaction blocks are
>>>>> overwritten
>>>>> you're in a field of land mines and time bombs. Being able to mount
>>>>> gives a false sense of recovery to the user in that case, because
>>>>> either
>>>>> you're gonna crash into transid problems for metadata, or there are
>>>>> files in the filesystem in which different data shows up than should,
>>>>> potentially allowing users to see data from other users etc... It's
>>>>> just
>>>>> dangerous.
>>>>>
>>>>>> As you can see, if metadata CoW is completely implemented as
>>>>>> designed,
>>>>>> there will be no report of transid mismatch at all.
>>>>>> And btrfs should be bullet proof from the very beginning, but none of
>>>>>> these is true.
>>>>>
>>>>> It is, it's not a bug. This is about the backup roots thingie, not
>>>>> about
>>>>> the data from the last transaction.
>>>>
>>>> Check the original post.
>>>> It only gives the magic number, it's not saying if it's from backup
>>>> root.
>>>>
>>>> If it's dumped from running fs (it's completely possible) then it's the
>>>> problem I described.
>>>>
>>>> Anyway, no matter what you think if it's a bug or not, I'll enhance
>>>> tree
>>>> allocator to do extra check if the result overwrites the commit root.
>>>>
>>>> And I strongly suspect transid related problems reported from mail list
>>>> has something to do with it.
>>>
>>
>
end of thread, other threads:[~2017-11-13 13:13 UTC | newest]
Thread overview: 10+ messages
-- links below jump to the message on this page --
2017-11-10 23:52 discard on SSDs quickly causes backup trees to vanish Chris Murphy
2017-11-11 1:54 ` Hans van Kranenburg
2017-11-11 2:30 ` Qu Wenruo
2017-11-11 3:13 ` Hans van Kranenburg
2017-11-11 3:48 ` Qu Wenruo
2017-11-11 4:24 ` Chris Murphy
2017-11-11 20:12 ` Hans van Kranenburg
2017-11-12 0:28 ` Qu Wenruo
2017-11-13 12:57 ` Austin S. Hemmelgarn
2017-11-13 13:13 ` Qu Wenruo