linux-btrfs.vger.kernel.org archive mirror
* raid56 write hole
@ 2020-04-30 17:00 Rollo ro
  2020-05-01  2:30 ` Zygo Blaxell
  0 siblings, 1 reply; 6+ messages in thread
From: Rollo ro @ 2020-04-30 17:00 UTC (permalink / raw)
  To: linux-btrfs

Hi, I read about the write hole and want to share my thoughts. I'm not
sure I understand it in full depth, but as far as I can tell, the issue
concerns the superblocks on the disks: they contain the addresses of
the tree roots, and when the trees are changed using copy on write, we
get a new root whose address needs to be written into the superblock.
That leads to inconsistent data if one address is updated but another
one is not. Is this about right? (I'm not sure whether we are talking
here about a discrepancy between two root addresses in one superblock,
between two superblocks on different disks, or possibly both.)

In my opinion, it's mandatory to have a consistent filesystem at _any_
point in time, so we can't rely on a flush to write out all the new
addresses!

I propose that the superblock should not contain one single, hopefully
correct root address per tree, but an array of addresses of tree root
_candidates_. The addresses in the superblocks are written on
filesystem creation, but no longer during normal operation. In normal
operation, when we want to switch to a new tree version, only _one_ of
the root candidates is overwritten with new content, so the superblock
holds the latest root alongside some older roots. Now the point is: if
there is a power outage or crash during a flush, we have all the
information needed to roll back to the last consistent version.
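The rollback idea can be illustrated with a tiny sketch (the names are
mine, purely illustrative, not btrfs code): a commit only ever rewrites
a single candidate slot, so everything committed before is physically
untouched and survives an interrupted write.

```c
#include <stdint.h>

enum { NUM_CANDIDATES = 8 };

/* One root candidate slot as proposed: an address plus a version. */
typedef struct {
    uint64_t RootAddress;
    uint8_t  Version;
} RootCandidate_t;

/* Commit a new root by overwriting exactly one slot (the oldest one).
 * All other slots keep their bytes, so a crash during this write can
 * only damage the new candidate, never an older consistent one. */
static void CommitRoot(RootCandidate_t slots[NUM_CANDIDATES],
                       uint8_t oldest_slot,
                       uint64_t new_root, uint8_t new_version)
{
    slots[oldest_slot].RootAddress = new_root;
    slots[oldest_slot].Version = new_version;
}
```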

We just need to find out which root candidate to use (this is why I
call them candidates). To achieve that, each root candidate has an
additional attribute, something like a version counter, and we also
keep a version counter variable in RAM. On a transition we overwrite
the oldest root candidate of each tree with all needed information,
its counter set to our current counter variable, and a checksum. The
counter variable is incremented afterwards. At some point it will
overflow, so we need to account for that when we search for the
latest one. Let's say we use 8 candidates; then the superblock will
contain something like:

LogicalAddress_t AddressRootCandidatesMetaData[8];
LogicalAddress_t AddressRootCandidatesData[8];

(just as an example)
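As a concrete (hypothetical) C rendering of that layout — the
LogicalAddress_t type and the field names are just my example, not
existing btrfs structures:

```c
#include <stdint.h>

typedef uint64_t LogicalAddress_t; /* assumed: 64-bit logical address */

enum { NUM_ROOT_CANDIDATES = 8 };

/* Per-tree candidate arrays inside the superblock, as proposed:
 * written fully at mkfs time, then individual slots are rotated
 * through during normal operation. */
typedef struct {
    LogicalAddress_t AddressRootCandidatesMetaData[NUM_ROOT_CANDIDATES];
    LogicalAddress_t AddressRootCandidatesData[NUM_ROOT_CANDIDATES];
} RootCandidateArrays_t;
```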

While mounting, we read all '8 x number of trees x disks' root
candidates, look up their version counters and check their checksums.
We use a struct like:

typedef struct
{
    uint8_t Version;
    CheckResult_te CheckResult; /* enum: INVALID = 0, VALID = 1 */
} VersionWithCheckResult_t;

and build an array with that:

enum {ARRAY_SIZE = 8};
VersionWithCheckResult_t VersionWithCheckResult[ARRAY_SIZE];

and fill it in a loop. For example, we get:

{3, VALID}, {4, VALID}, {253, VALID}, {254, VALID}, {255, VALID}, {0,
VALID}, {1, VALID}, {2, VALID}
(-> Second entry is the most recent valid one)

We'd like to get this from all disks for all trees, but if there was a
crash, some disks may not have written the new root candidate at all:

{3, VALID}, {252, VALID}, {253, VALID}, {254, VALID}, {255, VALID},
{0, VALID}, {1, VALID}, {2, VALID}
(-> First entry is the most recent valid one, as the second entry has
not been updated)

or even left a corrupted one, which we will recognize by the checksum:

{3, VALID}, {123, INVALID}, {253, VALID}, {254, VALID}, {255, VALID},
{0, VALID}, {1, VALID}, {2, VALID}
(-> First entry is the most recent valid one, as the second entry has
been corrupted)

Then we walk through that array, first searching for the first valid
entry, and then checking whether there are more recent valid entries,
like:

uint8_t IndexOfMostRecentValidEntry = 0xFF;
uint8_t i = 0;
while ((i < ARRAY_SIZE) && (IndexOfMostRecentValidEntry == 0xFF))
{
    if (VersionWithCheckResult[i].CheckResult == VALID)
    {
        IndexOfMostRecentValidEntry = i;
    }
    i++;
}

for (i = 0; i < ARRAY_SIZE; i++)
{
    uint8_t IndexNext = CalcIndexNext(IndexOfMostRecentValidEntry); /*
calculates the next index with respect to wrap-around */
    uint8_t MoreRecentExpectedVersion =
VersionWithCheckResult[IndexOfMostRecentValidEntry].Version + 1u; /*
overflows from 0xFF to 0, just like the on-disk version numbers */
    if ((VersionWithCheckResult[IndexNext].Version ==
MoreRecentExpectedVersion) &&
(VersionWithCheckResult[IndexNext].CheckResult == VALID))
    {
        IndexOfMostRecentValidEntry = IndexNext;
    }
}

Then we build another array that is aligned to the entry we found:

VersionWithCheckResult_t VersionWithCheckResultSorted[ARRAY_SIZE] =
{0}; /* all elements initialized to 0 (INVALID) */
uint8_t Index = IndexOfMostRecentValidEntry;
for (i = 0; i < ARRAY_SIZE; i++)
{
    VersionWithCheckResultSorted[i] = VersionWithCheckResult[Index];
    Index = CalcIndexPrevious(Index); /* calculates the previous
index with respect to wrap-around */
}
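The helpers CalcIndexNext() and CalcIndexPrevious() are only named
above; here is one possible implementation, assuming the candidates
form a simple ring of ARRAY_SIZE slots:

```c
#include <stdint.h>

enum { ARRAY_SIZE = 8 };

/* Next slot in the candidate ring, wrapping from 7 back to 0. */
static uint8_t CalcIndexNext(uint8_t index)
{
    return (uint8_t)((index + 1u) % ARRAY_SIZE);
}

/* Previous slot in the candidate ring, wrapping from 0 back to 7. */
static uint8_t CalcIndexPrevious(uint8_t index)
{
    return (uint8_t)((index + ARRAY_SIZE - 1u) % ARRAY_SIZE);
}
```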

With the 3 example datasets from above, we get:

{4, VALID}, {3, VALID}, {2, VALID}, {1, VALID}, {0, VALID}, {255,
VALID}, {254, VALID}, {253, VALID}
{3, VALID}, {2, VALID}, {1, VALID}, {0, VALID}, {255, VALID}, {254,
VALID}, {253, VALID}, {252, VALID}
{3, VALID}, {2, VALID}, {1, VALID}, {0, VALID}, {255, VALID}, {254,
VALID}, {253, VALID}, {123, INVALID}

Now the versions are prioritized from left to right. It's easy to see
that the latest version we can use is 3: we just fall back to the
latest version for which we found a valid root candidate for every
tree on every disk. In this example, it's index = 0 in the superblock
array. So we use that root to mount, and set the in-RAM counter
variable to 4 (one past the mounted version) for the next writes.
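Putting the pieces together, here is a small self-contained sketch (my
own naming, not actual btrfs code) that runs the candidate search on
the three example datasets above:

```c
#include <stdint.h>

enum { ARRAY_SIZE = 8 };
typedef enum { INVALID = 0, VALID = 1 } CheckResult_te;
typedef struct {
    uint8_t Version;
    CheckResult_te CheckResult;
} VersionWithCheckResult_t;

static uint8_t CalcIndexNext(uint8_t i)
{
    return (uint8_t)((i + 1u) % ARRAY_SIZE);
}

/* Find the most recent valid candidate: locate the first valid entry,
 * then follow successors (version + 1, with 8-bit wrap) while they
 * are valid. Returns 0xFF if no entry is valid at all. */
static uint8_t FindMostRecentValid(const VersionWithCheckResult_t *v)
{
    uint8_t best = 0xFF;
    for (uint8_t i = 0; i < ARRAY_SIZE && best == 0xFF; i++) {
        if (v[i].CheckResult == VALID)
            best = i;
    }
    if (best == 0xFF)
        return 0xFF;
    for (uint8_t i = 0; i < ARRAY_SIZE; i++) {
        uint8_t next = CalcIndexNext(best);
        uint8_t expected = (uint8_t)(v[best].Version + 1u); /* 255 -> 0 */
        if (v[next].Version == expected && v[next].CheckResult == VALID)
            best = next;
    }
    return best;
}
```

With the three datasets, this selects version 4 for the clean case and
version 3 for both the missing-write and the corrupted case; the mount
then uses the oldest of the per-tree results (here: 3).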

As a consequence, there is no write hole, because we always fall back
to the latest consistently available state and discard the last write
if it was not completed correctly for all trees.

notes:
- It's required that the parity data is also organized in COW trees.
I don't know if it's done that way now.
- For now I assumed that a root candidate is either properly written
or not. One could think of skipping one position and going to the
next candidate in case of a recognized write error, but this is not
covered in this example. Hence, if there is an invalid entry, the
lookup loop does not search for further valid entries.
- All data that the 8 root candidates point to needs to be kept,
because we don't know which one will be used on the next mount. The
data can be discarded after its root candidate has been overwritten.
- ARRAY_SIZE basically could be 2. I just thought we could have some more.

What do you think?


Thread overview: 6+ messages
2020-04-30 17:00 raid56 write hole Rollo ro
2020-05-01  2:30 ` Zygo Blaxell
2020-05-01 13:57   ` Rollo ro
2020-05-02  5:56     ` Zygo Blaxell
2020-05-03  0:40       ` Rollo ro
2020-05-04  0:04         ` Zygo Blaxell
