From: Allison Henderson <allison.henderson@oracle.com>
To: "djwong@kernel.org" <djwong@kernel.org>
Cc: Catherine Hoang <catherine.hoang@oracle.com>,
"david@fromorbit.com" <david@fromorbit.com>,
"willy@infradead.org" <willy@infradead.org>,
"linux-xfs@vger.kernel.org" <linux-xfs@vger.kernel.org>,
Chandan Babu <chandan.babu@oracle.com>,
"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
"hch@infradead.org" <hch@infradead.org>
Subject: Re: [PATCH 13/14] xfs: document the userspace fsck driver program
Date: Fri, 3 Mar 2023 23:51:02 +0000 [thread overview]
Message-ID: <88be76f153cb4cd2faff4fb95b090b0242d1a55e.camel@oracle.com> (raw)
In-Reply-To: <Y//tj5C8xb29G8DK@magnolia>
On Wed, 2023-03-01 at 16:27 -0800, Darrick J. Wong wrote:
> On Wed, Mar 01, 2023 at 05:36:59AM +0000, Allison Henderson wrote:
> > On Fri, 2022-12-30 at 14:10 -0800, Darrick J. Wong wrote:
> > > From: Darrick J. Wong <djwong@kernel.org>
> > >
> > > Add the sixth chapter of the online fsck design documentation, where
> > > we discuss the details of the data structures and algorithms used by
> > > the driver program xfs_scrub.
> > >
> > > Signed-off-by: Darrick J. Wong <djwong@kernel.org>
> > > ---
> > >  .../filesystems/xfs-online-fsck-design.rst |  313 ++++++++++++++++++++
> > >  1 file changed, 313 insertions(+)
> > >
> > >
> > > diff --git a/Documentation/filesystems/xfs-online-fsck-design.rst
> > > b/Documentation/filesystems/xfs-online-fsck-design.rst
> > > index 2e20314f1831..05b9411fac7f 100644
> > > --- a/Documentation/filesystems/xfs-online-fsck-design.rst
> > > +++ b/Documentation/filesystems/xfs-online-fsck-design.rst
> > > @@ -300,6 +300,9 @@ The seven phases are as follows:
> > >  7. Re-check the summary counters and presents the caller with a summary of
> > >     space usage and file counts.
> > >
> > > +This allocation of responsibilities will be :ref:`revisited <scrubcheck>`
> > > +later in this document.
> > > +
> > > Steps for Each Scrub Item
> > > -------------------------
> > >
> > > @@ -4505,3 +4508,313 @@ The proposed patches are in the
> > >  `orphanage adoption
> > >  <https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=repair-orphanage>`_
> > >  series.
> > > +
> > > +6. Userspace Algorithms and Data Structures
> > > +===========================================
> > > +
> > > +This section discusses the key algorithms and data structures of the
> > > +userspace program, ``xfs_scrub``, that provide the ability to drive
> > > +metadata checks and repairs in the kernel, verify file data, and look
> > > +for other potential problems.
> > > +
> > > +.. _scrubcheck:
> > > +
> > > +Checking Metadata
> > > +-----------------
> > > +
> > > +Recall the :ref:`phases of fsck work<scrubphases>` outlined earlier.
> > > +That structure follows naturally from the data dependencies designed
> > > +into the filesystem from its beginnings in 1993.
> > > +In XFS, there are several groups of metadata dependencies:
> > > +
> > > +a. Filesystem summary counts depend on consistency within the inode
> > > +   indices, the allocation group space btrees, and the realtime volume
> > > +   space information.
> > > +
> > > +b. Quota resource counts depend on consistency within the quota file
> > > +   data forks, inode indices, inode records, and the forks of every
> > > +   file on the system.
> > > +
> > > +c. The naming hierarchy depends on consistency within the directory and
> > > +   extended attribute structures.
> > > +   This includes file link counts.
> > > +
> > > +d. Directories, extended attributes, and file data depend on consistency
> > > +   within the file forks that map directory and extended attribute data
> > > +   to physical storage media.
> > > +
> > > +e. The file forks depend on consistency within inode records and the
> > > +   space metadata indices of the allocation groups and the realtime
> > > +   volume.
> > > +   This includes quota and realtime metadata files.
> > > +
> > > +f. Inode records depend on consistency within the inode metadata
> > > +   indices.
> > > +
> > > +g. Realtime space metadata depend on the inode records and data forks
> > > +   of the realtime metadata inodes.
> > > +
> > > +h. The allocation group metadata indices (free space, inodes, reference
> > > +   count, and reverse mapping btrees) depend on consistency within the
> > > +   AG headers and between all the AG metadata btrees.
> > > +
> > > +i. ``xfs_scrub`` depends on the filesystem being mounted and kernel
> > > +   support for online fsck functionality.
> > > +
> > > +Therefore, a metadata dependency graph is a convenient way to schedule
> > > +checking operations in the ``xfs_scrub`` program:
> > > +
> > > +- Phase 1 checks that the provided path maps to an XFS filesystem and
> > > +  detects the kernel's scrubbing abilities, which validates group (i).
> > > +
> > > +- Phase 2 scrubs groups (g) and (h) in parallel using a threaded
> > > +  workqueue.
> > > +
> > > +- Phase 3 checks groups (f), (e), and (d), in that order.
> > > +  These groups are all file metadata, which means that inodes are
> > > +  scanned in parallel.
> > ...When things are done in order, then they are done in serial right?
> > Things done in parallel are done at the same time.  Either the phrase
> > "in that order" needs to go away, or the last line needs to drop
>
> Each inode is processed in parallel, but individual inodes are processed
> in f-e-d order.
>
> "Phase 3 scans inodes in parallel.  For each inode, groups (f), (e), and
> (d) are checked, in that order."
Ohh, ok.  Now that I re-read it, it makes sense, but let's keep the new
one.
>
> > > +
> > > +- Phase 4 repairs everything in groups (i) through (d) so that phases
> > > +  5 and 6 may run reliably.
> > > +
> > > +- Phase 5 starts by checking groups (b) and (c) in parallel before
> > > +  moving on to checking names.
> > > +
> > > +- Phase 6 depends on groups (i) through (b) to find file data blocks
> > > +  to verify, to read them, and to report which blocks of which files
> > > +  are affected.
> > > +
> > > +- Phase 7 checks group (a), having validated everything else.
> > > +
> > > +Notice that the data dependencies between groups are enforced by the
> > > +structure of the program flow.
> > > +
> > > +Parallel Inode Scans
> > > +--------------------
> > > +
> > > +An XFS filesystem can easily contain hundreds of millions of inodes.
> > > +Given that XFS targets installations with large high-performance
> > > +storage, it is desirable to scrub inodes in parallel to minimize
> > > +runtime, particularly if the program has been invoked manually from a
> > > +command line.
> > > +This requires careful scheduling to keep the threads as evenly loaded
> > > +as possible.
> > > +
> > > +Early iterations of the ``xfs_scrub`` inode scanner naïvely created a
> > > +single workqueue and scheduled a single workqueue item per AG.
> > > +Each workqueue item walked the inode btree (with ``XFS_IOC_INUMBERS``)
> > > +to find inode chunks and then called bulkstat (``XFS_IOC_BULKSTAT``)
> > > +to gather enough information to construct file handles.
> > > +The file handle was then passed to a function to generate scrub items
> > > +for each metadata object of each inode.
> > > +This simple algorithm leads to thread balancing problems in phase 3 if
> > > +the filesystem contains one AG with a few large sparse files and the
> > > +rest of the AGs contain many smaller files.
> > > +The inode scan dispatch function was not sufficiently granular; it
> > > +should have been dispatching at the level of individual inodes, or, to
> > > +constrain memory consumption, inode btree records.
> > > +
> > > +Thanks to Dave Chinner, bounded workqueues in userspace enable
> > > +``xfs_scrub`` to avoid this problem with ease by adding a second
> > > +workqueue.
> > > +Just like before, the first workqueue is seeded with one workqueue item
> > > +per AG, and it uses INUMBERS to find inode btree chunks.
> > > +The second workqueue, however, is configured with an upper bound on the
> > > +number of items that can be waiting to be run.
> > > +Each inode btree chunk found by the first workqueue's workers is queued
> > > +to the second workqueue, and it is this second workqueue that queries
> > > +BULKSTAT, creates a file handle, and passes it to a function to
> > > +generate scrub items for each metadata object of each inode.
> > > +If the second workqueue is too full, the workqueue add function blocks
> > > +the first workqueue's workers until the backlog eases.
> > > +This doesn't completely solve the balancing problem, but reduces it
> > > +enough to move on to more pressing issues.
> > > +
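(As an aside, to make sure I'm picturing the back-pressure correctly: the
bounded queue behaves roughly like the sketch below, right?  Plain
pthreads, names made up -- not the actual xfs_scrub/libfrog code, just an
illustration of producers blocking when the queue is full.)

/*
 * Rough sketch of a bounded work queue: producers (the per-AG INUMBERS
 * walkers) block in bounded_queue_add() whenever the queue is full, so
 * the BULKSTAT consumers regulate how quickly inode chunks are produced.
 */
#include <pthread.h>

struct bounded_queue {
	pthread_mutex_t		lock;
	pthread_cond_t		not_full;
	pthread_cond_t		not_empty;
	void			**items;
	unsigned int		cap, head, tail, count;
};

/* Block until there is room, then enqueue one inode-chunk work item. */
void bounded_queue_add(struct bounded_queue *q, void *item)
{
	pthread_mutex_lock(&q->lock);
	while (q->count == q->cap)
		pthread_cond_wait(&q->not_full, &q->lock);
	q->items[q->tail] = item;
	q->tail = (q->tail + 1) % q->cap;
	q->count++;
	pthread_cond_signal(&q->not_empty);
	pthread_mutex_unlock(&q->lock);
}

/* Dequeue one item; a real version also needs a shutdown/drain path. */
void *bounded_queue_remove(struct bounded_queue *q)
{
	void *item;

	pthread_mutex_lock(&q->lock);
	while (q->count == 0)
		pthread_cond_wait(&q->not_empty, &q->lock);
	item = q->items[q->head];
	q->head = (q->head + 1) % q->cap;
	q->count--;
	pthread_cond_signal(&q->not_full);
	pthread_mutex_unlock(&q->lock);
	return item;
}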
> > > +The proposed patchsets are the scrub
> > > +`performance tweaks
> > > +<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=scrub-performance-tweaks>`_
> > > +and the
> > > +`inode scan rebalance
> > > +<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=scrub-iscan-rebalance>`_
> > > +series.
> > > +
> > > +.. _scrubrepair:
> > > +
> > > +Scheduling Repairs
> > > +------------------
> > > +
> > > +During phase 2, corruptions and inconsistencies reported in any AGI
> > > +header or inode btree are repaired immediately, because phase 3 relies
> > > +on proper functioning of the inode indices to find inodes to scan.
> > > +Failed repairs are rescheduled to phase 4.
> > > +Problems reported in any other space metadata are deferred to phase 4.
> > > +Optimization opportunities are always deferred to phase 4, no matter
> > > +their origin.
> > > +
> > > +During phase 3, corruptions and inconsistencies reported in any part
> > > +of a file's metadata are repaired immediately if all space metadata
> > > +were validated during phase 2.
> > > +Repairs that fail or that cannot be attempted immediately are scheduled
> > > +for phase 4.
> > > +
> > > +In the original design of ``xfs_scrub``, it was thought that repairs
> > > +would be so infrequent that the ``struct xfs_scrub_metadata`` objects
> > > +used to communicate with the kernel could also be used as the primary
> > > +object to schedule repairs.
> > > +With recent increases in the number of optimizations possible for a
> > > +given filesystem object, it became much more memory-efficient to track
> > > +all eligible repairs for a given filesystem object with a single repair
> > > +item.
> > > +Each repair item represents a single lockable object -- AGs, metadata
> > > +files, individual inodes, or a class of summary information.
> > > +
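(To check my reading of "one repair item per lockable object", I picture
something shaped roughly like this -- purely illustrative, the real
structure is surely named and laid out differently:)

/*
 * Illustrative only: one repair item per lockable object, carrying the
 * full set of scrub/repair types still outstanding for that object.
 */
#include <stdint.h>

struct repair_item_sketch {
	/* Which object this item locks: an AG, a file, or summary data. */
	uint32_t	agno;		/* for AG-scoped repairs */
	uint64_t	ino;		/* for file-scoped repairs */
	uint32_t	gen;		/* inode generation, if a file */

	/* Bitmask of scrub types that still need repair for this object. */
	uint64_t	repair_mask;
};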
> > > +Phase 4 is responsible for scheduling a lot of repair work in as quick
> > > +a manner as is practical.
> > > +The :ref:`data dependencies <scrubcheck>` outlined earlier still apply,
> > > +which means that ``xfs_scrub`` must try to complete the repair work
> > > +scheduled by phase 2 before trying repair work scheduled by phase 3.
> > > +The repair process is as follows:
> > > +
> > > +1. Start a round of repair with a workqueue and enough workers to keep
> > > +   the CPUs as busy as the user desires.
> > > +
> > > +   a. For each repair item queued by phase 2,
> > > +
> > > +      i.   Ask the kernel to repair everything listed in the repair
> > > +           item for a given filesystem object.
> > > +
> > > +      ii.  Make a note if the kernel made any progress in reducing the
> > > +           number of repairs needed for this object.
> > > +
> > > +      iii. If the object no longer requires repairs, revalidate all
> > > +           metadata associated with this object.
> > > +           If the revalidation succeeds, drop the repair item.
> > > +           If not, requeue the item for more repairs.
> > > +
> > > +   b. If any repairs were made, jump back to 1a to retry all the
> > > +      phase 2 items.
> > > +
> > > +   c. For each repair item queued by phase 3,
> > > +
> > > +      i.   Ask the kernel to repair everything listed in the repair
> > > +           item for a given filesystem object.
> > > +
> > > +      ii.  Make a note if the kernel made any progress in reducing the
> > > +           number of repairs needed for this object.
> > > +
> > > +      iii. If the object no longer requires repairs, revalidate all
> > > +           metadata associated with this object.
> > > +           If the revalidation succeeds, drop the repair item.
> > > +           If not, requeue the item for more repairs.
> > > +
> > > +   d. If any repairs were made, jump back to 1c to retry all the
> > > +      phase 3 items.
> > > +
> > > +2. If step 1 made any repair progress of any kind, jump back to step 1
> > > +   to start another round of repair.
> > > +
> > > +3. If there are items left to repair, run them all serially one more
> > > +   time.
> > > +   Complain if the repairs were not successful, since this is the last
> > > +   chance to repair anything.
> > > +
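(For my own notes, the phase 4 control flow above boils down to something
like this pseudo-C; repair_one_pass() and repair_serially() are invented
names, not the real xfs_scrub helpers:)

#include <stdbool.h>

struct repair_list;

/* One parallel pass over a list; returns true if any repair progressed. */
bool repair_one_pass(struct repair_list *list);
/* Last-chance serial pass that complains about anything still broken. */
void repair_serially(struct repair_list *list);

void phase4_repair(struct repair_list *phase2_items,
		   struct repair_list *phase3_items)
{
	bool progress;

	do {
		progress = false;

		/* Retry phase 2 (space metadata) items until they stop improving. */
		while (repair_one_pass(phase2_items))
			progress = true;

		/* Then retry phase 3 (file metadata) items the same way. */
		while (repair_one_pass(phase3_items))
			progress = true;
	} while (progress);

	/* Step 3: run whatever is left serially one final time. */
	repair_serially(phase2_items);
	repair_serially(phase3_items);
}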
> > > +Corruptions and inconsistencies encountered during phases 5 and 7 are
> > > +repaired immediately.
> > > +Corrupt file data blocks reported by phase 6 cannot be recovered by
> > > +the filesystem.
> > > +
> > > +The proposed patchsets are the
> > > +`repair warning improvements
> > > +<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=scrub-better-repair-warnings>`_,
> > > +refactoring of the
> > > +`repair data dependency
> > > +<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=scrub-repair-data-deps>`_
> > > +and
> > > +`object tracking
> > > +<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=scrub-object-tracking>`_,
> > > +and the
> > > +`repair scheduling
> > > +<https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=scrub-repair-scheduling>`_
> > > +improvement series.
> > > +
> > > +Checking Names for Confusable Unicode Sequences
> > > +-----------------------------------------------
> > > +
> > > +If ``xfs_scrub`` succeeds in validating the filesystem metadata by the
> > > +end of phase 4, it moves on to phase 5, which checks for suspicious
> > > +looking names in the filesystem.
> > > +These names consist of the filesystem label, names in directory
> > > +entries, and the names of extended attributes.
> > > +Like most Unix filesystems, XFS imposes the sparest of constraints on
> > > +the contents of a name -- slashes and null bytes are not allowed in
> > > +directory entries; and null bytes are not allowed in extended
> > > +attributes and the
> > maybe say "standard user accessible extended attributes"
>
> "userspace visible"?
That's fine; mostly I meant to exclude parent pointers, but I've seen
other ideas that talk about using xattrs to store binary metadata, so
pptrs may not be the last to do this.
>
> I'll list-ify this too:
>
> Like most Unix filesystems, XFS imposes the sparest of constraints on
> the contents of a name:
>
> - slashes and null bytes are not allowed in directory entries;
>
> - null bytes are not allowed in userspace-visible extended attributes;
>
> - null bytes are not allowed in the filesystem label
Ok, I think that works
>
> > > +filesystem label.
> > > +Directory entries and attribute keys store the length of the name
> > > +explicitly ondisk, which means that nulls are not name terminators.
> > > +For this section, the term "naming domain" refers to any place where
> > > +names are presented together -- all the names in a directory, or all
> > > +the attributes of a file.
> > > +
> > > +Although the Unix naming constraints are very permissive, the reality
> > > +of most modern-day Linux systems is that programs work with Unicode
> > > +character code points to support international languages.
> > > +These programs typically encode those code points in UTF-8 when
> > > +interfacing with the C library because the kernel expects
> > > +null-terminated names.
> > > +In the common case, therefore, names found in an XFS filesystem are
> > > +actually UTF-8 encoded Unicode data.
> > > +
> > > +To maximize its expressiveness, the Unicode standard defines separate
> > > +code points for various characters that render similarly or identically
> > > +in writing systems around the world.
> > > +For example, the character "Cyrillic Small Letter A" U+0430 "а" often
> > > +renders identically to "Latin Small Letter A" U+0061 "a".
> >
> >
> > > +
> > > +The standard also permits characters to be constructed in multiple
> > > +ways -- either by using a defined code point, or by combining one code
> > > +point with various combining marks.
> > > +For example, the character "Angstrom Sign" U+212B "Å" can also be
> > > +expressed as "Latin Capital Letter A" U+0041 "A" followed by
> > > +"Combining Ring Above" U+030A "◌̊".
> > > +Both sequences render identically.
> > > +
> > > +Like the standards that preceded it, Unicode also defines various
> > > +control characters to alter the presentation of text.
> > > +For example, the character "Right-to-Left Override" U+202E can trick
> > > +some programs into rendering "moo\\xe2\\x80\\xaegnp.txt" as
> > > +"mootxt.png".
> > > +A second category of rendering problems involves whitespace characters.
> > > +If the character "Zero Width Space" U+200B is encountered in a file
> > > +name, the name will render identically to a name that does not have
> > > +the zero width space.
> > > +
> > > +If two names within a naming domain have different byte sequences but
> > > +render identically, a user may be confused by it.
> > > +The kernel, in its indifference to upper level encoding schemes,
> > > +permits this.
> > > +Most filesystem drivers persist the byte sequence names that are given
> > > +to them by the VFS.
> > > +
> > > +Techniques for detecting confusable names are explained in great
> > > +detail in sections 4 and 5 of the
> > > +`Unicode Security Mechanisms <https://unicode.org/reports/tr39/>`_
> > > +document.
> > I don't know that we need this much detail on character rendering.  I
> > think the example above is enough to make the point that character
> > strings can differ in binary, but render the same, so we need to deal
> > with that.  So I think that's really all the justification we need for
> > the NFD usage.
>
> I want to leave the link in, because TR39 is the canonical source for
> information about confusability detection.  That is the location where
> the Unicode folks publish everything they currently know on the topic.
Sure, maybe just keep the last line then.
Allison
>
> > > +``xfs_scrub``, when it detects UTF-8 encoding in use on a system, uses the
> > When ``xfs_scrub`` detects UTF-8 encoding, it uses the...
>
> Changed, thanks.
>
> > > +Unicode normalization form NFD in conjunction with the confusable name
> > > +detection component of
> > > +`libicu <https://github.com/unicode-org/icu>`_
> > > +to identify names within a directory or within a file's extended
> > > +attributes that could be confused for each other.
> > > +Names are also checked for control characters, non-rendering
> > > +characters, and mixing of bidirectional characters.
> > > +All of these potential issues are reported to the system administrator
> > > +during phase 5.
> > > +
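(For anyone curious what the libicu hookup looks like, here's a minimal
sketch of a pairwise confusability check using ICU's spoof checker.  The
real phase 5 code works per naming domain as described above rather than
comparing pairs, so treat this purely as an illustration:)

/*
 * Minimal sketch: ask ICU whether two UTF-8 names could render
 * identically.  Returns 1 if confusable, 0 if not, -1 on error.
 */
#include <unicode/uspoof.h>

int names_confusable(const char *name1, const char *name2)
{
	UErrorCode err = U_ZERO_ERROR;
	USpoofChecker *sc = uspoof_open(&err);
	int ret;

	if (U_FAILURE(err))
		return -1;
	/* A nonzero return value means the two strings could be confused. */
	ret = uspoof_areConfusableUTF8(sc, name1, -1, name2, -1, &err) != 0;
	uspoof_close(sc);
	return U_FAILURE(err) ? -1 : ret;
}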
> > > +Media Verification of File Data Extents
> > > +---------------------------------------
> > > +
> > > +The system administrator can elect to initiate a media scan of all
> > > +file data blocks.
> > > +This scan runs after validation of all filesystem metadata (except for
> > > +the summary counters) as phase 6.
> > > +The scan starts by calling ``FS_IOC_GETFSMAP`` to scan the filesystem
> > > +space map to find areas that are allocated to file data fork extents.
> > > +Gaps between data fork extents that are smaller than 64k are treated
> > > +as if they were data fork extents to reduce the command setup overhead.
> > > +When the space map scan accumulates a region larger than 32MB, a media
> > > +verification request is sent to the disk as a directio read of the raw
> > > +block device.
> > > +
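(The gap-coalescing policy in the paragraph above is simple enough to
sketch.  The 64k/32MB thresholds come from the text; issue_verify_read()
is a made-up helper standing in for the direct-I/O read, and extents are
assumed to arrive in ascending physical order, as GETFSMAP reports them:)

#include <stdint.h>

#define SMALL_GAP	(64ULL << 10)	/* merge across gaps smaller than 64k */
#define FLUSH_LEN	(32ULL << 20)	/* issue a read once 32MB accumulates */

struct pending_read {
	uint64_t	start;
	uint64_t	len;		/* zero means nothing pending */
};

void issue_verify_read(int fd, uint64_t start, uint64_t len);	/* hypothetical */

void add_data_extent(struct pending_read *p, int fd,
		     uint64_t start, uint64_t len)
{
	/* Merge with the pending region if the gap between them is small. */
	if (p->len && start - (p->start + p->len) < SMALL_GAP) {
		p->len = (start + len) - p->start;
	} else {
		if (p->len)
			issue_verify_read(fd, p->start, p->len);
		p->start = start;
		p->len = len;
	}

	/* Flush once enough of the device has been accumulated. */
	if (p->len >= FLUSH_LEN) {
		issue_verify_read(fd, p->start, p->len);
		p->len = 0;
	}
}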
> > > +If the verification read fails, ``xfs_scrub`` retries with single-block
> > > +reads to narrow down the failure to the specific region of the media,
> > > +and the result is recorded.
> > > +When it has finished issuing verification requests, it again uses the
> > > +space mapping ioctl to map the recorded media errors back to metadata
> > > +structures and report what has been lost.
> > > +For media errors in blocks owned by files, the lack of parent pointers
> > > +means that the entire filesystem must be walked to report the file
> > > +paths and offsets corresponding to the media error.
> > >
> > This last bit will need to be updated after we come to a decision with
> > the RFC.
>
> I'll at least update it since this doc is now pretty deep into the pptrs
> stuff:
>
> "For media errors in blocks owned by files, parent pointers can be used
> to construct file paths from inode numbers for user-friendly reporting."
>
> > Other than that, I think it looks pretty good.
>
> Woot.
>
> --D
>
> > Allison
> >