From: Mark Brown <broonie@kernel.org>
To: David Sterba <dsterba@suse.cz>
Cc: David Sterba <dsterba@suse.com>,
Filipe Manana <fdmanana@suse.com>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Linux Next Mailing List <linux-next@vger.kernel.org>,
Qu Wenruo <wqu@suse.com>
Subject: linux-next: manual merge of the btrfs tree with the btrfs-fixes tree
Date: Tue, 17 Mar 2026 13:48:33 +0000 [thread overview]
Message-ID: <ablbsQZEOkzcDrHC@sirena.org.uk> (raw)
Hi all,
Today's linux-next merge of the btrfs tree got a conflict in:
fs/btrfs/inode.c
between commit:
2b4cb4e58f346 ("btrfs: check for NULL root after calls to btrfs_csum_root()")
from the btrfs-fixes tree and commits:
95c2b73d28d71 ("btrfs: check for NULL root after calls to btrfs_csum_root()")
0d0b5f693bf44 ("btrfs: make add_pending_csums() to take an ordered extent as parameter")
30525e90e1e86 ("btrfs: rename btrfs_csum_file_blocks() to btrfs_insert_data_csums()")
from the btrfs tree.
I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging. You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.
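The resolution below is rendered as a git combined diff. For readers unfamiliar with that format, the following scratch-repo sketch (every path, branch name, and file content here is made up) reproduces the same `@@@` hunk style from a conflicting merge:

```shell
# Scratch-repo sketch (all names hypothetical): produce a combined
# ("--cc") diff like the one in this mail from a conflicted merge.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main repo && cd repo
git config user.email test@example.com
git config user.name test
echo base > inode.c; git add inode.c; git commit -qm base
git checkout -qb fixes
echo fixes-change > inode.c; git commit -qam fixes
git checkout -q main
echo tree-change > inode.c; git commit -qam tree
git merge fixes || true                 # both branches changed inode.c: conflict
echo resolved > inode.c                 # hand-resolve the conflict
git add inode.c; git commit -qm merge
combined=$(git log -1 --cc -p)          # merge commit rendered as a combined diff
printf '%s\n' "$combined" | grep '@@@'  # hunk headers carry three @ signs
```

In a combined diff each line carries one marker column per parent, which is why hunk headers use `@@@` and why some lines above are prefixed with two markers.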
diff --combined fs/btrfs/inode.c
index f643a05208720,8d97a8ad3858b..0000000000000
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@@ -74,7 -74,6 +74,6 @@@
#include "delayed-inode.h"
#define COW_FILE_RANGE_KEEP_LOCKED (1UL << 0)
- #define COW_FILE_RANGE_NO_INLINE (1UL << 1)
struct btrfs_iget_args {
u64 ino;
@@@ -424,7 -423,7 +423,7 @@@ static inline void btrfs_cleanup_ordere
folio_put(folio);
}
- return btrfs_mark_ordered_io_finished(inode, NULL, offset, bytes, false);
+ return btrfs_mark_ordered_io_finished(inode, offset, bytes, false);
}
static int btrfs_dirty_inode(struct btrfs_inode *inode);
@@@ -622,6 -621,10 +621,10 @@@ static bool can_cow_file_range_inline(s
*
* If being used directly, you must have already checked we're allowed to cow
* the range by getting true from can_cow_file_range_inline().
+ *
+ * Return 0 if the inline extent is created successfully.
+ * Return <0 on a critical error, which should be treated as a writeback error.
+ * Return >0 if an inline extent can not be created (mostly due to lack of metadata space).
*/
static noinline int __cow_file_range_inline(struct btrfs_inode *inode,
u64 size, size_t compressed_size,
@@@ -703,55 -706,6 +706,6 @@@ out
return ret;
}
- static noinline int cow_file_range_inline(struct btrfs_inode *inode,
- struct folio *locked_folio,
- u64 offset, u64 end,
- size_t compressed_size,
- int compress_type,
- struct folio *compressed_folio,
- bool update_i_size)
- {
- struct extent_state *cached = NULL;
- unsigned long clear_flags = EXTENT_DELALLOC | EXTENT_DELALLOC_NEW |
- EXTENT_DEFRAG | EXTENT_DO_ACCOUNTING | EXTENT_LOCKED;
- u64 size = min_t(u64, i_size_read(&inode->vfs_inode), end + 1);
- int ret;
-
- if (!can_cow_file_range_inline(inode, offset, size, compressed_size))
- return 1;
-
- btrfs_lock_extent(&inode->io_tree, offset, end, &cached);
- ret = __cow_file_range_inline(inode, size, compressed_size,
- compress_type, compressed_folio,
- update_i_size);
- if (ret > 0) {
- btrfs_unlock_extent(&inode->io_tree, offset, end, &cached);
- return ret;
- }
-
- /*
- * In the successful case (ret == 0 here), cow_file_range will return 1.
- *
- * Quite a bit further up the callstack in extent_writepage(), ret == 1
- * is treated as a short circuited success and does not unlock the folio,
- * so we must do it here.
- *
- * In the failure case, the locked_folio does get unlocked by
- * btrfs_folio_end_all_writers, which asserts that it is still locked
- * at that point, so we must *not* unlock it here.
- *
- * The other two callsites in compress_file_range do not have a
- * locked_folio, so they are not relevant to this logic.
- */
- if (ret == 0)
- locked_folio = NULL;
-
- extent_clear_unlock_delalloc(inode, offset, end, locked_folio, &cached,
- clear_flags, PAGE_UNLOCK |
- PAGE_START_WRITEBACK | PAGE_END_WRITEBACK);
- return ret;
- }
-
struct async_extent {
u64 start;
u64 ram_size;
@@@ -797,7 -751,7 +751,7 @@@ static int add_async_extent(struct asyn
* options, defragmentation, properties or heuristics.
*/
static inline int inode_need_compress(struct btrfs_inode *inode, u64 start,
- u64 end)
+ u64 end, bool check_inline)
{
struct btrfs_fs_info *fs_info = inode->root->fs_info;
@@@ -811,8 -765,10 +765,10 @@@
* do not even bother try compression, as there will be no space saving
* and will always fallback to regular write later.
*/
- if (start != 0 && end + 1 - start <= fs_info->sectorsize)
+ if (end + 1 - start <= fs_info->sectorsize &&
+ (!check_inline || (start > 0 || end + 1 < inode->disk_i_size)))
return 0;
+
/* Defrag ioctl takes precedence over mount options and properties. */
if (inode->defrag_compress == BTRFS_DEFRAG_DONT_COMPRESS)
return 0;
@@@ -890,28 -846,20 +846,20 @@@ static struct folio *compressed_bio_las
return page_folio(phys_to_page(paddr));
}
- static void zero_last_folio(struct compressed_bio *cb)
- {
- struct bio *bio = &cb->bbio.bio;
- struct folio *last_folio = compressed_bio_last_folio(cb);
- const u32 bio_size = bio->bi_iter.bi_size;
- const u32 foffset = offset_in_folio(last_folio, bio_size);
-
- folio_zero_range(last_folio, foffset, folio_size(last_folio) - foffset);
- }
-
static void round_up_last_block(struct compressed_bio *cb, u32 blocksize)
{
struct bio *bio = &cb->bbio.bio;
struct folio *last_folio = compressed_bio_last_folio(cb);
const u32 bio_size = bio->bi_iter.bi_size;
const u32 foffset = offset_in_folio(last_folio, bio_size);
+ const u32 padding_len = round_up(foffset, blocksize) - foffset;
bool ret;
if (IS_ALIGNED(bio_size, blocksize))
return;
- ret = bio_add_folio(bio, last_folio, round_up(foffset, blocksize) - foffset, foffset);
+ folio_zero_range(last_folio, foffset, padding_len);
+ ret = bio_add_folio(bio, last_folio, padding_len, foffset);
/* The remaining part should be merged thus never fail. */
ASSERT(ret);
}
@@@ -935,9 -883,7 +883,7 @@@ static void compress_file_range(struct
container_of(work, struct async_chunk, work);
struct btrfs_inode *inode = async_chunk->inode;
struct btrfs_fs_info *fs_info = inode->root->fs_info;
- struct address_space *mapping = inode->vfs_inode.i_mapping;
struct compressed_bio *cb = NULL;
- const u32 min_folio_size = btrfs_min_folio_size(fs_info);
u64 blocksize = fs_info->sectorsize;
u64 start = async_chunk->start;
u64 end = async_chunk->end;
@@@ -947,7 -893,6 +893,6 @@@
int ret = 0;
unsigned long total_compressed = 0;
unsigned long total_in = 0;
- unsigned int loff;
int compress_type = fs_info->compress_type;
int compress_level = fs_info->compress_level;
@@@ -1009,7 -954,7 +954,7 @@@ again
* been flagged as NOCOMPRESS. This flag can change at any time if we
* discover bad compression ratios.
*/
- if (!inode_need_compress(inode, start, end))
+ if (!inode_need_compress(inode, start, end, false))
goto cleanup_and_bail_uncompressed;
if (0 < inode->defrag_compress && inode->defrag_compress < BTRFS_NR_COMPRESS_TYPES) {
@@@ -1030,43 -975,13 +975,13 @@@
total_compressed = cb->bbio.bio.bi_iter.bi_size;
total_in = cur_len;
- /*
- * Zero the tail end of the last folio, as we might be sending it down
- * to disk.
- */
- loff = (total_compressed & (min_folio_size - 1));
- if (loff)
- zero_last_folio(cb);
-
- /*
- * Try to create an inline extent.
- *
- * If we didn't compress the entire range, try to create an uncompressed
- * inline extent, else a compressed one.
- *
- * Check cow_file_range() for why we don't even try to create inline
- * extent for the subpage case.
- */
- if (total_in < actual_end)
- ret = cow_file_range_inline(inode, NULL, start, end, 0,
- BTRFS_COMPRESS_NONE, NULL, false);
- else
- ret = cow_file_range_inline(inode, NULL, start, end, total_compressed,
- compress_type,
- bio_first_folio_all(&cb->bbio.bio), false);
- if (ret <= 0) {
- cleanup_compressed_bio(cb);
- if (ret < 0)
- mapping_set_error(mapping, -EIO);
- return;
- }
-
/*
* We aren't doing an inline extent. Round the compressed size up to a
* block size boundary so the allocator does sane things.
*/
- total_compressed = ALIGN(total_compressed, blocksize);
round_up_last_block(cb, blocksize);
+ total_compressed = cb->bbio.bio.bi_iter.bi_size;
+ ASSERT(IS_ALIGNED(total_compressed, blocksize));
/*
* One last check to make sure the compression is really a win, compare
@@@ -1437,11 -1352,6 +1352,6 @@@ free_reserved
*
* When this function fails, it unlocks all folios except @locked_folio.
*
- * When this function successfully creates an inline extent, it returns 1 and
- * unlocks all folios including locked_folio and starts I/O on them.
- * (In reality inline extents are limited to a single block, so locked_folio is
- * the only folio handled anyway).
- *
* When this function succeed and creates a normal extent, the folio locking
* status depends on the passed in flags:
*
@@@ -1485,25 -1395,6 +1395,6 @@@ static noinline int cow_file_range(stru
ASSERT(num_bytes <= btrfs_super_total_bytes(fs_info->super_copy));
inode_should_defrag(inode, start, end, num_bytes, SZ_64K);
-
- if (!(flags & COW_FILE_RANGE_NO_INLINE)) {
- /* lets try to make an inline extent */
- ret = cow_file_range_inline(inode, locked_folio, start, end, 0,
- BTRFS_COMPRESS_NONE, NULL, false);
- if (ret <= 0) {
- /*
- * We succeeded, return 1 so the caller knows we're done
- * with this page and already handled the IO.
- *
- * If there was an error then cow_file_range_inline() has
- * already done the cleanup.
- */
- if (ret == 0)
- ret = 1;
- goto done;
- }
- }
-
alloc_hint = btrfs_get_extent_allocation_hint(inode, start, num_bytes);
/*
@@@ -1581,7 -1472,6 +1472,6 @@@
}
extent_clear_unlock_delalloc(inode, orig_start, end, locked_folio, &cached,
EXTENT_LOCKED | EXTENT_DELALLOC, page_ops);
- done:
if (done_offset)
*done_offset = end;
return ret;
@@@ -1701,7 -1591,7 +1591,7 @@@ static bool run_delalloc_compressed(str
struct async_cow *ctx;
struct async_chunk *async_chunk;
unsigned long nr_pages;
- u64 num_chunks = DIV_ROUND_UP(end - start, SZ_512K);
+ u64 num_chunks = DIV_ROUND_UP(end - start, BTRFS_COMPRESSION_CHUNK_SIZE);
int i;
unsigned nofs_flag;
const blk_opf_t write_flags = wbc_to_write_flags(wbc);
@@@ -1718,7 -1608,7 +1608,7 @@@
atomic_set(&ctx->num_chunks, num_chunks);
for (i = 0; i < num_chunks; i++) {
- u64 cur_end = min(end, start + SZ_512K - 1);
+ u64 cur_end = min(end, start + BTRFS_COMPRESSION_CHUNK_SIZE - 1);
/*
* igrab is called higher up in the call chain, take only the
@@@ -1853,7 -1743,7 +1743,7 @@@ static int fallback_to_cow(struct btrfs
*/
btrfs_lock_extent(io_tree, start, end, &cached_state);
count = btrfs_count_range_bits(io_tree, &range_start, end, range_bytes,
- EXTENT_NORESERVE, 0, NULL);
+ EXTENT_NORESERVE, false, NULL);
if (count > 0 || is_space_ino || is_reloc_ino) {
u64 bytes = count;
struct btrfs_fs_info *fs_info = inode->root->fs_info;
@@@ -1884,7 -1774,7 +1774,7 @@@
* a locked folio, which can race with writeback.
*/
ret = cow_file_range(inode, locked_folio, start, end, NULL,
- COW_FILE_RANGE_NO_INLINE | COW_FILE_RANGE_KEEP_LOCKED);
+ COW_FILE_RANGE_KEEP_LOCKED);
ASSERT(ret != 1);
return ret;
}
@@@ -1936,6 -1826,11 +1826,11 @@@ static int can_nocow_file_extent(struc
int ret = 0;
bool nowait = path->nowait;
+ /* If there are pending snapshots for this root, we must do COW. */
+ if (args->writeback_path && !is_freespace_inode &&
+ atomic_read(&root->snapshot_force_cow))
+ goto out;
+
fi = btrfs_item_ptr(leaf, path->slots[0], struct btrfs_file_extent_item);
extent_type = btrfs_file_extent_type(leaf, fi);
@@@ -1997,11 -1892,6 +1892,6 @@@
path = NULL;
}
- /* If there are pending snapshots for this root, we must COW. */
- if (args->writeback_path && !is_freespace_inode &&
- atomic_read(&root->snapshot_force_cow))
- goto out;
-
args->file_extent.num_bytes = min(args->end + 1, extent_end) - args->start;
args->file_extent.offset += args->start - key->offset;
io_start = args->file_extent.disk_bytenr + args->file_extent.offset;
@@@ -2435,6 -2325,91 +2325,91 @@@ static bool should_nocow(struct btrfs_i
return false;
}
+ /*
+ * Return 0 if an inline extent is created successfully.
+ * Return <0 if a critical error happened.
+ * Return >0 if an inline extent can not be created.
+ */
+ static int run_delalloc_inline(struct btrfs_inode *inode, struct folio *locked_folio)
+ {
+ struct btrfs_fs_info *fs_info = inode->root->fs_info;
+ struct compressed_bio *cb = NULL;
+ struct extent_state *cached = NULL;
+ const u64 i_size = i_size_read(&inode->vfs_inode);
+ const u32 blocksize = fs_info->sectorsize;
+ int compress_type = fs_info->compress_type;
+ int compress_level = fs_info->compress_level;
+ u32 compressed_size = 0;
+ int ret;
+
+ ASSERT(folio_pos(locked_folio) == 0);
+
+ if (btrfs_inode_can_compress(inode) &&
+ inode_need_compress(inode, 0, blocksize, true)) {
+ if (inode->defrag_compress > 0 &&
+ inode->defrag_compress < BTRFS_NR_COMPRESS_TYPES) {
+ compress_type = inode->defrag_compress;
+ compress_level = inode->defrag_compress_level;
+ } else if (inode->prop_compress) {
+ compress_type = inode->prop_compress;
+ }
+ cb = btrfs_compress_bio(inode, 0, blocksize, compress_type, compress_level, 0);
+ if (IS_ERR(cb)) {
+ cb = NULL;
+ /* Just fall back to non-compressed case. */
+ } else {
+ compressed_size = cb->bbio.bio.bi_iter.bi_size;
+ }
+ }
+ if (!can_cow_file_range_inline(inode, 0, i_size, compressed_size)) {
+ if (cb)
+ cleanup_compressed_bio(cb);
+ return 1;
+ }
+
+ btrfs_lock_extent(&inode->io_tree, 0, blocksize - 1, &cached);
+ if (cb) {
+ ret = __cow_file_range_inline(inode, i_size, compressed_size, compress_type,
+ bio_first_folio_all(&cb->bbio.bio), false);
+ cleanup_compressed_bio(cb);
+ cb = NULL;
+ } else {
+ ret = __cow_file_range_inline(inode, i_size, 0, BTRFS_COMPRESS_NONE,
+ NULL, false);
+ }
+ /*
+ * We failed to insert the inline extent due to lack of metadata space.
+ * Just unlock the extent io range and fall back to the regular COW/NOCOW path.
+ */
+ if (ret > 0) {
+ btrfs_unlock_extent(&inode->io_tree, 0, blocksize - 1, &cached);
+ return ret;
+ }
+
+ /*
+ * In the successful case (ret == 0 here), btrfs_run_delalloc_range()
+ * will return 1.
+ *
+ * Quite a bit further up the callstack in extent_writepage(), ret == 1
+ * is treated as a short circuited success and does not unlock the folio,
+ * so we must do it here.
+ *
+ * In the failure case, @locked_folio does get unlocked by
+ * btrfs_folio_end_lock_bitmap(), so we must *not* unlock it here.
+ *
+ * So if ret == 0, we let extent_clear_unlock_delalloc() unlock the
+ * folio by passing NULL as @locked_folio.
+ * Otherwise pass @locked_folio as usual.
+ */
+ if (ret == 0)
+ locked_folio = NULL;
+ extent_clear_unlock_delalloc(inode, 0, blocksize - 1, locked_folio, &cached,
+ EXTENT_DELALLOC | EXTENT_DELALLOC_NEW | EXTENT_DEFRAG |
+ EXTENT_DO_ACCOUNTING | EXTENT_LOCKED,
+ PAGE_UNLOCK | PAGE_START_WRITEBACK | PAGE_END_WRITEBACK);
+ return ret;
+ }
+
/*
* Function to process delayed allocation (create CoW) for ranges which are
* being touched for the first time.
@@@ -2451,11 -2426,26 +2426,26 @@@ int btrfs_run_delalloc_range(struct btr
ASSERT(!(end <= folio_pos(locked_folio) ||
start >= folio_next_pos(locked_folio)));
+ if (start == 0 && end + 1 <= inode->root->fs_info->sectorsize &&
+ end + 1 >= inode->disk_i_size) {
+ int ret;
+
+ ret = run_delalloc_inline(inode, locked_folio);
+ if (ret < 0)
+ return ret;
+ if (ret == 0)
+ return 1;
+ /*
+ * Continue with regular handling if we can not create an
+ * inline extent.
+ */
+ }
+
if (should_nocow(inode, start, end))
return run_delalloc_nocow(inode, locked_folio, start, end);
if (btrfs_inode_can_compress(inode) &&
- inode_need_compress(inode, start, end) &&
+ inode_need_compress(inode, start, end, false) &&
run_delalloc_compressed(inode, locked_folio, start, end, wbc))
return 1;
@@@ -2745,17 -2735,19 +2735,19 @@@ void btrfs_clear_delalloc_extent(struc
}
/*
- * given a list of ordered sums record them in the inode. This happens
- * at IO completion time based on sums calculated at bio submission time.
+ * Given an ordered extent, insert all its checksums into the csum tree.
+ *
+ * This happens at IO completion time based on sums calculated at bio
+ * submission time.
*/
static int add_pending_csums(struct btrfs_trans_handle *trans,
- struct list_head *list)
+ struct btrfs_ordered_extent *oe)
{
struct btrfs_ordered_sum *sum;
struct btrfs_root *csum_root = NULL;
int ret;
- list_for_each_entry(sum, list, list) {
+ list_for_each_entry(sum, &oe->csum_list, list) {
if (!csum_root) {
csum_root = btrfs_csum_root(trans->fs_info,
sum->logical);
@@@ -2767,7 -2759,7 +2759,7 @@@
}
}
trans->adding_csums = true;
- ret = btrfs_csum_file_blocks(trans, csum_root, sum);
+ ret = btrfs_insert_data_csums(trans, csum_root, sum);
trans->adding_csums = false;
if (ret)
return ret;
@@@ -2956,7 -2948,9 +2948,9 @@@ out_page
* to reflect the errors and clean the page.
*/
mapping_set_error(folio->mapping, ret);
- btrfs_mark_ordered_io_finished(inode, folio, page_start,
+ btrfs_folio_clear_ordered(fs_info, folio, page_start,
+ folio_size(folio));
+ btrfs_mark_ordered_io_finished(inode, page_start,
folio_size(folio), !ret);
folio_clear_dirty_for_io(folio);
}
@@@ -3271,8 -3265,8 +3265,8 @@@ int btrfs_finish_one_ordered(struct btr
if (test_bit(BTRFS_ORDERED_NOCOW, &ordered_extent->flags)) {
/* Logic error */
- ASSERT(list_empty(&ordered_extent->list));
- if (unlikely(!list_empty(&ordered_extent->list))) {
+ ASSERT(list_empty(&ordered_extent->csum_list));
+ if (unlikely(!list_empty(&ordered_extent->csum_list))) {
ret = -EINVAL;
btrfs_abort_transaction(trans, ret);
goto out;
@@@ -3321,7 -3315,7 +3315,7 @@@
goto out;
}
- ret = add_pending_csums(trans, &ordered_extent->list);
+ ret = add_pending_csums(trans, ordered_extent);
if (unlikely(ret)) {
btrfs_abort_transaction(trans, ret);
goto out;
@@@ -3427,7 -3421,7 +3421,7 @@@ out
* This needs to be done to make sure anybody waiting knows we are done
* updating everything for this ordered extent.
*/
- btrfs_remove_ordered_extent(inode, ordered_extent);
+ btrfs_remove_ordered_extent(ordered_extent);
/* once for us */
btrfs_put_ordered_extent(ordered_extent);
@@@ -4697,7 -4691,7 +4691,7 @@@ static noinline int may_destroy_subvol(
dir_id = btrfs_super_root_dir(fs_info->super_copy);
di = btrfs_lookup_dir_item(NULL, fs_info->tree_root, path,
dir_id, &name, 0);
- if (di && !IS_ERR(di)) {
+ if (!IS_ERR_OR_NULL(di)) {
btrfs_dir_item_key_to_cpu(path->nodes[0], di, &key);
if (key.objectid == btrfs_root_id(root)) {
ret = -EPERM;
@@@ -6859,7 -6853,7 +6853,7 @@@ int btrfs_create_new_inode(struct btrfs
}
} else {
ret = btrfs_add_link(trans, BTRFS_I(dir), BTRFS_I(inode), name,
- 0, BTRFS_I(inode)->dir_index);
+ false, BTRFS_I(inode)->dir_index);
if (unlikely(ret)) {
btrfs_abort_transaction(trans, ret);
goto discard;
@@@ -7075,7 -7069,7 +7069,7 @@@ static int btrfs_link(struct dentry *ol
inode_set_ctime_current(inode);
ret = btrfs_add_link(trans, BTRFS_I(dir), BTRFS_I(inode),
- &fname.disk_name, 1, index);
+ &fname.disk_name, true, index);
if (ret)
goto fail;
@@@ -8173,7 -8167,7 +8167,7 @@@ void btrfs_destroy_inode(struct inode *
if (!freespace_inode)
btrfs_lockdep_acquire(root->fs_info, btrfs_ordered_extent);
- btrfs_remove_ordered_extent(inode, ordered);
+ btrfs_remove_ordered_extent(ordered);
btrfs_put_ordered_extent(ordered);
btrfs_put_ordered_extent(ordered);
}
@@@ -8495,14 -8489,14 +8489,14 @@@ static int btrfs_rename_exchange(struc
}
ret = btrfs_add_link(trans, BTRFS_I(new_dir), BTRFS_I(old_inode),
- new_name, 0, old_idx);
+ new_name, false, old_idx);
if (unlikely(ret)) {
btrfs_abort_transaction(trans, ret);
goto out_fail;
}
ret = btrfs_add_link(trans, BTRFS_I(old_dir), BTRFS_I(new_inode),
- old_name, 0, new_idx);
+ old_name, false, new_idx);
if (unlikely(ret)) {
btrfs_abort_transaction(trans, ret);
goto out_fail;
@@@ -8793,7 -8787,7 +8787,7 @@@ static int btrfs_rename(struct mnt_idma
}
ret = btrfs_add_link(trans, BTRFS_I(new_dir), BTRFS_I(old_inode),
- &new_fname.disk_name, 0, index);
+ &new_fname.disk_name, false, index);
if (unlikely(ret)) {
btrfs_abort_transaction(trans, ret);
goto out_fail;
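Carrying a fix like this across daily rebuilds, as offered above, is commonly automated with git's rerere cache. A scratch-repo sketch (names and contents are made up) of a resolution being recorded once and then replayed:

```shell
# Scratch-repo sketch (all names hypothetical): git rerere records a
# conflict resolution and replays it when the same conflict recurs.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main repo && cd repo
git config user.email test@example.com
git config user.name test
git config rerere.enabled true
echo base > inode.c; git add inode.c; git commit -qm base
git checkout -qb fixes
echo fixes-change > inode.c; git commit -qam fixes
git checkout -q main
echo tree-change > inode.c; git commit -qam tree
git merge fixes || true          # conflict: rerere records the preimage
echo resolved > inode.c          # hand-resolve once
git add inode.c; git commit -qm merge
git reset --hard -q HEAD^        # throw the merge away entirely
git merge fixes || true          # same conflict: rerere replays the resolution
cat inode.c                      # working tree already holds the resolved content
```

The second merge still stops for review (the file is left unmerged in the index unless `rerere.autoUpdate` is set), but the working-tree content is already the recorded resolution.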