* split btrfs_load_block_group_zone_info
@ 2023-06-05 8:51 Christoph Hellwig
2023-06-05 8:51 ` [PATCH 1/4] btrfs: introduce a zone_info struct to structure btrfs_load_block_group_zone_info Christoph Hellwig
` (4 more replies)
0 siblings, 5 replies; 12+ messages in thread
From: Christoph Hellwig @ 2023-06-05 8:51 UTC
To: Chris Mason, Josef Bacik, David Sterba
Cc: Johannes Thumshirn, Naohiro Aota, linux-btrfs
Hi all,
this series splits btrfs_load_block_group_zone_info into more
maintainable chunks. The function is already pretty big, and with the
raid-stripe-tree it would grow even more.
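For orientation, here is the rough shape btrfs_load_block_group_zone_info
takes once all four patches are applied (a simplified sketch, not the
literal result; setup, error handling and the RAID profile cases are
elided):

	for (i = 0; i < map->num_stripes; i++) {
		ret = btrfs_load_zone_info(fs_info, i, &zone_info[i], active, map);
		if (ret)
			goto out;
		/* conventional vs. sequential zone counting elided */
	}

	switch (map->type & BTRFS_BLOCK_GROUP_PROFILE_MASK) {
	case 0: /* single */
		ret = btrfs_load_block_group_single(cache, &zone_info[0], active);
		break;
	case BTRFS_BLOCK_GROUP_DUP:
		ret = btrfs_load_block_group_dup(cache, map, zone_info, active);
		break;
	/* the RAID profiles stay open-coded in this series */
	}

The raid-stripe-tree work can then extend the per-profile helpers instead
of growing one huge function even further.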
* [PATCH 1/4] btrfs: introduce a zone_info struct to structure btrfs_load_block_group_zone_info
2023-06-05 8:51 split btrfs_load_block_group_zone_info Christoph Hellwig
@ 2023-06-05 8:51 ` Christoph Hellwig
2023-06-05 10:09 ` Johannes Thumshirn
2023-06-05 8:51 ` [PATCH 2/4] btrfs: factor out the per-zone logic from btrfs_load_block_group_zone_info Christoph Hellwig
` (3 subsequent siblings)
4 siblings, 1 reply; 12+ messages in thread
From: Christoph Hellwig @ 2023-06-05 8:51 UTC
To: Chris Mason, Josef Bacik, David Sterba
Cc: Johannes Thumshirn, Naohiro Aota, linux-btrfs
Add a new zone_info structure to hold per-zone information in
btrfs_load_block_group_zone_info and prepare for breaking out helpers
from it.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
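The gist of the transformation, condensed from the diff below: three
parallel allocations collapse into one, and the per-stripe values travel
together from here on.

	/* before: three parallel arrays, each with its own allocation */
	u64 *alloc_offsets = kcalloc(map->num_stripes, sizeof(*alloc_offsets), GFP_NOFS);
	u64 *caps = kcalloc(map->num_stripes, sizeof(*caps), GFP_NOFS);
	u64 *physical = kcalloc(map->num_stripes, sizeof(*physical), GFP_NOFS);

	/* after: one array of structs keeps the per-zone state together */
	struct zone_info *zone_info = kcalloc(map->num_stripes, sizeof(*zone_info), GFP_NOFS);

Besides saving two allocations and their error unwinding, this gives the
helpers split out later in the series a single struct zone_info * to
operate on.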
fs/btrfs/zoned.c | 84 ++++++++++++++++++++++--------------------------
1 file changed, 38 insertions(+), 46 deletions(-)
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index 1f5497b9b2695c..397b8c962eab50 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -1276,6 +1276,12 @@ static int calculate_alloc_pointer(struct btrfs_block_group *cache,
return ret;
}
+struct zone_info {
+ u64 physical;
+ u64 capacity;
+ u64 alloc_offset;
+};
+
int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
{
struct btrfs_fs_info *fs_info = cache->fs_info;
@@ -1285,12 +1291,10 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
struct btrfs_device *device;
u64 logical = cache->start;
u64 length = cache->length;
+ struct zone_info *zone_info = NULL;
int ret;
int i;
unsigned int nofs_flag;
- u64 *alloc_offsets = NULL;
- u64 *caps = NULL;
- u64 *physical = NULL;
unsigned long *active = NULL;
u64 last_alloc = 0;
u32 num_sequential = 0, num_conventional = 0;
@@ -1322,20 +1326,8 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
goto out;
}
- alloc_offsets = kcalloc(map->num_stripes, sizeof(*alloc_offsets), GFP_NOFS);
- if (!alloc_offsets) {
- ret = -ENOMEM;
- goto out;
- }
-
- caps = kcalloc(map->num_stripes, sizeof(*caps), GFP_NOFS);
- if (!caps) {
- ret = -ENOMEM;
- goto out;
- }
-
- physical = kcalloc(map->num_stripes, sizeof(*physical), GFP_NOFS);
- if (!physical) {
+ zone_info = kcalloc(map->num_stripes, sizeof(*zone_info), GFP_NOFS);
+ if (!zone_info) {
ret = -ENOMEM;
goto out;
}
@@ -1347,20 +1339,21 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
}
for (i = 0; i < map->num_stripes; i++) {
+ struct zone_info *info = zone_info + i;
bool is_sequential;
struct blk_zone zone;
struct btrfs_dev_replace *dev_replace = &fs_info->dev_replace;
int dev_replace_is_ongoing = 0;
device = map->stripes[i].dev;
- physical[i] = map->stripes[i].physical;
+ info->physical = map->stripes[i].physical;
if (device->bdev == NULL) {
- alloc_offsets[i] = WP_MISSING_DEV;
+ info->alloc_offset = WP_MISSING_DEV;
continue;
}
- is_sequential = btrfs_dev_is_sequential(device, physical[i]);
+ is_sequential = btrfs_dev_is_sequential(device, info->physical);
if (is_sequential)
num_sequential++;
else
@@ -1374,7 +1367,7 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
__set_bit(i, active);
if (!is_sequential) {
- alloc_offsets[i] = WP_CONVENTIONAL;
+ info->alloc_offset = WP_CONVENTIONAL;
continue;
}
@@ -1382,25 +1375,25 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
* This zone will be used for allocation, so mark this zone
* non-empty.
*/
- btrfs_dev_clear_zone_empty(device, physical[i]);
+ btrfs_dev_clear_zone_empty(device, info->physical);
down_read(&dev_replace->rwsem);
dev_replace_is_ongoing = btrfs_dev_replace_is_ongoing(dev_replace);
if (dev_replace_is_ongoing && dev_replace->tgtdev != NULL)
- btrfs_dev_clear_zone_empty(dev_replace->tgtdev, physical[i]);
+ btrfs_dev_clear_zone_empty(dev_replace->tgtdev, info->physical);
up_read(&dev_replace->rwsem);
/*
* The group is mapped to a sequential zone. Get the zone write
* pointer to determine the allocation offset within the zone.
*/
- WARN_ON(!IS_ALIGNED(physical[i], fs_info->zone_size));
+ WARN_ON(!IS_ALIGNED(info->physical, fs_info->zone_size));
nofs_flag = memalloc_nofs_save();
- ret = btrfs_get_dev_zone(device, physical[i], &zone);
+ ret = btrfs_get_dev_zone(device, info->physical, &zone);
memalloc_nofs_restore(nofs_flag);
if (ret == -EIO || ret == -EOPNOTSUPP) {
ret = 0;
- alloc_offsets[i] = WP_MISSING_DEV;
+ info->alloc_offset = WP_MISSING_DEV;
continue;
} else if (ret) {
goto out;
@@ -1415,26 +1408,26 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
goto out;
}
- caps[i] = (zone.capacity << SECTOR_SHIFT);
+ info->capacity = (zone.capacity << SECTOR_SHIFT);
switch (zone.cond) {
case BLK_ZONE_COND_OFFLINE:
case BLK_ZONE_COND_READONLY:
btrfs_err(fs_info,
"zoned: offline/readonly zone %llu on device %s (devid %llu)",
- physical[i] >> device->zone_info->zone_size_shift,
+ info->physical >> device->zone_info->zone_size_shift,
rcu_str_deref(device->name), device->devid);
- alloc_offsets[i] = WP_MISSING_DEV;
+ info->alloc_offset = WP_MISSING_DEV;
break;
case BLK_ZONE_COND_EMPTY:
- alloc_offsets[i] = 0;
+ info->alloc_offset = 0;
break;
case BLK_ZONE_COND_FULL:
- alloc_offsets[i] = caps[i];
+ info->alloc_offset = info->capacity;
break;
default:
/* Partially used zone */
- alloc_offsets[i] =
+ info->alloc_offset =
((zone.wp - zone.start) << SECTOR_SHIFT);
__set_bit(i, active);
break;
@@ -1462,15 +1455,15 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
switch (map->type & BTRFS_BLOCK_GROUP_PROFILE_MASK) {
case 0: /* single */
- if (alloc_offsets[0] == WP_MISSING_DEV) {
+ if (zone_info[0].alloc_offset == WP_MISSING_DEV) {
btrfs_err(fs_info,
"zoned: cannot recover write pointer for zone %llu",
- physical[0]);
+ zone_info[0].physical);
ret = -EIO;
goto out;
}
- cache->alloc_offset = alloc_offsets[0];
- cache->zone_capacity = caps[0];
+ cache->alloc_offset = zone_info[0].alloc_offset;
+ cache->zone_capacity = zone_info[0].capacity;
if (test_bit(0, active))
set_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &cache->runtime_flags);
break;
@@ -1480,21 +1473,21 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
ret = -EINVAL;
goto out;
}
- if (alloc_offsets[0] == WP_MISSING_DEV) {
+ if (zone_info[0].alloc_offset == WP_MISSING_DEV) {
btrfs_err(fs_info,
"zoned: cannot recover write pointer for zone %llu",
- physical[0]);
+ zone_info[0].physical);
ret = -EIO;
goto out;
}
- if (alloc_offsets[1] == WP_MISSING_DEV) {
+ if (zone_info[1].alloc_offset == WP_MISSING_DEV) {
btrfs_err(fs_info,
"zoned: cannot recover write pointer for zone %llu",
- physical[1]);
+ zone_info[1].physical);
ret = -EIO;
goto out;
}
- if (alloc_offsets[0] != alloc_offsets[1]) {
+ if (zone_info[0].alloc_offset != zone_info[1].alloc_offset) {
btrfs_err(fs_info,
"zoned: write pointer offset mismatch of zones in DUP profile");
ret = -EIO;
@@ -1510,8 +1503,9 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
set_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE,
&cache->runtime_flags);
}
- cache->alloc_offset = alloc_offsets[0];
- cache->zone_capacity = min(caps[0], caps[1]);
+ cache->alloc_offset = zone_info[0].alloc_offset;
+ cache->zone_capacity = min(zone_info[0].capacity,
+ zone_info[1].capacity);
break;
case BTRFS_BLOCK_GROUP_RAID1:
case BTRFS_BLOCK_GROUP_RAID0:
@@ -1564,9 +1558,7 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
cache->physical_map = NULL;
}
bitmap_free(active);
- kfree(physical);
- kfree(caps);
- kfree(alloc_offsets);
+ kfree(zone_info);
free_extent_map(em);
return ret;
--
2.39.2
* [PATCH 2/4] btrfs: factor out the per-zone logic from btrfs_load_block_group_zone_info
2023-06-05 8:51 split btrfs_load_block_group_zone_info Christoph Hellwig
2023-06-05 8:51 ` [PATCH 1/4] btrfs: introduce a zone_info struct to structure btrfs_load_block_group_zone_info Christoph Hellwig
@ 2023-06-05 8:51 ` Christoph Hellwig
2023-06-05 10:15 ` Johannes Thumshirn
2023-06-05 8:51 ` [PATCH 3/4] btrfs: split out a helper to handle single BGs " Christoph Hellwig
` (2 subsequent siblings)
4 siblings, 1 reply; 12+ messages in thread
From: Christoph Hellwig @ 2023-06-05 8:51 UTC
To: Chris Mason, Josef Bacik, David Sterba
Cc: Johannes Thumshirn, Naohiro Aota, linux-btrfs
Split out a helper for the body of the per-zone loop in
btrfs_load_block_group_zone_info to make the function easier to read and
modify.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
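Beyond the mechanical move, note the error handling contract of the new
helper: it only returns an error for conditions the caller cannot recover
from, and folds the old -EIO/-EOPNOTSUPP special case into a single
check. A condensed sketch of the contract as seen from the call site (the
comments are annotation, not part of the patch):

	ret = btrfs_load_zone_info(fs_info, i, &zone_info[i], active, map);
	/*
	 * ret < 0                                hard error, abort the load
	 * info->alloc_offset == WP_MISSING_DEV   missing device or zone info,
	 *                                        left to the profile code
	 * info->alloc_offset == WP_CONVENTIONAL  conventional zone
	 * otherwise                              write pointer recovered
	 */

This is also why the num_sequential/num_conventional counting now keys
off the returned alloc_offset instead of a local is_sequential flag.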
fs/btrfs/zoned.c | 191 ++++++++++++++++++++++++-----------------------
1 file changed, 98 insertions(+), 93 deletions(-)
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index 397b8c962eab50..533cbe849cd60f 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -1282,19 +1282,108 @@ struct zone_info {
u64 alloc_offset;
};
+static int btrfs_load_zone_info(struct btrfs_fs_info *fs_info, int zone_idx,
+ struct zone_info *info, unsigned long *active,
+ struct map_lookup *map)
+{
+ struct btrfs_dev_replace *dev_replace = &fs_info->dev_replace;
+ struct btrfs_device *device = map->stripes[zone_idx].dev;
+ int dev_replace_is_ongoing = 0;
+ unsigned int nofs_flag;
+ struct blk_zone zone;
+ int ret;
+
+ info->physical = map->stripes[zone_idx].physical;
+
+ if (!device->bdev) {
+ info->alloc_offset = WP_MISSING_DEV;
+ return 0;
+ }
+
+ /*
+ * Consider a zone as active if we can allow any number of
+ * active zones.
+ */
+ if (!device->zone_info->max_active_zones)
+ __set_bit(zone_idx, active);
+
+ if (!btrfs_dev_is_sequential(device, info->physical)) {
+ info->alloc_offset = WP_CONVENTIONAL;
+ return 0;
+ }
+
+ /*
+ * This zone will be used for allocation, so mark this zone non-empty.
+ */
+ btrfs_dev_clear_zone_empty(device, info->physical);
+
+ down_read(&dev_replace->rwsem);
+ dev_replace_is_ongoing = btrfs_dev_replace_is_ongoing(dev_replace);
+ if (dev_replace_is_ongoing && dev_replace->tgtdev != NULL)
+ btrfs_dev_clear_zone_empty(dev_replace->tgtdev, info->physical);
+ up_read(&dev_replace->rwsem);
+
+ /*
+ * The group is mapped to a sequential zone. Get the zone write pointer
+ * to determine the allocation offset within the zone.
+ */
+ WARN_ON(!IS_ALIGNED(info->physical, fs_info->zone_size));
+ nofs_flag = memalloc_nofs_save();
+ ret = btrfs_get_dev_zone(device, info->physical, &zone);
+ memalloc_nofs_restore(nofs_flag);
+ if (ret) {
+ if (ret != -EIO && ret != -EOPNOTSUPP)
+ return ret;
+ info->alloc_offset = WP_MISSING_DEV;
+ return 0;
+ }
+
+ if (zone.type == BLK_ZONE_TYPE_CONVENTIONAL) {
+ btrfs_err_in_rcu(fs_info,
+ "zoned: unexpected conventional zone %llu on device %s (devid %llu)",
+ zone.start << SECTOR_SHIFT,
+ rcu_str_deref(device->name), device->devid);
+ return -EIO;
+ }
+
+ info->capacity = (zone.capacity << SECTOR_SHIFT);
+
+ switch (zone.cond) {
+ case BLK_ZONE_COND_OFFLINE:
+ case BLK_ZONE_COND_READONLY:
+ btrfs_err(fs_info,
+ "zoned: offline/readonly zone %llu on device %s (devid %llu)",
+ info->physical >> device->zone_info->zone_size_shift,
+ rcu_str_deref(device->name), device->devid);
+ info->alloc_offset = WP_MISSING_DEV;
+ break;
+ case BLK_ZONE_COND_EMPTY:
+ info->alloc_offset = 0;
+ break;
+ case BLK_ZONE_COND_FULL:
+ info->alloc_offset = info->capacity;
+ break;
+ default:
+ /* Partially used zone */
+ info->alloc_offset = ((zone.wp - zone.start) << SECTOR_SHIFT);
+ __set_bit(zone_idx, active);
+ break;
+ }
+
+ return 0;
+}
+
int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
{
struct btrfs_fs_info *fs_info = cache->fs_info;
struct extent_map_tree *em_tree = &fs_info->mapping_tree;
struct extent_map *em;
struct map_lookup *map;
- struct btrfs_device *device;
u64 logical = cache->start;
u64 length = cache->length;
struct zone_info *zone_info = NULL;
int ret;
int i;
- unsigned int nofs_flag;
unsigned long *active = NULL;
u64 last_alloc = 0;
u32 num_sequential = 0, num_conventional = 0;
@@ -1339,99 +1428,15 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
}
for (i = 0; i < map->num_stripes; i++) {
- struct zone_info *info = zone_info + i;
- bool is_sequential;
- struct blk_zone zone;
- struct btrfs_dev_replace *dev_replace = &fs_info->dev_replace;
- int dev_replace_is_ongoing = 0;
-
- device = map->stripes[i].dev;
- info->physical = map->stripes[i].physical;
-
- if (device->bdev == NULL) {
- info->alloc_offset = WP_MISSING_DEV;
- continue;
- }
-
- is_sequential = btrfs_dev_is_sequential(device, info->physical);
- if (is_sequential)
- num_sequential++;
- else
- num_conventional++;
-
- /*
- * Consider a zone as active if we can allow any number of
- * active zones.
- */
- if (!device->zone_info->max_active_zones)
- __set_bit(i, active);
-
- if (!is_sequential) {
- info->alloc_offset = WP_CONVENTIONAL;
- continue;
- }
-
- /*
- * This zone will be used for allocation, so mark this zone
- * non-empty.
- */
- btrfs_dev_clear_zone_empty(device, info->physical);
-
- down_read(&dev_replace->rwsem);
- dev_replace_is_ongoing = btrfs_dev_replace_is_ongoing(dev_replace);
- if (dev_replace_is_ongoing && dev_replace->tgtdev != NULL)
- btrfs_dev_clear_zone_empty(dev_replace->tgtdev, info->physical);
- up_read(&dev_replace->rwsem);
-
- /*
- * The group is mapped to a sequential zone. Get the zone write
- * pointer to determine the allocation offset within the zone.
- */
- WARN_ON(!IS_ALIGNED(info->physical, fs_info->zone_size));
- nofs_flag = memalloc_nofs_save();
- ret = btrfs_get_dev_zone(device, info->physical, &zone);
- memalloc_nofs_restore(nofs_flag);
- if (ret == -EIO || ret == -EOPNOTSUPP) {
- ret = 0;
- info->alloc_offset = WP_MISSING_DEV;
- continue;
- } else if (ret) {
- goto out;
- }
-
- if (zone.type == BLK_ZONE_TYPE_CONVENTIONAL) {
- btrfs_err_in_rcu(fs_info,
- "zoned: unexpected conventional zone %llu on device %s (devid %llu)",
- zone.start << SECTOR_SHIFT,
- rcu_str_deref(device->name), device->devid);
- ret = -EIO;
+ ret = btrfs_load_zone_info(fs_info, i, &zone_info[i], active,
+ map);
+ if (ret)
goto out;
- }
- info->capacity = (zone.capacity << SECTOR_SHIFT);
-
- switch (zone.cond) {
- case BLK_ZONE_COND_OFFLINE:
- case BLK_ZONE_COND_READONLY:
- btrfs_err(fs_info,
- "zoned: offline/readonly zone %llu on device %s (devid %llu)",
- info->physical >> device->zone_info->zone_size_shift,
- rcu_str_deref(device->name), device->devid);
- info->alloc_offset = WP_MISSING_DEV;
- break;
- case BLK_ZONE_COND_EMPTY:
- info->alloc_offset = 0;
- break;
- case BLK_ZONE_COND_FULL:
- info->alloc_offset = info->capacity;
- break;
- default:
- /* Partially used zone */
- info->alloc_offset =
- ((zone.wp - zone.start) << SECTOR_SHIFT);
- __set_bit(i, active);
- break;
- }
+ if (zone_info[i].alloc_offset == WP_CONVENTIONAL)
+ num_conventional++;
+ else
+ num_sequential++;
}
if (num_sequential > 0)
--
2.39.2
* [PATCH 3/4] btrfs: split out a helper to handle single BGs from btrfs_load_block_group_zone_info
2023-06-05 8:51 split btrfs_load_block_group_zone_info Christoph Hellwig
2023-06-05 8:51 ` [PATCH 1/4] btrfs: introduce a zone_info struct to structure btrfs_load_block_group_zone_info Christoph Hellwig
2023-06-05 8:51 ` [PATCH 2/4] btrfs: factor out the per-zone logic from btrfs_load_block_group_zone_info Christoph Hellwig
@ 2023-06-05 8:51 ` Christoph Hellwig
2023-06-05 10:15 ` Johannes Thumshirn
2023-06-05 8:51 ` [PATCH 4/4] btrfs: split out a helper to handle dup " Christoph Hellwig
2023-07-20 13:32 ` split btrfs_load_block_group_zone_info Christoph Hellwig
4 siblings, 1 reply; 12+ messages in thread
From: Christoph Hellwig @ 2023-06-05 8:51 UTC
To: Chris Mason, Josef Bacik, David Sterba
Cc: Johannes Thumshirn, Naohiro Aota, linux-btrfs
Split the code handling a single-profile block group from
btrfs_load_block_group_zone_info to make the code more readable.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
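A note on the arguments: a block group with the single profile maps
exactly one stripe, so only slot 0 of the per-stripe state is meaningful
and the call site passes just that slot:

	ret = btrfs_load_block_group_single(cache, &zone_info[0], active);

The test_bit(0, active) check inside the helper mirrors this: bit 0 is
the only bit the single case can have set.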
fs/btrfs/zoned.c | 31 ++++++++++++++++++++-----------
1 file changed, 20 insertions(+), 11 deletions(-)
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index 533cbe849cd60f..ea1f7f26a42249 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -1373,6 +1373,24 @@ static int btrfs_load_zone_info(struct btrfs_fs_info *fs_info, int zone_idx,
return 0;
}
+static int btrfs_load_block_group_single(struct btrfs_block_group *bg,
+ struct zone_info *info,
+ unsigned long *active)
+{
+ if (info->alloc_offset == WP_MISSING_DEV) {
+ btrfs_err(bg->fs_info,
+ "zoned: cannot recover write pointer for zone %llu",
+ info->physical);
+ return -EIO;
+ }
+
+ bg->alloc_offset = info->alloc_offset;
+ bg->zone_capacity = info->capacity;
+ if (test_bit(0, active))
+ set_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &bg->runtime_flags);
+ return 0;
+}
+
int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
{
struct btrfs_fs_info *fs_info = cache->fs_info;
@@ -1460,17 +1478,8 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
switch (map->type & BTRFS_BLOCK_GROUP_PROFILE_MASK) {
case 0: /* single */
- if (zone_info[0].alloc_offset == WP_MISSING_DEV) {
- btrfs_err(fs_info,
- "zoned: cannot recover write pointer for zone %llu",
- zone_info[0].physical);
- ret = -EIO;
- goto out;
- }
- cache->alloc_offset = zone_info[0].alloc_offset;
- cache->zone_capacity = zone_info[0].capacity;
- if (test_bit(0, active))
- set_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &cache->runtime_flags);
+ ret = btrfs_load_block_group_single(cache, &zone_info[0],
+ active);
break;
case BTRFS_BLOCK_GROUP_DUP:
if (map->type & BTRFS_BLOCK_GROUP_DATA) {
--
2.39.2
* [PATCH 4/4] btrfs: split out a helper to handle dup BGs from btrfs_load_block_group_zone_info
2023-06-05 8:51 split btrfs_load_block_group_zone_info Christoph Hellwig
` (2 preceding siblings ...)
2023-06-05 8:51 ` [PATCH 3/4] btrfs: split out a helper to handle single BGs " Christoph Hellwig
@ 2023-06-05 8:51 ` Christoph Hellwig
2023-06-05 10:16 ` Johannes Thumshirn
2023-07-20 13:32 ` split btrfs_load_block_group_zone_info Christoph Hellwig
4 siblings, 1 reply; 12+ messages in thread
From: Christoph Hellwig @ 2023-06-05 8:51 UTC
To: Chris Mason, Josef Bacik, David Sterba
Cc: Johannes Thumshirn, Naohiro Aota, linux-btrfs
Split the code handling a DUP-profile block group from
btrfs_load_block_group_zone_info to make the code more readable.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
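One deliberately behavior-preserving detail: the old call site's

	} else {
		if (test_bit(0, active))
			set_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE,
				&cache->runtime_flags);
	}

collapses to a plain else-if in the helper. The three cases read more
clearly that way: exactly one of the two zones active means the whole
block group must be activated (or the load fails with -EIO), both active
means just recording the flag, neither active means doing nothing.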
fs/btrfs/zoned.c | 80 +++++++++++++++++++++++++-----------------------
1 file changed, 42 insertions(+), 38 deletions(-)
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index ea1f7f26a42249..7b575aca06236f 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -1391,6 +1391,47 @@ static int btrfs_load_block_group_single(struct btrfs_block_group *bg,
return 0;
}
+static int btrfs_load_block_group_dup(struct btrfs_block_group *bg,
+ struct map_lookup *map,
+ struct zone_info *zone_info,
+ unsigned long *active)
+{
+ if (map->type & BTRFS_BLOCK_GROUP_DATA) {
+ btrfs_err(bg->fs_info,
+ "zoned: profile DUP not yet supported on data bg");
+ return -EINVAL;
+ }
+
+ if (zone_info[0].alloc_offset == WP_MISSING_DEV) {
+ btrfs_err(bg->fs_info,
+ "zoned: cannot recover write pointer for zone %llu",
+ zone_info[0].physical);
+ return -EIO;
+ }
+ if (zone_info[1].alloc_offset == WP_MISSING_DEV) {
+ btrfs_err(bg->fs_info,
+ "zoned: cannot recover write pointer for zone %llu",
+ zone_info[1].physical);
+ return -EIO;
+ }
+ if (zone_info[0].alloc_offset != zone_info[1].alloc_offset) {
+ btrfs_err(bg->fs_info,
+ "zoned: write pointer offset mismatch of zones in DUP profile");
+ return -EIO;
+ }
+
+ if (test_bit(0, active) != test_bit(1, active)) {
+ if (!btrfs_zone_activate(bg))
+ return -EIO;
+ } else if (test_bit(0, active)) {
+ set_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &bg->runtime_flags);
+ }
+
+ bg->alloc_offset = zone_info[0].alloc_offset;
+ bg->zone_capacity = min(zone_info[0].capacity, zone_info[1].capacity);
+ return 0;
+}
+
int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
{
struct btrfs_fs_info *fs_info = cache->fs_info;
@@ -1482,44 +1523,7 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
active);
break;
case BTRFS_BLOCK_GROUP_DUP:
- if (map->type & BTRFS_BLOCK_GROUP_DATA) {
- btrfs_err(fs_info, "zoned: profile DUP not yet supported on data bg");
- ret = -EINVAL;
- goto out;
- }
- if (zone_info[0].alloc_offset == WP_MISSING_DEV) {
- btrfs_err(fs_info,
- "zoned: cannot recover write pointer for zone %llu",
- zone_info[0].physical);
- ret = -EIO;
- goto out;
- }
- if (zone_info[1].alloc_offset == WP_MISSING_DEV) {
- btrfs_err(fs_info,
- "zoned: cannot recover write pointer for zone %llu",
- zone_info[1].physical);
- ret = -EIO;
- goto out;
- }
- if (zone_info[0].alloc_offset != zone_info[1].alloc_offset) {
- btrfs_err(fs_info,
- "zoned: write pointer offset mismatch of zones in DUP profile");
- ret = -EIO;
- goto out;
- }
- if (test_bit(0, active) != test_bit(1, active)) {
- if (!btrfs_zone_activate(cache)) {
- ret = -EIO;
- goto out;
- }
- } else {
- if (test_bit(0, active))
- set_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE,
- &cache->runtime_flags);
- }
- cache->alloc_offset = zone_info[0].alloc_offset;
- cache->zone_capacity = min(zone_info[0].capacity,
- zone_info[1].capacity);
+ ret = btrfs_load_block_group_dup(cache, map, zone_info, active);
break;
case BTRFS_BLOCK_GROUP_RAID1:
case BTRFS_BLOCK_GROUP_RAID0:
--
2.39.2
* Re: [PATCH 1/4] btrfs: introduce a zone_info struct to structure btrfs_load_block_group_zone_info
2023-06-05 8:51 ` [PATCH 1/4] btrfs: introduce a zone_info struct to structure btrfs_load_block_group_zone_info Christoph Hellwig
@ 2023-06-05 10:09 ` Johannes Thumshirn
0 siblings, 0 replies; 12+ messages in thread
From: Johannes Thumshirn @ 2023-06-05 10:09 UTC
To: Christoph Hellwig, Chris Mason, Josef Bacik, David Sterba
Cc: Naohiro Aota, linux-btrfs@vger.kernel.org
Looks good,
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
* Re: [PATCH 2/4] btrfs: factor out the per-zone logic from btrfs_load_block_group_zone_info
2023-06-05 8:51 ` [PATCH 2/4] btrfs: factor out the per-zone logic from btrfs_load_block_group_zone_info Christoph Hellwig
@ 2023-06-05 10:15 ` Johannes Thumshirn
0 siblings, 0 replies; 12+ messages in thread
From: Johannes Thumshirn @ 2023-06-05 10:15 UTC
To: Christoph Hellwig, Chris Mason, Josef Bacik, David Sterba
Cc: Naohiro Aota, linux-btrfs@vger.kernel.org
Looks good,
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
* Re: [PATCH 3/4] btrfs: split out a helper to handle single BGs from btrfs_load_block_group_zone_info
2023-06-05 8:51 ` [PATCH 3/4] btrfs: split out a helper to handle single BGs " Christoph Hellwig
@ 2023-06-05 10:15 ` Johannes Thumshirn
0 siblings, 0 replies; 12+ messages in thread
From: Johannes Thumshirn @ 2023-06-05 10:15 UTC
To: Christoph Hellwig, Chris Mason, Josef Bacik, David Sterba
Cc: Naohiro Aota, linux-btrfs@vger.kernel.org
Looks good,
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
* Re: [PATCH 4/4] btrfs: split out a helper to handle dup BGs from btrfs_load_block_group_zone_info
2023-06-05 8:51 ` [PATCH 4/4] btrfs: split out a helper to handle dup " Christoph Hellwig
@ 2023-06-05 10:16 ` Johannes Thumshirn
0 siblings, 0 replies; 12+ messages in thread
From: Johannes Thumshirn @ 2023-06-05 10:16 UTC
To: Christoph Hellwig, Chris Mason, Josef Bacik, David Sterba
Cc: Naohiro Aota, linux-btrfs@vger.kernel.org
Looks good,
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
* Re: split btrfs_load_block_group_zone_info
2023-06-05 8:51 split btrfs_load_block_group_zone_info Christoph Hellwig
` (3 preceding siblings ...)
2023-06-05 8:51 ` [PATCH 4/4] btrfs: split out a helper to handle dup " Christoph Hellwig
@ 2023-07-20 13:32 ` Christoph Hellwig
2023-09-12 14:07 ` Johannes Thumshirn
4 siblings, 1 reply; 12+ messages in thread
From: Christoph Hellwig @ 2023-07-20 13:32 UTC
To: Chris Mason, Josef Bacik, David Sterba
Cc: Johannes Thumshirn, Naohiro Aota, linux-btrfs
Hi Dave,
can you take a look at this series? It's been out for almost 7
weeks and has collected a few reviews. The patches still apply fine to
the latest misc-next branch.
* Re: split btrfs_load_block_group_zone_info
2023-07-20 13:32 ` split btrfs_load_block_group_zone_info Christoph Hellwig
@ 2023-09-12 14:07 ` Johannes Thumshirn
2023-09-13 16:20 ` David Sterba
0 siblings, 1 reply; 12+ messages in thread
From: Johannes Thumshirn @ 2023-09-12 14:07 UTC
To: Christoph Hellwig, Chris Mason, Josef Bacik, David Sterba
Cc: Naohiro Aota, linux-btrfs@vger.kernel.org
On 20.07.23 15:33, Christoph Hellwig wrote:
> Hi Dave,
>
> can you take a look at this series? It's been out for almost 7
> weeks and has collected a few reviews. The patches still apply fine to
> the latest misc-next branch.
>
The series still applies just fine (verified with 'b4 shazam') and
builds nicely.
Can we please get this merged? That'll unclutter
btrfs_load_block_group_zone_info a lot.
Thanks,
Johannes
* Re: split btrfs_load_block_group_zone_info
2023-09-12 14:07 ` Johannes Thumshirn
@ 2023-09-13 16:20 ` David Sterba
0 siblings, 0 replies; 12+ messages in thread
From: David Sterba @ 2023-09-13 16:20 UTC
To: Johannes Thumshirn
Cc: Christoph Hellwig, Chris Mason, Josef Bacik, David Sterba,
Naohiro Aota, linux-btrfs@vger.kernel.org
On Tue, Sep 12, 2023 at 02:07:40PM +0000, Johannes Thumshirn wrote:
> On 20.07.23 15:33, Christoph Hellwig wrote:
> > Hi Dave,
> >
> > can you take a look at this series? It's been out for almost 7
> > weeks and collected a few review. The patches still apply fine to the
> > latest misc-next branch.
> >
>
> The series still applies just fine (just verified with 'b4 shazam') and
> builds nicely.
>
> Can we please get this merged? That'll unclutter
> btrfs_load_block_group_zone_info a lot.
Added to misc-next, thanks.