* [PATCH v20 00/12] Implement copy offload support
From: Nitesh Shetty @ 2024-05-20 10:20 UTC
To: Jens Axboe, Jonathan Corbet, Alasdair Kergon, Mike Snitzer,
Mikulas Patocka, Keith Busch, Christoph Hellwig, Sagi Grimberg,
Chaitanya Kulkarni, Alexander Viro, Christian Brauner, Jan Kara
Cc: martin.petersen, bvanassche, david, hare, damien.lemoal, anuj20.g,
joshi.k, nitheshshetty, gost.dev, Nitesh Shetty, linux-block,
linux-kernel, linux-doc, dm-devel, linux-nvme, linux-fsdevel
This patch series covers the points discussed in the past, most recently
at LSFMM'24[0]. It implements the initially agreed requirements along
with additional features suggested by the community, and is the next
iteration of our previous patch set, v19[1].
Copy offload is performed using two bios:
1. Take a plug.
2. Prepare and send the first bio, carrying the destination info; a
   request is formed from it.
3. Prepare and send the second bio, carrying the source info.
4. This bio is merged with the request containing the destination info.
5. Release the plug; the request carrying both the source and
   destination bios is sent to the driver.
This design avoids putting a payload (token) in the request, since
sending a payload that is not data to the device is considered a bad
approach.
So copy offload works only for request-based storage drivers.
When the copy offload capability is absent, copy emulation can be
used instead.
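In pseudo-C the sequence looks roughly like this. This is an
illustrative sketch, not the series code: it assumes the
REQ_OP_COPY_DST/SRC opcodes this series adds, assumes bdev,
dst_sector, src_sector and copy_len are set up by the caller, and
omits error handling and completion plumbing:

	struct blk_plug plug;
	struct bio *dst_bio, *src_bio;

	blk_start_plug(&plug);			/* step 1: take a plug */

	/* step 2: the destination bio goes first and forms the request */
	dst_bio = bio_alloc(bdev, 0, REQ_OP_COPY_DST, GFP_KERNEL);
	dst_bio->bi_iter.bi_sector = dst_sector;
	dst_bio->bi_iter.bi_size = copy_len;	/* no data pages attached */
	submit_bio(dst_bio);

	/* steps 3-4: the source bio follows and merges into that request */
	src_bio = bio_alloc(bdev, 0, REQ_OP_COPY_SRC, GFP_KERNEL);
	src_bio->bi_iter.bi_sector = src_sector;
	src_bio->bi_iter.bi_size = copy_len;
	submit_bio(src_bio);

	blk_finish_plug(&plug);			/* step 5: request reaches the driver */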
Overall series supports:
========================
1. Driver
- NVMe Copy command (single NS, TP 4065), including support
in nvme-target (for block and file back end).
2. Block layer
- Block-generic copy (REQ_OP_COPY_DST/SRC) operation, with an
  interface accommodating two block devices
- Merging copy requests in the request layer
- Emulation, for in-kernel users when offload is natively
  absent
- dm-linear support (for cases not requiring split)
3. User-interface
- copy_file_range
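From userspace the new path is reached through the plain
copy_file_range(2) system call on a block device. A minimal sketch
(the device path is a placeholder, O_DIRECT reflects the direct-I/O
path this series targets, and error handling is omitted):

	#define _GNU_SOURCE
	#include <sys/types.h>
	#include <fcntl.h>
	#include <unistd.h>

	int main(void)
	{
		/* copy 1 MiB from offset 0 to offset 1 MiB on one device */
		int in = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
		int out = open("/dev/nvme0n1", O_WRONLY | O_DIRECT);
		loff_t off_in = 0, off_out = 1 << 20;

		copy_file_range(in, &off_in, out, &off_out, 1 << 20, 0);
		close(in);
		close(out);
		return 0;
	}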
Testing
=======
Copy offload can be tested on:
a. QEMU: NVMe simple copy (TP 4065), by setting the nvme-ns
parameters mssrl, mcl and msrc. For more info, see [2].
b. Null block device
c. NVMe Fabrics loopback.
d. blktests[3]
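As an illustration, simple copy can be enabled on an emulated QEMU
namespace with an invocation along these lines (the parameter values
here are arbitrary examples; see [2] for their meaning):

	qemu-system-x86_64 -m 1024 -smp 2 \
		-drive id=nvm,file=nvm.img,format=raw,if=none \
		-device nvme,serial=deadbeef \
		-device nvme-ns,drive=nvm,mssrl=128,mcl=128,msrc=4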
Emulation can be tested on any device. Copy workloads for both
offload and emulation can be generated with fio[4].
Infra and plumbing:
===================
We populate the copy_file_range callback in def_blk_fops.
For devices that support copy offload, blkdev_copy_offload is used to
perform the copy inside the device.
When the device does not support offload, splice_copy_file_range is
used instead.
In-kernel users (such as NVMe fabrics) call blkdev_copy_offload if
the device is copy-offload capable, and otherwise fall back to
emulation via blkdev_copy_emulation.
The checks in copy_file_range are modified to support block devices.
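A rough sketch of that dispatch (illustrative only, not the patch
code: the blkdev_copy_offload() signature is simplified here, and
bdev_max_copy_sectors() stands in for the actual capability check):

	static ssize_t blkdev_copy_file_range(struct file *file_in, loff_t pos_in,
					      struct file *file_out, loff_t pos_out,
					      size_t len, unsigned int flags)
	{
		struct block_device *bdev = I_BDEV(file_out->f_mapping->host);

		/* device offloads the copy: no data moves through the host */
		if (bdev_max_copy_sectors(bdev))
			return blkdev_copy_offload(bdev, pos_in, pos_out, len,
						   GFP_KERNEL);

		/* no offload capability: fall back to a pagecache copy */
		return splice_copy_file_range(file_in, pos_in, file_out,
					      pos_out, len);
	}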
Blktests[3]
===========
tests/block/035-040: run copy offload and emulation on a null
block device.
tests/block/050,055: run copy offload and emulation on a test
NVMe block device.
tests/nvme/056-067: create a loop-backed fabrics device and
run copy offload and emulation.
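These run with the standard blktests harness from the tree in [3],
for example:

	cd blktests
	./check block/035 block/040
	./check nvme/056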
Future Work
===========
- loopback device copy offload support
- upstream fio to use copy offload
- upstream blktest to test copy offload
- update man pages for copy_file_range
- expand in-kernel users of copy offload
These are to be taken up after this minimal series is agreed upon.
Additional links:
=================
[0] https://lore.kernel.org/linux-nvme/CA+1E3rJ7BZ7LjQXXTdX+-0Edz=zT14mmPGMiVCzUgB33C60tbQ@mail.gmail.com/
https://lore.kernel.org/linux-nvme/f0e19ae4-b37a-e9a3-2be7-a5afb334a5c3@nvidia.com/
https://lore.kernel.org/linux-nvme/20230113094648.15614-1-nj.shetty@samsung.com/
[1] https://lore.kernel.org/linux-nvme/20231222061313.12260-1-nj.shetty@samsung.com/
[2] https://qemu-project.gitlab.io/qemu/system/devices/nvme.html#simple-copy
[3] https://github.com/nitesh-shetty/blktests/tree/feat/copy_offload/v19
[4] https://github.com/SamsungDS/fio/tree/copyoffload-3.35-v14
Changes since v19:
=================
- block, nvme: update queue limits atomically
Also removed the Reviewed-by tags from Hannes and Luis
for these patches, as they changed significantly from
the previous version.
- vfs: generic_copy_file_range to splice_file_range
- rebased to latest linux-next
Changes since v18:
=================
- block, nvmet, null: change of copy dst/src request opcodes from
19,21 to 18,19 (Keith Busch)
Change the copy bio submission order to destination copy bio
first, followed by source copy bio.
Changes since v17:
=================
- block, nvmet: static error fixes (Dan Carpenter, kernel test robot)
- nvmet: pass COPY_FILE_SPLICE flag for vfs_copy_file_range in case
file backed nvmet device
Changes since v16:
=================
- block: fixed memory leaks and renamed function (Jinyoung Choi)
- nvmet: static error fixes (kernel test robot)
Changes since v15:
=================
- fs, nvmet: don't fallback to copy emulation for copy offload IO
failure, user can still use emulation by disabling
device offload (Hannes)
- block: patch, function description changes (Hannes)
- added Reviewed-by from Hannes and Luis.
Changes since v14:
=================
- block: (Bart Van Assche)
1. BLK_ prefix addition to COPY_MAX_BYES and COPY_MAX_SEGMENTS
2. Improved function, patch and cover-letter descriptions
3. Simplified refcount updating.
- null-blk, nvme:
4. static warning fixes (kernel test robot)
Changes since v13:
=================
- block:
1. Simplified copy offload and emulation helpers, now
caller needs to decide between offload/emulation fallback
2. src, dst bio order change (Christoph Hellwig)
3. refcount changes similar to dio (Christoph Hellwig)
4. Single outstanding IO for copy emulation (Christoph Hellwig)
5. use copy_max_sectors to identify copy offload
capability and other reviews (Damien, Christoph)
6. Return status in endio handler (Christoph Hellwig)
- nvme-fabrics: fallback to emulation in case of partial
offload completion
- in-kernel user addition (Ming Lei)
- indentation, documentation, minor fixes, misc changes (Damien,
Christoph)
- blktests changes to test kernel changes
Changes since v12:
=================
- block,nvme: Replaced token based approach with request based
single namespace capable approach (Christoph Hellwig)
Changes since v11:
=================
- Documentation: Improved documentation (Damien Le Moal)
- block,nvme: ssize_t return values (Darrick J. Wong)
- block: token is allocated to SECTOR_SIZE (Matthew Wilcox)
- block: mem leak fix (Maurizio Lombardi)
Changes since v10:
=================
- NVMeOF: optimization in NVMe fabrics (Chaitanya Kulkarni)
- NVMeOF: sparse warnings (kernel test robot)
Changes since v9:
=================
- null_blk, improved documentation, minor fixes (Chaitanya Kulkarni)
- fio, expanded testing and minor fixes (Vincent Fu)
Changes since v8:
=================
- null_blk, copy_max_bytes_hw is made a configfs parameter
(Damien Le Moal)
- Negative error handling in copy_file_range (Christian Brauner)
- minor fixes, better documentation (Damien Le Moal)
- fio upgraded to 3.34 (Vincent Fu)
Changes since v7:
=================
- null block copy offload support for testing (Damien Le Moal)
- adding direct flag check for copy offload to block device,
as we are using generic_copy_file_range for cached cases.
- Minor fixes
Changes since v6:
=================
- copy_file_range instead of ioctl for direct block device
- Remove support for multi range (vectored) copy
- Remove ioctl interface for copy.
- Remove offload support in dm kcopyd.
Changes since v5:
=================
- Addition of blktests (Chaitanya Kulkarni)
- Minor fix for fabrics file backed path
- Remove buggy zonefs copy file range implementation.
Changes since v4:
=================
- make the offload and emulation design asynchronous (Hannes
Reinecke)
- fabrics loopback support
- sysfs naming improvements (Damien Le Moal)
- use kfree() instead of kvfree() in cio_await_completion
(Damien Le Moal)
- use ranges instead of rlist to represent range_entry (Damien
Le Moal)
- change argument ordering in blk_copy_offload suggested (Damien
Le Moal)
- removed multiple copy limit and merged into only one limit
(Damien Le Moal)
- wrap overly long lines (Damien Le Moal)
- other naming improvements and cleanups (Damien Le Moal)
- correctly format the code example in description (Damien Le
Moal)
- mark blk_copy_offload as static (kernel test robot)
Changes since v3:
=================
- added copy_file_range support for zonefs
- added documentation about new sysfs entries
- incorporated review comments on v3
- minor fixes
Changes since v2:
=================
- fixed possible race condition reported by Damien Le Moal
- new sysfs controls as suggested by Damien Le Moal
- fixed possible memory leak reported by Dan Carpenter, lkp
- minor fixes
Changes since v1:
=================
- sysfs documentation (Greg KH)
- 2 bios for copy operation (Bart Van Assche, Mikulas Patocka,
Martin K. Petersen, Douglas Gilbert)
- better payload design (Darrick J. Wong)
Anuj Gupta (1):
fs/read_write: Enable copy_file_range for block device.
Nitesh Shetty (11):
block: Introduce queue limits and sysfs for copy-offload support
Add infrastructure for copy offload in block and request layer.
block: add copy offload support
block: add emulation for copy
fs, block: copy_file_range for def_blk_ops for direct block device
nvme: add copy offload support
nvmet: add copy command support for bdev and file ns
dm: Add support for copy offload
dm: Enable copy offload for dm-linear target
null: Enable trace capability for null block
null_blk: add support for copy offload
Documentation/ABI/stable/sysfs-block | 23 ++
Documentation/block/null_blk.rst | 5 +
block/blk-core.c | 7 +
block/blk-lib.c | 427 +++++++++++++++++++++++++++
block/blk-merge.c | 41 +++
block/blk-settings.c | 34 ++-
block/blk-sysfs.c | 43 +++
block/blk.h | 16 +
block/elevator.h | 1 +
block/fops.c | 26 ++
drivers/block/null_blk/Makefile | 2 -
drivers/block/null_blk/main.c | 105 ++++++-
drivers/block/null_blk/null_blk.h | 1 +
drivers/block/null_blk/trace.h | 25 ++
drivers/block/null_blk/zoned.c | 1 -
drivers/md/dm-linear.c | 1 +
drivers/md/dm-table.c | 37 +++
drivers/md/dm.c | 7 +
drivers/nvme/host/constants.c | 1 +
drivers/nvme/host/core.c | 81 ++++-
drivers/nvme/host/trace.c | 19 ++
drivers/nvme/target/admin-cmd.c | 9 +-
drivers/nvme/target/io-cmd-bdev.c | 71 +++++
drivers/nvme/target/io-cmd-file.c | 50 ++++
drivers/nvme/target/nvmet.h | 1 +
drivers/nvme/target/trace.c | 19 ++
fs/read_write.c | 8 +-
include/linux/bio.h | 6 +-
include/linux/blk_types.h | 10 +
include/linux/blkdev.h | 23 ++
include/linux/device-mapper.h | 3 +
include/linux/nvme.h | 43 ++-
32 files changed, 1124 insertions(+), 22 deletions(-)
base-commit: dbd9e2e056d8577375ae4b31ada94f8aa3769e8a
--
2.17.1
* Re: [PATCH v20 00/12] Implement copy offload support
From: Bart Van Assche @ 2024-05-20 22:54 UTC
To: Nitesh Shetty, Jens Axboe, Jonathan Corbet, Alasdair Kergon,
Mike Snitzer, Mikulas Patocka, Keith Busch, Christoph Hellwig,
Sagi Grimberg, Chaitanya Kulkarni, Alexander Viro,
Christian Brauner, Jan Kara
Cc: martin.petersen, david, hare, damien.lemoal, anuj20.g, joshi.k,
nitheshshetty, gost.dev, linux-block, linux-kernel, linux-doc,
dm-devel, linux-nvme, linux-fsdevel
On 5/20/24 03:20, Nitesh Shetty wrote:
> 4. This bio is merged with the request containing the destination info.
Bios with different operation types must never be merged. From
attempt_merge():

	if (req_op(req) != req_op(next))
		return NULL;
Thanks,
Bart.
* Re: [PATCH v20 00/12] Implement copy offload support
From: Christoph Hellwig @ 2024-06-01 5:47 UTC
To: Nitesh Shetty
Cc: Jens Axboe, Jonathan Corbet, Alasdair Kergon, Mike Snitzer,
Mikulas Patocka, Keith Busch, Christoph Hellwig, Sagi Grimberg,
Chaitanya Kulkarni, Alexander Viro, Christian Brauner, Jan Kara,
martin.petersen, bvanassche, david, hare, damien.lemoal, anuj20.g,
joshi.k, nitheshshetty, gost.dev, linux-block, linux-kernel,
linux-doc, dm-devel, linux-nvme, linux-fsdevel
On Mon, May 20, 2024 at 03:50:13PM +0530, Nitesh Shetty wrote:
> So copy offload works only for request-based storage drivers.
I don't think that is actually true. It just requires a fair amount of
code in a bio based driver to match the bios up.
I'm missing any kind of information on what this patch set as-is
actually helps with. What operations are sped up, for what operations
does it reduce resource usage?
Part of that might be that the included use case of offloading
copy_file_range doesn't seem particularly useful - on any advanced
file system that would be done using reflinks anyway.
Have you considered hooking into dm-kcopyd which would be an
instant win instead? Or into garbage collection in zoned or other
log structured file systems? Those would probably really like
multiple source bios, though.
* Re: [PATCH v20 00/12] Implement copy offload support
From: Nitesh Shetty @ 2024-06-03 10:53 UTC
To: Christoph Hellwig
Cc: Jens Axboe, Jonathan Corbet, Alasdair Kergon, Mike Snitzer,
Mikulas Patocka, Keith Busch, Sagi Grimberg, Chaitanya Kulkarni,
Alexander Viro, Christian Brauner, Jan Kara, martin.petersen,
bvanassche, david, hare, damien.lemoal, anuj20.g, joshi.k,
nitheshshetty, gost.dev, linux-block, linux-kernel, linux-doc,
dm-devel, linux-nvme, linux-fsdevel
On 01/06/24 07:47AM, Christoph Hellwig wrote:
>On Mon, May 20, 2024 at 03:50:13PM +0530, Nitesh Shetty wrote:
>> So copy offload works only for request-based storage drivers.
>
>I don't think that is actually true. It just requires a fair amount of
>code in a bio based driver to match the bios up.
>
>I'm missing any kind of information on what this patch set as-is
>actually helps with. What operations are sped up, for what operations
>does it reduce resource usage?
>
The major benefit of this copy-offload/emulation framework is
observed in a fabrics setup, for copy workloads across the network.
The host sends the offload command over the network, and the actual
copy can be achieved using emulation on the target (hence patch 4).
This results in higher performance and lower network consumption
compared to reads and writes travelling across the network.
With this design of copy-offload/emulation we see the following
improvements compared to a userspace read + write on an NVMeOF TCP
setup:
Setup 1: Network speed: 1000 Mb/s
Host PC: Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz
Target PC: AMD Ryzen 9 5900X 12-Core Processor
block size 8k:
  IO BW improves from 106 MiB/s to 360 MiB/s;
  network utilisation drops from 97% to 6%.
block size 1M:
  IO BW improves from 104 MiB/s to 2677 MiB/s;
  network utilisation drops from 92% to 0.66%.
Setup 2: Network speed: 100 Gb/s
Server: Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz, 72 cores
(host and target have the same configuration)
block size 8k:
  17.5% improvement in IO BW (794 MiB/s to 933 MiB/s);
  network utilisation drops from 6.75% to 0.16%.
>Part of that might be that the included use case of offloading
>copy_file_range doesn't seem particularly useful - on any advanced
>file system that would be done using reflinks anyway.
>
Instead of coining a new user interface just for copy, we chose to
use the existing infrastructure for plumbing.
Once this series is merged, we can add an io_uring interface.
>Have you considered hooking into dm-kcopyd which would be an
>instant win instead? Or into garbage collection in zoned or other
>log structured file systems? Those would probably really like
>multiple source bios, though.
>
Earlier versions of the series included a dm-kcopyd use case.
We dropped it to keep the overall series lightweight and easier
to review and test.
Once the current series is merged, we will start adding more
in-kernel users in the next phase.
Thank you,
Nitesh Shetty
* Re: [PATCH v20 00/12] Implement copy offload support
From: Christoph Hellwig @ 2024-06-04 4:32 UTC
To: Nitesh Shetty
Cc: Christoph Hellwig, Jens Axboe, Jonathan Corbet, Alasdair Kergon,
Mike Snitzer, Mikulas Patocka, Keith Busch, Sagi Grimberg,
Chaitanya Kulkarni, Alexander Viro, Christian Brauner, Jan Kara,
martin.petersen, bvanassche, david, hare, damien.lemoal, anuj20.g,
joshi.k, nitheshshetty, gost.dev, linux-block, linux-kernel,
linux-doc, dm-devel, linux-nvme, linux-fsdevel
On Mon, Jun 03, 2024 at 10:53:39AM +0000, Nitesh Shetty wrote:
> The major benefit of this copy-offload/emulation framework is
> observed in fabrics setup, for copy workloads across the network.
> The host will send offload command over the network and actual copy
> can be achieved using emulation on the target (hence patch 4).
> This results in higher performance and lower network consumption,
> as compared to read and write travelling across the network.
> With this design of copy-offload/emulation we are able to see the
> following improvements as compared to userspace read + write on a
> NVMeOF TCP setup:
What is the use case of this? What workloads do raw copies of a lot
of data inside a single block device?
* Re: [PATCH v20 00/12] Implement copy offload support
From: Hannes Reinecke @ 2024-06-04 7:16 UTC
To: Christoph Hellwig, Nitesh Shetty
Cc: Jens Axboe, Jonathan Corbet, Alasdair Kergon, Mike Snitzer,
Mikulas Patocka, Keith Busch, Sagi Grimberg, Chaitanya Kulkarni,
Alexander Viro, Christian Brauner, Jan Kara, martin.petersen,
bvanassche, david, damien.lemoal, anuj20.g, joshi.k,
nitheshshetty, gost.dev, linux-block, linux-kernel, linux-doc,
dm-devel, linux-nvme, linux-fsdevel
On 6/4/24 06:32, Christoph Hellwig wrote:
> On Mon, Jun 03, 2024 at 10:53:39AM +0000, Nitesh Shetty wrote:
>> The major benefit of this copy-offload/emulation framework is
>> observed in a fabrics setup, for copy workloads across the network.
>> The host sends the offload command over the network, and the actual
>> copy can be achieved using emulation on the target (hence patch 4).
>> This results in higher performance and lower network consumption
>> compared to reads and writes travelling across the network.
>> With this design of copy-offload/emulation we see the following
>> improvements compared to a userspace read + write on an NVMeOF TCP
>> setup:
>
> What is the use case of this? What workloads do raw copies of a lot
> of data inside a single block device?
>
The canonical example would be VM provisioning from a master copy.
That's not within a single block device, mind; that's more for copying
the contents of one device to another.
But I wasn't aware that this approach is limited to copying within a
single block device; that would be quite pointless indeed.
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
* Re: [PATCH v20 00/12] Implement copy offload support
From: Damien Le Moal @ 2024-06-04 7:39 UTC
To: Hannes Reinecke, Christoph Hellwig, Nitesh Shetty
Cc: Jens Axboe, Jonathan Corbet, Alasdair Kergon, Mike Snitzer,
Mikulas Patocka, Keith Busch, Sagi Grimberg, Chaitanya Kulkarni,
Alexander Viro, Christian Brauner, Jan Kara, martin.petersen,
bvanassche, david, damien.lemoal, anuj20.g, joshi.k,
nitheshshetty, gost.dev, linux-block, linux-kernel, linux-doc,
dm-devel, linux-nvme, linux-fsdevel
On 6/4/24 16:16, Hannes Reinecke wrote:
> On 6/4/24 06:32, Christoph Hellwig wrote:
>> On Mon, Jun 03, 2024 at 10:53:39AM +0000, Nitesh Shetty wrote:
>>> The major benefit of this copy-offload/emulation framework is
>>> observed in a fabrics setup, for copy workloads across the network.
>>> The host sends the offload command over the network, and the actual
>>> copy can be achieved using emulation on the target (hence patch 4).
>>> This results in higher performance and lower network consumption
>>> compared to reads and writes travelling across the network.
>>> With this design of copy-offload/emulation we see the following
>>> improvements compared to a userspace read + write on an NVMeOF TCP
>>> setup:
>>
>> What is the use case of this? What workloads do raw copies of a lot
>> of data inside a single block device?
>>
>
> The canonical example would be VM provisioning from a master copy.
> That's not within a single block device, mind; that's more for copying
> the contents of one device to another.
Wouldn't such a use case be more likely to use file copy?
I have not heard of many cases where VM images occupy an entire block
device, but I may be mistaken here...
> But I wasn't aware that this approach is limited to copying within a
> single block device; that would be quite pointless indeed.
Not pointless for any FS doing CoW+Rebalancing of block groups (e.g. btrfs) and
of course GC for FSes on zoned devices. But for this use case, an API allowing
multiple sources and one destination would be better.
--
Damien Le Moal
Western Digital Research
* Re: [PATCH v20 00/12] Implement copy offload support
From: Christoph Hellwig @ 2024-06-05 8:14 UTC
To: Damien Le Moal
Cc: Hannes Reinecke, Christoph Hellwig, Nitesh Shetty, Jens Axboe,
Jonathan Corbet, Alasdair Kergon, Mike Snitzer, Mikulas Patocka,
Keith Busch, Sagi Grimberg, Chaitanya Kulkarni, Alexander Viro,
Christian Brauner, Jan Kara, martin.petersen, bvanassche, david,
damien.lemoal, anuj20.g, joshi.k, nitheshshetty, gost.dev,
linux-block, linux-kernel, linux-doc, dm-devel, linux-nvme,
linux-fsdevel
On Tue, Jun 04, 2024 at 04:39:25PM +0900, Damien Le Moal wrote:
> > But I wasn't aware that this approach is limited to copying within a
> > single block device; that would be quite pointless indeed.
>
> Not pointless for any FS doing CoW+Rebalancing of block groups (e.g. btrfs) and
> of course GC for FSes on zoned devices. But for this use case, an API allowing
> multiple sources and one destination would be better.
Yes.