From: Damien Le Moal <dlemoal@kernel.org>
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: "Geert Uytterhoeven" <geert@linux-m68k.org>,
"Richard Weinberger" <richard@nod.at>,
"Philipp Reisner" <philipp.reisner@linbit.com>,
"Lars Ellenberg" <lars.ellenberg@linbit.com>,
"Christoph Böhmwalder" <christoph.boehmwalder@linbit.com>,
"Josef Bacik" <josef@toxicpanda.com>,
"Ming Lei" <ming.lei@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
"Jason Wang" <jasowang@redhat.com>,
"Roger Pau Monné" <roger.pau@citrix.com>,
"Alasdair Kergon" <agk@redhat.com>,
"Mike Snitzer" <snitzer@kernel.org>,
"Mikulas Patocka" <mpatocka@redhat.com>,
"Song Liu" <song@kernel.org>, "Yu Kuai" <yukuai3@huawei.com>,
"Vineeth Vijayan" <vneethv@linux.ibm.com>,
"Martin K. Petersen" <martin.petersen@oracle.com>,
linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
drbd-dev@lists.linbit.com, nbd@other.debian.org,
linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: [PATCH 13/26] block: move cache control settings out of queue->flags
Date: Tue, 11 Jun 2024 16:55:04 +0900 [thread overview]
Message-ID: <d21b162a-1fd3-4fd1-a17f-f127f964bdf1@kernel.org> (raw)
In-Reply-To: <20240611051929.513387-14-hch@lst.de>
On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> Move the cache control settings into the queue_limits so that they
> can be set atomically and all I/O is frozen when changing the
> flags.
Saying:

  ...so that they can be set atomically with the device queue frozen when
  changing the flags.

may be better.
>
> Add new features and flags field for the driver set flags, and internal
> (usually sysfs-controlled) flags in the block layer. Note that we'll
> eventually remove enough field from queue_limits to bring it back to the
> previous size.
>
> The disable flag is inverted compared to the previous meaning, which
> means it now survives a rescan, similar to the max_sectors and
> max_discard_sectors user limits.
>
> The FLUSH and FUA flags are now inherited by blk_stack_limits, which
> simplified the code in dm a lot, but also causes a slight behavior
> change in that dm-switch and dm-unstripe now advertise a write cache
> despite setting num_flush_bios to 0. The I/O path will handle this
> gracefully, but as far as I can tell the lack of num_flush_bios
> and thus flush support is a pre-existing data integrity bug in those
> targets that really needs fixing, after which a non-zero num_flush_bios
> should be required in dm for targets that map to underlying devices.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> .../block/writeback_cache_control.rst | 67 +++++++++++--------
> arch/um/drivers/ubd_kern.c | 2 +-
> block/blk-core.c | 2 +-
> block/blk-flush.c | 9 ++-
> block/blk-mq-debugfs.c | 2 -
> block/blk-settings.c | 29 ++------
> block/blk-sysfs.c | 29 +++++---
> block/blk-wbt.c | 4 +-
> drivers/block/drbd/drbd_main.c | 2 +-
> drivers/block/loop.c | 9 +--
> drivers/block/nbd.c | 14 ++--
> drivers/block/null_blk/main.c | 12 ++--
> drivers/block/ps3disk.c | 7 +-
> drivers/block/rnbd/rnbd-clt.c | 10 +--
> drivers/block/ublk_drv.c | 8 ++-
> drivers/block/virtio_blk.c | 20 ++++--
> drivers/block/xen-blkfront.c | 9 ++-
> drivers/md/bcache/super.c | 7 +-
> drivers/md/dm-table.c | 39 +++--------
> drivers/md/md.c | 8 ++-
> drivers/mmc/core/block.c | 42 ++++++------
> drivers/mmc/core/queue.c | 12 ++--
> drivers/mmc/core/queue.h | 3 +-
> drivers/mtd/mtd_blkdevs.c | 5 +-
> drivers/nvdimm/pmem.c | 4 +-
> drivers/nvme/host/core.c | 7 +-
> drivers/nvme/host/multipath.c | 6 --
> drivers/scsi/sd.c | 28 +++++---
> include/linux/blkdev.h | 38 +++++++++--
> 29 files changed, 227 insertions(+), 207 deletions(-)
>
> diff --git a/Documentation/block/writeback_cache_control.rst b/Documentation/block/writeback_cache_control.rst
> index b208488d0aae85..9cfe27f90253c7 100644
> --- a/Documentation/block/writeback_cache_control.rst
> +++ b/Documentation/block/writeback_cache_control.rst
> @@ -46,41 +46,50 @@ worry if the underlying devices need any explicit cache flushing and how
> the Forced Unit Access is implemented. The REQ_PREFLUSH and REQ_FUA flags
> may both be set on a single bio.
>
> +Feature settings for block drivers
> +----------------------------------
>
> -Implementation details for bio based block drivers
> ---------------------------------------------------------------
> +For devices that do not support volatile write caches there is no driver
> +support required, the block layer completes empty REQ_PREFLUSH requests before
> +entering the driver and strips off the REQ_PREFLUSH and REQ_FUA bits from
> +requests that have a payload.
>
> -These drivers will always see the REQ_PREFLUSH and REQ_FUA bits as they sit
> -directly below the submit_bio interface. For remapping drivers the REQ_FUA
> -bits need to be propagated to underlying devices, and a global flush needs
> -to be implemented for bios with the REQ_PREFLUSH bit set. For real device
> -drivers that do not have a volatile cache the REQ_PREFLUSH and REQ_FUA bits
> -on non-empty bios can simply be ignored, and REQ_PREFLUSH requests without
> -data can be completed successfully without doing any work. Drivers for
> -devices with volatile caches need to implement the support for these
> -flags themselves without any help from the block layer.
> +For devices with volatile write caches the driver needs to tell the block layer
> +that it supports flushing caches by setting the
>
> + BLK_FEAT_WRITE_CACHE
>
> -Implementation details for request_fn based block drivers
> ----------------------------------------------------------
> +flag in the queue_limits feature field. For devices that also support the FUA
> +bit the block layer needs to be told to pass on the REQ_FUA bit by also setting
> +the
>
> -For devices that do not support volatile write caches there is no driver
> -support required, the block layer completes empty REQ_PREFLUSH requests before
> -entering the driver and strips off the REQ_PREFLUSH and REQ_FUA bits from
> -requests that have a payload. For devices with volatile write caches the
> -driver needs to tell the block layer that it supports flushing caches by
> -doing::
> + BLK_FEAT_FUA
> +
> +flag in the features field of the queue_limits structure.
> +
> +Implementation details for bio based block drivers
> +--------------------------------------------------
> +
> +For bio based drivers the REQ_PREFLUSH and REQ_FUA bit are simplify passed on
> +to the driver if the drivers sets the BLK_FEAT_WRITE_CACHE flag and the drivers
> +needs to handle them.
> +
> +*NOTE*: The REQ_FUA bit also gets passed on when the BLK_FEAT_FUA flags is
> +_not_ set. Any bio based driver that sets BLK_FEAT_WRITE_CACHE also needs to
> +handle REQ_FUA.
>
> - blk_queue_write_cache(sdkp->disk->queue, true, false);
> +For remapping drivers the REQ_FUA bits need to be propagated to underlying
> +devices, and a global flush needs to be implemented for bios with the
> +REQ_PREFLUSH bit set.
>
> -and handle empty REQ_OP_FLUSH requests in its prep_fn/request_fn. Note that
> -REQ_PREFLUSH requests with a payload are automatically turned into a sequence
> -of an empty REQ_OP_FLUSH request followed by the actual write by the block
> -layer. For devices that also support the FUA bit the block layer needs
> -to be told to pass through the REQ_FUA bit using::
> +Implementation details for blk-mq drivers
> +-----------------------------------------
>
> - blk_queue_write_cache(sdkp->disk->queue, true, true);
> +When the BLK_FEAT_WRITE_CACHE flag is set, REQ_OP_WRITE | REQ_PREFLUSH requests
> +with a payload are automatically turned into a sequence of a REQ_OP_FLUSH
> +request followed by the actual write by the block layer.
>
> -and the driver must handle write requests that have the REQ_FUA bit set
> -in prep_fn/request_fn. If the FUA bit is not natively supported the block
> -layer turns it into an empty REQ_OP_FLUSH request after the actual write.
> +When the BLK_FEA_FUA flags is set, the REQ_FUA bit simplify passed on for the
s/BLK_FEA_FUA/BLK_FEAT_FUA/
> +REQ_OP_WRITE request, else a REQ_OP_FLUSH request is sent by the block layer
> +after the completion of the write request for bio submissions with the REQ_FUA
> +bit set.
> diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
> index 5c787965b7d09e..4f524c1d5e08bd 100644
> --- a/block/blk-sysfs.c
> +++ b/block/blk-sysfs.c
> @@ -423,32 +423,41 @@ static ssize_t queue_io_timeout_store(struct request_queue *q, const char *page,
>
> static ssize_t queue_wc_show(struct request_queue *q, char *page)
> {
> - if (test_bit(QUEUE_FLAG_WC, &q->queue_flags))
> - return sprintf(page, "write back\n");
> -
> - return sprintf(page, "write through\n");
> + if (q->limits.features & BLK_FLAGS_WRITE_CACHE_DISABLED)
> + return sprintf(page, "write through\n");
> + return sprintf(page, "write back\n");
> }
>
> static ssize_t queue_wc_store(struct request_queue *q, const char *page,
> size_t count)
> {
> + struct queue_limits lim;
> + bool disable;
> + int err;
> +
> if (!strncmp(page, "write back", 10)) {
> - if (!test_bit(QUEUE_FLAG_HW_WC, &q->queue_flags))
> - return -EINVAL;
> - blk_queue_flag_set(QUEUE_FLAG_WC, q);
> + disable = false;
> } else if (!strncmp(page, "write through", 13) ||
> - !strncmp(page, "none", 4)) {
> - blk_queue_flag_clear(QUEUE_FLAG_WC, q);
> + !strncmp(page, "none", 4)) {
> + disable = true;
> } else {
> return -EINVAL;
> }
I think you can drop the curly brackets for this chain of if-else-if-else.
>
> + lim = queue_limits_start_update(q);
> + if (disable)
> + lim.flags |= BLK_FLAGS_WRITE_CACHE_DISABLED;
> + else
> + lim.flags &= ~BLK_FLAGS_WRITE_CACHE_DISABLED;
> + err = queue_limits_commit_update(q, &lim);
> + if (err)
> + return err;
> return count;
> }
> diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
> index fd789eeb62d943..fbe125d55e25b4 100644
> --- a/drivers/md/dm-table.c
> +++ b/drivers/md/dm-table.c
> @@ -1686,34 +1686,16 @@ int dm_calculate_queue_limits(struct dm_table *t,
> return validate_hardware_logical_block_alignment(t, limits);
> }
>
> -static int device_flush_capable(struct dm_target *ti, struct dm_dev *dev,
> - sector_t start, sector_t len, void *data)
> -{
> - unsigned long flush = (unsigned long) data;
> - struct request_queue *q = bdev_get_queue(dev->bdev);
> -
> - return (q->queue_flags & flush);
> -}
> -
> -static bool dm_table_supports_flush(struct dm_table *t, unsigned long flush)
> +/*
> + * Check if an target requires flush support even if none of the underlying
s/an/a/
> + * devices need it (e.g. to persist target-specific metadata).
> + */
> +static bool dm_table_supports_flush(struct dm_table *t)
> {
> - /*
> - * Require at least one underlying device to support flushes.
> - * t->devices includes internal dm devices such as mirror logs
> - * so we need to use iterate_devices here, which targets
> - * supporting flushes must provide.
> - */
> for (unsigned int i = 0; i < t->num_targets; i++) {
> struct dm_target *ti = dm_table_get_target(t, i);
>
> - if (!ti->num_flush_bios)
> - continue;
> -
> - if (ti->flush_supported)
> - return true;
> -
> - if (ti->type->iterate_devices &&
> - ti->type->iterate_devices(ti, device_flush_capable, (void *) flush))
> + if (ti->num_flush_bios && ti->flush_supported)
> return true;
> }
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index c792d4d81e5fcc..4e8931a2c76b07 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -282,6 +282,28 @@ static inline bool blk_op_is_passthrough(blk_opf_t op)
> return op == REQ_OP_DRV_IN || op == REQ_OP_DRV_OUT;
> }
>
> +/* flags set by the driver in queue_limits.features */
> +enum {
> + /* supports a a volatile write cache */
Repeated "a".
> + BLK_FEAT_WRITE_CACHE = (1u << 0),
> +
> + /* supports passing on the FUA bit */
> + BLK_FEAT_FUA = (1u << 1),
> +};
> +static inline bool blk_queue_write_cache(struct request_queue *q)
> +{
> + return (q->limits.features & BLK_FEAT_WRITE_CACHE) &&
> + (q->limits.flags & BLK_FLAGS_WRITE_CACHE_DISABLED);
Hmm, shouldn't this be !(q->limits.flags & BLK_FLAGS_WRITE_CACHE_DISABLED) ?
> +}
> +
> static inline bool bdev_write_cache(struct block_device *bdev)
> {
> - return test_bit(QUEUE_FLAG_WC, &bdev_get_queue(bdev)->queue_flags);
> + return blk_queue_write_cache(bdev_get_queue(bdev));
> }
>
> static inline bool bdev_fua(struct block_device *bdev)
> {
> - return test_bit(QUEUE_FLAG_FUA, &bdev_get_queue(bdev)->queue_flags);
> + return bdev_get_queue(bdev)->limits.features & BLK_FEAT_FUA;
> }
>
> static inline bool bdev_nowait(struct block_device *bdev)
--
Damien Le Moal
Western Digital Research