From: Jens Axboe <axboe@kernel.dk>
To: Bart Van Assche <bvanassche@acm.org>,
Damien Le Moal <dlemoal@kernel.org>,
"lsf-pc@lists.linux-foundation.org"
<lsf-pc@lists.linux-foundation.org>
Cc: "linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>,
"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
Christoph Hellwig <hch@lst.de>
Subject: Re: [LSF/MM/BPF TOPIC] Improving Zoned Storage Support
Date: Wed, 17 Jan 2024 10:48:44 -0700
Message-ID: <c38ab7b2-63aa-4a0c-9fa6-96be304d8df1@kernel.dk>
In-Reply-To: <43cc2e4c-1dce-40ab-b4dc-1aadbeb65371@acm.org>

On 1/17/24 10:36 AM, Bart Van Assche wrote:
> On 1/16/24 15:34, Damien Le Moal wrote:
>> FYI, I am about to post 20-something patches that completely remove zone write
>> locking and replace it with "zone write plugging". That is done above the IO
>> scheduler and also provides zone append emulation for drives that ask for it.
>>
>> With this change:
>> - Zone append emulation is moved to the block layer, as a generic
>> implementation. sd and dm zone append emulation code is removed.
>> - Any scheduler can be used, including "none". mq-deadline zone block device
>> special support is removed.
>> - Overall, a lot less code (the series removes more code than it adds).
>> - Reordering problems, such as those caused by IO priority, are resolved as well.
>>
>> This will need a lot of testing, which we are working on. But your help with
>> testing on UFS devices will be appreciated as well.
>
> When posting this patch series, please include performance results
> (IOPS) for a zoned null_blk device instance. mq-deadline doesn't support
> more than 200 K IOPS, which is less than what UFS devices support. I
> hope that this performance bottleneck will be solved with the new
> approach.
Not really zone related, but I was very aware of the single lock
limitations when I ported deadline to blk-mq. Was always hoping that
someone would actually take the time to make it more efficient, but so
far that hasn't happened. Or maybe it'll be a case of "just do it
yourself, Jens" at some point...
--
Jens Axboe
Thread overview: 21+ messages
2024-01-16 18:20 [LSF/MM/BPF TOPIC] Improving Zoned Storage Support Bart Van Assche
2024-01-16 23:34 ` Damien Le Moal
2024-01-17 1:21 ` Bart Van Assche
2024-01-17 17:36 ` Bart Van Assche
2024-01-17 17:48 ` Jens Axboe [this message]
2024-01-17 18:22 ` Bart Van Assche
2024-01-17 18:43 ` Jens Axboe
2024-01-17 20:06 ` Jens Axboe
2024-01-17 20:18 ` Bart Van Assche
2024-01-17 20:20 ` Jens Axboe
2024-01-17 21:02 ` Jens Axboe
2024-01-17 21:14 ` Jens Axboe
2024-01-17 21:33 ` Bart Van Assche
2024-01-17 21:40 ` Jens Axboe
2024-01-18 0:43 ` Bart Van Assche
2024-01-18 14:51 ` Jens Axboe
2024-01-18 0:38 ` Bart Van Assche
2024-01-18 0:42 ` Jens Axboe
2024-01-18 0:54 ` Bart Van Assche
2024-01-18 15:07 ` Jens Axboe
2024-01-17 8:15 ` Viacheslav Dubeyko