public inbox for linux-scsi@vger.kernel.org
From: Damien Le Moal <dlemoal@kernel.org>
To: Bart Van Assche <bvanassche@acm.org>,
	Hannes Reinecke <hare@suse.de>,
	linux-block@vger.kernel.org, Jens Axboe <axboe@kernel.dk>,
	linux-scsi@vger.kernel.org,
	"Martin K . Petersen" <martin.petersen@oracle.com>,
	dm-devel@lists.linux.dev, Mike Snitzer <snitzer@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH 25/26] block: Reduce zone write plugging memory usage
Date: Mon, 12 Feb 2024 17:47:40 +0900	[thread overview]
Message-ID: <2b45ee45-5f2e-4923-9ef6-a7f03bcb65bf@kernel.org> (raw)
In-Reply-To: <c582fc6c-618e-4052-9f15-3045df819389@kernel.org>

On 2/12/24 17:23, Damien Le Moal wrote:
> On 2/11/24 12:40, Bart Van Assche wrote:
>> On 2/9/24 16:06, Damien Le Moal wrote:
>>> On 2/10/24 04:36, Bart Van Assche wrote:
>>>> written zones is typically less than 10. Hence, tracking the partially written
>>>
>>> That is far from guaranteed, especially with devices that have no active zone
>>> limits like SMR drives.
>>
>> Interesting. The zoned devices I'm working with try to keep data in memory
>> for all zones that are neither empty nor full and hence impose an upper limit
>> on the number of open zones.
>>
>>> But in any case, what exactly is your idea here ? Can you actually suggest
>>> something ? Are you suggesting that a sparse array of zone plugs be used, with
>>> an rb-tree or an xarray ? If that is what you are thinking, I can already tell
>>> you that this is the first thing I tried to do. Early versions of this work used
>>> a sparse xarray of zone plugs. But the problem with such approach is that it is
>>> a lot more complicated and there is a need for a single lock to manage that
>>> structure (which is really not good for performance).
>>
>> Hmm ... since the xarray data structure supports RCU I think that locking the
>> entire xarray is only required if the zone condition changes from empty into
>> not empty or from neither empty nor full into full?
>>
>> For the use cases I'm interested in a hash table implementation that supports
>> RCU-lookups probably will work better than an xarray. I think that the hash
>> table implementation in <linux/hashtable.h> supports RCU for lookups, insertion
>> and removal.
> 
> I spent some time digging into this and revisiting the possibility of using
> an xarray. My conclusion is that this does not work well: at best it is no
> better than what I did, and most of the time it is much worse. The reason is
> that we need, at the very least, to keep this information around:
> 1) If the zone is conventional or not
> 2) The zone capacity of sequential write required zones
> 
> Unless we keep this information, a zone report would be needed before starting
> to write to a zone that does not yet have a zone write plug allocated.
> 
> (1) and (2) above can be trivially combined into a single 32-bit value, but
> that value must exist for all zones. So at the very least, we need nr_zones * 4B
> of memory allocated at all times. For such a case (i.e. a non-sparse structure),
> an xarray or hash table would be more costly in memory than a simple static array.
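
Concretely, (1) and (2) could be packed along these lines. This is a userspace
sketch with made-up names, not actual kernel code; it only illustrates the
4B-per-zone encoding:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical encoding: the capacity of a sequential write required zone
 * in 512B sectors, with the top bit reserved to flag conventional zones.
 * A 31-bit sector count covers zones up to 1 TiB, far larger than any
 * real zone size. */
#define DISK_ZONE_IS_CONV   (1U << 31)
#define DISK_ZONE_CAP_MASK  (~DISK_ZONE_IS_CONV)

static inline uint32_t disk_zone_info(int is_conventional, uint32_t cap_sectors)
{
	return (is_conventional ? DISK_ZONE_IS_CONV : 0) |
	       (cap_sectors & DISK_ZONE_CAP_MASK);
}

static inline int disk_zone_is_conv(uint32_t info)
{
	return (info & DISK_ZONE_IS_CONV) != 0;
}

static inline uint32_t disk_zone_capacity(uint32_t info)
{
	return info & DISK_ZONE_CAP_MASK;
}
```

With one such entry per zone, the per-zone cost is exactly 4B no matter how
many zone write plugs are currently allocated.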
> 
> Given that we want to allocate/free zone write plugs dynamically as needed, we
> essentially need an array of pointers, so 8B * nr_zones for the base structure.
> From there, ideally, we should be able to use RCU to safely dereference/modify
> the array entries. However, from what I read, static arrays are not supported
> by the RCU code.
> 
> Given this, my current approach, which uses 16B per zone, is the next best
> thing I can think of without introducing a single lock for modifying the array
> entries.
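
For reference, the per-entry update such a pointer array needs can be modeled
in userspace with C11 atomics. In the kernel this would be rcu_dereference()
and rcu_assign_pointer(); the names and sizes below are made up:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

struct zone_wplug { unsigned int zone_no; };

/* One pointer slot per zone. Readers load a slot without any lock;
 * writers publish or retire a plug with a single atomic exchange on
 * that slot only, so no array-wide lock is needed. */
#define NR_ZONES 16
static _Atomic(struct zone_wplug *) zone_plugs[NR_ZONES];

static struct zone_wplug *get_plug(unsigned int zno)
{
	return atomic_load_explicit(&zone_plugs[zno], memory_order_acquire);
}

/* Returns the previous entry so the caller can free it (after a grace
 * period, in the real RCU case). */
static struct zone_wplug *set_plug(unsigned int zno, struct zone_wplug *p)
{
	return atomic_exchange_explicit(&zone_plugs[zno], p,
					memory_order_acq_rel);
}
```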
> 
> If you have any other idea, please share.

Replying to myself as I had an idea:
1) Store the zone capacity in a separate array: 4B * nr_zones needed. Storing
"0" as a value for a zone in that array would indicate that the zone is
conventional. No additional zone bitmap needed.
2) Use a sparse xarray for managing allocated zone write plugs: 64B per
allocated zone write plug needed, which for an SMR drive would generally be at
most 128 * 64B = 8K.

So for an SMR drive with 100,000 zones, that would be a total of 408 KB, instead
of the current 1.6 MB. Will try to prototype this to see how performance goes (I
am worried about the xarray lookup overhead in the hot path).
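
For the record, the arithmetic behind those totals, using the sizes quoted
above (128 is the assumed worst-case number of simultaneously plugged zones):

```c
#include <assert.h>

/* Memory cost of the hybrid scheme: a 4B capacity entry for every zone,
 * plus a 64B zone write plug for each currently plugged zone. */
static unsigned long hybrid_bytes(unsigned long nr_zones, unsigned long nr_plugs)
{
	return nr_zones * 4UL + nr_plugs * 64UL;
}

/* Memory cost of the current scheme: a fixed 16B per zone. */
static unsigned long current_bytes(unsigned long nr_zones)
{
	return nr_zones * 16UL;
}
```

For a 100,000-zone SMR drive: 400,000 + 8,192 = 408,192 B, i.e. roughly
408 KB, versus 1,600,000 B (1.6 MB) with the current scheme.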

-- 
Damien Le Moal
Western Digital Research



Thread overview: 107+ messages
2024-02-02  7:30 [PATCH 00/26] Zone write plugging Damien Le Moal
2024-02-02  7:30 ` [PATCH 01/26] block: Restore sector of flush requests Damien Le Moal
2024-02-04 11:55   ` Hannes Reinecke
2024-02-05 17:22   ` Bart Van Assche
2024-02-05 23:42     ` Damien Le Moal
2024-02-02  7:30 ` [PATCH 02/26] block: Remove req_bio_endio() Damien Le Moal
2024-02-04 11:57   ` Hannes Reinecke
2024-02-05 17:28   ` Bart Van Assche
2024-02-05 23:45     ` Damien Le Moal
2024-02-09  6:53     ` Damien Le Moal
2024-02-02  7:30 ` [PATCH 03/26] block: Introduce bio_straddle_zones() and bio_offset_from_zone_start() Damien Le Moal
2024-02-03  4:09   ` Bart Van Assche
2024-02-04 11:58   ` Hannes Reinecke
2024-02-02  7:30 ` [PATCH 04/26] block: Introduce blk_zone_complete_request_bio() Damien Le Moal
2024-02-04 11:59   ` Hannes Reinecke
2024-02-02  7:30 ` [PATCH 05/26] block: Allow using bio_attempt_back_merge() internally Damien Le Moal
2024-02-03  4:11   ` Bart Van Assche
2024-02-04 12:00   ` Hannes Reinecke
2024-02-02  7:30 ` [PATCH 06/26] block: Introduce zone write plugging Damien Le Moal
2024-02-04  3:56   ` Ming Lei
2024-02-04 23:57     ` Damien Le Moal
2024-02-05  2:19       ` Ming Lei
2024-02-05  2:41         ` Damien Le Moal
2024-02-05  3:38           ` Ming Lei
2024-02-05  5:11           ` Christoph Hellwig
2024-02-05  5:37             ` Damien Le Moal
2024-02-05  5:50               ` Christoph Hellwig
2024-02-05  6:14                 ` Damien Le Moal
2024-02-05 10:06           ` Ming Lei
2024-02-05 12:20             ` Damien Le Moal
2024-02-05 12:43               ` Damien Le Moal
2024-02-04 12:14   ` Hannes Reinecke
2024-02-05 17:48   ` Bart Van Assche
2024-02-05 23:48     ` Damien Le Moal
2024-02-06  0:52       ` Bart Van Assche
2024-02-02  7:30 ` [PATCH 07/26] block: Allow zero value of max_zone_append_sectors queue limit Damien Le Moal
2024-02-04 12:15   ` Hannes Reinecke
2024-02-02  7:30 ` [PATCH 08/26] block: Implement zone append emulation Damien Le Moal
2024-02-04 12:24   ` Hannes Reinecke
2024-02-05  0:10     ` Damien Le Moal
2024-02-05 17:58   ` Bart Van Assche
2024-02-05 23:57     ` Damien Le Moal
2024-02-02  7:30 ` [PATCH 09/26] block: Allow BIO-based drivers to use blk_revalidate_disk_zones() Damien Le Moal
2024-02-04 12:26   ` Hannes Reinecke
2024-02-02  7:30 ` [PATCH 10/26] dm: Use the block layer zone append emulation Damien Le Moal
2024-02-03 17:58   ` Mike Snitzer
2024-02-05  5:38     ` Damien Le Moal
2024-02-05 20:33       ` Mike Snitzer
2024-02-05 23:40         ` Damien Le Moal
2024-02-06 20:41           ` Mike Snitzer
2024-02-04 12:30   ` Hannes Reinecke
2024-02-02  7:30 ` [PATCH 11/26] scsi: sd: " Damien Le Moal
2024-02-04 12:29   ` Hannes Reinecke
2024-02-06  1:55   ` Martin K. Petersen
2024-02-02  7:30 ` [PATCH 12/26] ublk_drv: Do not request ELEVATOR_F_ZBD_SEQ_WRITE elevator feature Damien Le Moal
2024-02-04 12:31   ` Hannes Reinecke
2024-02-02  7:30 ` [PATCH 13/26] null_blk: " Damien Le Moal
2024-02-04 12:31   ` Hannes Reinecke
2024-02-02  7:30 ` [PATCH 14/26] null_blk: Introduce zone_append_max_sectors attribute Damien Le Moal
2024-02-04 12:32   ` Hannes Reinecke
2024-02-02  7:30 ` [PATCH 15/26] null_blk: Introduce fua attribute Damien Le Moal
2024-02-04 12:33   ` Hannes Reinecke
2024-02-02  7:30 ` [PATCH 16/26] nvmet: zns: Do not reference the gendisk conv_zones_bitmap Damien Le Moal
2024-02-04 12:34   ` Hannes Reinecke
2024-02-02  7:30 ` [PATCH 17/26] block: Remove BLK_STS_ZONE_RESOURCE Damien Le Moal
2024-02-04 12:34   ` Hannes Reinecke
2024-02-02  7:30 ` [PATCH 18/26] block: Simplify blk_revalidate_disk_zones() interface Damien Le Moal
2024-02-04 12:35   ` Hannes Reinecke
2024-02-02  7:30 ` [PATCH 19/26] block: mq-deadline: Remove support for zone write locking Damien Le Moal
2024-02-04 12:36   ` Hannes Reinecke
2024-02-02  7:30 ` [PATCH 20/26] block: Remove elevator required features Damien Le Moal
2024-02-04 12:36   ` Hannes Reinecke
2024-02-02  7:30 ` [PATCH 21/26] block: Do not check zone type in blk_check_zone_append() Damien Le Moal
2024-02-04 12:37   ` Hannes Reinecke
2024-02-02  7:31 ` [PATCH 22/26] block: Move zone related debugfs attribute to blk-zoned.c Damien Le Moal
2024-02-04 12:38   ` Hannes Reinecke
2024-02-02  7:31 ` [PATCH 23/26] block: Remove zone write locking Damien Le Moal
2024-02-04 12:38   ` Hannes Reinecke
2024-02-02  7:31 ` [PATCH 24/26] block: Do not special-case plugging of zone write operations Damien Le Moal
2024-02-04 12:39   ` Hannes Reinecke
2024-02-02  7:31 ` [PATCH 25/26] block: Reduce zone write plugging memory usage Damien Le Moal
2024-02-04 12:42   ` Hannes Reinecke
2024-02-05 17:51     ` Bart Van Assche
2024-02-05 23:55       ` Damien Le Moal
2024-02-06 21:20         ` Bart Van Assche
2024-02-09  3:58           ` Damien Le Moal
2024-02-09 19:36             ` Bart Van Assche
2024-02-10  0:06               ` Damien Le Moal
2024-02-11  3:40                 ` Bart Van Assche
2024-02-12  1:09                   ` Damien Le Moal
2024-02-12 18:58                     ` Bart Van Assche
2024-02-12  8:23                   ` Damien Le Moal
2024-02-12  8:47                     ` Damien Le Moal [this message]
2024-02-12 18:40                       ` Bart Van Assche
2024-02-13  0:05                         ` Damien Le Moal
2024-02-02  7:31 ` [PATCH 26/26] block: Add zone_active_wplugs debugfs entry Damien Le Moal
2024-02-04 12:43   ` Hannes Reinecke
2024-02-02  7:37 ` [PATCH 00/26] Zone write plugging Damien Le Moal
2024-02-03 12:11   ` Jens Axboe
2024-02-09  5:28     ` Damien Le Moal
2024-02-05 17:21 ` Bart Van Assche
2024-02-05 23:42   ` Damien Le Moal
2024-02-06  0:57     ` Bart Van Assche
2024-02-05 18:18 ` Bart Van Assche
2024-02-06  0:07   ` Damien Le Moal
2024-02-06  1:25     ` Bart Van Assche
2024-02-09  4:03       ` Damien Le Moal
