Linux bcache driver list
From: Coly Li <colyli@suse.de>
To: Andrea Tomassetti <andrea.tomassetti-opensource@devo.com>
Cc: Bcache Linux <linux-bcache@vger.kernel.org>,
	Eric Wheeler <bcache@lists.ewheeler.net>
Subject: Re: Unusual value of optimal_io_size prevents bcache initialization
Date: Tue, 26 Sep 2023 23:28:02 +0800	[thread overview]
Message-ID: <C02D29AF-02FB-4814-A387-E78E2CB52872@suse.de> (raw)
In-Reply-To: <7BFB15E2-7FC6-40F8-8E26-8F23D12F2CD2@suse.de>

[-- Attachment #1: Type: text/plain, Size: 2284 bytes --]



> On Sep 24, 2023, at 22:06, Coly Li <colyli@suse.de> wrote:
> 

[snipped]

>>> Maybe bcache should not directly use q->limits.io_opt as d->stripe_size;
>>> it should be some value less than 1<<31 and aligned to optimal_io_size.
>>> After this code has been in the kernel for 10+ years, it is time to improve
>>> this calculation :-)
>> Yeah, one of the other doubts I had was exactly about this value, and whether it is still relevant to calculate it that way. Unfortunately, I don't have the expertise to have an opinion on it. Would you be so kind as to share your knowledge and help me understand why it is calculated this way, and why it uses the optimal io size? Is it using it to write back optimal-sized blocks?
>> 
> 
> In most conditions, when the underlying hardware doesn't declare its optimal io size, bcache uses 1<<31 as the default stripe size. This has worked fine for a decade, so I will keep it and make sure it is aligned to the value of the optimal io size. It should work fine, and I will compose a simple patch for this fix.
> 
>>>> Another consideration: stripe_sectors_dirty and full_dirty_stripes, the two
>>>> arrays allocated using n, are used only in writeback mode, is this
>>>> correct? In my specific case, I'm not planning to use writeback mode, so I
>>>> would expect bcache not to even try to create those arrays. Or, at least, to
>>>> not create them during initialization but only upon a change of the
>>>> working mode (i.e. write-through -> writeback).
>>> Indeed, Mingzhe Zou (if I remember correctly) submitted a patch for this
>>> idea, but it is blocked by other dependent patches which I have not finished
>>> yet. Yes, I like the idea of dynamically allocating/freeing d->stripe_sectors_dirty
>>> and d->full_dirty_stripes only when they are necessary. I hope I may help
>>> the change go into upstream sooner.
>>> I will post a patch for your testing.
>> This would be great, thank you very much! On my side, if there's anything I can do to help, I would be glad to contribute.
> 
> I will post a simple patch for the reported memory allocation failure for you to test soon.


Hi Andrea,

Could you please try the attached patch and see whether it makes some difference? Thank you in advance.

Coly Li


[-- Attachment #2: 0001-bcache-avoid-oversize-memory-allocation-by-small-str.patch --]
[-- Type: application/octet-stream, Size: 3808 bytes --]

From: Coly Li <colyli@suse.de>
Date: Tue, 26 Sep 2023 20:13:19 +0800
Subject: [PATCH] bcache: avoid oversize memory allocation by small stripe_size

The arrays bcache->stripe_sectors_dirty and bcache->full_dirty_stripes are
used for dirty data writeback; their sizes are decided by the backing
device capacity and the stripe size. A larger backing device capacity or
a smaller stripe size makes these two arrays occupy more dynamic memory.

Currently bcache->stripe_size is directly inherited from
queue->limits.io_opt of the underlying storage device. For normal hard
drives, limits.io_opt is 0, and bcache sets the corresponding
stripe_size to 1TB (1<<31 sectors); this has worked fine for 10+ years.
But for devices that do declare a value for queue->limits.io_opt, a
small stripe_size (compared to 1TB) causes oversize memory allocations
for bcache->stripe_sectors_dirty and bcache->full_dirty_stripes, and the
capacity of hard drives has grown much larger in the recent decade.

For example, for a raid5 array assembled from three 20TB hard drives,
the raid device capacity is 40TB with a typical 512KB limits.io_opt.
After the calculation in the bcache code, these two arrays occupy 400MB
of dynamic memory. Even worse, Andrea Tomassetti reports that a 4KB
limits.io_opt is declared on a new 2TB hard drive; these two arrays then
request 2GB and 512MB of dynamic memory from kzalloc(). The result is
that the bcache device always fails to initialize on his system.

To avoid the oversize memory allocation, bcache->stripe_size should not
be directly inherited from queue->limits.io_opt of the underlying
device. This patch defines BCH_MIN_STRIPE_SZ (4MB) as the minimal bcache
stripe size and sets the bcache device's stripe size according to the
declared limits.io_opt value of the underlying storage device:
- If the declared limits.io_opt > BCH_MIN_STRIPE_SZ, the bcache device
  sets its stripe size directly from this limits.io_opt value.
- If the declared limits.io_opt < BCH_MIN_STRIPE_SZ, the bcache device
  sets its stripe size to a multiple of limits.io_opt that is equal to
  or larger than BCH_MIN_STRIPE_SZ.

Then the minimal stripe size of a bcache device will always be >= 4MB.
For a 40TB raid5 device with 512KB limits.io_opt, the memory occupied by
bcache->stripe_sectors_dirty and bcache->full_dirty_stripes will be 50MB
in total. For a 2TB hard drive with 4KB limits.io_opt, the memory
occupied by these two arrays will be 2.5MB in total.

Such an amount of memory allocated for bcache->stripe_sectors_dirty and
bcache->full_dirty_stripes is reasonable for most storage devices.

Reported-by: Andrea Tomassetti <andrea.tomassetti-opensource@devo.com>
Signed-off-by: Coly Li <colyli@suse.de>
Cc: Eric Wheeler <bcache@lists.ewheeler.net>
---
 drivers/md/bcache/bcache.h | 1 +
 drivers/md/bcache/super.c  | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 5a79bb3c272f..83eb7f27db3d 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -265,6 +265,7 @@ struct bcache_device {
 #define BCACHE_DEV_WB_RUNNING		3
 #define BCACHE_DEV_RATE_DW_RUNNING	4
 	int			nr_stripes;
+#define BCH_MIN_STRIPE_SZ		((4 << 20) >> SECTOR_SHIFT)
 	unsigned int		stripe_size;
 	atomic_t		*stripe_sectors_dirty;
 	unsigned long		*full_dirty_stripes;
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 0ae2b3676293..0eb71543d773 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -905,6 +905,8 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
 
 	if (!d->stripe_size)
 		d->stripe_size = 1 << 31;
+	else if (d->stripe_size < BCH_MIN_STRIPE_SZ)
+		d->stripe_size = roundup(BCH_MIN_STRIPE_SZ, d->stripe_size);
 
 	n = DIV_ROUND_UP_ULL(sectors, d->stripe_size);
 	if (!n || n > max_stripes) {
-- 
2.35.3




Thread overview: 10+ messages
2023-09-22 13:26 Unusual value of optimal_io_size prevents bcache initialization Andrea Tomassetti
2023-09-22 14:22 ` Coly Li
2023-09-23 14:29   ` Andrea Tomassetti
2023-09-24 14:06     ` Coly Li
2023-09-25 20:41       ` Eric Wheeler
2023-09-26 15:37         ` Coly Li
2023-09-26 15:28       ` Coly Li [this message]
2023-09-26 20:53         ` Eric Wheeler
2023-09-27 12:37           ` [EXTERNAL] " Andrea Tomassetti
2023-09-27 12:52           ` Coly Li
