From: Killian De Volder <killian.de.volder@megasoft.be>
To: linux-bcache@vger.kernel.org
Subject: Re: block and bucket sizes
Date: Thu, 13 Aug 2015 13:32:13 +0200
Message-ID: <55CC803D.9080700@megasoft.be>
In-Reply-To: <55CC7023.1010806@buttersideup.com>
Well, the block-size is the physical block size your bcache device will have
(as in 4k-sector disks vs. 512-byte disks).
Please note that the physical sector size (as advertised by your bcache device to the OS and FS)
can have an impact on using it:
- The FS might refuse to use blocks smaller than the sector size.
- When using LVM, the sector size will be the largest of all the PVs used (and it can change while you are doing an lvextend or pvmove).
- Possibly others I don't know about.
I don't know if the block-size matters for performance, but usually the above points trump performance.
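If you want to check what sector sizes a device actually advertises, blockdev or sysfs
should tell you (the device name here is only an example):

  # logical sector size, then physical block size
  blockdev --getss --getpbsz /dev/bcache0
  # the same values via sysfs
  cat /sys/block/bcache0/queue/logical_block_size
  cat /sys/block/bcache0/queue/physical_block_size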
Bucket-size should match the erase size. How important is it? That depends on how much your SSD cares about matched erase sizes.
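As a rough (untested) sketch, assuming the --block and --bucket options of make-bcache
from bcache-tools and the 2 MB erase block Tim estimates below, formatting would look
something like this (/dev/sdc and /dev/md0 are placeholders):

  # cache device: block size = sector size, bucket size = erase block size
  make-bcache --block 4k --bucket 2M -C /dev/sdc
  # backing device: the block size must match the cache device's
  make-bcache --block 4k -B /dev/md0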
Killian De Volder
On 13-08-15 12:23, Tim Small wrote:
> Hi,
>
> I couldn't find much in the way of docs on the block and bucket sizes...
>
> I created a bcache device (md 3 disk RAID5 backing, Intel S3500 cache),
> and initially used the default bucket and block sizes.
>
> It looks like flash erase block sizes are now almost universally larger
> than the bcache default bucket size, so if this is important (and the
> man page says it is), then maybe this needs to be increased?
>
> After a load of googling, I think that for this SSD (which uses
> Intel/Micron 20nm MLC), the page size is probably 8 kB, and the erase
> block size is probably 256 x 8 kB = 2 MB
>
> http://www.anandtech.com/show/7147/micron-announces-16nm-128gb-mlc-nand-ssds-in-2014
>
> - if on the other hand it uses 128 Gbit parts, then this will be 16 kB
> page size, and 8 MB erase block.
>
>
> So, after playing around a bit, I take it that:
>
> The block size for the backing and cache devices must be the same (are
> there any implications, e.g. file system compatibility, with block sizes
> larger than 4 kB?).
>
> The default bucket size is smaller than the erase block size on this SSD
> (and probably most modern SSDs), and I was wondering if the default
> should be increased?
>
> I'm assuming most users are going to be getting these parameters "wrong",
> but I'm not sure how much impact this will have on performance and SSD
> endurance. Does this need some sort of wiki-type table with a lookup
> between SSD model number and page/block size (which make-bcache could use)?
>
> It'll be a bit of a pain to move everything off my 512 byte block size
> backing store, and then recreate it, so should I bother?
>
> Tim.