From: Joe Landman <joe.landman@gmail.com>
To: Peter Grandi <pg@lxraid2.for.sabi.co.UK>,
Linux RAID <linux-raid@vger.kernel.org>
Subject: Re: The chunk size paradox
Date: Thu, 02 Jan 2014 23:58:49 -0500
Message-ID: <52C64389.3080106@gmail.com>
In-Reply-To: <21189.62662.712801.352081@tree.ty.sabi.co.uk>
On 01/02/2014 06:22 PM, Peter Grandi wrote:
> # grep . /sys/block/sd?/queue/physical_block_size
> /sys/block/sda/queue/physical_block_size:512
> /sys/block/sdb/queue/physical_block_size:4096
> /sys/block/sdc/queue/physical_block_size:4096
> /sys/block/sdd/queue/physical_block_size:512
> /sys/block/sde/queue/physical_block_size:4096
> /sys/block/sdf/queue/physical_block_size:512
>
> They are all 2TB "consumer" drives, mostly recent ones. I am
> slightly surprised that half still have 512 physical sectors.
Gaak ... our HGST drives are showing this as well. For some reason I think I
had an old sdparm when I looked at this before. Trust that the kernel will
rarely lie to you, even if the tools that work with it do.
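For a more compact listing of the same sysfs data, lsblk can print the sector
sizes alongside the model string (just a sketch; the PHY-SEC/LOG-SEC/MODEL
columns assume a reasonably recent util-linux):

# lsblk -d -o NAME,PHY-SEC,LOG-SEC,MODEL

The full per-device dump straight from sysfs looks like this here: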
[root@unison ~]# grep . /sys/block/sd*/queue/physical_block_size
/sys/block/sdaa/queue/physical_block_size:4096
/sys/block/sdab/queue/physical_block_size:4096
/sys/block/sdac/queue/physical_block_size:512
/sys/block/sdad/queue/physical_block_size:4096
/sys/block/sdae/queue/physical_block_size:4096
/sys/block/sdaf/queue/physical_block_size:4096
/sys/block/sdag/queue/physical_block_size:4096
/sys/block/sdah/queue/physical_block_size:4096
/sys/block/sdai/queue/physical_block_size:4096
/sys/block/sdaj/queue/physical_block_size:4096
/sys/block/sdak/queue/physical_block_size:512
/sys/block/sdal/queue/physical_block_size:4096
/sys/block/sdam/queue/physical_block_size:4096
/sys/block/sdan/queue/physical_block_size:4096
/sys/block/sdao/queue/physical_block_size:4096
/sys/block/sdap/queue/physical_block_size:4096
/sys/block/sdaq/queue/physical_block_size:4096
/sys/block/sda/queue/physical_block_size:4096
/sys/block/sdar/queue/physical_block_size:4096
/sys/block/sdas/queue/physical_block_size:512
/sys/block/sdat/queue/physical_block_size:512
/sys/block/sdau/queue/physical_block_size:4096
/sys/block/sdav/queue/physical_block_size:4096
/sys/block/sdaw/queue/physical_block_size:4096
/sys/block/sdax/queue/physical_block_size:4096
/sys/block/sday/queue/physical_block_size:512
/sys/block/sdaz/queue/physical_block_size:512
/sys/block/sdba/queue/physical_block_size:4096
/sys/block/sdbb/queue/physical_block_size:4096
/sys/block/sdbc/queue/physical_block_size:4096
/sys/block/sdbd/queue/physical_block_size:4096
/sys/block/sdbe/queue/physical_block_size:4096
/sys/block/sdbf/queue/physical_block_size:4096
/sys/block/sdbg/queue/physical_block_size:512
/sys/block/sdbh/queue/physical_block_size:512
/sys/block/sdbi/queue/physical_block_size:4096
/sys/block/sdbj/queue/physical_block_size:4096
/sys/block/sdbk/queue/physical_block_size:512
/sys/block/sdbl/queue/physical_block_size:512
/sys/block/sdb/queue/physical_block_size:4096
/sys/block/sdc/queue/physical_block_size:4096
/sys/block/sdd/queue/physical_block_size:4096
/sys/block/sde/queue/physical_block_size:512
/sys/block/sdf/queue/physical_block_size:4096
/sys/block/sdg/queue/physical_block_size:4096
/sys/block/sdh/queue/physical_block_size:4096
/sys/block/sdi/queue/physical_block_size:4096
/sys/block/sdj/queue/physical_block_size:4096
/sys/block/sdk/queue/physical_block_size:4096
/sys/block/sdl/queue/physical_block_size:4096
/sys/block/sdm/queue/physical_block_size:512
/sys/block/sdn/queue/physical_block_size:4096
/sys/block/sdo/queue/physical_block_size:512
/sys/block/sdp/queue/physical_block_size:4096
/sys/block/sdq/queue/physical_block_size:4096
/sys/block/sdr/queue/physical_block_size:4096
/sys/block/sds/queue/physical_block_size:4096
/sys/block/sdt/queue/physical_block_size:4096
/sys/block/sdu/queue/physical_block_size:512
/sys/block/sdv/queue/physical_block_size:4096
/sys/block/sdw/queue/physical_block_size:4096
/sys/block/sdx/queue/physical_block_size:4096
/sys/block/sdy/queue/physical_block_size:4096
/sys/block/sdz/queue/physical_block_size:4096
This is one of the day job's large Ceph boxen. The 4k units are HGST
4TB enterprise drives, and the 512B units are SSDs.
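To double-check which models sit behind each reported size, a quick tally
straight out of sysfs works (a sketch only; /sys/block/*/device/model may be
absent for some device types, hence the fallback):

for dev in /sys/block/sd*; do
    # pair the reported physical sector size with the model string
    printf '%s %s\n' "$(cat "$dev/queue/physical_block_size")" \
        "$(cat "$dev/device/model" 2>/dev/null || echo unknown)"
done | sort | uniq -c | sort -rn

That prints a count per (sector size, model) pair, which should make it easy
to confirm at a glance that the 4096-byte units are the HGST drives and the
512-byte units are the SSDs.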