* Why is SECTOR_SIZE = 512 inside kernel?

From: Navin P @ 2015-08-17 12:53 UTC
To: linux-kernel

Hi,

Why is SECTOR_SIZE 512?

http://lxr.free-electrons.com/source/include/linux/ide.h#L118
http://lxr.free-electrons.com/source/include/linux/device-mapper.h#L548

  548 #define SECTOR_SHIFT 9

I was looking at disks via hw_sector_size. Most of the ones I looked at had 512 bytes, except for one virtual disk which had 4096. The Advanced Format (AF) disk has a logical sector size of 512 and an hw_sector_size of 512, so my calculation from /proc/diskstats is fine for it.

But for the disk with 4096-byte logical and 4096-byte physical sectors, I multiply hw_sector_size by the sectors read and written, and that is wrong, since the kernel always defines sectors in terms of 512 bytes.

Is it going to change, or is it cast in stone?

Here is an example. Again, this is a VM virtual disk; vdc is the subject of interest.

[root@hphuge-049 ~]# fdisk -l /dev/vdb /dev/vdc

Disk /dev/vdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/vdc: 17.2 GB, 17179869184 bytes, 4194304 sectors
Units = sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

[root@hphuge-049 ~]# cat /sys/block/vdc/queue/hw_sector_size && cat /sys/block/vdc/queue/logical_block_size && cat /sys/block/vdc/queue/physical_block_size
4096
4096
4096
[root@hphuge-049 ~]# cat /sys/block/vdb/queue/hw_sector_size && cat /sys/block/vdb/queue/logical_block_size && cat /sys/block/vdb/queue/physical_block_size
512
512
4096
[root@hphuge-049 ~]#

Regards,
Navin
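[Editorial note: the calculation Navin describes can be sketched as follows. Sector counts in /proc/diskstats are always in 512-byte units regardless of the device's hw_sector_size, so the multiplier is a constant 512. The sample line below is invented for illustration; real values come from /proc/diskstats.]

```python
# /proc/diskstats reports sector counts in fixed 512-byte units,
# regardless of the device's logical or physical block size.
SECTOR_SIZE = 512  # the kernel's fixed unit, NOT hw_sector_size

def diskstats_bytes(line):
    """Return (bytes_read, bytes_written) for one /proc/diskstats line."""
    f = line.split()
    # Field layout (Documentation/iostats.txt):
    # f[2] = device name, f[5] = sectors read, f[9] = sectors written
    return int(f[5]) * SECTOR_SIZE, int(f[9]) * SECTOR_SIZE

# Invented sample line for illustration only.
sample = "253 32 vdc 120 0 960 40 30 0 240 15 0 50 55"
rd, wr = diskstats_bytes(sample)
print(rd, wr)  # 960 * 512 and 240 * 512 bytes
```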
* Re: Why is SECTOR_SIZE = 512 inside kernel?

From: Theodore Ts'o @ 2015-08-17 13:54 UTC
To: Navin P; Cc: linux-kernel

On Mon, Aug 17, 2015 at 06:23:04PM +0530, Navin P wrote:
> Why is SECTOR_SIZE 512?
> ...
> But the one with 4096 logical and 4096 physical i multiply
> hw_sector_size with the sectors read and written but that is wrong
> since the kernel always defines sectors in terms of 512.
>
> Is it going to change or is it cast in stone ?

It's cast in stone. There are too many places all over the kernel, especially in a huge number of file systems, which assume that the sector size is 512 bytes. So above the block layer, the sector size is always going to be 512.

This is actually *better* for user space programs using /proc/diskstats, since they don't need to know whether a particular piece of underlying hardware is using 512-byte, 4k (or, if the HDD manufacturers' fantasies come true, 32k or 64k) sector sizes.

For similar reasons, st_blocks in struct stat is always in units of 512 bytes. We don't want to force userspace to have to figure out whether the underlying file system is using 1k, 2k, or 4k blocks. For that reason the unit of st_blocks is always going to be 512 bytes, and this is hard-coded in the POSIX standard.

- Ted
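[Editorial note: the st_blocks point can be seen directly from Python. A minimal sketch; st_blocks is in 512-byte units per POSIX, independent of the filesystem block size or device sector size.]

```python
import os
import tempfile

# Create a small file and force it out to disk so blocks are allocated.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 8192)
    f.flush()
    os.fsync(f.fileno())
    path = f.name

st = os.stat(path)
# st_blocks is always counted in 512-byte units (POSIX), even if the
# filesystem uses 4k blocks or the disk has 4k sectors.
allocated = st.st_blocks * 512
print(st.st_size, allocated)
os.unlink(path)
```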
* Re: Why is SECTOR_SIZE = 512 inside kernel?

From: Brice Goglin @ 2015-08-18 21:06 UTC
To: Theodore Ts'o; Cc: LKML

On 17/08/2015 15:54, Theodore Ts'o wrote:
>
> It's cast in stone. There are too many places all over the kernel,
> especially in a huge number of file systems, which assume that the
> sector size is 512 bytes. So above the block layer, the sector size
> is always going to be 512.

Could this be a problem when using pmem/nvdimm devices with byte granularity (no BTT layer)? hw_sector_size reports 512 in this case, while one might expect 1 instead. Or does it just not matter, because BTT is the only way to use these devices with filesystems, like other block devices?

thanks
Brice
* Re: Why is SECTOR_SIZE = 512 inside kernel?

From: tytso @ 2015-08-18 21:38 UTC
To: Brice Goglin; Cc: LKML

On Tue, Aug 18, 2015 at 11:06:47PM +0200, Brice Goglin wrote:
> Could this be a problem when using pmem/nvdimm devices with
> byte-granularity (no BTT layer)? (hw_sector_size reports
> 512 in this case while we could expect 1 instead).
> Or it just doesn't matter because BTT is the only way to use
> these devices for filesystems like other block devices?

Right now there are very few applications that understand how to use pmem/nvdimm devices as memory. And even where they do, they will need some kind of file system to provide resource isolation in case more than one application or more than one user wants to use the pmem/nvdimm. In that case, they will probably mmap a file and then access the nvdimm directly, so the applications won't be using the block device layer at all, and they won't care about the advertised hw_sector_size.

The challenge with pmem-aware applications is that they need to be able to correctly update their in-memory data structures in such a way that they can correctly recover after an arbitrary power failure. That means they have to use atomic updates and/or copy-on-write update schemes, and I suspect most application writers just aren't going to be able to get this right. So many legacy applications will still read in the file "foo", make changes in local memory, write the new contents to the file "foo.new", and then rename "foo.new" on top of "foo".
These applications will effectively use nvdimm as super-fast flash, and so they will use file systems as file systems. And since file systems today all use block sizes which are multiples of the traditional 512-byte sector size, again, changing something as fundamental as the kernel's internal sector size doesn't have any real value, at least not as far as pmem/nvdimm support is concerned.

- Ted
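[Editorial note: the write-new-then-rename pattern Ted describes can be sketched as follows. A minimal illustration; the filename "foo" follows his example, and the directory fsync needed to make the rename itself durable is noted but omitted.]

```python
import os

def atomic_update(path, data):
    """Replace the contents of path so that readers see either the old
    contents or the new contents, never a partially written file."""
    tmp = path + ".new"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # make the new data durable before the rename
    os.replace(tmp, path)     # atomic rename on POSIX filesystems
    # An fsync of the containing directory would make the rename itself
    # durable across a crash; omitted here for brevity.

# Usage: "foo" goes from old contents to new in one atomic step.
with open("foo", "wb") as f:
    f.write(b"old contents")
atomic_update("foo", b"new contents")
```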