* Re: dmesg flooded with "Very big device. Trying to use READ CAPACITY(16)" with 8TB HDDs
       [not found] <CAJVZm6etk=bL0LY3FZXkm5Wun64F4w6HMxdLhKRD-v+mEGm08w@mail.gmail.com>
@ 2018-03-02 11:37 ` Menion
  2018-03-02 15:18   ` David Sterba
  0 siblings, 1 reply; 6+ messages in thread
From: Menion @ 2018-03-02 11:37 UTC (permalink / raw)
  To: linux-btrfs

Is it really not a problem? I mean, for some reason BTRFS is continuously
reading the HDD capacity of the drives in an array, and that does not seem
correct.
Bye

2018-02-26 11:07 GMT+01:00 Menion <menion@gmail.com>:
> Hi all
> I have recently started to operate an array of 5x8TB HDDs (WD RED) in RAID5 mode.
> The array seems to work OK, but over time dmesg is flooded with this log:
>
> [ 338.674673] sd 0:0:0:0: [sda] Very big device. Trying to use READ CAPACITY(16).
> [ 338.767184] sd 0:0:0:1: [sdb] Very big device. Trying to use READ CAPACITY(16).
> [ 338.989477] sd 0:0:0:3: [sdd] Very big device. Trying to use READ CAPACITY(16).
> [ 339.301194] sd 0:0:0:4: [sde] Very big device. Trying to use READ CAPACITY(16).
> [ 339.506579] sd 0:0:0:2: [sdc] Very big device. Trying to use READ CAPACITY(16).
> [ 649.393340] sd 0:0:0:0: [sda] Very big device. Trying to use READ CAPACITY(16).
> [ 650.129849] sd 0:0:0:1: [sdb] Very big device. Trying to use READ CAPACITY(16).
> [ 650.379622] sd 0:0:0:3: [sdd] Very big device. Trying to use READ CAPACITY(16).
> [ 650.524828] sd 0:0:0:4: [sde] Very big device. Trying to use READ CAPACITY(16).
> [ 650.721615] sd 0:0:0:2: [sdc] Very big device. Trying to use READ CAPACITY(16).
> [ 959.544384] sd 0:0:0:0: [sda] Very big device. Trying to use READ CAPACITY(16).
> [ 959.627015] sd 0:0:0:1: [sdb] Very big device. Trying to use READ CAPACITY(16).
> [ 959.790280] sd 0:0:0:3: [sdd] Very big device. Trying to use READ CAPACITY(16).
> [ 959.901179] sd 0:0:0:4: [sde] Very big device. Trying to use READ CAPACITY(16).
> [ 960.048734] sd 0:0:0:2: [sdc] Very big device. Trying to use READ CAPACITY(16).
>
> sda, sdb, sdc, sdd and sde are, as you can imagine, the HDDs in the array.
>
> Other info (note: there is also another BTRFS array of 3 small devices
> that never prints this log, and my root filesystem is BTRFS as well):
>
> menion@Menionubuntu:/etc$ uname -a
> Linux Menionubuntu 4.15.5-041505-generic #201802221031 SMP Thu Feb 22 15:32:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
> menion@Menionubuntu:/etc$ btrfs --version
> btrfs-progs v4.15.1
> menion@Menionubuntu:/etc$ sudo btrfs fi show
> [sudo] password for menion:
> Label: none  uuid: 6db4baf7-fda8-41ac-a6ad-1ca7b083430f
>         Total devices 1 FS bytes used 9.02GiB
>         devid    1 size 27.07GiB used 11.02GiB path /dev/mmcblk0p3
>
> Label: none  uuid: 931d40c6-7cd7-46f3-a4bf-61f3a53844bc
>         Total devices 5 FS bytes used 5.47TiB
>         devid    1 size 7.28TiB used 1.37TiB path /dev/sda
>         devid    2 size 7.28TiB used 1.37TiB path /dev/sdb
>         devid    3 size 7.28TiB used 1.37TiB path /dev/sdc
>         devid    4 size 7.28TiB used 1.37TiB path /dev/sdd
>         devid    5 size 7.28TiB used 1.37TiB path /dev/sde
>
> Label: none  uuid: ba1e0d88-2e26-499d-8fe3-458b9c53349a
>         Total devices 3 FS bytes used 534.50GiB
>         devid    1 size 232.89GiB used 102.03GiB path /dev/sdh
>         devid    2 size 232.89GiB used 102.00GiB path /dev/sdi
>         devid    3 size 465.76GiB used 335.03GiB path /dev/sdj
>
> menion@Menionubuntu:/etc$ sudo btrfs fi df /media/storage/das1
> Data, RAID5: total=5.49TiB, used=5.46TiB
> System, RAID5: total=12.75MiB, used=352.00KiB
> Metadata, RAID5: total=7.00GiB, used=6.11GiB
> GlobalReserve, single: total=512.00MiB, used=0.00B
> menion@Menionubuntu:/etc$
* Re: dmesg flooded with "Very big device. Trying to use READ CAPACITY(16)" with 8TB HDDs
  2018-03-02 11:37 ` dmesg flooded with "Very big device. Trying to use READ CAPACITY(16)" with 8TB HDDs Menion
@ 2018-03-02 15:18   ` David Sterba
  2018-03-02 16:19     ` Menion
  0 siblings, 1 reply; 6+ messages in thread
From: David Sterba @ 2018-03-02 15:18 UTC (permalink / raw)
  To: Menion; +Cc: linux-btrfs

On Fri, Mar 02, 2018 at 12:37:49PM +0100, Menion wrote:
> Is it really not a problem? I mean, for some reason BTRFS is continuously
> reading the HDD capacity of the drives in an array, and that does not seem
> correct.

The message comes from SCSI:
https://elixir.bootlin.com/linux/latest/source/drivers/scsi/sd.c#L2508

Reading the drive capacity can be totally opaque to the filesystem, e.g.
when the SCSI layer compares a requested block address against the device
size.

The size of a block device is obtained from the i_size member of the
inode representing the block device, so there's no direct read by btrfs.
You'd have better luck reporting that to the SCSI or block layer
mailing lists.
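[Illustration: a minimal sketch of the point above, assuming a ~4.15-era
kernel. It shows how a filesystem can learn a block device's size from the
cached inode size without ever issuing a SCSI command itself; the helper
name is made up for the example and is not actual btrfs code.]

/*
 * Illustrative sketch only -- not btrfs source code.  The block layer
 * maintains the device size in the inode that backs the block device,
 * so a filesystem can read it without sending READ CAPACITY itself.
 */
#include <linux/fs.h>
#include <linux/blkdev.h>

static loff_t example_bdev_size_bytes(struct block_device *bdev)
{
	/* i_size of the bdev inode, kept up to date by the block/SCSI layers */
	return i_size_read(bdev->bd_inode);
}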
* Re: dmesg flooded with "Very big device. Trying to use READ CAPACITY(16)" with 8TB HDDs
  2018-03-02 15:18   ` David Sterba
@ 2018-03-02 16:19     ` Menion
  2018-03-08 10:16       ` Menion
  0 siblings, 1 reply; 6+ messages in thread
From: Menion @ 2018-03-02 16:19 UTC (permalink / raw)
  To: dsterba, Menion, linux-btrfs

Thanks
My point was to understand whether this action is taken by BTRFS or
autonomously by the SCSI layer.
From your words it seems clear to me that this message should be logged at
KERN_DEBUG level instead of KERN_NOTICE.
Bye

2018-03-02 16:18 GMT+01:00 David Sterba <dsterba@suse.cz>:
> On Fri, Mar 02, 2018 at 12:37:49PM +0100, Menion wrote:
>> Is it really not a problem? I mean, for some reason BTRFS is continuously
>> reading the HDD capacity of the drives in an array, and that does not seem
>> correct.
>
> The message comes from SCSI:
> https://elixir.bootlin.com/linux/latest/source/drivers/scsi/sd.c#L2508
>
> Reading the drive capacity can be totally opaque to the filesystem, e.g.
> when the SCSI layer compares a requested block address against the device
> size.
>
> The size of a block device is obtained from the i_size member of the
> inode representing the block device, so there's no direct read by btrfs.
> You'd have better luck reporting that to the SCSI or block layer
> mailing lists.
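[Illustration: what the proposed severity change would amount to in
drivers/scsi/sd.c. The surrounding context varies between kernel versions;
this is a sketch of the intent, not a reviewed patch.]

	/* current behaviour (~v4.15): logged at NOTICE on every capacity read */
	sd_printk(KERN_NOTICE, sdkp,
		  "Very big device. Trying to use READ CAPACITY(16).\n");

	/* proposed: demote to DEBUG so periodic revalidation does not flood dmesg */
	sd_printk(KERN_DEBUG, sdkp,
		  "Very big device. Trying to use READ CAPACITY(16).\n");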
* Re: dmesg flooded with "Very big device. Trying to use READ CAPACITY(16)" with 8TB HDDs
  2018-03-02 16:19     ` Menion
@ 2018-03-08 10:16       ` Menion
  2018-03-08 11:18         ` Menion
  0 siblings, 1 reply; 6+ messages in thread
From: Menion @ 2018-03-08 10:16 UTC (permalink / raw)
  To: dsterba, Menion, linux-btrfs

Hi again
I had a discussion on linux-scsi about this topic.
My understanding is that it is true that reading the capacity is opaque to
the filesystem, but it is also true that the SCSI layer exposes two
specific read-capacity operations, READ CAPACITY(10) and READ CAPACITY(16),
and the caller is expected to select the proper one based on how the device
responds.
In the log I see that READ CAPACITY(10) is issued every 5 minutes and falls
back to READ CAPACITY(16), so whoever triggers it ends up calling
sd_read_capacity() in the SCSI layer rather than picking read_capacity_10()
or read_capacity_16() directly.
I am not saying that BTRFS is doing it for sure, but I have ruled out
smartd, so based on the 5-minute periodicity, can you think of anything in
the BTRFS internals that could be responsible for this?

2018-03-02 17:19 GMT+01:00 Menion <menion@gmail.com>:
> Thanks
> My point was to understand whether this action is taken by BTRFS or
> autonomously by the SCSI layer.
> From your words it seems clear to me that this message should be logged at
> KERN_DEBUG level instead of KERN_NOTICE.
> Bye
>
> 2018-03-02 16:18 GMT+01:00 David Sterba <dsterba@suse.cz>:
>> On Fri, Mar 02, 2018 at 12:37:49PM +0100, Menion wrote:
>>> Is it really not a problem? I mean, for some reason BTRFS is continuously
>>> reading the HDD capacity of the drives in an array, and that does not seem
>>> correct.
>>
>> The message comes from SCSI:
>> https://elixir.bootlin.com/linux/latest/source/drivers/scsi/sd.c#L2508
>>
>> Reading the drive capacity can be totally opaque to the filesystem, e.g.
>> when the SCSI layer compares a requested block address against the device
>> size.
>>
>> The size of a block device is obtained from the i_size member of the
>> inode representing the block device, so there's no direct read by btrfs.
>> You'd have better luck reporting that to the SCSI or block layer
>> mailing lists.
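[Illustration: a simplified paraphrase of the fallback described above, as
found in drivers/scsi/sd.c:sd_read_capacity() around v4.15. This is not the
verbatim kernel source; error handling and several branches are omitted.]

/*
 * Simplified paraphrase of sd_read_capacity() -- not verbatim kernel code.
 */
static void sd_read_capacity_sketch(struct scsi_disk *sdkp, unsigned char *buffer)
{
	struct scsi_device *sdp = sdkp->device;
	int sector_size;

	if (sd_try_rc16_first(sdp)) {
		/* Devices known to need it get READ CAPACITY(16) directly. */
		sector_size = read_capacity_16(sdkp, sdp, buffer);
	} else {
		sector_size = read_capacity_10(sdkp, sdp, buffer);
		if (sdkp->capacity > 0xffffffffULL) {
			/* >2TB at 512-byte sectors: RC(10) cannot express it. */
			sd_printk(KERN_NOTICE, sdkp, "Very big device. "
				  "Trying to use READ CAPACITY(16).\n");
			sector_size = read_capacity_16(sdkp, sdp, buffer);
		}
	}
	/* ... remainder (logical block size reporting, etc.) omitted ... */
}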
* Re: dmesg flooded with "Very big device. Trying to use READ CAPACITY(16)" with 8TB HDDs
  2018-03-08 10:16       ` Menion
@ 2018-03-08 11:18         ` Menion
  2018-03-09 14:54           ` David Sterba
  0 siblings, 1 reply; 6+ messages in thread
From: Menion @ 2018-03-08 11:18 UTC (permalink / raw)
  To: dsterba, Menion, linux-btrfs

Actually this path can only be taken on a few occasions:

1) device probe, only when the device is plugged in or detected for the first time
2) the revalidate_disk fop of the block device

Is it possible that BTRFS calls revalidate_disk every 5 minutes?

2018-03-08 11:16 GMT+01:00 Menion <menion@gmail.com>:
> Hi again
> I had a discussion on linux-scsi about this topic.
> My understanding is that it is true that reading the capacity is opaque to
> the filesystem, but it is also true that the SCSI layer exposes two
> specific read-capacity operations, READ CAPACITY(10) and READ CAPACITY(16),
> and the caller is expected to select the proper one based on how the device
> responds.
> In the log I see that READ CAPACITY(10) is issued every 5 minutes and falls
> back to READ CAPACITY(16), so whoever triggers it ends up calling
> sd_read_capacity() in the SCSI layer rather than picking read_capacity_10()
> or read_capacity_16() directly.
> I am not saying that BTRFS is doing it for sure, but I have ruled out
> smartd, so based on the 5-minute periodicity, can you think of anything in
> the BTRFS internals that could be responsible for this?
>
> 2018-03-02 17:19 GMT+01:00 Menion <menion@gmail.com>:
>> Thanks
>> My point was to understand whether this action is taken by BTRFS or
>> autonomously by the SCSI layer.
>> From your words it seems clear to me that this message should be logged at
>> KERN_DEBUG level instead of KERN_NOTICE.
>> Bye
>>
>> 2018-03-02 16:18 GMT+01:00 David Sterba <dsterba@suse.cz>:
>>> On Fri, Mar 02, 2018 at 12:37:49PM +0100, Menion wrote:
>>>> Is it really not a problem? I mean, for some reason BTRFS is continuously
>>>> reading the HDD capacity of the drives in an array, and that does not seem
>>>> correct.
>>>
>>> The message comes from SCSI:
>>> https://elixir.bootlin.com/linux/latest/source/drivers/scsi/sd.c#L2508
>>>
>>> Reading the drive capacity can be totally opaque to the filesystem, e.g.
>>> when the SCSI layer compares a requested block address against the device
>>> size.
>>>
>>> The size of a block device is obtained from the i_size member of the
>>> inode representing the block device, so there's no direct read by btrfs.
>>> You'd have better luck reporting that to the SCSI or block layer
>>> mailing lists.
* Re: dmesg flooded with "Very big device. Trying to use READ CAPACITY(16)" with 8TB HDDs
  2018-03-08 11:18         ` Menion
@ 2018-03-09 14:54           ` David Sterba
  0 siblings, 0 replies; 6+ messages in thread
From: David Sterba @ 2018-03-09 14:54 UTC (permalink / raw)
  To: Menion; +Cc: dsterba, linux-btrfs

On Thu, Mar 08, 2018 at 12:18:04PM +0100, Menion wrote:
> Actually this path can only be taken on a few occasions:
>
> 1) device probe, only when the device is plugged in or detected for the first time
> 2) the revalidate_disk fop of the block device
>
> Is it possible that BTRFS calls revalidate_disk every 5 minutes?

An idea: a udev or blkid cache refresh triggers 'btrfs dev scan', which
calls blkdev_get_by_path, and that could in turn trigger the revalidation
and whatnot.

Alternatively, you can patch the code and add a WARN_ON right after the
message; the stack trace will tell for sure where it gets triggered from.
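[Illustration: what the suggested debugging patch could look like, assuming
the message is the sd_printk() call in drivers/scsi/sd.c discussed earlier
in the thread. A temporary local change only, not meant for merging.]

		sd_printk(KERN_NOTICE, sdkp, "Very big device. "
			  "Trying to use READ CAPACITY(16).\n");
		/*
		 * Temporary debugging aid: dump a stack trace each time this
		 * message is printed, so dmesg shows which call chain (probe,
		 * revalidation, ioctl, ...) requested the capacity read.
		 */
		WARN_ON(1);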
end of thread, other threads: [~2018-03-09 14:56 UTC | newest]
Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <CAJVZm6etk=bL0LY3FZXkm5Wun64F4w6HMxdLhKRD-v+mEGm08w@mail.gmail.com>
2018-03-02 11:37 ` dmesg flooded with "Very big device. Trying to use READ CAPACITY(16)" with 8TB HDDs Menion
2018-03-02 15:18 ` David Sterba
2018-03-02 16:19 ` Menion
2018-03-08 10:16 ` Menion
2018-03-08 11:18 ` Menion
2018-03-09 14:54 ` David Sterba