From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 4 Mar 2015 16:09:04 -0500
From: Dave Jones
To: Linux Kernel
Cc: Neil Brown
Subject: RAID0 & diskstats.
Message-ID: <20150304210904.GA26981@codemonkey.org.uk>

Hi Neil,

According to Documentation/iostats.txt, the 9th column of /proc/diskstats
(and its modern replacement in sysfs) should go to 0 as IO completes.

I assembled a RAID0 stripe using two SSDs, and saw this:

# mdadm --assemble /dev/md0
mdadm: /dev/md0 has been started with 2 drives.
# cat /sys/block/md0/stat
167 0 5656 0 5 0 4096 0 172 3408 582825
# cat /sys/block/md0/stat
167 0 5656 0 5 0 4096 0 172 231469 39809317

The 10th & 11th fields constantly increase, while field 9 remains non-zero.
If I mount and umount a filesystem on that volume, that works as expected,
but the 9th 'IOs in flight' field continues to rise and never decreases,
even though the IO has obviously completed.
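(For reference, a minimal sketch of how the 11 counters in /sys/block/<dev>/stat
map to the fields described in Documentation/iostats.txt; the field names here
follow that document, and the sample line is the first one above:)

```python
# Parse one line of /sys/block/<dev>/stat into named counters, per the
# 11-field layout documented in Documentation/iostats.txt.
FIELDS = [
    "read_ios", "read_merges", "read_sectors", "read_ticks",
    "write_ios", "write_merges", "write_sectors", "write_ticks",
    "in_flight", "io_ticks", "time_in_queue",
]

def parse_stat(line):
    """Map the whitespace-separated counters to their field names."""
    values = [int(v) for v in line.split()]
    return dict(zip(FIELDS, values))

stats = parse_stat("167 0 5656 0 5 0 4096 0 172 3408 582825")
# Field 9 ("in_flight") should return to 0 once all IO completes,
# but here it stays at 172.
print(stats["in_flight"])
```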
# umount /mnt/ssd
# cat /sys/block/md0/stat
167 0 5656 0 9 0 4225 0 176 571384 98278615

The underlying disks have their respective stats entries behaving as
expected; it only seems to affect the upper md layer. Some missing
accounting somewhere in md?

(Only tested on 4.0-rc2 so far, and only on RAID0.)

	Dave