From: Steven Davies
To: kreijack@inwind.it
Cc: Zygo Blaxell, John Petrini, John Petrini, linux-btrfs@vger.kernel.org
Subject: Re: Filesystem Went Read Only During Raid-10 to Raid-6 Data Conversion
Date: Thu, 23 Jul 2020 09:57:50 +0100
Message-ID: <20a7c0211b2d9336b69d48fa5c3d0c5c@steev.me.uk>

On 2020-07-21 21:48, Goffredo Baroncelli wrote:
> On 7/21/20 12:15 PM, Steven Davies wrote:
>> On 2020-07-20 18:57, Goffredo Baroncelli wrote:
>>> On 7/18/20 12:36 PM, Steven Davies wrote:
>>>>>> /dev/sdf, ID: 12
>>>>>>     Device size:             9.10TiB
>>>>>>     Device slack:              0.00B
>>>>>>     Data,RAID10:           784.31GiB
>>>>>>     Data,RAID10:             4.01TiB
>>>>>>     Data,RAID10:             3.34TiB
>>>>>>     Data,RAID6:            458.56GiB
>>>>>>     Data,RAID6:            144.07GiB
>>>>>>     Data,RAID6:            293.03GiB
>>>>>>     Metadata,RAID10:         4.47GiB
>>>>>>     Metadata,RAID10:       352.00MiB
>>>>>>     Metadata,RAID10:         6.00GiB
>>>>>>     Metadata,RAID1C3:        5.00GiB
>>>>>>     System,RAID1C3:         32.00MiB
>>>>>>     Unallocated:            85.79GiB
>>>>>
>>> [...]
>>>>
>>>> RFE: improve 'dev usage' to show these details.
>>>>
>>>> As a user I'd look at this output and assume a bug in btrfs-tools
>>>> because of the repeated conflicting information.
>>>
>>> What would be the expected output?
>>> What about the example below?
>>>
>>>  /dev/sdf, ID: 12
>>>      Device size:             9.10TiB
>>>      Device slack:              0.00B
>>>      Data,RAID10:           784.31GiB
>>>      Data,RAID10:             4.01TiB
>>>      Data,RAID10:             3.34TiB
>>>      Data,RAID6[3]:         458.56GiB
>>>      Data,RAID6[5]:         144.07GiB
>>>      Data,RAID6[7]:         293.03GiB
>>>      Metadata,RAID10:         4.47GiB
>>>      Metadata,RAID10:       352.00MiB
>>>      Metadata,RAID10:         6.00GiB
>>>      Metadata,RAID1C3:        5.00GiB
>>>      System,RAID1C3:         32.00MiB
>>>      Unallocated:            85.79GiB
>>
>> That works for me for RAID6. There are three lines for RAID10 too -
>> what's the difference between these?
>
> The difference is the number of disks involved. In raid10, the first
> 64K is on the first disk, the 2nd 64K is on the 2nd disk and so on
> until the last disk. Then the (n+1)th 64K is again on the first
> disk... and so on (ok, I missed the RAID1 part, but I think this
> gives the idea).
>
> So the chunk layout depends on the number of disks involved, even if
> the difference is not so dramatic.

Is this information that the user/sysadmin needs to be aware of in a
similar manner to the original problem that started this thread? If not,
I'd be tempted to sum all the RAID10 chunks into one line (each for data
and metadata).
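
To make sure I'm picturing the layout you describe, here's a minimal
standalone sketch (a made-up helper, not actual btrfs code) of that
round-robin striping: 64KiB elements are handed to the chunk's devices
in turn, so the same logical offset lands on a different device
depending on how many devices the chunk spans. RAID10's mirroring and
RAID6's parity rotation are ignored here, as in your description.

/* Sketch only: maps a logical offset inside a chunk to a device,
 * assuming plain round-robin striping of 64KiB elements. */
#include <stdio.h>
#include <stdint.h>

#define STRIPE_LEN (64 * 1024ULL)	/* 64KiB stripe element */

static void map_offset(uint64_t offset, int num_disks,
		       int *disk, uint64_t *row)
{
	uint64_t element = offset / STRIPE_LEN;	/* which 64KiB element */

	*disk = element % num_disks;		/* round-robin device */
	*row = element / num_disks;		/* stripe row on that device */
}

int main(void)
{
	/* The same offset lands on a different device depending on how
	 * many disks the chunk spans - which is why chunks striped over
	 * 3, 5 or 7 devices end up reported separately. */
	for (int num_disks = 3; num_disks <= 7; num_disks += 2) {
		int disk;
		uint64_t row;

		map_offset(5 * STRIPE_LEN, num_disks, &disk, &row);
		printf("%d disks: offset 5*64K -> disk %d, row %llu\n",
		       num_disks, disk, (unsigned long long)row);
	}
	return 0;
}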
>>>     Data,RAID6:        123.45GiB
>>>         /dev/sda     12.34GiB
>>>         /dev/sdb     12.34GiB
>>>         /dev/sdc     12.34GiB
>>>     Data,RAID6:        123.45GiB
>>>         /dev/sdb     12.34GiB
>>>         /dev/sdc     12.34GiB
>>>         /dev/sdd     12.34GiB
>>>         /dev/sde     12.34GiB
>>>         /dev/sdf     12.34GiB
>>
>> Here there would need to be something which shows what the difference
>> in the RAID6 blocks is - if it's the chunk size then I'd do the same
>> as the above example with e.g. Data,RAID6[3].
>
> We could add a '[n]' for the profile where it matters, e.g. raid0,
> raid10, raid5, raid6.
> What do you think?

So like this? That would make sense to me, as long as the meaning of
[n] is explained in --help or the manpage.

Data,RAID6[3]:     123.45GiB
    /dev/sda        12.34GiB
    /dev/sdb        12.34GiB
    /dev/sdc        12.34GiB
Data,RAID6[5]:     123.45GiB
    /dev/sdb        12.34GiB
    /dev/sdc        12.34GiB
    /dev/sdd        12.34GiB
    /dev/sde        12.34GiB
    /dev/sdf        12.34GiB

-- 
Steven Davies
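
P.S. Purely as a sketch of how the proposed grouping could be rendered
(made-up structures, not the real btrfs-progs code): chunk entries are
bucketed by type, profile and stripe count, and the '[n]' suffix is only
printed for the striped profiles where it matters, using a few of the
figures from the 'dev usage' output quoted above.

#include <stdio.h>
#include <string.h>

struct chunk_line {
	const char *type;	/* "Data", "Metadata", "System" */
	const char *profile;	/* "RAID6", "RAID10", ...       */
	int num_stripes;	/* devices the chunk spans      */
	double size_gib;	/* bytes on this device, in GiB */
};

/* Profiles whose layout depends on the number of devices striped over. */
static int profile_is_striped(const char *profile)
{
	return !strcmp(profile, "RAID0") || !strcmp(profile, "RAID10") ||
	       !strcmp(profile, "RAID5") || !strcmp(profile, "RAID6");
}

static void print_line(const struct chunk_line *c)
{
	char label[64];

	if (profile_is_striped(c->profile))
		snprintf(label, sizeof(label), "%s,%s[%d]:",
			 c->type, c->profile, c->num_stripes);
	else
		snprintf(label, sizeof(label), "%s,%s:", c->type, c->profile);
	printf("    %-20s %12.2fGiB\n", label, c->size_gib);
}

int main(void)
{
	/* Example figures taken from the output quoted above. */
	const struct chunk_line lines[] = {
		{ "Data",     "RAID6",   3, 458.56 },
		{ "Data",     "RAID6",   5, 144.07 },
		{ "Data",     "RAID6",   7, 293.03 },
		{ "Metadata", "RAID1C3", 3,   5.00 },
	};
	unsigned int i;

	for (i = 0; i < sizeof(lines) / sizeof(lines[0]); i++)
		print_line(&lines[i]);
	return 0;
}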