From mboxrd@z Thu Jan 1 00:00:00 1970
From: Wols Lists
Subject: Re: Two raid5 arrays are inactive and have changed UUIDs
Date: Wed, 15 Jan 2020 23:44:38 +0000
Message-ID: <5E1FA3E6.2070303@youngman.org.uk>
References: <959ca414-0c97-2e8d-7715-a7cb75790fcd@youngman.org.uk>
 <5E17D999.5010309@youngman.org.uk>
 <5E1DDCFC.1080105@youngman.org.uk>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Return-path: 
In-Reply-To: 
Sender: linux-raid-owner@vger.kernel.org
To: William Morgan
Cc: linux-raid@vger.kernel.org
List-Id: linux-raid.ids

On 15/01/20 22:12, William Morgan wrote:
> All 4 drives have the same event count and all four show the same
> state of AAAA, but the first and last drive still show bad blocks
> present. Is that because ddrescue copied literally everything from the
> original drives, including the list of bad blocks? How should I go
> about clearing those bad blocks? Is there something more I should do
> to verify the integrity of the data?

Read the wiki - the section on badblocks will be - enlightening - shall
we say.

https://raid.wiki.kernel.org/index.php/The_Badblocks_controversy

Yes, the bad blocks are implemented within md, so they got copied
across along with everything else. So your array should be perfectly
fine despite the badblocks allegedly being there ...

Cheers,
Wol
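
[Not part of the original mail: a sketch of how the inspect/clear/verify
steps discussed above might look with mdadm. The device names
(/dev/md0, /dev/sd[abcd]1) are placeholders, not taken from this
thread - substitute your own array and member partitions, and read the
wiki page first.]

```shell
# Show the per-device bad-block log that ddrescue copied over
# along with the metadata (run for each member):
mdadm --examine-badblocks /dev/sda1

# With the array stopped, reassemble while discarding the
# bad-block list even though it has entries (needs a reasonably
# recent mdadm; --update=no-bbl only works on an empty list):
mdadm --stop /dev/md0
mdadm --assemble /dev/md0 --update=force-no-bbl /dev/sd[abcd]1

# Then verify data integrity with a scrub and check the result:
echo check > /sys/block/md0/md/sync_action
cat /sys/block/md0/md/mismatch_cnt
```

The scrub reads every stripe and compares data against parity;
a non-zero mismatch_cnt is what would indicate a real problem,
independent of what the (copied) bad-block list claims.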