From: Chris Murphy
Date: Mon, 27 Jun 2016 11:46:50 -0600
Subject: Re: Strange behavior when replacing device on BTRFS RAID 5 array.
To: Chris Murphy
Cc: Nick Austin, Btrfs BTRFS

On Mon, Jun 27, 2016 at 11:29 AM, Chris Murphy wrote:
>
> Next is to decide to what degree you want to salvage this volume and
> keep using Btrfs raid56 despite the risks

Forgot to complete this thought.

So if you get a backup and decide you want to fix it, I would first see
whether you can cancel the replace with "btrfs replace cancel" and
confirm that it actually stops.

Now comes the risky part: either try "btrfs device add" followed by
"btrfs device remove" while the flaky drive is still attached, or pull
the bad drive, reboot, see if the volume will mount with -o degraded,
and then do the add and remove (in which case the remove becomes
"btrfs device remove missing").

With the first option, the risk is that Btrfs keeps using the flaky bad
drive. With the second, the risk is whether a degraded mount will work
at all, and whether any other drive in the array hits a problem while
degraded (such as an unrecoverable read error on a single sector).

--
Chris Murphy
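
P.S. In case it helps, here is a rough sketch of what those two paths
could look like on the command line. The device names and mount point
(/dev/sdX for the flaky drive, /dev/sdY for a healthy surviving member,
/dev/sdZ for the new drive, /mnt) are placeholders, not taken from your
actual layout:

  # stop the in-progress replace and confirm it really stopped
  btrfs replace cancel /mnt
  btrfs replace status /mnt

  # option 1: volume still mounted normally, flaky drive still attached
  btrfs device add /dev/sdZ /mnt
  btrfs device remove /dev/sdX /mnt

  # option 2: bad drive pulled, after a reboot
  mount -o degraded /dev/sdY /mnt
  btrfs device add /dev/sdZ /mnt
  btrfs device remove missing /mnt

Either way the remove step has to relocate or reconstruct data onto the
remaining devices, so expect it to run for a while; "btrfs fi show"
afterwards should confirm the missing device is gone.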