public inbox for linux-raid@vger.kernel.org
 help / color / mirror / Atom feed
* RAID 1 | Changing HDs
@ 2025-09-03 11:55 Stefanie Leisestreichler (Febas)
  2025-09-03 12:19 ` Reindl Harald
                   ` (3 more replies)
  0 siblings, 4 replies; 19+ messages in thread
From: Stefanie Leisestreichler (Febas) @ 2025-09-03 11:55 UTC (permalink / raw)
  To: linux-raid

Hi.
I have the system layout shown below.

To avoid data loss, I want to change HDs which have about 46508 hours of 
uptime.

I thought, instead of degrading, formatting, rebuilding and so on, I could
- shut down the computer
- take e.g. /dev/sda and do
- dd bs=98304 conv=sync,noerror if=/dev/sda of=/dev/sdX (X standing for 
the device name of the new disk)

Is it safe to do it this way, presuming the array is in AA state?

Thanks,
Steffi

/dev/sda1:
           Magic : a92b4efc
         Version : 1.2
     Feature Map : 0x0
      Array UUID : 68c0c9ad:82ede879:2110f427:9f31c140
            Name : speernix15:0  (local to host speernix15)
   Creation Time : Sun Nov 30 19:15:35 2014
      Raid Level : raid1
    Raid Devices : 2

  Avail Dev Size : 1953260976 (931.39 GiB 1000.07 GB)
      Array Size : 976629568 (931.39 GiB 1000.07 GB)
   Used Dev Size : 1953259136 (931.39 GiB 1000.07 GB)
     Data Offset : 262144 sectors
    Super Offset : 8 sectors
    Unused Space : before=261864 sectors, after=1840 sectors
           State : active
     Device UUID : 5871292c:7fcfbd82:b0a28f1b:df7774f9

     Update Time : Thu Aug 28 01:00:03 2025
   Bad Block Log : 512 entries available at offset 264 sectors
        Checksum : b198f5d1 - correct
          Events : 38185

    Device Role : Active device 0
    Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb1:
           Magic : a92b4efc
         Version : 1.2
     Feature Map : 0x0
      Array UUID : 68c0c9ad:82ede879:2110f427:9f31c140
            Name : speernix15:0  (local to host speernix15)
   Creation Time : Sun Nov 30 19:15:35 2014
      Raid Level : raid1
    Raid Devices : 2

  Avail Dev Size : 1953260976 (931.39 GiB 1000.07 GB)
      Array Size : 976629568 (931.39 GiB 1000.07 GB)
   Used Dev Size : 1953259136 (931.39 GiB 1000.07 GB)
     Data Offset : 262144 sectors
    Super Offset : 8 sectors
    Unused Space : before=261864 sectors, after=1840 sectors
           State : active
     Device UUID : 4bbfbe7a:457829a5:dd9d2e3c:15818bca

     Update Time : Thu Aug 28 01:00:03 2025
   Bad Block Log : 512 entries available at offset 264 sectors
        Checksum : 144ff0ef - correct
          Events : 38185



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: RAID 1 | Changing HDs
  2025-09-03 11:55 RAID 1 | Changing HDs Stefanie Leisestreichler (Febas)
@ 2025-09-03 12:19 ` Reindl Harald
  2025-09-03 13:20   ` Stefanie Leisestreichler (Febas)
  2025-09-03 12:35 ` Hannes Reinecke
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 19+ messages in thread
From: Reindl Harald @ 2025-09-03 12:19 UTC (permalink / raw)
  To: Stefanie Leisestreichler (Febas), linux-raid

makes no sense, especially in the case of RAID1

  * shut down the computer
  * replace one disk
  * boot
  * resync the RAID

the old disk is 100% identical and can be seen as a full backup

the whole purpose of RAID is that you can replace disks; frankly, it's 
designed to survive an exploding disk in full operation

also, you should know what to do when a disk dies - exactly the same 
steps, with the difference that your old replaced disk currently is a 
100% fallback even if you confuse source/target and destroy everything

--------------

in case it's a BIOS setup you can clone the whole partitioning with dd 
of the first 512 bytes (the MBR)

the script below is from a setup with 3 RAID1 arrays and does the whole 
stuff including installing GRUB2 on the new disk - for UEFI you need a 
few more steps because UUIDs must be unique

  * boot
  * system
  * data


[root@south:~]$ cat /scripts/raid-recovery.sh
#!/usr/bin/bash
# define source and target
GOOD_DISK="/dev/sda"
BAD_DISK="/dev/sdb"
# clone MBR
dd if=$GOOD_DISK of=$BAD_DISK bs=512 count=1
# force OS to read partition tables
partprobe $BAD_DISK
# start RAID recovery
mdadm /dev/md0 --add ${BAD_DISK}1
mdadm /dev/md1 --add ${BAD_DISK}3
mdadm /dev/md2 --add ${BAD_DISK}2
# print RAID status on screen
sleep 5
cat /proc/mdstat
# install bootloader on replacement disk
grub2-install "$BAD_DISK"

Am 03.09.25 um 13:55 schrieb Stefanie Leisestreichler (Febas):
> Hi.
> I have the system layout shown below.
> 
> To avoid data loss, I want to change HDs which have about 46508 hours of 
> uptime.
> 
> I thought, instead of degrading, formatting, rebuilding and so on, I could
> - shut down the computer
> - take e.g. /dev/sda and do
> - dd bs=98304 conv=sync,noerror if=/dev/sda of=/dev/sdX (X standing for 
> the device name of the new disk)
> 
> Is it safe to do it this way, presuming the array is in AA state?
> 
> Thanks,
> Steffi
> 
> /dev/sda1:
>            Magic : a92b4efc
>          Version : 1.2
>      Feature Map : 0x0
>       Array UUID : 68c0c9ad:82ede879:2110f427:9f31c140
>             Name : speernix15:0  (local to host speernix15)
>    Creation Time : Sun Nov 30 19:15:35 2014
>       Raid Level : raid1
>     Raid Devices : 2
> 
>   Avail Dev Size : 1953260976 (931.39 GiB 1000.07 GB)
>       Array Size : 976629568 (931.39 GiB 1000.07 GB)
>    Used Dev Size : 1953259136 (931.39 GiB 1000.07 GB)
>      Data Offset : 262144 sectors
>     Super Offset : 8 sectors
>     Unused Space : before=261864 sectors, after=1840 sectors
>            State : active
>      Device UUID : 5871292c:7fcfbd82:b0a28f1b:df7774f9
> 
>      Update Time : Thu Aug 28 01:00:03 2025
>    Bad Block Log : 512 entries available at offset 264 sectors
>         Checksum : b198f5d1 - correct
>           Events : 38185
> 
>     Device Role : Active device 0
>     Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
> /dev/sdb1:
>            Magic : a92b4efc
>          Version : 1.2
>      Feature Map : 0x0
>       Array UUID : 68c0c9ad:82ede879:2110f427:9f31c140
>             Name : speernix15:0  (local to host speernix15)
>    Creation Time : Sun Nov 30 19:15:35 2014
>       Raid Level : raid1
>     Raid Devices : 2
> 
>   Avail Dev Size : 1953260976 (931.39 GiB 1000.07 GB)
>       Array Size : 976629568 (931.39 GiB 1000.07 GB)
>    Used Dev Size : 1953259136 (931.39 GiB 1000.07 GB)
>      Data Offset : 262144 sectors
>     Super Offset : 8 sectors
>     Unused Space : before=261864 sectors, after=1840 sectors
>            State : active
>      Device UUID : 4bbfbe7a:457829a5:dd9d2e3c:15818bca
> 
>      Update Time : Thu Aug 28 01:00:03 2025
>    Bad Block Log : 512 entries available at offset 264 sectors
>         Checksum : 144ff0ef - correct
>           Events : 38185

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: RAID 1 | Changing HDs
  2025-09-03 11:55 RAID 1 | Changing HDs Stefanie Leisestreichler (Febas)
  2025-09-03 12:19 ` Reindl Harald
@ 2025-09-03 12:35 ` Hannes Reinecke
  2025-09-03 12:49   ` Stefanie Leisestreichler (Febas)
  2025-09-03 15:45 ` Roman Mamedov
  2025-09-03 15:59 ` anthony
  3 siblings, 1 reply; 19+ messages in thread
From: Hannes Reinecke @ 2025-09-03 12:35 UTC (permalink / raw)
  To: Stefanie Leisestreichler (Febas), linux-raid

On 9/3/25 13:55, Stefanie Leisestreichler (Febas) wrote:
> Hi.
> I have the system layout shown below.
> 
> To avoid data loss, I want to change HDs which have about 46508 hours of 
> uptime.
> 
> I thought, instead of degrading, formatting, rebuilding and so on, I could
> - shut down the computer
> - take e.g. /dev/sda and do
> - dd bs=98304 conv=sync,noerror if=/dev/sda of=/dev/sdX (X standing for 
> the device name of the new disk)
> 
Why would you do that?
In the end, you will have to transfer the data from the entire disk
(to a new disk). And that will be the main drag, as it'll take ages.
And there it doesn't matter whether you use 'dd' or an md resync; both
will take roughly the same time.

> Is it safe to do it this way, presuming the array is in AA state?
> 
I would not recommend it.
Replacing drives is _precisely_ why RAID was created in the first place,
so really there is no difference between replacing a faulty drive or
replacing a non-faulty drive.

So I would strongly recommend using the MD tools to replace the drive;
it will serve as a nice exercise in what to do in case of a real
fault :-)

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: RAID 1 | Changing HDs
  2025-09-03 12:35 ` Hannes Reinecke
@ 2025-09-03 12:49   ` Stefanie Leisestreichler (Febas)
  0 siblings, 0 replies; 19+ messages in thread
From: Stefanie Leisestreichler (Febas) @ 2025-09-03 12:49 UTC (permalink / raw)
  To: Hannes Reinecke, linux-raid

On 03.09.25 14:35, Hannes Reinecke wrote:
> it will serve as a nice exercise on what to do in case of a real
> fault :-)

LOL, you nailed it :)
And I was trying to make my life easier with dd :)


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: RAID 1 | Changing HDs
  2025-09-03 12:19 ` Reindl Harald
@ 2025-09-03 13:20   ` Stefanie Leisestreichler (Febas)
  2025-09-03 14:00     ` Reindl Harald
  0 siblings, 1 reply; 19+ messages in thread
From: Stefanie Leisestreichler (Febas) @ 2025-09-03 13:20 UTC (permalink / raw)
  To: Reindl Harald, linux-raid

Hi Harald.
Thanks for your answer and the script.

The setup is not GPT, so no UUID problems.

Should I fail/remove first, or is it not mandatory?
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1

Then clone the partition table
sfdisk -d /dev/sda | sfdisk /dev/sdX --force

(/dev/sdX = new HD, /dev/sda = remaining old HD, not removed from the array)

followed by
mdadm --manage /dev/md0 --add /dev/sdX1

And once synced, install grub
grub[2]-install /dev/sdX

Thanks for a short review.


On 03.09.25 14:19, Reindl Harald wrote:
> makes no sense, especially in the case of RAID1
> 
>   * shut down the computer
>   * replace one disk
>   * boot
>   * resync the RAID
> 
> the old disk is 100% identical and can be seen as a full backup
> 
> the whole purpose of RAID is that you can replace disks; frankly, it's 
> designed to survive an exploding disk in full operation
> 
> also, you should know what to do when a disk dies - exactly the same 
> steps, with the difference that your old replaced disk currently is a 
> 100% fallback even if you confuse source/target and destroy everything
> 
> --------------
> 
> in case it's a BIOS setup you can clone the whole partitioning with dd 
> of the first 512 bytes (the MBR)
> 
> the script below is from a setup with 3 RAID1 arrays and does the whole 
> stuff including installing GRUB2 on the new disk - for UEFI you need a 
> few more steps because UUIDs must be unique
> 
>   * boot
>   * system
>   * data
> 
> 
> [root@south:~]$ cat /scripts/raid-recovery.sh
> #!/usr/bin/bash
> # define source and target
> GOOD_DISK="/dev/sda"
> BAD_DISK="/dev/sdb"
> # clone MBR
> dd if=$GOOD_DISK of=$BAD_DISK bs=512 count=1
> # force OS to read partition tables
> partprobe $BAD_DISK
> # start RAID recovery
> mdadm /dev/md0 --add ${BAD_DISK}1
> mdadm /dev/md1 --add ${BAD_DISK}3
> mdadm /dev/md2 --add ${BAD_DISK}2
> # print RAID status on screen
> sleep 5
> cat /proc/mdstat
> # install bootloader on replacement disk
> grub2-install "$BAD_DISK"
> 
> Am 03.09.25 um 13:55 schrieb Stefanie Leisestreichler (Febas):
>> Hi.
>> I have the system layout shown below.
>>
>> To avoid data loss, I want to change HDs which have about 46508 hours 
>> of uptime.
>>
>> I thought, instead of degrading, formatting, rebuilding and so on, I 
>> could
>> - shut down the computer
>> - take e.g. /dev/sda and do
>> - dd bs=98304 conv=sync,noerror if=/dev/sda of=/dev/sdX (X standing for 
>> the device name of the new disk)
>>
>> Is it safe to do it this way, presuming the array is in AA state?
>>
>> Thanks,
>> Steffi
>>
>> /dev/sda1:
>>            Magic : a92b4efc
>>          Version : 1.2
>>      Feature Map : 0x0
>>       Array UUID : 68c0c9ad:82ede879:2110f427:9f31c140
>>             Name : speernix15:0  (local to host speernix15)
>>    Creation Time : Sun Nov 30 19:15:35 2014
>>       Raid Level : raid1
>>     Raid Devices : 2
>>
>>   Avail Dev Size : 1953260976 (931.39 GiB 1000.07 GB)
>>       Array Size : 976629568 (931.39 GiB 1000.07 GB)
>>    Used Dev Size : 1953259136 (931.39 GiB 1000.07 GB)
>>      Data Offset : 262144 sectors
>>     Super Offset : 8 sectors
>>     Unused Space : before=261864 sectors, after=1840 sectors
>>            State : active
>>      Device UUID : 5871292c:7fcfbd82:b0a28f1b:df7774f9
>>
>>      Update Time : Thu Aug 28 01:00:03 2025
>>    Bad Block Log : 512 entries available at offset 264 sectors
>>         Checksum : b198f5d1 - correct
>>           Events : 38185
>>
>>     Device Role : Active device 0
>>     Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
>> /dev/sdb1:
>>            Magic : a92b4efc
>>          Version : 1.2
>>      Feature Map : 0x0
>>       Array UUID : 68c0c9ad:82ede879:2110f427:9f31c140
>>             Name : speernix15:0  (local to host speernix15)
>>    Creation Time : Sun Nov 30 19:15:35 2014
>>       Raid Level : raid1
>>     Raid Devices : 2
>>
>>   Avail Dev Size : 1953260976 (931.39 GiB 1000.07 GB)
>>       Array Size : 976629568 (931.39 GiB 1000.07 GB)
>>    Used Dev Size : 1953259136 (931.39 GiB 1000.07 GB)
>>      Data Offset : 262144 sectors
>>     Super Offset : 8 sectors
>>     Unused Space : before=261864 sectors, after=1840 sectors
>>            State : active
>>      Device UUID : 4bbfbe7a:457829a5:dd9d2e3c:15818bca
>>
>>      Update Time : Thu Aug 28 01:00:03 2025
>>    Bad Block Log : 512 entries available at offset 264 sectors
>>         Checksum : 144ff0ef - correct
>>           Events : 38185



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: RAID 1 | Changing HDs
  2025-09-03 13:20   ` Stefanie Leisestreichler (Febas)
@ 2025-09-03 14:00     ` Reindl Harald
  2025-09-03 14:26       ` Stefanie Leisestreichler (Febas)
  0 siblings, 1 reply; 19+ messages in thread
From: Reindl Harald @ 2025-09-03 14:00 UTC (permalink / raw)
  To: Stefanie Leisestreichler (Febas), linux-raid



Am 03.09.25 um 15:20 schrieb Stefanie Leisestreichler (Febas):
> Hi Harald.
> Thanks for your answer and the script.
> 
> Setup is not GPT, no UUID-Problems.
> 
> Should I fail/remove first or is it not mandatory?
> mdadm --manage /dev/md0 --fail /dev/sdb1
> mdadm --manage /dev/md0 --remove /dev/sdb1

besides the fact that I wrote the order of operations: think logically - 
if it were mandatory, a RAID wouldn't survive powering on your computer 
while a hard disk dies, and most of the time they die at power-on

in reality you could unplug the disk while the system is running - 
it's the whole purpose of RAID

> Then clone the partition table
> sfdisk -d /dev/sda | sfdisk /dev/sdX --force
> 
> (/dev/sdX = new HD, /dev/sda = remaining old HD, not removed from array)
> 
> followed by
> mdadm --manage /dev/md0 --add /dev/sdX
> 
> And once synced, install grub
> grub[2]-install /dev/sdX
> 
> Thanks for a short review.
> 
> 
> On 03.09.25 14:19, Reindl Harald wrote:
>> makes no sense, especially in the case of RAID1
>>
>>   * shut down the computer
>>   * replace one disk
>>   * boot
>>   * resync the RAID
>>
>> the old disk is 100% identical and can be seen as a full backup
>>
>> the whole purpose of RAID is that you can replace disks; frankly, it's 
>> designed to survive an exploding disk in full operation
>>
>> also, you should know what to do when a disk dies - exactly the same 
>> steps, with the difference that your old replaced disk currently is a 
>> 100% fallback even if you confuse source/target and destroy everything
>>
>> --------------
>>
>> in case it's a BIOS setup you can clone the whole partitioning with dd 
>> of the first 512 bytes (the MBR)
>>
>> the script below is from a setup with 3 RAID1 arrays and does the whole 
>> stuff including installing GRUB2 on the new disk - for UEFI you need a 
>> few more steps because UUIDs must be unique
>>
>>   * boot
>>   * system
>>   * data
>>
>>
>> [root@south:~]$ cat /scripts/raid-recovery.sh
>> #!/usr/bin/bash
>> # define source and target
>> GOOD_DISK="/dev/sda"
>> BAD_DISK="/dev/sdb"
>> # clone MBR
>> dd if=$GOOD_DISK of=$BAD_DISK bs=512 count=1
>> # force OS to read partition tables
>> partprobe $BAD_DISK
>> # start RAID recovery
>> mdadm /dev/md0 --add ${BAD_DISK}1
>> mdadm /dev/md1 --add ${BAD_DISK}3
>> mdadm /dev/md2 --add ${BAD_DISK}2
>> # print RAID status on screen
>> sleep 5
>> cat /proc/mdstat
>> # install bootloader on replacement disk
>> grub2-install "$BAD_DISK"
>>
>> Am 03.09.25 um 13:55 schrieb Stefanie Leisestreichler (Febas):
>>> Hi.
>>> I have the system layout shown below.
>>>
>>> To avoid data loss, I want to change HDs which have about 46508 hours 
>>> of uptime.
>>>
>>> I thought, instead of degrading, formatting, rebuilding and so on, I 
>>> could
>>> - shut down the computer
>>> - take e.g. /dev/sda and do
>>> - dd bs=98304 conv=sync,noerror if=/dev/sda of=/dev/sdX (X standing 
>>> for the device name of the new disk)
>>>
>>> Is it safe to do it this way, presuming the array is in AA state?
>>>
>>> Thanks,
>>> Steffi
>>>
>>> /dev/sda1:
>>>            Magic : a92b4efc
>>>          Version : 1.2
>>>      Feature Map : 0x0
>>>       Array UUID : 68c0c9ad:82ede879:2110f427:9f31c140
>>>             Name : speernix15:0  (local to host speernix15)
>>>    Creation Time : Sun Nov 30 19:15:35 2014
>>>       Raid Level : raid1
>>>     Raid Devices : 2
>>>
>>>   Avail Dev Size : 1953260976 (931.39 GiB 1000.07 GB)
>>>       Array Size : 976629568 (931.39 GiB 1000.07 GB)
>>>    Used Dev Size : 1953259136 (931.39 GiB 1000.07 GB)
>>>      Data Offset : 262144 sectors
>>>     Super Offset : 8 sectors
>>>     Unused Space : before=261864 sectors, after=1840 sectors
>>>            State : active
>>>      Device UUID : 5871292c:7fcfbd82:b0a28f1b:df7774f9
>>>
>>>      Update Time : Thu Aug 28 01:00:03 2025
>>>    Bad Block Log : 512 entries available at offset 264 sectors
>>>         Checksum : b198f5d1 - correct
>>>           Events : 38185
>>>
>>>     Device Role : Active device 0
>>>     Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
>>> /dev/sdb1:
>>>            Magic : a92b4efc
>>>          Version : 1.2
>>>      Feature Map : 0x0
>>>       Array UUID : 68c0c9ad:82ede879:2110f427:9f31c140
>>>             Name : speernix15:0  (local to host speernix15)
>>>    Creation Time : Sun Nov 30 19:15:35 2014
>>>       Raid Level : raid1
>>>     Raid Devices : 2
>>>
>>>   Avail Dev Size : 1953260976 (931.39 GiB 1000.07 GB)
>>>       Array Size : 976629568 (931.39 GiB 1000.07 GB)
>>>    Used Dev Size : 1953259136 (931.39 GiB 1000.07 GB)
>>>      Data Offset : 262144 sectors
>>>     Super Offset : 8 sectors
>>>     Unused Space : before=261864 sectors, after=1840 sectors
>>>            State : active
>>>      Device UUID : 4bbfbe7a:457829a5:dd9d2e3c:15818bca
>>>
>>>      Update Time : Thu Aug 28 01:00:03 2025
>>>    Bad Block Log : 512 entries available at offset 264 sectors
>>>         Checksum : 144ff0ef - correct
>>>           Events : 38185

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: RAID 1 | Changing HDs
  2025-09-03 14:00     ` Reindl Harald
@ 2025-09-03 14:26       ` Stefanie Leisestreichler (Febas)
  2025-09-03 21:59         ` Reindl Harald
  0 siblings, 1 reply; 19+ messages in thread
From: Stefanie Leisestreichler (Febas) @ 2025-09-03 14:26 UTC (permalink / raw)
  To: Reindl Harald, linux-raid

On 03.09.25 16:00, Reindl Harald wrote:
> in reality you could unplug the disk while the system is running - 
> it's the whole purpose of RAID

Got it. I will try to do it with the system running.

Do you know a way to expose the new hard disk to the system without 
rebooting the computer? And how about the device identifier? Will it 
be /dev/sdb again when it is plugged into the same SATA connector? And 
can I make sure the device name does not change when the machine is 
rebooted again?

Or did I get you wrong, and it is just a theoretical possibility to 
unplug a disk while the system is running?

Thanks.


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: RAID 1 | Changing HDs
  2025-09-03 11:55 RAID 1 | Changing HDs Stefanie Leisestreichler (Febas)
  2025-09-03 12:19 ` Reindl Harald
  2025-09-03 12:35 ` Hannes Reinecke
@ 2025-09-03 15:45 ` Roman Mamedov
  2025-09-03 16:58   ` Michael Reinelt
  2025-09-03 17:47   ` Stefanie Leisestreichler (Febas)
  2025-09-03 15:59 ` anthony
  3 siblings, 2 replies; 19+ messages in thread
From: Roman Mamedov @ 2025-09-03 15:45 UTC (permalink / raw)
  To: Stefanie Leisestreichler (Febas); +Cc: linux-raid

On Wed, 3 Sep 2025 13:55:05 +0200
"Stefanie Leisestreichler (Febas)" <stefanie.leisestreichler@peter-speer.de>
wrote:

> Hi.
> I have the system layout shown below.
> 
> To avoid data loss, I want to change HDs which have about 46508 hours of 
> uptime.

Do you have a free drive bay and connector in your computer (or just the
connector)?

If so, the safest would be to connect all three drives, and then:

  mdadm --add (new drive)
  mdadm --grow -n3 (array)
  mdadm --fail --remove (old drive)
  mdadm --grow -n2 (array)
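Filled in with illustrative names (assumptions for the sketch: the array is /dev/md0, the members are partitions, and the new disk has already been partitioned as /dev/sdc with /dev/sdc1), the sequence above might look like:

```shell
# grow the mirror to 3 devices so the new disk syncs
# while both old ones stay active
mdadm /dev/md0 --add /dev/sdc1
mdadm --grow /dev/md0 --raid-devices=3
# wait until /proc/mdstat shows the resync finished, then drop the old disk
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm --grow /dev/md0 --raid-devices=2
```

Substitute the real device names from lsblk or mdadm --detail before running anything.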

> I thought, instead of degrading, formatting, rebuilding and so on, I could
> - shut down the computer
> - take e.g. /dev/sda and do
> - dd bs=98304 conv=sync,noerror if=/dev/sda of=/dev/sdX (X standing for 
> the device name of the new disk)

Very peculiar dd line you give as an example here.

  - the calculator tells us 98304 is 96K, but why? Usually one would just
    use "1M" or the like here for performance reasons; it's also shorter
    to remember and type.

  - noerror means "continue after read errors", but do you want it to? I
    don't think regular dd has the logic to pad the output by the exact
    amount of unreadable input data on read errors, and if not, then the
    result is useless.

    If you expect the input to have read errors, "ddrescue" should be used.

But that's all beside the point, as I don't see a reason to rely on
offline migration with dd here either.
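For completeness, if the source disk did have read errors and an offline copy were still the goal, GNU ddrescue would be the usual tool. A sketch with illustrative device names, run with both disks detached from the array:

```shell
# first pass: copy everything readable, skip bad areas quickly;
# the map file records progress so the copy can be resumed
ddrescue -f -n /dev/sda /dev/sdX rescue.map
# second pass: retry the remaining bad areas up to three times
ddrescue -f -r3 /dev/sda /dev/sdX rescue.map
```

Unlike plain dd, ddrescue tracks exactly which regions failed, so the copy stays byte-aligned with the source.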

-- 
With respect,
Roman

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: RAID 1 | Changing HDs
  2025-09-03 11:55 RAID 1 | Changing HDs Stefanie Leisestreichler (Febas)
                   ` (2 preceding siblings ...)
  2025-09-03 15:45 ` Roman Mamedov
@ 2025-09-03 15:59 ` anthony
  2025-09-04 17:13   ` Stefanie Leisestreichler (Febas)
  3 siblings, 1 reply; 19+ messages in thread
From: anthony @ 2025-09-03 15:59 UTC (permalink / raw)
  To: Stefanie Leisestreichler (Febas), linux-raid

On 03/09/2025 12:55, Stefanie Leisestreichler (Febas) wrote:
> Hi.
> I have the system layout shown below.
> 
> To avoid data loss, I want to change HDs which have about 46508 hours of 
> uptime.
> 
> I thought, instead of degrading, formatting, rebuilding and so on, I could
> - shut down the computer

Why degrade? There's no need.

But yes, unless you have hot-swap, you will need to shut down the 
computer at some point ... and I'd recommend at least one reboot...

First things first - do you have a spare SATA port on your mobo? Or (I 
wouldn't particularly recommend it) an external USB thingy to drop 
drives into? The latter will let you hot-swap and avoid one reboot.

Okay. Plug your new drive in however you can. You will know which drives 
are in your raid, so you just partition the third identically to the 
other two.

I'm surprised if your drives are MBR and not GPT - MBR has been obsolete 
for almost forever now :-) I think there's a GPT command that will copy 
the partitioning from a different drive. "man gdisk" or "man fdisk" - 
ignore stuff that says fdisk can't do GPT; it has long since been upgraded.

Okay, now you have a 2-disk mirror and a spare. Add the spare as drive 3 
(i.e. you need (a) to add it to the array as a spare and then (b) to 
update the number of drives in the array to three). You can do both in 
the same command.
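A sketch of doing both in one invocation (device and array names are illustrative; mdadm's Grow mode accepts --add together with --raid-devices):

```shell
# add the new partition and grow the mirror to three devices in one go
mdadm --grow /dev/md0 --raid-devices=3 --add /dev/sdc1
```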

Monitor /proc/mdstat, which will tell you the progress of the resync. 
Once you've got a fully functional 3-drive array, shut the computer 
down. Swap the new drive for the old one and reboot.

/proc/mdstat will now tell you you've got a degraded 3-drive array with 
one missing. fail and remove the missing drive, and you're now left with 
a fully functional 2-drive array again. And at no point has your data 
been at risk.

The removed drive is, as has already been said, a fully functional 
backup (which is why I would recommend only removing it when the system 
is shut down and you KNOW it's clean).

And I wouldn't recommend putting that drive back in the same computer 
without wiping it first! I don't think it will do any damage, but in my 
experience, raid is likely to get thoroughly confused by a failed drive 
re-appearing, and it's simpler not to do it. If you want to get 
anything back off that drive, stick it in a different computer.


Google for "linux raid wiki". Ignore the crap about it being "obsolete" 
- the admins archived a site which was aimed at end users, and are 
referring people to the kernel documentation !!! Idiots !!! The two are 
aimed at completely different target markets!

(How big are your drives? If they're anything like modern, there's a 
very good chance they are GPT. Is the upper limit for MBR 2TB?)

Cheers,
Wol

> - take e.g. /dev/sda and do
> - dd bs=98304 conv=sync,noerror if=/dev/sda of=/dev/sdX (X standing for 
> the device name of the new disk)
> 
> Is it safe to do it this way, presuming the array is in AA state?
> 
> Thanks,
> Steffi
> 
> /dev/sda1:
>            Magic : a92b4efc
>          Version : 1.2
>      Feature Map : 0x0
>       Array UUID : 68c0c9ad:82ede879:2110f427:9f31c140
>             Name : speernix15:0  (local to host speernix15)
>    Creation Time : Sun Nov 30 19:15:35 2014
>       Raid Level : raid1
>     Raid Devices : 2
> 
>   Avail Dev Size : 1953260976 (931.39 GiB 1000.07 GB)
>       Array Size : 976629568 (931.39 GiB 1000.07 GB)
>    Used Dev Size : 1953259136 (931.39 GiB 1000.07 GB)
>      Data Offset : 262144 sectors
>     Super Offset : 8 sectors
>     Unused Space : before=261864 sectors, after=1840 sectors
>            State : active
>      Device UUID : 5871292c:7fcfbd82:b0a28f1b:df7774f9
> 
>      Update Time : Thu Aug 28 01:00:03 2025
>    Bad Block Log : 512 entries available at offset 264 sectors
>         Checksum : b198f5d1 - correct
>           Events : 38185
> 
>     Device Role : Active device 0
>     Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
> /dev/sdb1:
>            Magic : a92b4efc
>          Version : 1.2
>      Feature Map : 0x0
>       Array UUID : 68c0c9ad:82ede879:2110f427:9f31c140
>             Name : speernix15:0  (local to host speernix15)
>    Creation Time : Sun Nov 30 19:15:35 2014
>       Raid Level : raid1
>     Raid Devices : 2
> 
>   Avail Dev Size : 1953260976 (931.39 GiB 1000.07 GB)
>       Array Size : 976629568 (931.39 GiB 1000.07 GB)
>    Used Dev Size : 1953259136 (931.39 GiB 1000.07 GB)
>      Data Offset : 262144 sectors
>     Super Offset : 8 sectors
>     Unused Space : before=261864 sectors, after=1840 sectors
>            State : active
>      Device UUID : 4bbfbe7a:457829a5:dd9d2e3c:15818bca
> 
>      Update Time : Thu Aug 28 01:00:03 2025
>    Bad Block Log : 512 entries available at offset 264 sectors
>         Checksum : 144ff0ef - correct
>           Events : 38185
> 
> 
> 


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: RAID 1 | Changing HDs
  2025-09-03 15:45 ` Roman Mamedov
@ 2025-09-03 16:58   ` Michael Reinelt
  2025-09-03 17:28     ` Roman Mamedov
  2025-09-03 17:47   ` Stefanie Leisestreichler (Febas)
  1 sibling, 1 reply; 19+ messages in thread
From: Michael Reinelt @ 2025-09-03 16:58 UTC (permalink / raw)
  To: Roman Mamedov, Stefanie Leisestreichler (Febas); +Cc: linux-raid

On Wed, 3 Sep 2025 at 20:45 +0500, Roman Mamedov wrote:
> If so, the safest would be to connect all three drives, and then:
> 
>   mdadm --add (new drive)
>   mdadm --grow -n3 (array)
>   mdadm --fail --remove (old drive).
>   mdadm --grow -n2 (array)

I do something similar about once a year on my RAID5 array: I replace the oldest drive with a new
one so each drive gets refreshed roughly every three years.

The main difference is that I use mdadm’s "replace" feature, which keeps the array non-degraded
while the data is copied:

mdadm /dev/md0 --add-spare /dev/new_drive
mdadm /dev/md0 --replace /dev/old_drive --with /dev/new_drive


(not sure if this also works for other RAID levels)

On my setup the copy takes about 8 hours. After it completes:

mdadm /dev/md0 --remove /dev/old_drive

This minimizes the time the array would otherwise run degraded and thus reduces the risk of a second
failure during a rebuild.

regards, Michael


-- 
Michael Reinelt <michael@reinelt.co.at>
Ringsiedlung 75
A-8111 Gratwein-Straßengel
+43 676 3079941

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: RAID 1 | Changing HDs
  2025-09-03 16:58   ` Michael Reinelt
@ 2025-09-03 17:28     ` Roman Mamedov
  0 siblings, 0 replies; 19+ messages in thread
From: Roman Mamedov @ 2025-09-03 17:28 UTC (permalink / raw)
  To: linux-raid

On Wed, 03 Sep 2025 18:58:10 +0200
Michael Reinelt <michael@reinelt.co.at> wrote:

> This minimizes the time the array would otherwise run degraded and thus reduces the risk of a second
> failure during a rebuild.

Good point for RAID5/6, but in this case growing RAID1 from 2 to 3 mirrors
doesn't introduce any risk compared to what it was before the operation.

-- 
With respect,
Roman

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: RAID 1 | Changing HDs
  2025-09-03 15:45 ` Roman Mamedov
  2025-09-03 16:58   ` Michael Reinelt
@ 2025-09-03 17:47   ` Stefanie Leisestreichler (Febas)
  2025-09-03 17:57     ` Pascal Hambourg
  2025-09-03 19:50     ` Wol
  1 sibling, 2 replies; 19+ messages in thread
From: Stefanie Leisestreichler (Febas) @ 2025-09-03 17:47 UTC (permalink / raw)
  To: Roman Mamedov; +Cc: linux-raid

On 03.09.25 17:45, Roman Mamedov wrote:
> Do you have a free drive bay and connector in your computer (or just the
> connector)?
> 
> If so, the safest would be to connect all three drives, and then:
> 
>    mdadm --add (new drive)
>    mdadm --grow -n3 (array)
>    mdadm --fail --remove (old drive).
>    mdadm --grow -n2 (array)

I like this method; hopefully I have a free connector on the board.

But still two questions left:
- Do I need to turn off the computer for the new (3rd) disk to be known? 
If I remember correctly, I tried a hot spare in another scenario and 
wasn't able to get the new hard disk recognized by lsblk or similar.

- Also I see problems with the device naming as long as HD UUIDs are 
not used: a drive appearing as /dev/sdx may show up as /dev/sdn after 
the next reboot. Are my concerns reasonable, or am I just too afraid?

Thanks.


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: RAID 1 | Changing HDs
  2025-09-03 17:47   ` Stefanie Leisestreichler (Febas)
@ 2025-09-03 17:57     ` Pascal Hambourg
  2025-09-03 19:19       ` Stefanie Leisestreichler (Febas)
  2025-09-03 19:50     ` Wol
  1 sibling, 1 reply; 19+ messages in thread
From: Pascal Hambourg @ 2025-09-03 17:57 UTC (permalink / raw)
  To: Stefanie Leisestreichler (Febas); +Cc: linux-raid

On 03/09/2025 at 19:47, Stefanie Leisestreichler (Febas) wrote:
> 
> - Do I need to turn off the computer for the new (3rd) disk to be known? 

It depends whether the host adapter and port support hotplug. On some 
motherboards it must be enabled in BIOS/UEFI settings. Do not disconnect 
a disk while power is on if the port does not support hotplug either.
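When the controller does support hotplug but the new disk does not show up by itself, a rescan can usually be forced through sysfs (a sketch; needs root):

```shell
# ask every SCSI/SATA host adapter to rescan its ports for new devices
for scan in /sys/class/scsi_host/host*/scan; do
    echo '- - -' > "$scan"
done
lsblk   # the new disk should now be listed
```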

> Also I see problems with the device naming as long as HD UUIDs are 
> not used: a drive appearing as /dev/sdx may show up as /dev/sdn after 
> the next reboot. Are my concerns reasonable or am I just too afraid?

/dev/sd* names are not persistent and cannot be relied upon. But md uses 
UUIDs for assembly, so it does not matter.
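To make this concrete, the Array UUID is already in the `--examine` output quoted earlier in the thread, and both udev symlinks and mdadm's `--uuid` option avoid /dev/sd* entirely (the `/dev/md0` name below is a placeholder):

```shell
# The thread's own Array UUID, taken from the --examine output above:
UUID=68c0c9ad:82ede879:2110f427:9f31c140

# Stable per-disk names are udev-maintained symlinks, one per physical
# disk, keyed on model+serial:
#   ls -l /dev/disk/by-id/
# And the array can be assembled by UUID without touching /dev/sd*:
#   mdadm --assemble /dev/md0 --uuid="$UUID"
printf '%s\n' "$UUID"
```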

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: RAID 1 | Changing HDs
  2025-09-03 17:57     ` Pascal Hambourg
@ 2025-09-03 19:19       ` Stefanie Leisestreichler (Febas)
  0 siblings, 0 replies; 19+ messages in thread
From: Stefanie Leisestreichler (Febas) @ 2025-09-03 19:19 UTC (permalink / raw)
  To: Pascal Hambourg; +Cc: linux-raid

On 03.09.25 19:57, Pascal Hambourg wrote:
> On 03/09/2025 at 19:47, Stefanie Leisestreichler (Febas) wrote:
>>
>> - Do I need to turn off the computer for the new (3rd) disk to be known? 
> 
> It depends whether the host adapter and port support hotplug. On some 
> motherboards it must be enabled in BIOS/UEFI settings. Do not disconnect 
> a disk while power is on if the port does not support hotplug either.
> 
>> Also I see problems with device naming as long as no HD UUIDs are 
>> used: a drive appearing as /dev/sdx may show up as /dev/sdn after 
>> the next reboot. Are my concerns reasonable, or am I just too afraid?
> 
> /dev/sd* names are not persistent and cannot be relied upon. But md uses 
> UUIDs for assembly, so it does not matter.

Thanks, Pascal, for the explanation.


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: RAID 1 | Changing HDs
  2025-09-03 17:47   ` Stefanie Leisestreichler (Febas)
  2025-09-03 17:57     ` Pascal Hambourg
@ 2025-09-03 19:50     ` Wol
  2025-09-04 17:06       ` Stefanie Leisestreichler (Febas)
  1 sibling, 1 reply; 19+ messages in thread
From: Wol @ 2025-09-03 19:50 UTC (permalink / raw)
  To: Stefanie Leisestreichler (Febas), Roman Mamedov; +Cc: linux-raid

On 03/09/2025 18:47, Stefanie Leisestreichler (Febas) wrote:
> I like this method, hopefully I have a free connector on the board.
> 
> But still two questions left:
> - Do I need to turn off the computer for the new (3rd) disk to be 
> recognized? If I remember correctly, I once tried a hot spare in 
> another scenario and was not able to get the new hard disk recognized 
> by lsblk or similar.

As others have said, if your mobo doesn't support hotplug, DON'T TRY IT!

Otherwise, use eSATA (or USB).

> Also I see problems with device naming as long as no HD UUIDs are 
> used: a drive appearing as /dev/sdx may show up as /dev/sdn after the 
> next reboot. Are my concerns reasonable, or am I just too afraid?
As others have said, it doesn't matter. I'm not sure whether, as the 
other person said, md uses UUIDs, or whether it just scans all 
partitions looking for a superblock.
What I would say (and it's not essential for RAID 1, but it definitely 
is for RAID 5/6): if you do have parity RAID, then every time you 
change the config, generate an mdadm.conf file (or whatever it's 
called). While md doesn't use it, if you ever have a problem with your 
RAID, you will wish you had it!

As I say, you're on RAID 1, so you don't need it now, but if you ever 
do change, DO IT!
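Generating that file is a one-liner; the sketch below shows the idea, with the caveat that the file's location is distro-dependent and the ARRAY line shown is only an approximation of what `mdadm --detail --scan` would print for the array in this thread:

```shell
# Location is distro-dependent: /etc/mdadm.conf on some distros,
# /etc/mdadm/mdadm.conf on Debian/Ubuntu -- check yours.
CONF=/etc/mdadm/mdadm.conf

# Regenerating is one command, producing one ARRAY line per md device:
#   mdadm --detail --scan >> "$CONF"
# For the array in this thread the line would look roughly like:
LINE="ARRAY /dev/md0 metadata=1.2 name=speernix15:0 UUID=68c0c9ad:82ede879:2110f427:9f31c140"
printf '%s\n' "$LINE"
```

On Debian-family systems it is also worth refreshing the initramfs afterwards (`update-initramfs -u`) so early boot sees the same config.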

Cheers,
Wol

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: RAID 1 | Changing HDs
  2025-09-03 14:26       ` Stefanie Leisestreichler (Febas)
@ 2025-09-03 21:59         ` Reindl Harald
  2025-09-04 17:04           ` Stefanie Leisestreichler (Febas)
  0 siblings, 1 reply; 19+ messages in thread
From: Reindl Harald @ 2025-09-03 21:59 UTC (permalink / raw)
  To: Stefanie Leisestreichler (Febas), linux-raid



Am 03.09.25 um 16:26 schrieb Stefanie Leisestreichler (Febas):
> On 03.09.25 16:00, Reindl Harald wrote:
>> in reality you could plug the disk off while the system is running - 
>> it's the whole purpose of RAID
> 
> Got it. I will try to do it with the system running.
> 
> Do you know a way to expose the new hard disk to the system when the 
> computer is not rebooted? And how about the device identifier? Will it 
> be /dev/sdb again when it is plugged with the same sata connector? And 
> can I make sure the device name is not changing when it is rebooted again?
> 
> Or did I get you wrong and it is just a theoretical possibility to plug 
> off a disk when system is running?

don't do it while the system is running - it's that easy

you boot with one old disk holding the data and system and will see the 
new empty disk - one line of my script is there to recognize the new, 
cloned partitions

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: RAID 1 | Changing HDs
  2025-09-03 21:59         ` Reindl Harald
@ 2025-09-04 17:04           ` Stefanie Leisestreichler (Febas)
  0 siblings, 0 replies; 19+ messages in thread
From: Stefanie Leisestreichler (Febas) @ 2025-09-04 17:04 UTC (permalink / raw)
  To: Reindl Harald, linux-raid

On 03.09.25 23:59, Reindl Harald wrote:
> 
> 
> Am 03.09.25 um 16:26 schrieb Stefanie Leisestreichler (Febas):
>> On 03.09.25 16:00, Reindl Harald wrote:
>>> in reality you could plug the disk off while the system is running - 
>>> it's the whole purpose of RAID
>>
>> Got it. I will try to do it with the system running.
>>
>> Do you know a way to expose the new hard disk to the system when the 
>> computer is not rebooted? And how about the device identifier? Will it 
>> be /dev/sdb again when it is plugged with the same sata connector? And 
>> can I make sure the device name is not changing when it is rebooted 
>> again?
>>
>> Or did I get you wrong and it is just a theoretical possibility to 
>> plug off a disk when system is running?
> 
> don't do it while the system is running - it's that easy
> 
> you boot with one old disk holding the data and system and will see the 
> new empty disk - one line of my script is there to recognize the new, 
> cloned partitions

OK, thanks for all your input!


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: RAID 1 | Changing HDs
  2025-09-03 19:50     ` Wol
@ 2025-09-04 17:06       ` Stefanie Leisestreichler (Febas)
  0 siblings, 0 replies; 19+ messages in thread
From: Stefanie Leisestreichler (Febas) @ 2025-09-04 17:06 UTC (permalink / raw)
  To: Wol, Roman Mamedov; +Cc: linux-raid

On 03.09.25 21:50, Wol wrote:
> On 03/09/2025 18:47, Stefanie Leisestreichler (Febas) wrote:
>> I like this method, hopefully I have a free connector on the board.
>>
>> But still two questions left:
>> - Do I need to turn off the computer for the new (3rd) disk to be 
>> recognized? If I remember correctly, I once tried a hot spare in 
>> another scenario and was not able to get the new hard disk recognized 
>> by lsblk or similar.
> 
> As others have said, if your mobo doesn't support hotplug, DON'T TRY IT!
> 
> Otherwise, use eSATA (or USB).
> 
>> Also I see problems with device naming as long as no HD UUIDs are 
>> used: a drive appearing as /dev/sdx may show up as /dev/sdn after 
>> the next reboot. Are my concerns reasonable, or am I just too afraid?
> As others have said, it doesn't matter. I'm not sure whether, as the 
> other person said, md uses UUIDs, or whether it just scans all 
> partitions looking for a superblock.
> What I would say (and it's not essential for RAID 1, but it definitely 
> is for RAID 5/6): if you do have parity RAID, then every time you 
> change the config, generate an mdadm.conf file (or whatever it's 
> called). While md doesn't use it, if you ever have a problem with your 
> RAID, you will wish you had it!
> 
> As I say, you're on RAID 1, so you don't need it now, but if you ever 
> do change, DO IT!
> 
> Cheers,
> Wol

Thanks for the tips, very welcome.


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: RAID 1 | Changing HDs
  2025-09-03 15:59 ` anthony
@ 2025-09-04 17:13   ` Stefanie Leisestreichler (Febas)
  0 siblings, 0 replies; 19+ messages in thread
From: Stefanie Leisestreichler (Febas) @ 2025-09-04 17:13 UTC (permalink / raw)
  To: anthony, linux-raid

On 03.09.25 17:59, anthony wrote:
> On 03/09/2025 12:55, Stefanie Leisestreichler (Febas) wrote:
>> Hi.
>> I have the system layout shown below.
>>
>> To avoid data loss, I want to change HDs which have about 46508 hours 
>> of up time.
>>
>> I thought, instead of degrading, formatting, rebuilding and so on, I 
>> could
>> - shutdown the computer
> 
> Why degrade? There's no need.
> 
> But yes, unless you have hot-swap, you will need to shut down the 
> computer at some point ... and I'd recommend at least one reboot...
> 
> First things first - do you have a spare SATA port on your mobo? Or (I 
> wouldn't particularly recommend it) an external USB thingy to drop 
> drives in? The latter will let you hot swap and avoid one reboot.
> 
> Okay. Plug your new drive in however. You will know which drives are 
> your raid, so you just partition the third identically to the other two.
> 
> I'm surprised if your drives are MBR not GPT - MBR has been obsolete 
> since almost forever nowadays :-) I think there's a GPT command that 
> will copy the partitioning from a different drive. "man gdisk" or "man 
> fdisk" - ignore stuff that says fdisk can't do GPT, it's long been 
> upgraded.
> 
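[For the GPT copy mentioned above, gdisk's companion tool sgdisk can do it; a sketch with placeholder device names (/dev/sda = existing mirror member, /dev/sdc = new disk), again printed rather than executed:]

```shell
# Placeholders: /dev/sda = existing mirror member, /dev/sdc = new disk.
SRC=/dev/sda
DST=/dev/sdc

# sgdisk's argument order is easy to get backwards: -R names the TARGET,
# and the source disk comes last. -G then randomizes the copied GUIDs so
# the two disks don't carry identical partition/disk GUIDs.
CMDS="sgdisk -R$DST $SRC; sgdisk -G $DST"
printf '%s\n' "$CMDS"
```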
> Okay, now you have a 2-disk mirror and a spare. Add the spare as drive 3 
> (ie you need (a) to add it to the array as a spare and then (b) update 
> the number of drives in the array to three.) You can do both in the same 
> command.
> 
> Monitor /proc/mdstat, which will tell you the progress of the resync. 
> Once you've got a fully functional 3-drive array, shut the computer 
> down. Swap the new drive for the old one and reboot.
> 
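[A small check along these lines can tell whether a resync is still running; it is safe on machines without md, and the `mdadm --wait` alternative (device name is a placeholder) blocks instead of polling:]

```shell
# Check whether /proc/mdstat currently reports a resync or recovery.
# Harmless on machines without md: the grep simply finds nothing.
if grep -Eq 'recovery|resync' /proc/mdstat 2>/dev/null; then
  STATUS=syncing
else
  STATUS=idle
fi
printf '%s\n' "$STATUS"

# Instead of polling, mdadm can block until the resync completes:
#   mdadm --wait /dev/md0
```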
> /proc/mdstat will now tell you you've got a degraded 3-drive array with 
> one missing. fail and remove the missing drive, and you're now left with 
> a fully functional 2-drive array again. And at no point has your data 
> been at risk.
> 
> The removed drive is, as has already been said, a fully functional 
> backup (which is why I would recommend only removing it when the system 
> is shut down and you KNOW it's clean).
> 
> And I wouldn't recommend putting that drive back in the same computer, 
> without wiping it first! I don't think it will do any damage, but in my 
> experience, raid is likely to get thoroughly confused with a failed 
> drive re-appearing, and it's simpler not to do it. If you want to get 
> anything back off that drive, stick it in a different computer.
> 
> 
> Google for "linux raid wiki". Ignore the crap about it being "obsolete" 
> - the admins archived a site which was aimed at end users, and are 
> referring people to the kernel documentation !!! Idiots !!! The two are 
> aimed at completely different target markets!
> 
> (How big are your drives? If they're anything like modern, there's a 
> very good chance they are GPT. Is the upper limit for MBR 2 TiB?)
> 
> Cheers,
> Wol
> 

Hi Wol.
The motherboard has a free 6 Gb/s SATA port, which I will use to follow 
the plan of growing the array and shrinking it again after the sync is 
done.

My drives are 1 TB each, and you are right: MBR is no option when 
dealing with disks bigger than 2 TiB.

Also, thanks a lot for writing this up and explaining!
Steffi

>> - take e.g. /dev/sda and do
>> - dd bs=98304 conv=sync,noerror if=/dev/sda of=/dev/sdX (X standing 
>> for device name of new disk)
>>
>> Is it safe to do it this way, presuming the array is in AA state?
>>
>> Thanks,
>> Steffi
>>
>> /dev/sda1:
>>            Magic : a92b4efc
>>          Version : 1.2
>>      Feature Map : 0x0
>>       Array UUID : 68c0c9ad:82ede879:2110f427:9f31c140
>>             Name : speernix15:0  (local to host speernix15)
>>    Creation Time : Sun Nov 30 19:15:35 2014
>>       Raid Level : raid1
>>     Raid Devices : 2
>>
>>   Avail Dev Size : 1953260976 (931.39 GiB 1000.07 GB)
>>       Array Size : 976629568 (931.39 GiB 1000.07 GB)
>>    Used Dev Size : 1953259136 (931.39 GiB 1000.07 GB)
>>      Data Offset : 262144 sectors
>>     Super Offset : 8 sectors
>>     Unused Space : before=261864 sectors, after=1840 sectors
>>            State : active
>>      Device UUID : 5871292c:7fcfbd82:b0a28f1b:df7774f9
>>
>>      Update Time : Thu Aug 28 01:00:03 2025
>>    Bad Block Log : 512 entries available at offset 264 sectors
>>         Checksum : b198f5d1 - correct
>>           Events : 38185
>>
>>     Device Role : Active device 0
>>     Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
>> /dev/sdb1:
>>            Magic : a92b4efc
>>          Version : 1.2
>>      Feature Map : 0x0
>>       Array UUID : 68c0c9ad:82ede879:2110f427:9f31c140
>>             Name : speernix15:0  (local to host speernix15)
>>    Creation Time : Sun Nov 30 19:15:35 2014
>>       Raid Level : raid1
>>     Raid Devices : 2
>>
>>   Avail Dev Size : 1953260976 (931.39 GiB 1000.07 GB)
>>       Array Size : 976629568 (931.39 GiB 1000.07 GB)
>>    Used Dev Size : 1953259136 (931.39 GiB 1000.07 GB)
>>      Data Offset : 262144 sectors
>>     Super Offset : 8 sectors
>>     Unused Space : before=261864 sectors, after=1840 sectors
>>            State : active
>>      Device UUID : 4bbfbe7a:457829a5:dd9d2e3c:15818bca
>>
>>      Update Time : Thu Aug 28 01:00:03 2025
>>    Bad Block Log : 512 entries available at offset 264 sectors
>>         Checksum : 144ff0ef - correct
>>           Events : 38185
>>
>>
>>
> 



^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2025-09-04 17:13 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-09-03 11:55 RAID 1 | Changing HDs Stefanie Leisestreichler (Febas)
2025-09-03 12:19 ` Reindl Harald
2025-09-03 13:20   ` Stefanie Leisestreichler (Febas)
2025-09-03 14:00     ` Reindl Harald
2025-09-03 14:26       ` Stefanie Leisestreichler (Febas)
2025-09-03 21:59         ` Reindl Harald
2025-09-04 17:04           ` Stefanie Leisestreichler (Febas)
2025-09-03 12:35 ` Hannes Reinecke
2025-09-03 12:49   ` Stefanie Leisestreichler (Febas)
2025-09-03 15:45 ` Roman Mamedov
2025-09-03 16:58   ` Michael Reinelt
2025-09-03 17:28     ` Roman Mamedov
2025-09-03 17:47   ` Stefanie Leisestreichler (Febas)
2025-09-03 17:57     ` Pascal Hambourg
2025-09-03 19:19       ` Stefanie Leisestreichler (Febas)
2025-09-03 19:50     ` Wol
2025-09-04 17:06       ` Stefanie Leisestreichler (Febas)
2025-09-03 15:59 ` anthony
2025-09-04 17:13   ` Stefanie Leisestreichler (Febas)

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox