SOFT RAID How To

LINKS

https://unix.stackexchange.com/questions/161859/can-dd-be-used-to-clone-to-a-smaller-hdd-knowing-that-partitions-will-need-ed

    sgdisk --backup=gpt.bak.bin /dev/sda    # back up sda's GPT to a file
    sgdisk --replicate=/dev/sdb /dev/sda    # copy sda's partition table to sdb (sgdisk, not gdisk)
    dd if=/dev/sda1 of=/dev/sdb1 bs=1M
    dd if=/dev/sda2 of=/dev/sdb2 bs=1M
    dd if=/dev/sda3 of=/dev/sdb3 bs=1M
    etc...
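If the source disk is already gone, the saved GPT backup can be restored onto the replacement instead of replicating live; a minimal sketch using sgdisk's documented restore and GUID options (the file name matches the backup command above):

    sgdisk --load-backup=gpt.bak.bin /dev/sdb   # restore the saved partition table onto the new disk
    sgdisk --randomize-guids /dev/sdb           # give the clone its own disk/partition GUIDs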

    sfdisk -d /dev/sda | sfdisk /dev/sdb

or

    sfdisk -d /dev/sda | sfdisk --force /dev/sdb

or

    sfdisk --dump /dev/sda | sfdisk --force /dev/sdb

then re-add the partition to the array:

    mdadm /dev/md1 -a /dev/sdb1
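A small variation (an assumed workflow, not from the source): dump the partition table to a file first, so there is a copy to replay later if the pipe has to be repeated.

    sfdisk -d /dev/sda > /root/sda.parttable        # save the layout to a file (path is an example)
    sfdisk --force /dev/sdb < /root/sda.parttable   # replay it onto the replacement disk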

Removing a failed disk/partition

To remove /dev/sdb, we will mark /dev/sdb1 and /dev/sdb2 as failed and remove them from their respective RAID arrays (/dev/md0 and /dev/md1).

First we mark /dev/sdb1 as failed:

    mdadm --manage /dev/md0 --fail /dev/sdb1

The output of cat /proc/mdstat should look like this:

    server1:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
    md0 : active raid1 sda1[0] sdb1[2](F)
          24418688 blocks [2/1] [U_]

    md1 : active raid1 sda2[0] sdb2[1]
          24418688 blocks [2/2] [UU]

    unused devices: <none>

Then we remove /dev/sdb1 from /dev/md0:

    mdadm --manage /dev/md0 --remove /dev/sdb1

The output should be like this:

    server1:~# mdadm --manage /dev/md0 --remove /dev/sdb1
    mdadm: hot removed /dev/sdb1

And cat /proc/mdstat should show this:

    server1:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
    md0 : active raid1 sda1[0]
          24418688 blocks [2/1] [U_]

    md1 : active raid1 sda2[0] sdb2[1]
          24418688 blocks [2/2] [UU]

    unused devices: <none>

Now we repeat the same steps for /dev/sdb2 (which is part of /dev/md1):

    mdadm --manage /dev/md1 --fail /dev/sdb2

cat /proc/mdstat now shows the failed device:

    server1:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
    md0 : active raid1 sda1[0]
          24418688 blocks [2/1] [U_]

    md1 : active raid1 sda2[0] sdb2[2](F)
          24418688 blocks [2/1] [U_]

    unused devices: <none>

Then we remove it from the array:

    server1:~# mdadm --manage /dev/md1 --remove /dev/sdb2
    mdadm: hot removed /dev/sdb2

cat /proc/mdstat:

    server1:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
    md0 : active raid1 sda1[0]
          24418688 blocks [2/1] [U_]

    md1 : active raid1 sda2[0]
          24418688 blocks [2/1] [U_]

    unused devices: <none>

Then power down the system:

    shutdown -h now

and replace the old /dev/sdb hard drive with a new one.
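As a shorthand, mdadm can mark a device failed and remove it in a single invocation; a sketch of the same removal, assuming the same arrays and partitions as above:

    mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
    mdadm /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2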

Adding the new disk to the array

After you have changed the hard disk /dev/sdb, boot the system.
The first thing we must do now is create the exact same partitioning as on /dev/sda. We can do this with one simple command:

    sfdisk -d /dev/sda | sfdisk /dev/sdb

You can run

    fdisk -l

to check if both hard drives now have the same partitioning.

Next we add /dev/sdb1 to /dev/md0 and /dev/sdb2 to /dev/md1:

    server1:~# mdadm --manage /dev/md0 --add /dev/sdb1
    mdadm: re-added /dev/sdb1

    server1:~# mdadm --manage /dev/md1 --add /dev/sdb2
    mdadm: re-added /dev/sdb2

Now both arrays (/dev/md0 and /dev/md1) will be synchronized. Run

    cat /proc/mdstat

to see when it's finished.
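A convenient way to follow the rebuild without re-running the command by hand (an assumed convenience, not part of the original how-to):

    watch -n 2 cat /proc/mdstat    # refresh the status every 2 seconds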

During the synchronization the output will look like this:

    server1:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
    md0 : active raid1 sda1[0] sdb1[1]
      24418688 blocks [2/1] [U_]
      [=>...................]  recovery =  9.9% (2423168/24418688) finish=2.8min speed=127535K/sec

    md1 : active raid1 sda2[0] sdb2[1]
      24418688 blocks [2/1] [U_]
      [=>...................]  recovery =  6.4% (1572096/24418688) finish=1.9min speed=196512K/sec

    unused devices: <none>
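Two other standard mdadm ways to inspect or wait for the rebuild (not from the original how-to):

    mdadm --detail /dev/md0          # per-array state, including rebuild progress
    mdadm --wait /dev/md0 /dev/md1   # block until any resync/recovery has finished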
So, in case somebody needs it: I found a working way, using SystemRescueCd.

First step is to remove sda1/sda2 from the RAID1 arrays:

    mdadm /dev/md0 --fail /dev/sda1
    mdadm /dev/md1 --fail /dev/sda2
    mdadm /dev/md0 --remove /dev/sda1
    mdadm /dev/md1 --remove /dev/sda2

After that I removed /dev/sda2 in GParted (via SystemRescueCd) and removed the "boot" and "raid" flags from sda1 in GParted. Then on the command line:

    parted /dev/sda
    resizepart 1 245000
    quit

Created sda2 in GParted as linux-swap, then re-added both partitions to the arrays:

    mdadm --add /dev/md0 /dev/sda1
    mdadm --add /dev/md1 /dev/sda2

Same thing for sdb1/sdb2. After that:

    e2fsck -f /dev/sda1
    e2fsck -f /dev/sdb1

And:

    mdadm --grow /dev/md0 --size=max

Then I used "check" for md0 in GParted. Done, works fine.
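The final GParted "check" step is what grows the filesystem into the enlarged array; a command-line equivalent, assuming an ext2/3/4 filesystem directly on /dev/md0:

    e2fsck -f /dev/md0      # the filesystem must be checked before resizing
    resize2fs /dev/md0      # grow the filesystem to fill the grown md device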
#### https://www.cyberciti.biz/tips/linux-raid-increase-resync-rebuild-speed.html

To see the current limits, enter:

    sysctl dev.raid.speed_limit_min
    sysctl dev.raid.speed_limit_max
Sample outputs:

    dev.raid.speed_limit_min = 10000
    dev.raid.speed_limit_max = 20000

NOTE: The following settings are used for recovering Linux software RAID and for increasing the speed of RAID rebuilds. They are good for tweaking the rebuild process, but may increase overall system load, CPU, and memory usage.
To increase the speed, enter:

    echo value > /proc/sys/dev/raid/speed_limit_min

OR

    sysctl -w dev.raid.speed_limit_min=value

In this example, we set it to 50000 K/sec:

    echo 50000 > /proc/sys/dev/raid/speed_limit_min

OR

    sysctl -w dev.raid.speed_limit_min=50000
If you want to override the defaults, you could add these two lines to /etc/sysctl.conf:

    dev.raid.speed_limit_min = 50000
    dev.raid.speed_limit_max = 2000000
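After editing /etc/sysctl.conf, the new values can be loaded without a reboot (standard sysctl usage, not part of the quoted article):

    sysctl -p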