# Unmount and stop the array whose disk will be repurposed
sudo umount /dev/md126
sudo mdadm --detail /dev/md126
sudo mdadm -S /dev/md126
# Wipe the old RAID metadata from the freed partition
sudo mdadm --zero-superblock /dev/sde1
# Convert the RAID1 array to RAID5, then add the third disk and grow
sudo mdadm --grow /dev/md127 --level=5
sudo mdadm /dev/md127 --add /dev/sde1
sudo mdadm --grow /dev/md127 --raid-devices=3
# Follow the reshape progress
watch -n 15 -d cat /proc/mdstat
hdd | partition | raid | lvm | mount point
---|---|---|---|---
sda | sda1 | md0 | - | /
sdb | sdb1 | md0 | - | /
sda | sda2 | - | - | swap
sdb | sdb2 | - | - | swap
sdc | sdc1 | md1 | vg1 | /home, /srv, /media
sdd | sdd1 | md1 | vg1 | /home, /srv, /media
sde | sde1 | md2 | vg1 | /home, /srv, /media
sdf | sdf1 | md2 | vg1 | /home, /srv, /media
sdg | sdg1 | md3 | vg2 | /backup
sdh | sdh1 | md3 | vg2 | /backup
See also: Hardware configuration.
Replace the disk, then copy the partition information from the good disk, randomize the UUIDs, and re-read the partition information into the system. First, install gdisk (on Ubuntu it lives in the Universe repository).
apt-get install gdisk
# Replicate the partition table of the good disk (sda) onto the new disk (sdb)
sgdisk -R=/dev/sdb /dev/sda
# Randomize the disk and partition GUIDs on the new disk
sgdisk -G /dev/sdb
# Re-read the partition tables
partprobe
Taken from How can I quickly copy a GPT partition scheme from one hard drive to another?
Other resources: on mdadm, and How do I rename an mdadm raid array?
Setup configuration:
mdadm -Es >> /etc/mdadm/mdadm.conf
To check whether root and swap are mounted, enter:
mount
free -m -t
To check for mismatching UUIDs, enter:
ls -la /dev/disk/by-uuid
cat /etc/fstab
To fix, do:
vim /etc/fstab
Replace the UUIDs found in fstab with the ones listed in /dev/disk/by-uuid. Make sure you copy the correct UUID (md0, md1) to the respective entry in fstab.
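As a sketch, assuming an ext4 root on /dev/md0 (the UUID shown is a placeholder; use the one blkid prints for your array):

# Print the UUID of the rebuilt array (output below is hypothetical)
sudo blkid /dev/md0
# /dev/md0: UUID="1b2c3d4e-0000-0000-0000-000000000000" TYPE="ext4"
# Matching /etc/fstab entry (placeholder UUID):
# UUID=1b2c3d4e-0000-0000-0000-000000000000  /  ext4  errors=remount-ro  0  1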
Most Debian and Debian-derived distributions ship a cron job in /etc/cron.d/mdadm that issues an array check at 01:06 on the first Sunday of each month. This task appears as resync in /proc/mdstat and syslog, so if you suddenly see the RAID resyncing for no apparent reason, this is a good place to look.
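To inspect the schedule, or to cancel a check that is already running, Debian's mdadm package ships a checkarray helper:

cat /etc/cron.d/mdadm
# Cancel a running check on all arrays
sudo /usr/share/mdadm/checkarray --cancel --all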
Normally the kernel throttles resync activity (cf. nice) to avoid impacting the performance of the RAID device. Even so, it is a good idea to tune the resync parameters for optimal performance.
# Query the current limits (values shown are the kernel defaults)
sudo sysctl dev.raid.speed_limit_min
sudo sysctl dev.raid.speed_limit_max
dev.raid.speed_limit_min = 1000
dev.raid.speed_limit_max = 200000
# Raise the limits (values in KiB/s; no thousands separators)
sudo sysctl -w dev.raid.speed_limit_min=10000
sudo sysctl -w dev.raid.speed_limit_max=100000
# Check and raise the read-ahead of the array (in 512-byte sectors)
blockdev --getra /dev/mdX
blockdev --setra 65536 /dev/mdX
# Check and reduce the queue depth of the member disks (1 disables NCQ)
cat /sys/block/sdX/device/queue_depth
echo 1 > /sys/block/sdX/device/queue_depth
The stripe_cache_size parameter records the size (in pages per device) of the stripe cache, which is used for synchronising all write operations to the array and all read operations if the array is degraded. The default is 256, which amounts to 3 MB of memory on a three-disk array with 4 KiB pages. Valid values are 17 to 32768. Make sure your system has enough memory available: memory_consumed = system_page_size * nr_disks * stripe_cache_size.
getconf PAGESIZE
# sudo does not apply to shell redirection, so use tee instead of echo >
echo 32768 | sudo tee /sys/block/md0/md/stripe_cache_size
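A quick sanity check of the memory cost before raising the value; a minimal sketch, assuming a four-disk array (adjust nr_disks to match yours):

page_size=$(getconf PAGESIZE)   # typically 4096
nr_disks=4                      # assumption: number of member disks
stripe_cache_size=32768
echo "$(( page_size * nr_disks * stripe_cache_size / 1024 / 1024 )) MB"
# 4096 * 4 * 32768 = 512 MB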
mdadm --stop /dev/md1
mdadm --remove /dev/md1
mdadm --zero-superblock /dev/sdX1
Repartition the disk with fdisk, setting the partition type to Linux RAID (type 29 on GPT):
sudo fdisk /dev/sdX
Command (m for help): d
Command (m for help): n
Command (m for help): t
Partition type (type L to list all types): 29
mdadm --create /dev/mdX --level=raid1 --raid-devices=2 /dev/sdX1 missing
cat /proc/mdstat
ls /dev/md*
sudo mdadm --assemble --scan
blkid
sudo mount /dev/md0 /mnt
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
rsync --progress -arHAX <source dir> <destination dir>
(add -z to compress the transfer for network copies)
Compare the sizes of source and destination to verify the copy:
du -sh <source dir> <destination dir>