cat /proc/mdstat
smartctl -H /dev/sdX
smartctl -i /dev/sdX
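If the health status looks questionable, a short SMART self-test can be started and its result read back afterwards (standard smartctl options; the device name is a placeholder):
smartctl -t short /dev/sdX
smartctl -a /dev/sdX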
mdadm --stop /dev/md1
mdadm --zero-superblock /dev/sdX1
gdisk /dev/sdX
Command (m for help): d
Command (m for help): n
Command (m for help): t
  (set the partition type to fd00, Linux RAID)
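The same layout can also be created non-interactively with sgdisk, which is handy for scripting; the single partition spanning the whole disk below is an assumption:
sgdisk --zap-all /dev/sdX
sgdisk -n 1:0:0 -t 1:fd00 /dev/sdX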
mdadm /dev/md0 --add /dev/sdc1
echo 1000000 > /proc/sys/dev/raid/speed_limit_max
Before doing this, check the currently configured limit:
cat /proc/sys/dev/raid/speed_limit_max
mdadm /dev/md0 --replace /dev/sdX1 --with /dev/sdc1
sdX1 is the device you want to replace; sdc1 is the device that should take its place and must already be added to the array as a spare. The --with option is optional; if it is not specified, any available spare is used. Once the resync has finished, the replaced drive is marked as failed.
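A typical sequence might look like this, assuming /dev/sdc1 is the freshly prepared disk and /dev/sdX1 the one being retired (device names are placeholders):
mdadm /dev/md0 --add /dev/sdc1
mdadm /dev/md0 --replace /dev/sdX1 --with /dev/sdc1
watch cat /proc/mdstat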
mdadm --manage /dev/md0 --remove /dev/sdX1
mdadm --remove /dev/md0
Append the output of mdadm -Es to the contents of /etc/mdadm/mdadm.conf:
mdadm -Es >> /etc/mdadm/mdadm.conf
mount
free -m -t
blkid
ls -la /dev/disk/by-uuid
cat /etc/fstab
Compare the UUIDs in /etc/fstab with the ones found in /dev/disk/by-uuid. Make sure you copy the correct UUID (md0, md1) to the respective entry in fstab.
vim /etc/fstab
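For reference, an fstab entry for an md array mounted by UUID looks roughly like this (UUID and mount point are made-up placeholders; take the real UUID from blkid):
UUID=0a1b2c3d-4e5f-6789-abcd-ef0123456789  /srv/data  ext4  defaults  0  2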
Most Debian and Debian-derived distributions ship a cron job in /etc/cron.d/mdadm which issues an array check at 01:06 on the first Sunday of each month. This task appears as a resync in /proc/mdstat and in syslog, so if you suddenly see the RAID resyncing for no apparent reason, this is a good place to look.
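To see the schedule on your own system, or to abort a check that is already running, something along these lines works (sync_action is the standard md sysfs interface; adjust the array name):
cat /etc/cron.d/mdadm
cat /sys/block/md0/md/sync_action
echo idle > /sys/block/md0/md/sync_action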
Normally the kernel throttles resync activity (cf. nice) to avoid impacting the performance of the RAID device. It can nevertheless be a good idea to tune the resync parameters for optimal performance.
sudo sysctl dev.raid.speed_limit_min
sudo sysctl dev.raid.speed_limit_max
dev.raid.speed_limit_min = 1000
dev.raid.speed_limit_max = 200000
sudo sysctl -w dev.raid.speed_limit_min=10000
sudo sysctl -w dev.raid.speed_limit_max=100000
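To make the limits persist across reboots they can be dropped into a sysctl configuration file (the file name is an arbitrary choice):
echo 'dev.raid.speed_limit_min = 10000' | sudo tee /etc/sysctl.d/90-md-resync.conf
echo 'dev.raid.speed_limit_max = 100000' | sudo tee -a /etc/sysctl.d/90-md-resync.conf
sudo sysctl --system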
blockdev --getra /dev/mdX
blockdev --setra 65536 /dev/mdX
cat /sys/block/sdX/device/queue_depth
echo 1 > /sys/block/sdX/device/queue_depth
It records the size (in pages per device) of the stripe cache, which is used for synchronising all write operations to the array and all read operations if the array is degraded. The default is 256 pages; with a 4 KB page size and three member disks that amounts to about 3 MB of memory. Valid values are 17 to 32768. Make sure your system has enough memory available: memory_consumed = system_page_size * nr_disks * stripe_cache_size.
getconf PAGESIZE
echo 32768 | sudo tee /sys/block/md0/md/stripe_cache_size
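A quick back-of-the-envelope check of the memory cost before raising the value, using the formula above (the disk count of 4 is an assumption; substitute your own):
PAGE=$(getconf PAGESIZE)   # usually 4096
DISKS=4                    # assumed number of member disks
SCS=32768                  # intended stripe_cache_size
echo $(( PAGE * DISKS * SCS / 1024 / 1024 )) MB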
mdadm --stop /dev/md1
mdadm --remove /dev/md1
mdadm --zero-superblock /dev/sdX1
Partition the disk with fdisk and set the partition type to Linux RAID (type 29):
sudo fdisk /dev/sdX
Command (m for help): d
Command (m for help): n
Command (m for help): t
  (set the partition type to 29, Linux RAID)
mdadm --create /dev/mdX --level=raid1 --raid-devices=2 /dev/sdX1 missing
cat /proc/mdstat
ls /dev/md*
mdadm --manage /dev/mdX --add /dev/sdX1
sudo mdadm --assemble --scan
blkid
sudo mount /dev/md0 /mnt
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
mdadm --grow /dev/md5 -z max
watch -d cat /proc/mdstat
pvdisplay
pvresize /dev/mdX
lvextend -l +100%FREE /dev/<volume_group>/<logical_volume>
cryptsetup resize /dev/mapper/<volume_group>-<logical_volume>_crypt
resize2fs -p /dev/mapper/<volume_group>-<logical_volume>_crypt
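Afterwards it is worth confirming that every layer picked up the new size; the commands are standard, only the device names are assumptions:
mdadm --detail /dev/md5 | grep 'Array Size'
sudo pvs
sudo lvs
df -h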
google: add drive to degraded raid 1 linux
1. Identify the array and the new drive. Check the current status of the RAID array to find the device name (e.g. /dev/md0) and confirm which drive is missing:
cat /proc/mdstat
Identify the new physical drive (e.g. /dev/sdb):
lsblk
2. Copy the partition table (optional but recommended). If the RAID is built on partitions rather than whole disks, copy the partition layout from the surviving drive (e.g. /dev/sda) to the new drive (e.g. /dev/sdb):
# WARNING: this overwrites the partition table on /dev/sdb
sudo sfdisk -d /dev/sda | sudo sfdisk /dev/sdb
3. Add the new drive (or its partition) to the degraded array:
sudo mdadm /dev/md0 --add /dev/sdb     # whole disk
sudo mdadm /dev/md0 --add /dev/sdb1    # specific partition
4. Monitor the rebuild. Once added, the system automatically begins synchronising data to the new drive; watch the progress (percentage and estimated time) with:
watch -n 1 cat /proc/mdstat
5. Update the bootloader (if necessary). If the array contains the operating system, install GRUB on the new drive as well so the system can still boot if the other drive fails:
sudo grub-install /dev/sdb
Summary of useful commands:
sudo mdadm --detail /dev/md0                 # check details
sudo mdadm /dev/md0 --remove /dev/sdb1       # remove a failed drive (if still listed)
sudo mdadm --zero-superblock /dev/sdb1       # zero out old metadata on a used disk
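On GPT disks the sfdisk dump/restore in step 2 can be replaced with sgdisk, replicating the table and then randomising the GUIDs (same device names as above):
sgdisk -R /dev/sdb /dev/sda    # copy partition table from sda to sdb
sgdisk -G /dev/sdb             # give sdb new random GUIDs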