| sdh | sdh1 | md3 | vg2 | /backup |
  
see also [[hw:machines#apollo|Hardware configuration]]
  
==== Replace a failed RAID disk ====
sudo sysctl dev.raid.speed_limit_max</code>
  * Default system values on Debian 10:<code>dev.raid.speed_limit_min = 1000
dev.raid.speed_limit_max = 200000</code>
  * Reduce max limit to make the server more responsive during resync (2021-12-05); a persistence sketch follows below:<code>sudo sysctl -w dev.raid.speed_limit_min=10000
sudo sysctl -w dev.raid.speed_limit_max=100000</code>
  
=== read-ahead ===
  * Get current read-ahead (in 512-byte sectors) per Raid device (default value is 512 on Debian 10):<code>blockdev --getra /dev/mdX</code>
  * Set to 32 MB:<code>blockdev --setra 65536 /dev/mdX</code>
  * Set to 65536 on a server with 32 GB memory, 32768 on a server with 8 GB memory (2021-12-05); the sketch below shows the sector arithmetic
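Since ''blockdev --setra'' counts 512-byte sectors, the value for a target size in MB can be computed instead of hard-coded. A small sketch, assuming ''/dev/md0'' as the device (a placeholder):<code># 1 MB = 2048 sectors of 512 bytes, so 32 MB = 32 * 2048 = 65536 sectors
sudo blockdev --setra $(( 32 * 2048 )) /dev/md0
# verify the new value
blockdev --getra /dev/md0</code>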
  
=== Disable NCQ ===
==== Raid 5, 6 only ====
=== stripe_cache_size ===
It records the size (in pages per device) of the stripe cache, which is used for synchronising all write operations to the array and all read operations if the array is degraded. The default is 256, which equates to 3 MB memory consumption on a 3-disk array. Valid values are 17 to 32768. Make sure your system has enough memory available: memory_consumed = system_page_size * nr_disks * stripe_cache_size.
  * Find system page size, on Debian 10 this is 4096:<code>getconf PAGESIZE</code>
  * Set to 384 MB memory consumption on a 3-disk Raid:<code>echo 32768 | sudo tee /sys/block/md0/md/stripe_cache_size</code>
  * Set to 32768 on a server with 32 GB memory, set to 16384 on a server with 8 GB memory (2021-12-05); a worked check of the memory formula follows below
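A worked check of the memory formula above, assuming 4096-byte pages and a 3-disk array:<code># memory_consumed = system_page_size * nr_disks * stripe_cache_size
#                 = 4096 * 3 * 32768 = 402653184 bytes = 384 MB
echo $(( $(getconf PAGESIZE) * 3 * 32768 / 1024 / 1024 )) MB</code>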
===== Prepare RAID with single disk =====
==== Prepare new disk ====
  - If the new disk contains partitions
    - Stop any Raid partitions with<code>mdadm --stop /dev/md1
mdadm --remove /dev/md1</code>
    - Remove the superblocks<code>mdadm --zero-superblock /dev/sdX1</code>
    - Remove existing partitions with ''fdisk /dev/sdX''
  - Create a new partition utilizing the full disk space. When asked, remove the existing signature. Change partition type to ''Linux RAID'' (type 29 in the fdisk type list)<code>sudo fdisk /dev/sdX
Command (m for help): d
Command (m for help): n
Command (m for help): t
Partition type (type L to list all): 29</code>
  - Create the RAID<code>mdadm --create /dev/mdX --level=raid1 --raid-devices=2 /dev/sdX1 missing</code>
  - Check the RAID was created (a sketch for completing the mirror later follows this list)<code>cat /proc/mdstat
ls /dev/md*</code>
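Once the data has been migrated, the missing half of the mirror can be added and mdadm rebuilds it automatically. A minimal sketch, assuming ''/dev/md0'' is the degraded array and ''/dev/sdY1'' is a partition on the second disk, prepared the same way as above (both names are placeholders):<code># add the second partition; the rebuild starts immediately
sudo mdadm /dev/md0 --add /dev/sdY1
# watch the rebuild progress
cat /proc/mdstat
sudo mdadm --detail /dev/md0</code>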
==== Links ====
  * [[https://unix.stackexchange.com/questions/63928/can-i-create-a-software-raid-1-with-one-device|Can I create a software RAID 1 with one device]]
  * [[https://wiki.archlinux.org/title/Convert_a_single_drive_system_to_RAID|Convert a single drive system to RAID]]
  * [[https://bobcares.com/blog/removal-of-mdadm-raid-devices/|Removal of mdadm RAID Devices – How to do it quickly?]]
  
===== Move RAID to a new machine =====
  - Append info to mdadm.conf<code>mdadm --detail --scan >> /etc/mdadm/mdadm.conf</code>
  - Update initramfs<code>update-initramfs -u</code>
  - Copy entire disk to new RAID (add ''-z'' for network transfers; usage sketch below)<code>rsync --progress -arHAX <source dir> <destination dir></code>
  - Check directory sizes on source and destination to verify the copy<code>du -sh</code>
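A hypothetical invocation of the copy step, assuming the old disk is mounted at ''/mnt/old'' and the new RAID at ''/mnt/md0'' (both paths are placeholders). The trailing slash on the source copies its contents rather than the directory itself:<code>rsync --progress -arHAX /mnt/old/ /mnt/md0/
# compare the resulting sizes
du -sh /mnt/old /mnt/md0</code>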
  
==== Troubleshooting ====
  * [[https://unix.stackexchange.com/questions/23879/using-mdadm-examine-to-write-mdadm-conf/52935#52935|Using mdadm --examine to write mdadm.conf]]
  * [[https://askubuntu.com/questions/729370/can-i-transfer-my-mdadm-software-raid-to-a-new-system-in-case-of-hardware-failur|Can I transfer my mdadm Software raid to a new system in case of hardware failure?]]
  * [[https://unix.stackexchange.com/questions/324662/how-to-bring-up-an-inactive-mdadm-raid-after-a-reboot-while-adding-a-drive-to-a|How to bring up an inactive mdadm RAID after a reboot while adding a drive to a raid-6?]]
  * [[https://support.hpe.com/hpesc/public/docDisplay?docId=c02101717&docLocale=en_US|RedHat Enterprise Linux 5: How to Activate a Software Raid Device Manually]]
  * [[https://www.diskinternals.com/raid-recovery/how-to-remove-software-raid-with-mdadm/|Remove mdadm RAID Devices – How to do it?]]
  * [[https://www.oreilly.com/library/view/managing-raid-on/9780596802035/ch04s06s01.html|mdadm.conf]]
===== Links =====
  * [[https://raid.wiki.kernel.org/index.php/Resync|Resync]]
  * [[https://www.cyberciti.biz/tips/linux-raid-increase-resync-rebuild-speed.html#:~:text=The%20%2Fproc%2Fsys%2Fdev,The%20default%20is%201000.|5 Tips To Speed Up Linux Software Raid Rebuilding And Re-syncing]]
  * [[https://bobcares.com/blog/raid-resync/|RAID resync – Best practices]]
  * [[https://raid.wiki.kernel.org/index.php/A_guide_to_mdadm#Upgrading_a_mirror_raid_to_a_parity_raid|A guide to mdadm]]
  * [[https://www.howtoforge.com/how-to-resize-raid-partitions-shrink-and-grow-software-raid|How To Resize RAID Partitions (Shrink & Grow) (Software RAID)]]