We recently suffered a strange issue with one of our NAS devices: a QNAP 8-bay NAS. The device itself has two RAID-5 volumes of four disks each.

After a power outage that outlasted the UPS's ability to keep things running, the unit shut down. To our dismay, when it powered back up, only one of the two RAID volumes was available.

So, we open a terminal and SSH into the NAS in order to run a few diagnostic commands and see what's going on. In this situation, a knowledge of *nix will go a long way towards helping you understand the particular problems you are facing. The QNAP NAS boxes run a custom Linux implementation based on Ubuntu (Linux kernel 2.6).

Firstly, run `mdadm` — this command is used to manage and monitor software RAID devices in Linux. Output of `mdadm --detail` (full output screenshots omitted):

```
# mdadm --detail /dev/md0
UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:513ee963

# mdadm --detail /dev/md1
UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:514de4de
```

Both volumes seem to still exist, and the RAID disk info is intact.

Using `df -h` (a command to check disk space/usage), we can see the `/dev/md0` volume is OK, but the `/dev/md1` volume is gone:

```
# df -h
Filesystem            Size      Used Available Use% Mounted on
/dev/md9            509.5M    121.5M    388.0M  24% /mnt/HDA_ROOT
df: /share/external/UHCI Host Controller: Protocol error
```

Trying to manually mount failed, but it turns out there's a simple solution. Run a manual file system check and repair first:

```
# e2fsck_64 -fp -C 0 /dev/md1
```

Depending on the size of your volume this could potentially take quite some time — about 4 hours in the case of our 12TB RAID-5 volume. Be patient: some parts of the process don't update you on the status.

Now, mount the device:

```
# mount -t ext4 /dev/md1 /share/MD1_DATA
```
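The check-then-mount sequence above can be sketched as a small shell script. This is a minimal sketch, not QNAP's own tooling: the `run` helper and the `DRY_RUN` switch are illustrative additions, the device and mountpoint defaults (`/dev/md1`, `/share/MD1_DATA`) are from our setup, and on a stock Linux box the checker is plain `e2fsck` rather than QNAP's `e2fsck_64` build — adjust all of these for your own system.

```shell
#!/bin/sh
# Sketch of the repair-then-mount sequence from the post above.
# DEVICE/MOUNTPOINT defaults come from our setup -- change them for yours.
# DRY_RUN=1 (the default here) only prints the commands, so you can
# review the exact invocations before touching a real array.
DEVICE=${DEVICE:-/dev/md1}
MOUNTPOINT=${MOUNTPOINT:-/share/MD1_DATA}
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# 1. Force a full check and repair:
#    -f  check even if the filesystem looks clean,
#    -p  "preen" mode (fix safe problems without prompting),
#    -C 0  print a progress indicator to stdout.
run e2fsck -fp -C 0 "$DEVICE" &&
# 2. Mount only if the check exited cleanly.
run mount -t ext4 "$DEVICE" "$MOUNTPOINT"
```

One caveat on the `&&`: `e2fsck` exits with status 1 even when it successfully corrected errors (only 0 means "no errors found"), so a stricter script might treat any exit status below 4 as success before mounting.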