
I accidentally ran sudo rm -r on my RAID 1 mount point. I immediately realized my error, panicked, and hit CTRL+C to cancel, but some damage was already done. The lost+found directory and some of my data are gone, but most of it is still there. I can recover my lost data, but I am worried about the integrity of the RAID and about the lost+found directory. So I have two questions:

  1. Is my RAID okay? Suppose I had deleted the whole RAID mount point. That should only delete the data and the mount point directory, while the RAID itself would still be intact, so it could be remounted (see the sketch after these questions) and I could restore the data from a backup. Is that correct?

  2. Do I need to worry about the lost+found directory? If I understand correctly, the lost+found directory only contains names for unlinked files that were found on the disk, so deleting it should not be a problem: the unlinked files themselves are not deleted, and they will be found again and placed in lost+found under a new name. Is that correct?
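
To make question 1 concrete, here is roughly what I have in mind (a sketch only, using the device and mount point names from the output below; I have not run any of this yet):

sudo umount /mount/raid1                             # unmount the filesystem
sudo mdadm --stop /dev/md0                           # stop the array
sudo mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1   # reassemble it from its member partitions
sudo mount /dev/md0 /mount/raid1                     # remount, then restore the deleted files from backup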

Here is some diagnostic output for the RAID:

user@host:~ $ cat /proc/mdstat
Personalities : [raid1] [linear] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[1] sda1[0]
      5860385344 blocks super 1.2 [2/2] [UU]
      bitmap: 0/44 pages [0KB], 65536KB chunk

unused devices: <none>

user@host:~ $ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Fri May 31 11:25:15 2024
        Raid Level : raid1
        Array Size : 5860385344 (5.46 TiB 6.00 TB)
     Used Dev Size : 5860385344 (5.46 TiB 6.00 TB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Fri Sep 27 15:14:40 2024
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : host:0  (local to host host)
              UUID : ...
            Events : 87057

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1

user@host:~ $ lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
sda           8:0    0  5.5T  0 disk
└─sda1        8:1    0  5.5T  0 part
  └─md0       9:0    0  5.5T  0 raid1 /mount/raid1
sdb           8:16   0  5.5T  0 disk
└─sdb1        8:17   0  5.5T  0 part
  └─md0       9:0    0  5.5T  0 raid1 /mount/raid1
...

In summary, everything looks fine to me as a layman, but I am asking here to make sure. Any help or reference points would be much appreciated!


1 Answer


It's fine to delete files with rm on an mdadm RAID. It happens all day, every day, perhaps not usually quite so accidentally. If the missing lost+found directory is an issue, e2fsck or mklost+found will recreate it.
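
For example, a minimal sketch of those two options (assuming the filesystem on /dev/md0 is ext4 and mounted at /mount/raid1 as shown in the question; e2fsck should only be run on an unmounted filesystem):

cd /mount/raid1 && sudo mklost+found     # recreate lost+found in place (mklost+found is part of e2fsprogs)

sudo umount /mount/raid1                 # or: unmount and run a full check, which also recreates
sudo e2fsck -f /dev/md0                  # lost+found and reattaches any orphaned inodes it finds
sudo mount /dev/md0 /mount/raid1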

If you deleted your mdadm.conf, you'd have to make a new one.
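
As an illustration, one common way to regenerate it (a sketch, assuming a Debian/Ubuntu layout where the file lives at /etc/mdadm/mdadm.conf; other distributions may use /etc/mdadm.conf and a different initramfs tool):

sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf   # append the current array definition
sudo update-initramfs -u                                         # so the array is assembled at boot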

If you are growing your raid using --backup-file and you deleted that backup file, that could be a problem. But this file has to be stored outside the raid, anyway.

Some extreme corner cases aside, if no issues have appeared so far, it should be fine as far as the mdadm RAID is concerned.
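
If you want extra reassurance, a minimal sketch of checks you could run (the sysfs paths assume the array is /dev/md0 as in the question; the "check" action only reads and compares the mirrors, it does not rewrite data):

cat /proc/mdstat                                      # array state and member health at a glance
sudo mdadm --detail /dev/md0                          # should report State : clean, both devices active sync

echo check | sudo tee /sys/block/md0/md/sync_action   # optional: trigger a full mirror consistency scrub
cat /sys/block/md0/md/mismatch_cnt                    # 0 after the scrub finishes means the copies agree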

  • Thanks, I am more relaxed now. But out of curiosity, if I accidentally ran rm -r on the RAID mount point, would it break the RAID? Commented Sep 27, 2024 at 22:00
  • No, "rm -r" works at a different level than the RAID, so it should be impossible for any ext4 operations to "break" the RAID. You might delete all your data and render the system unbootable (depending on what was deleted), but the RAID will be fine (if totally inaccessible). :-) Commented Sep 28, 2024 at 16:53
