Accidentally Partially Removed RAID 1 Mount Point. Is My RAID Okay?

I accidentally ran sudo rm -r on my RAID 1 mount point. I immediately realized my mistake, panicked, and hit CTRL+C to cancel, but some damage had already been done. The lost+found directory and some of my data are gone, though most of it is still there. I can recover my lost data, but I am worried about the integrity of the RAID and about the lost+found directory. So I have two questions:

  1. Is my RAID okay?
    Suppose I had deleted the entire contents of the RAID mount point. As I understand it, that would only delete the data and the mount point directory, but the RAID array itself would still be intact, so I could remount it and restore the data from a backup. Is that correct? (See the first sketch after this list for how I was planning to verify this.)

  2. Do I need to worry about the lost+found directory?
    If I understand correctly, the lost+found directory only contains names for unlinked files that were found on the disk. So deleting it should not be a problem, as the unlinked files themselves are not deleted and will be found again and placed, under new names, in lost+found. Is that correct? (See the second sketch after this list.)
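
To verify question 1, here is roughly what I was planning to run before trusting the array again. This is only a sketch and assumes the array holds an ext4 (or similar) filesystem, so that fsck -n can do a read-only check:

# Unmount so the filesystem can be checked safely
sudo umount /mount/raid1

# Read-only check: -n answers "no" to every repair prompt, so nothing is changed
sudo fsck -n /dev/md0

# Confirm the array itself still reports a clean state with both members active
sudo mdadm --detail /dev/md0

# Remount afterwards
sudo mount /dev/md0 /mount/raid1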
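
As for question 2, my plan was simply to recreate lost+found myself. This assumes an ext2/3/4 filesystem, since mklost+found is part of e2fsprogs; it pre-allocates blocks so that fsck can use the directory later without having to allocate them itself:

# Recreate lost+found at the root of the mounted filesystem
cd /mount/raid1
sudo mklost+found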

Here is some diagnostic output for the RAID:

user@host:~ $ cat /proc/mdstat
Personalities : [raid1] [linear] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[1] sda1[0]
      5860385344 blocks super 1.2 [2/2] [UU]
      bitmap: 0/44 pages [0KB], 65536KB chunk

unused devices: <none>
user@host:~ $ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Fri May 31 11:25:15 2024
        Raid Level : raid1
        Array Size : 5860385344 (5.46 TiB 6.00 TB)
     Used Dev Size : 5860385344 (5.46 TiB 6.00 TB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Fri Sep 27 15:14:40 2024
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : host:0  (local to host host)
              UUID : ...
            Events : 87057

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
user@host:~ $ lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
sda           8:0    0  5.5T  0 disk
└─sda1        8:1    0  5.5T  0 part
  └─md0       9:0    0  5.5T  0 raid1 /mount/raid1
sdb           8:16   0  5.5T  0 disk
└─sdb1        8:17   0  5.5T  0 part
  └─md0       9:0    0  5.5T  0 raid1 /mount/raid1
...

In summary, everything looks fine to me as a layman, but I am asking here to make sure.
Any help or pointers to relevant references would be much appreciated!