When a single disk fails, you simply replace the device, because the failure is physical. If several disks are failing in the same machine, suspect the controller instead. The way to tell the two apart is to swap hardware: if the suspect disk shows the same symptoms in a different machine, the disk itself is bad; if a new disk works in the original machine, you know the controller is okay. In configurations such as RAID, where multiple disks work together, a failed disk can take out other disks, so there is a chance of systemic failure across multiple devices; think of bad sectors and file corruption on a single drive being mirrored to the other copies in the array. Disk failure is typically limited to one machine, with the exception of networks where terminals (thin clients) all depend on a central system. Replacing the failed disk usually remedies the issue, and statistically you are not likely to experience the same failure again unless the replacement disk was purchased at the same time and was part of the same vendor run, in which case it might share a manufacturing defect with the original.
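Before physically swapping hardware, SMART data can often confirm that a drive itself is failing. The sketch below is a minimal example, assuming smartmontools (smartctl) is installed and using hypothetical device paths; it runs a health check against each listed disk and flags any that do not report a passing status. If every disk in the machine reports trouble at once, that pattern points toward the controller or cabling rather than the disks, mirroring the swap test described above.

```python
#!/usr/bin/env python3
"""Minimal sketch: query SMART health for a set of disks so a failing
device (or a pattern suggesting a controller problem) shows up quickly.
Assumes smartmontools is installed; DEVICES holds example paths only."""

import subprocess

DEVICES = ["/dev/sda", "/dev/sdb"]  # example paths, replace with your own


def smart_health(device: str) -> str:
    """Return the raw output of `smartctl -H` for one device."""
    result = subprocess.run(
        ["smartctl", "-H", device],
        capture_output=True,
        text=True,
    )
    return result.stdout


for dev in DEVICES:
    report = smart_health(dev)
    status = "PASSED" if "PASSED" in report else "CHECK THIS DISK"
    print(f"{dev}: {status}")
```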
Some ransomware can encrypt more than one machine on a network. I have a client whose server was infected; the malware spread to multiple machines on the network, and then the server and workstations were all locked. In other cases the server itself may not be infected but is acting as a host for infected files, which causes recurring infections on workstations that connect to those files. Either way, the issue needs to be fixed systemically to stop the infection.