On Disk Failure

When a disk fails, you simply replace the device, because it was a physical failure. If several disks are failing, the problem is more likely the controller. You can only tell which it is by swapping hardware: test the suspect disk in a different machine, or test a new disk in the suspect machine. If the new disk exhibits the same symptoms there, the controller is at fault; if the new disk works, the controller is okay and the old disk was the problem. In setups like RAID, where multiple disks work together, a failed disk can take out other disks, so there is a chance of systemic failure across multiple devices; think of bad sectors and file corruption on a single drive being mirrored to multiple copies across the array. Even so, disk failure is typically limited to one machine, with the exception of networks where terminals (thin clients) all connect to a central system. Replacing the failed disk usually remedies the issue, and statistically you're unlikely to hit the same problem again unless the replacement was purchased at the same time and came from the same vendor run, in which case it may share a manufacturing defect in that component line.
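If you want a quick read on whether the drive itself is dying before you start swapping hardware, the drive's SMART data is the first place to look. Below is a minimal sketch, assuming a Linux machine with smartmontools installed and root privileges; the device path is just an example.

```python
# Minimal sketch: query a drive's SMART overall-health verdict with smartctl.
# Assumes Linux, smartmontools installed, and root privileges.
import subprocess
import sys

def smart_health(device: str) -> str:
    """Return the SMART overall-health line for a device such as /dev/sda."""
    result = subprocess.run(
        ["smartctl", "-H", device],
        capture_output=True, text=True,
    )
    for line in result.stdout.splitlines():
        # ATA drives report "overall-health self-assessment", SCSI drives "SMART Health Status"
        if "overall-health" in line or "SMART Health Status" in line:
            return line.strip()
    return "No SMART health verdict found (unsupported device or insufficient permissions?)"

if __name__ == "__main__":
    device = sys.argv[1] if len(sys.argv) > 1 else "/dev/sda"
    print(smart_health(device))
```

A failing verdict (or a growing count of reallocated or pending sectors in the full `smartctl -a` output) points at the disk; a clean bill of health on a drive that still misbehaves is a hint to look at the controller or cabling instead.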

It's not always only one machine with ransomware, though

Some ransomware can encrypt more than one machine on a network. I have a client whose server was infected; the virus spread to multiple machines on the network, and then the server and the workstations were all locked. Even when the server is not itself infected but is merely acting as a host, workstations that connect to the infected files can be reinfected over and over. Either way, the issue needs to be fixed systemically to stop the infection.

If a hard drive in a workstation dies, it doesn't replicate that failure across the network. Comparing ransomware to hard-drive failure is like comparing cancer to a broken limb: both are debilitating, but they are very different problems.
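For the host-and-reinfect scenario above, one rough way to spot trouble on a file share is to look for recently modified files whose contents are almost perfectly random, which is what well-encrypted data looks like. The sketch below is only a heuristic, not a detection product: the share path, sample size, and threshold are made-up illustrative values, and already-compressed formats (zip, jpeg, video) will show up as false positives.

```python
# Rough heuristic sketch: flag recently modified, high-entropy files on a share.
# Encrypted data is close to random, so its byte entropy approaches 8 bits/byte.
# The path, sample size, and threshold are illustrative values, not recommendations.
import math
import os
import time

SHARE = "/mnt/fileserver/shared"   # hypothetical mount point of the shared folder
SAMPLE_BYTES = 64 * 1024           # read only the first 64 KiB of each file
THRESHOLD = 7.5                    # bits per byte; plain documents are usually far lower
RECENT = 24 * 3600                 # only consider files modified in the last day

def entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

now = time.time()
for root, _dirs, files in os.walk(SHARE):
    for name in files:
        path = os.path.join(root, name)
        try:
            if now - os.path.getmtime(path) > RECENT:
                continue
            with open(path, "rb") as f:
                sample = f.read(SAMPLE_BYTES)
        except OSError:
            continue  # unreadable or vanished file; skip it
        bits = entropy(sample)
        if bits > THRESHOLD:
            print(f"{bits:.2f} bits/byte  {path}")
```

Anything this flags deserves a closer look, but it doesn't change the point above: the host and the clients have to be cleaned together, or the reinfection loop just continues.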

Note: the system/network still needs to be secured prior to and after an investigation; otherwise the same attack can recur.

In some cases, by enabling BitLocker encryption on a volume that was not already encrypted, an attacker can prevent access to the entire volume and lock the user out of their own system. In this case the encryption is a built-in Windows feature, so there would be no evidence of anything "installed" on the system; there would, however, be evidence of an intrusion, if the system was configured in such a way that the user's account could be compromised either remotely or in person through the insertion of an infected device.
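If you suspect this kind of abuse, a quick first check is which volumes currently have BitLocker protection turned on. A minimal sketch, assuming a Windows machine, an elevated prompt, and the built-in manage-bde tool; the final string check also assumes English-language output.

```python
# Minimal sketch: dump BitLocker protection status for every volume via manage-bde.
# Assumes Windows, an elevated (administrator) prompt, and an edition that ships
# the manage-bde command; the string match assumes English-language output.
import subprocess

def bitlocker_status() -> str:
    """Return the raw output of 'manage-bde -status' for all volumes."""
    result = subprocess.run(
        ["manage-bde", "-status"],
        capture_output=True, text=True,
    )
    return result.stdout

if __name__ == "__main__":
    report = bitlocker_status()
    print(report)
    if "Protection On" in report:
        print(">> At least one volume has BitLocker protection enabled; verify it was enabled intentionally.")
```

Checking the Windows event logs around the time protection was turned on, and who was logged in when it happened, is one place to look for the evidence of intrusion mentioned above.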
