Does unused flash memory degrade faster?

No. The opposite is the case.

So my question is: Is there something about the nature of flash memory that can explain empty blocks going bad after some time on the shelf?

No, but that's also not what is happening: empty blocks usually don't "exist". Let me explain:

Between the physical flash storage (i.e., the memory cells in a chip) and you are multiple layers: the software you're using (i.e., your operating system), its filesystem driver, its link (USB?) driver, the host USB controller; a cable; the device's USB controller, the link driver, the flash abstraction layer, and then finally the physical connection to the flash memory.

This flash abstraction layer has the job of presenting an inherently unreliable storage device¹ as a reliable device.

Thing is: NAND flash (as used in USB drives) can only program bits from 1 to 0; to get a bit from 0 back to 1, it has to erase a whole erase block (a group of pages) at once. So, if you change a byte of storage, the flash device looks up an unused page in a table, freshly erases it, and writes the modified data onto it. That's better than erasing and rewriting the block that currently holds the data, simply because that way you're not erasing/rewriting the same flash cells over and over again for logical blocks that you write to often. And erasing/rewriting is what ages a cell, i.e., increases the probability that a bit is not read back as it was written.
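
To make that concrete, here's a minimal sketch in Python (a toy model, not any real controller's interface): programming a page can only pull bits from 1 to 0, which acts like a bitwise AND with the existing contents, and only a block-wide erase brings bits back to 1.

    PAGE_SIZE = 4        # bytes per page; toy value, real NAND pages are KiB-sized
    PAGES_PER_BLOCK = 4  # real erase blocks contain dozens to hundreds of pages

    def erase_block(block):
        """Erase is block-granular: every cell in every page goes back to 1."""
        for page in block:
            page[:] = b'\xFF' * PAGE_SIZE

    def program_page(page, data):
        """Programming can only pull bits from 1 to 0, never from 0 back to 1."""
        page[:] = bytes(old & new for old, new in zip(page, data))

    # A freshly erased block: all bits are 1.
    block = [bytearray(b'\xFF' * PAGE_SIZE) for _ in range(PAGES_PER_BLOCK)]

    program_page(block[0], b'\x5A' * PAGE_SIZE)        # first write works as expected
    assert block[0] == bytearray(b'\x5A' * PAGE_SIZE)

    program_page(block[0], b'\xA5' * PAGE_SIZE)        # "overwrite" only clears more bits
    assert block[0] == bytearray(b'\x00' * PAGE_SIZE)  # 0x5A & 0xA5 == 0x00

    erase_block(block)                                 # only a full erase resets the block
    assert all(page == bytearray(b'\xFF' * PAGE_SIZE) for page in block)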

That necessitates two data structures:

  1. the "free blocks" list
  2. the "which physical block does this logical block map to right now" table, which also has to record whether a logical block is currently marked as unused, i.e., not backed by any physical page at all, and should read back as all zeros.

So, with 2. it becomes clear that you don't have "unused" blocks. You have blocks that have not been written to since they were marked as no longer in use.
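
As a rough sketch of those two structures (again a toy Python model; the name ToyFTL and its methods are made up for illustration, real device firmware is far more involved): writes are always redirected to a page taken from the free pool, and a logical block without a mapping entry simply reads back as zeros.

    class ToyFTL:
        """Toy flash translation layer with just the two structures from the
        text; real firmware adds ECC, wear leveling, caching, and much more."""

        def __init__(self, num_pages, page_size=4):
            self.page_size = page_size
            self.phys = [bytearray(page_size) for _ in range(num_pages)]
            self.free = list(range(num_pages))    # 1. the "free blocks" list
            self.l2p = {}                         # 2. logical block -> physical page
            self.erase_counts = [0] * num_pages   # wear per physical page

        def read(self, logical):
            if logical not in self.l2p:           # unused: backed by no page at all,
                return bytes(self.page_size)      # so it reads back as all zeros
            return bytes(self.phys[self.l2p[logical]])

        def write(self, logical, data):
            phys = self.free.pop(0)               # redirect to a fresh page...
            self.erase_counts[phys] += 1          # ...whose erase-before-write ages it
            self.phys[phys][:] = data
            old = self.l2p.get(logical)
            self.l2p[logical] = phys
            if old is not None:
                self.free.append(old)             # the old page rejoins the pool

    ftl = ToyFTL(num_pages=8)
    assert ftl.read(0) == b'\x00\x00\x00\x00'     # never written: "unused", reads zero
    ftl.write(0, b'ABCD')
    ftl.write(0, b'EFGH')                         # the rewrite lands on a different page
    assert ftl.read(0) == b'EFGH'

A real controller additionally orders that free pool by erase count (wear leveling), which is exactly why frequently rewritten logical blocks don't keep hitting the same physical cells.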

  • So, what probably happened is that your large file copy depleted the reserve of rarely rewritten pages, forcing your USB drive to use pages that had already been erased and rewritten many times, which then caused the storage errors, OR
  • the large copy simply heated up your thumb drive a lot, because it had a lot of work to do, and maybe your USB port itself isn't great at powering the device, so the device decided to black out.

¹ This is not specific to flash USB drives; it's true for any kind of mass storage, or even just any transport medium, of reasonable density: there's a probability that some bits flip, and that probability is > 0. That's literally just quantum physics at work, right there.
