dr jimbob
  1. Multiple Iterations - This helps. It's called key-strengthening, and it raises the attacker's computational cost. E.g., if you key-strengthen by a factor of 1,000, an attacker can try 1,000 times fewer candidate passwords in the same amount of GPU/CPU time.

  2. Multiple hash functions - In theory this helps, but not really in practice. The case for it would be that one particular hash function in the chain is somehow broken (e.g., you can reverse any 256-bit hash by trying significantly fewer than 2^256 passwords, or can significantly simplify the brute-forcing of common passwords) while the other hash functions in the chain are not. E.g., your attacker has broken iterated SHA-512 and so can quickly come up with a password that works for sha512 key-strengthened N times, but not for sha512(salt||bcrypt(salt||sha512(salt||pw))). However, in my view, with modern cryptographic hash functions and typical attackers that's a fairly unlikely attack scenario -- there is much lower-hanging fruit that doesn't require the attacker to make fundamental advances in computer science. Possibly the NSA or a serious academic computer scientist may come up with clever ways to break hashing algorithms, but not your average malicious attacker. Also, even the hash algorithms that are broken and should generally be avoided (e.g., MD5 and SHA-1) are usually only broken against collision attacks (finding a pair of distinct strings s1, s2 satisfying hash(s1) == hash(s2) much faster than expected), not preimage attacks (given a hash h, finding an s satisfying hash(s) = h), which is the relevant attack on a password hash.

  3. Salt mixing - How the salt is mixed in is an implementation detail, and doing it in an unusual way adds no difficulty for an attacker. (Precomputed rainbow tables for key-strengthened salted hashes don't exist.)

  4. Unique schemes - Again, an implementation detail that adds no security. Also, don't run bcrypt multiple times; that's silly. Bcrypt is already key-strengthened. E.g., if you hash using bcrypt with a cost of 16, the key goes through 2^16 ~ 65536 rounds, so if you want a stronger hash just increase the cost (every increase of 1 makes bcrypt twice as slow).

  5. Black Magic - As a programmer, I wouldn't suggest doing this; black magic is a maintenance nightmare for you and your team, and again not very significant for an attacker to overcome. Say you decide to adopt a new technology, so you have to rewrite the non-standard hash routine in a new language on a new platform; unless it's very well documented, you may reimplement it slightly differently and break the system. And if it is well documented, an attacker who steals your source code can see exactly what the function does to a password and use that in their brute forcing. If they managed to steal your hashes, they probably also control at least one account where they know the password and stole the hash, so they can try reverse engineering the scheme even without the source code. It's like obfuscating the source code that runs on your own server: more of a hassle for you, the developer/maintainer, than for attackers.

  6. Obfuscation - Don't do this; see black magic. Or just operate on Kerckhoffs's principle (aka Shannon's maxim): make the system strong even if the enemy knows the system perfectly. Obfuscation/black magic can be overcome with a little analysis; it provides some security (defense in depth), but not much. A strong key-strengthened cryptographic hash of a high-entropy passphrase, on the other hand, can easily be built with a computational cost that makes it completely infeasible to attack with all the computers in the universe for thousands of years, barring major advances in computer science (e.g., quantum computers or a break of the hash function).
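
As a concrete sketch of point 1, Python's standard library exposes iterated hashing through hashlib.pbkdf2_hmac; the passphrase and iteration count here are illustrative, not a tuning recommendation:

```python
import hashlib
import os

def strengthen(password: str, salt: bytes, iterations: int = 200_000) -> bytes:
    # PBKDF2 iterates HMAC-SHA512 `iterations` times, so every guess an
    # attacker makes costs `iterations` hash computations instead of one.
    return hashlib.pbkdf2_hmac("sha512", password.encode(), salt, iterations)

salt = os.urandom(16)   # random per-user salt, stored alongside the hash
digest = strengthen("correct horse battery staple", salt)
```

Raising `iterations` by a factor of 1,000 slows the attacker's guessing by the same factor, at the cost of one slower (but still fast enough) login check for you.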
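
A minimal sketch of the chaining idea in point 2, using two unrelated stdlib hash functions; this illustrates the construction discussed above, it is not a recommendation over a single vetted KDF:

```python
import hashlib

def chained(password: bytes, salt: bytes) -> bytes:
    # sha512(salt || sha256(salt || pw)): recovering the password would
    # require a preimage attack on both the outer and the inner function.
    inner = hashlib.sha256(salt + password).digest()
    return hashlib.sha512(salt + inner).digest()
```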
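
The part of point 3 that actually matters -- a unique random salt per user -- is enough to defeat precomputed tables, however plainly it's mixed in. A sketch (the function name is mine):

```python
import hashlib
import os

def new_password_record(password: bytes) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique per user; stored in the clear next to the hash
    digest = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
    return salt, digest

# The same password hashes differently for every user, so a single
# precomputed table cannot cover all accounts.
</imports>
```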
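
The cost arithmetic in point 4: bcrypt's cost parameter is a base-2 logarithm of the round count, so incrementing it doubles the work, which is why re-running bcrypt is pointless:

```python
def bcrypt_rounds(cost: int) -> int:
    # bcrypt's cost is a log2 work factor: rounds = 2**cost.
    return 2 ** cost

assert bcrypt_rounds(16) == 65536                   # cost 16 -> 2^16 rounds
assert bcrypt_rounds(17) == 2 * bcrypt_rounds(16)   # +1 cost -> twice as slow
```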
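
To make point 6's infeasibility claim concrete, a back-of-the-envelope estimate; the entropy, per-guess cost, and attacker size are made-up round numbers, not measurements:

```python
# A 72-bit-entropy passphrase, key-strengthened so one core manages about
# 1,000 guesses per second, against an attacker with 10^8 such cores.
expected_guesses = 2 ** (72 - 1)        # on average, half the keyspace
guesses_per_second = 1_000 * 10 ** 8    # per-core rate times core count
seconds = expected_guesses / guesses_per_second
years = seconds / (365.25 * 24 * 3600)  # on the order of centuries
```

Add a few more bits of entropy or another factor of key-strengthening and the figure runs to many thousands of years.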
