
I'm developing a Windows application with end-to-end encryption and need guidance on securing the Master Key stored on user machines. If a malicious program manages to extract the Master Key, the consequences could be severe. Here are my specific concerns:

Key Storage Vulnerability: While I aim to securely store the Master Key locally, I'm aware that traditional methods (e.g., CNG or Windows Hello) lack inter-process isolation. For example, this HackerOne report demonstrates how keys protected by Windows Hello can still be extracted by malware.

Potential Solution with AppContainer: I’ve explored sandboxing the application via AppContainer to restrict access to the Master Key (e.g., stored in the TPM). However, public Microsoft documentation lacks explicit confirmation on whether AppContainer’s isolation mechanisms can prevent malicious processes from accessing cryptographic secrets.

Request for Recommendations: What Windows-specific approaches would ensure that stored keys remain inaccessible to malicious software? Are there proven implementation patterns for achieving true process isolation in this context?

Any insights or references to official guidance would be greatly appreciated.

2 Answers


To a first approximation, if your app can use the key, so can any other non-sandboxed app running under the same user account. That's just the way the Windows (and MacOS¹, Linux, BSD, etc.) security model works.

You can certainly prevent the key from being exposed. The standard approach for that is hardware (TPM or similar), but you can use software (CNG and similar) too; it's not secure against an attacker with administrator rights, but approximately nothing you care about is, anyhow. In particular, preventing exposure and exfiltration of the key doesn't prevent usage of the key: malware running as your user can still decrypt all encrypted secrets, sign arbitrary spoofed or modified data, and so on.
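
To illustrate the "prevent exposure" part, here is a minimal sketch of creating a non-exportable, TPM-backed key with CNG key storage (NCrypt). The key name is just a placeholder, and you'd fall back to MS_KEY_STORAGE_PROVIDER on machines without a usable TPM:

    // Sketch: create a non-exportable RSA key under the TPM-backed
    // "Microsoft Platform Crypto Provider" via CNG key storage.
    // The key name "MyAppMasterKey" is hypothetical.
    #include <windows.h>
    #include <ncrypt.h>
    #pragma comment(lib, "ncrypt.lib")

    NCRYPT_KEY_HANDLE CreateSealedKey()
    {
        NCRYPT_PROV_HANDLE hProv = 0;
        NCRYPT_KEY_HANDLE  hKey  = 0;

        // TPM-backed provider; use MS_KEY_STORAGE_PROVIDER if there is no TPM.
        if (NCryptOpenStorageProvider(&hProv, MS_PLATFORM_CRYPTO_PROVIDER, 0) != ERROR_SUCCESS)
            return 0;

        if (NCryptCreatePersistedKey(hProv, &hKey, NCRYPT_RSA_ALGORITHM,
                                     L"MyAppMasterKey", 0, 0) == ERROR_SUCCESS)
        {
            // Forbid export: the private key can be used but never read out.
            DWORD exportPolicy = 0;
            NCryptSetProperty(hKey, NCRYPT_EXPORT_POLICY_PROPERTY,
                              (PBYTE)&exportPolicy, sizeof(exportPolicy), 0);

            DWORD keyLen = 2048;
            NCryptSetProperty(hKey, NCRYPT_LENGTH_PROPERTY,
                              (PBYTE)&keyLen, sizeof(keyLen), 0);

            NCryptFinalizeKey(hKey, 0); // key material is generated inside the provider
        }

        NCryptFreeObject(hProv);
        return hKey; // use via NCryptDecrypt / NCryptSignHash, etc.
    }

Note that "non-exportable" only means the key material can't be read out; any other process running as the same user can still open the key by name (NCryptOpenKey) and use it, which is exactly the point above.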

AppContainer isn't going to help you here. AppContainers are sandboxes; they're for protecting the user and OS in the event that your app is malicious or compromised. Any "full trust" (non-sandboxed) process running under the same user account can take full control of any process running inside an AppContainer (though the reverse is not true, since the sandboxed processes are running at a lower integrity level). You could create a very small privileged key-broker process that runs full-trust and then put the rest of the app (including all network traffic, file parsing, user input processing, etc.) into an AppContainer, which would somewhat protect the key in the event that the sandboxed part of the app (but nothing else) is compromised by malware. However, at that point you're really just re-inventing the Windows Crypto API (CAPI/CNG), which already implements the same idea (private keys are handled in a high-privilege process that has no other duty; normal-privilege processes interact with it through RPC channels hidden behind the crypto APIs).
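
If you do go down that broker route, the sandboxed half is launched with the AppContainer APIs. A rough sketch, with a hypothetical container name and worker binary and most error handling omitted:

    // Sketch: run the untrusted part of the app (parsing, networking, UI)
    // inside an AppContainer. Container name and worker binary are hypothetical.
    #include <windows.h>
    #include <userenv.h>
    #pragma comment(lib, "userenv.lib")

    bool LaunchSandboxedWorker()
    {
        PSID containerSid = nullptr;
        HRESULT hr = CreateAppContainerProfile(
            L"MyApp.Worker", L"MyApp worker", L"Sandboxed worker for MyApp",
            nullptr, 0, &containerSid);
        if (hr == HRESULT_FROM_WIN32(ERROR_ALREADY_EXISTS))
            hr = DeriveAppContainerSidFromAppContainerName(L"MyApp.Worker", &containerSid);
        if (FAILED(hr))
            return false;

        // No capabilities requested: the worker starts with as little access as possible.
        SECURITY_CAPABILITIES caps = {};
        caps.AppContainerSid = containerSid;

        SIZE_T attrSize = 0;
        InitializeProcThreadAttributeList(nullptr, 1, 0, &attrSize);
        auto attrs = (LPPROC_THREAD_ATTRIBUTE_LIST)HeapAlloc(GetProcessHeap(), 0, attrSize);
        InitializeProcThreadAttributeList(attrs, 1, 0, &attrSize);
        UpdateProcThreadAttribute(attrs, 0, PROC_THREAD_ATTRIBUTE_SECURITY_CAPABILITIES,
                                  &caps, sizeof(caps), nullptr, nullptr);

        STARTUPINFOEXW si = {};
        si.StartupInfo.cb = sizeof(si);
        si.lpAttributeList = attrs;
        PROCESS_INFORMATION pi = {};
        wchar_t cmdLine[] = L"MyAppWorker.exe"; // hypothetical sandboxed worker
        BOOL ok = CreateProcessW(nullptr, cmdLine, nullptr, nullptr, FALSE,
                                 EXTENDED_STARTUPINFO_PRESENT, nullptr, nullptr,
                                 &si.StartupInfo, &pi);
        if (ok) { CloseHandle(pi.hThread); CloseHandle(pi.hProcess); }

        DeleteProcThreadAttributeList(attrs);
        HeapFree(GetProcessHeap(), 0, attrs);
        FreeSid(containerSid);
        return ok == TRUE;
    }

The full-trust broker would then keep the key handle and expose only narrow operations ("decrypt this blob", "sign this hash") over a pipe or RPC channel; but, again, CNG already gives you essentially that split for free.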


As a side note:

For example, this HackerOne report demonstrates how keys protected by Windows Hello can still be extracted by malware.

No, it doesn't. It demonstrates how keys that are not protected by anything other than the credential manager encryption (which uses the Data Protection API, and DPAPI keys are per-user or per-machine so they're the same across all processes running as that user) don't magically become protected just because you put a call to demand Windows Hello authentication before the call to query the credential manager. Bitwarden arguably screwed up there, in that they could have protected the master key better, but the report is also just wrong:

Furthermore, on a multi-user Windows machine, any administrator account has the ability to perform the same operations for any other users on the same machine that is using Bitwarden desktop with Windows Hello unlock enabled. Although not implemented in the attached proof of concept, this would be possible by simply enumerating local users and accessing each user's credential set, enumerating the entries and retrieving any Bitwarden biometric master key that is present.

Administrators don't (by default) actually get access to the DPAPI keys (and therefore not to the credential vaults) of other users, at least not if those users aren't currently logged in. The fact that the credential manager (both GUI and API) presents its stored secrets in plain text doesn't mean they are actually stored in plain text; they are transparently encrypted and decrypted by the credential manager, similar to the way that NTFS' Encrypting File System feature transparently encrypts and decrypts files so that they don't look encrypted when viewed by a user with access... but even an Administrator can't read them if the user didn't grant access to that Administrator.
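
To make the per-user DPAPI point concrete: any process running under the same account can round-trip a DPAPI blob, which is why a Windows Hello prompt in front of the lookup adds no cryptographic protection. A minimal sketch (the optional-entropy value here is a hypothetical application-chosen constant):

    // Sketch: DPAPI protect/unprotect. Any process running as the same user
    // (and knowing the optional entropy, if any) can call CryptUnprotectData
    // on the same blob; the encryption key is per-user, not per-process.
    #include <windows.h>
    #include <dpapi.h>
    #include <stdio.h>
    #pragma comment(lib, "crypt32.lib")

    int main()
    {
        BYTE secret[]  = "example master key bytes";   // illustrative only
        BYTE entropy[] = { 0x13, 0x37, 0x42 };         // hypothetical app constant

        DATA_BLOB in  = { sizeof(secret), secret };
        DATA_BLOB ent = { sizeof(entropy), entropy };
        DATA_BLOB enc = {}, dec = {};

        if (!CryptProtectData(&in, L"MyApp key", &ent, nullptr, nullptr, 0, &enc))
            return 1;

        // A *different* process under the same account could do exactly this:
        if (CryptUnprotectData(&enc, nullptr, &ent, nullptr, nullptr, 0, &dec))
            printf("recovered %lu bytes\n", dec.cbData);

        LocalFree(enc.pbData);
        LocalFree(dec.pbData);
        return 0;
    }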


¹ MacOS Keychain does support restricting secrets to a specific program, rather than just to all processes under a user. It's basically just doing the thing Bitwarden was doing, though; if you aren't the recognized process, Keychain asks you to reauthenticate before letting you read the secrets. Windows' Credential Manager GUI actually does the same thing (demands reauthentication before you can view credential values) but I think the API does not; Keychain might be somewhat stricter here, though I think the actual encryption used on the secrets is the same for all programs running under any given user.


You use the phrase "master key" a lot without defining it. The typical meaning of a master key is one that unlocks all other keys; normally the master key is kept securely stored in a central system (often in a Hardware Security Module or HSM) and is never distributed to clients. Instead, each client is given a unique working key; that way if a client is compromised, only that client's working key is lost. This can help limit the "blast radius" of a security failure.

This model is very successfully used by credit card processors today using a key management protocol called DUKPT (Derived Unique Key Per Transaction). Consider a large retailer that has thousands of cash registers sitting on counters located in stores across the country. It would be a ridiculously risky thing to distribute the company's master key into every PIN pad located at every cash register. So they don't. Each PIN pad gets its own key.

But even that is too risky: a bad guy can intercept the network traffic at a remote store, and copy all the credit transactions from all the store's registers. They then sneak into the store one night and steal a PIN pad, and then figure out how to extract the key from it. They can then use that key to decode every credit transaction in their logs. So for every single transaction, the key in the PIN pad is used to derive a brand new key, unique only to that transaction. (Thus the name, Derived Unique Key Per Transaction.)
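
As a simplified illustration of the idea (not the actual DUKPT derivation, which is specified in ANSI X9.24 and uses a key-derivation tree so that used keys can be destroyed), you can think of it as deriving each transaction key from a device key plus a transaction counter:

    // Sketch: derive a fresh per-transaction key as HMAC-SHA256(deviceKey, counter).
    // This only illustrates "unique key per transaction"; real DUKPT (ANSI X9.24)
    // uses a specific derivation tree so that used keys can be erased.
    #include <windows.h>
    #include <bcrypt.h>
    #include <cstdint>
    #pragma comment(lib, "bcrypt.lib")

    bool DeriveTransactionKey(const BYTE* deviceKey, ULONG deviceKeyLen,
                              uint64_t counter, BYTE out[32])
    {
        BCRYPT_ALG_HANDLE  hAlg  = nullptr;
        BCRYPT_HASH_HANDLE hHash = nullptr;
        bool ok = false;

        if (BCryptOpenAlgorithmProvider(&hAlg, BCRYPT_SHA256_ALGORITHM, nullptr,
                                        BCRYPT_ALG_HANDLE_HMAC_FLAG) == 0)
        {
            if (BCryptCreateHash(hAlg, &hHash, nullptr, 0,
                                 (PUCHAR)deviceKey, deviceKeyLen, 0) == 0)
            {
                BCryptHashData(hHash, (PUCHAR)&counter, sizeof(counter), 0);
                ok = BCryptFinishHash(hHash, out, 32, 0) == 0;
                BCryptDestroyHash(hHash);
            }
            BCryptCloseAlgorithmProvider(hAlg, 0);
        }
        return ok;
    }

Note that in this naive sketch, anyone who extracts the device key can still re-derive every past transaction key; real DUKPT advances the device-side state and destroys used keys precisely to prevent that.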

Once a transaction's data is encrypted, the transaction key is discarded. DUKPT enables all this without the overhead of a public key infrastructure or the expensive CPUs needed to quickly perform the computationally intensive math required by RSA or ECC public key cryptography. And as an important consideration, DUKPT works even when a client is 100% offline. The encrypted data can be safely stored in a queue, and forwarded to the bank whenever the client comes back online.

But if your clients are on a desktop computer they have plenty of CPU, and public key math is almost instant. In that case, TLS provides excellent security and implementations are available in virtually every modern language's library. Use TLS to securely move the client's data to the central system, and at the central system use the "master key" to encrypt the data locally before storing it. That way the master key never leaves the controlled environment. And a compromised client only risks the data that the client has had in its possession, not the full database.
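
On the central system, that server-side step can be as simple as authenticated encryption of each record with the master key before it hits storage. A minimal sketch, shown with Windows CNG for consistency with the rest of this thread (the helper name is illustrative, and nonce management is simplified; each nonce must be unique per encryption):

    // Sketch: server-side AES-256-GCM encryption of a client record with the
    // master key, which never leaves the central system.
    #include <windows.h>
    #include <bcrypt.h>
    #pragma comment(lib, "bcrypt.lib")

    bool EncryptRecord(const BYTE masterKey[32],
                       PUCHAR plaintext, ULONG plainLen,
                       BYTE nonce[12], BYTE tag[16],
                       PUCHAR ciphertext /* plainLen bytes */)
    {
        BCRYPT_ALG_HANDLE hAlg = nullptr;
        BCRYPT_KEY_HANDLE hKey = nullptr;
        bool ok = false;

        if (BCryptOpenAlgorithmProvider(&hAlg, BCRYPT_AES_ALGORITHM, nullptr, 0) != 0)
            return false;
        BCryptSetProperty(hAlg, BCRYPT_CHAINING_MODE, (PUCHAR)BCRYPT_CHAIN_MODE_GCM,
                          sizeof(BCRYPT_CHAIN_MODE_GCM), 0);

        if (BCryptGenerateSymmetricKey(hAlg, &hKey, nullptr, 0,
                                       (PUCHAR)masterKey, 32, 0) == 0)
        {
            BCRYPT_AUTHENTICATED_CIPHER_MODE_INFO info;
            BCRYPT_INIT_AUTH_MODE_INFO(info);
            info.pbNonce = nonce;  info.cbNonce = 12;
            info.pbTag   = tag;    info.cbTag   = 16;

            ULONG written = 0;
            ok = BCryptEncrypt(hKey, plaintext, plainLen, &info, nullptr, 0,
                               ciphertext, plainLen, &written, 0) == 0;
            BCryptDestroyKey(hKey);
        }
        BCryptCloseAlgorithmProvider(hAlg, 0);
        return ok;
    }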

  • This answer doesn't address OP's question. TLS is not enough for E2E encryption, and OP's problem is not about making a secure channel; WA, TG, and Signal are not reinventing TLS with their protocols. Also, the concept of DUKPT described here is questionable: if the PIN pad can derive a new key on its own, so can an attacker who reverse-engineers it (including its key material). This, if correctly implemented, can prevent an attacker from reading past logs but not new ones. On top of that, OP's question is: OK, if my PIN pad is actually a Win PC, how do I protect its key? DUKPT has nothing to do with it. Commented Feb 22 at 11:51
