I think this is a good question, but it doesn't really have the simple answer others have tried to give.
The most basic problem that malware exploits is the required, and even desired, sharing of resources between the simulated Turing machines of computer programs. The fundamental limitation of a computer that prevents it being a perfect Turing simulator is finite resources; conceptually, the symbol tape of a Turing machine is of infinite length, meaning the machine has access to infinite state data which it can read or write as it chooses. The implication is also that each Turing machine gets a separate tape on which to work, and the state data of any single Turing machine is never seen by another machine, unless that data is provided as the initial state of the second machine's tape. Neither of these is possible, and sometimes not even desirable, in a real computer; we like our computers to multitask (to simulate multiple Turing machines simultaneously), meaning that we must efficiently divide and re-use our finite resources, and in general we actually like it when one program knows what the heck is going on in another program.
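To make the model concrete, here is a minimal sketch of a Turing machine whose tape grows on demand, approximating the conceptually infinite tape. Each machine instance owns its own tape dictionary that no other machine can see, which is exactly the isolation property a real multitasking computer gives up. The class name, the rule format, and the bit-flipping example rules are all invented for illustration:

```python
# A minimal Turing machine sketch. The dict-backed tape grows on demand,
# approximating the infinite tape; a real computer can never do this,
# since memory is finite.

class TuringMachine:
    def __init__(self, rules, initial_tape, start_state="start"):
        # Each machine gets its OWN tape: no other machine sees this dict.
        self.tape = dict(enumerate(initial_tape))
        self.rules = rules  # (state, symbol) -> (new_symbol, move, new_state)
        self.state = start_state
        self.head = 0

    def step(self):
        symbol = self.tape.get(self.head, "_")  # "_" = blank cell
        new_symbol, move, new_state = self.rules[(self.state, symbol)]
        self.tape[self.head] = new_symbol
        self.head += {"R": 1, "L": -1, "N": 0}[move]
        self.state = new_state

    def run(self, max_steps=1000):
        for _ in range(max_steps):
            if self.state == "halt":
                break
            self.step()
        # Return the written portion of the tape, in order.
        return "".join(self.tape[i] for i in sorted(self.tape))

# Toy rules: flip every bit, then halt at the first blank cell.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "N", "halt"),
}

m1 = TuringMachine(flip_rules, "1011")
m2 = TuringMachine(flip_rules, "0000")
print(m1.run())  # 0100_
print(m2.run())  # 1111_
```

Note that `m1` and `m2` run the same rules on fully separate tapes; neither can observe or corrupt the other's state, which is the conceptual ideal the rest of this answer measures real systems against.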
A second and related problem is that the conceptual Turing machine doesn't have to worry about the long-term persistence of its data; it "writes" state symbols to its tape and expects that state to persist indefinitely, whether the machine ever returns to that position of the tape or not. In a computer, there are practical concerns about fast but volatile memory versus slower but more persistent memory. This generally necessitates a transfer of memory between volatile and persistent forms.
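That transfer has to be explicit. Here is a small sketch of the volatile/persistent divide: state held in RAM vanishes with the process, so the program must copy it out to durable storage and read it back later. The filename and the state layout are made up for illustration:

```python
# In-memory state (fast, volatile) must be explicitly copied to a file
# (slow, persistent) to survive the program ending.

import json
import os
import tempfile

# "Volatile" state: lives only in RAM, gone when the process exits.
state = {"head": 3, "tape": "0100"}

# Persist: explicitly transfer the volatile state to durable storage.
path = os.path.join(tempfile.gettempdir(), "machine_state.json")
with open(path, "w") as f:
    json.dump(state, f)

# Later (even in a different process), restore from the persistent copy.
del state  # simulate the volatile copy vanishing
with open(path) as f:
    restored = json.load(f)

print(restored)  # {'head': 3, 'tape': '0100'}
```

The persistent copy is itself a shared resource: any program that can open that file can read or rewrite another program's saved state, which foreshadows the problem discussed below.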
A third problem is that Turing machines are automatons (Turing's original term was "automatic machine"). They do exactly what they're told, with exactly what they're given, nothing more and nothing less. A more intelligent actor is required to configure (program) the machine to do the right thing and give it the proper initial data.
It is possible, in theory, to create an operating system that can maintain multiple Turing machines without allowing the sharing of resources. However, to do so, the following conceptual rules must be followed:
- No sharing of volatile memory between multiple Turing machines executing concurrently.
- No sharing of persistent memory between multiple Turing machines, whether executing or not.
- Any deviation from the first two rules must be performed directly by the intelligent owner of the computer, who must be assumed to be benevolent and infallible.
It's that last rule that is problematic to our modern use of computing devices and very likely puts the kibosh on the whole thing. The absolute prohibition on shared resources between machines (programs) means that the programs being executed by a computer can only ever be provided with initial state data, and can never be given new data while running by external sources (at least not without stopping the program cold and resetting its tape with new data before restarting it).
The requirements that the intelligent actor actually be benevolent and infallible are beyond the scope of a computer (though we try), and the directive that the actor must directly make any state changes requires the actor to have the knowledge and skill to do so, as well as the ability, good intentions and perfection of operation. Modern operating systems, especially in consumer devices, typically assume the opposite of all of this: that the user does not possess the knowledge, skill, good intention or perfection to make necessary data changes directly, and so they do not give the user the ability. This is usually a good assumption to make; I program for a living, and yet would have no clue how to make direct changes to the binary data of a program currently running on my Android phone.
However, if the actor can't change the data of a program directly, then either the system will only ever run the machines it was initially given, with the data they were initially given, forever (possibly good for mathematical computation, such as figuring out pi to the bajillionth place, but not real good for updating your Facebook profile when you can't even load that profile into the memory of a running program), or the operating system must provide a means for the actor to make desirable changes in a safe way. This is typically done by providing the user with one or more other machines that violate the "no shared resources" rule in order to change the data, in a way that is assumed to result in correct and desired changes in the execution of the program that really owns that data.
This is where malware gets in. Simply being required to allow violations of the no-shared-state rule, for any reason, allows multiple machines to make changes to data that they all depend on. The assumption that must be made is that all designers of all these programs are benevolent and infallible. Neither is true of human computer programmers, who are the ultimate endpoint in the design of any program, no matter how many steps along the path of generating the machine code for the program are automated. A fallible programmer can introduce an error in the program that causes it to corrupt not only its own running state but also to cause problems with any other program using the same data. A malevolent programmer can intentionally do similar things.
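A toy sketch of that failure mode: two "programs" share one piece of data, and a bug in one breaks the other's run. The two functions and the config format are invented for illustration; a deliberate attacker would make the same kind of change on purpose rather than by accident:

```python
# Two "programs" depend on one shared piece of state; a fallible
# programmer's bug in one corrupts the run of the other.

shared_config = {"retries": 3}  # data both programs depend on

def careless_updater(config):
    # BUG: the programmer writes a string where a count belongs.
    config["retries"] = "three"

def downloader(config):
    # This program trusts the shared data and breaks when it's corrupted.
    attempts = 0
    for _ in range(config["retries"]):  # TypeError once corrupted
        attempts += 1
    return attempts

print(downloader(shared_config))  # 3 -- works with the original data

careless_updater(shared_config)   # the other program corrupts shared state

try:
    downloader(shared_config)
except TypeError as e:
    print("corrupted shared state:", e)
```

Under the strict no-shared-resources rules above, `careless_updater` could never have touched `downloader`'s data at all; the corruption is only possible because the rule was relaxed.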
So, until machines start doing all our coding for us, it may be theoretically possible, but it is practically impossible, to create a system that is usable by and useful to the average end user of a computer product in our modern society and that has zero chance of being affected maliciously.