using the one which is the safest would slow down the code.
If you think correct code is slow, you should see the performance of incorrect code once you factor in all the business malfunction, detective work, and manual cleanup it can involve!
The issue is really about deciding which things in the computer need to be transactionally consistent. Over-constraining things does not necessarily lead to more correctness or safety; it simply produces different kinds of problems (including additional complexity for users and developers).
And it's also worth remembering that the database engine can't enforce transactional consistency with people's brains, with paper records, with cached displays of data, nor (typically) with any other computer application. Business information systems (which always involve human and paper elements, in addition to computer applications) have to be designed carefully to maintain an appropriate amount of consistency, rather than assuming total consistency.
In a typical business computer application, where a permission was granted to a login at some earlier point, there usually isn't any adverse implication from a small period of run-on: the permission is recorded centrally as withdrawn, but the computer continues to allow operation for a short while under the previous state of permissions.
What's usually more important is that audit trails showing which login commanded an operation (and from what computer terminal, etc.), are strictly consistent with operations actually executed.
The prospect of permissions being changed, and the user then sneaking through one last operation in the final seconds, has about the same risks and implications as if the user had just done the operation a few seconds earlier, when a manager had decided to remove the permissions but hadn't yet actually recorded the decision on the computer.
Even if the application was designed to require strict consistency between an operation and the permission controls, it probably shouldn't require it, and should be redesigned not to.
This subjective way of programming kind of hurts my logical brain, which is always looking for potential flaws and wants to build something mathematically or logically proven to be reliable.
The reality is that business systems are not "reliable" in a static way. Controlling complexity is important so that staff can supervise things in an ongoing way, reason about what is going on (and what has gone awry), and intervene when necessary and complete an intervention within a reasonable period of time. A "reliable" system is one that has these properties of oversee-ability and intervention-timeliness.
When you get the sense that things are out of control, it's often a sign that you've allowed things to become too complicated to be amenable to ongoing oversight and intervention, and thus a sign that it will be unreliable.
Consider "if (post.owner == user) delete(post)" - a classic check-then-act race. But that's easy to fix within a single SQL query, and also with some NoSQL systems. Only atomicity is required, not serializability.
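A minimal sketch of that fix, using Python's sqlite3 and a hypothetical posts(id, owner) table: instead of reading the row and then deleting it in a second step, the ownership check is folded into the DELETE itself, so per-statement atomicity is all that's needed.

```python
import sqlite3

# Hypothetical schema for illustration: posts(id, owner).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, owner TEXT)")
conn.execute("INSERT INTO posts VALUES (1, 'alice'), (2, 'bob')")

def delete_post(conn, post_id, user):
    # Check-then-act as one atomic statement: the row is deleted only if
    # the requester owns it. No separate read, so no read-modify-write
    # race, and no serializable isolation required.
    cur = conn.execute(
        "DELETE FROM posts WHERE id = ? AND owner = ?", (post_id, user)
    )
    return cur.rowcount == 1  # True if the delete actually happened

print(delete_post(conn, 1, "bob"))    # wrong owner: nothing deleted
print(delete_post(conn, 1, "alice"))  # owner matches: row deleted
```

The return value doubles as the authorization result: a rowcount of zero means either the post didn't exist or the user didn't own it, and in both cases nothing was changed.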