tl;dr
In general terms, yes, it is acceptable for a method to both do something and return something.
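For a quick illustration of why a blanket prohibition would be untenable, consider this minimal sketch (the example data is invented) using a standard-library method that has always done both:

```python
# list.pop() both changes the list (does something) and returns the
# removed element (returns something).
pending = ["order-1", "order-2", "order-3"]
next_order = pending.pop(0)   # removes "order-1" from the list...
print(next_order)             # ...and hands it back to the caller
```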
Side effects and impurities
Firstly, I note the question is tagged side-effect and pure-function. Nobody likes "side-effects", do they? And nobody likes "impurities", either.
These terms arise from the perspective that the "main effects" of any method are its formal results, and that the ideal method is one whose inputs are exclusively its formal arguments.
Unfortunately, these terms arose in mathematical or rarefied academic circles, and they carry completely inappropriate and misleading connotations for programmers. In practice, the main effects of a method call are often not the formal results but the writing of data to storage or to a network channel, and the inputs are very often not confined to the formal arguments but include data read from storage or from a network channel.
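To make the contrast concrete, here is a minimal sketch (the function names and the log file are invented for illustration): the first function is "pure" in the academic sense, while in the second the return value is beside the point and the real work is the write to storage.

```python
def add(a, b):
    # "Pure" in the academic sense: the only inputs are the formal
    # arguments and the only effect is the formal result.
    return a + b

def record_order(order, path="orders.log"):
    # Here the main effect of calling the method is the write to
    # storage; any return value is incidental.
    with open(path, "a") as f:
        f.write(f"{order}\n")
```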
Furthermore, there are very common cases of algorithms in which a mixture of reads and writes to storage must occur in an atomic and transactional fashion - in other words, where the primary focus of the activity is, in one overall operation, both the reading that some would call an impurity and the writing that some would call a side effect.
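A sketch of the kind of atomic read-then-write in question, using sqlite3 purely for illustration (the accounts table and its columns are invented):

```python
import sqlite3

def transfer(conn: sqlite3.Connection, from_acct: int, to_acct: int, amount: float) -> None:
    # The read (checking the balance) and the writes (debit and credit)
    # only make sense together, as one atomic, transactional unit.
    with conn:  # commits on success, rolls back on an exception
        row = conn.execute(
            "SELECT balance FROM accounts WHERE id = ?", (from_acct,)
        ).fetchone()
        if row is None or row[0] < amount:
            raise ValueError("insufficient funds")
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                     (amount, from_acct))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                     (amount, to_acct))
```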
Hopefully that sets the overall scene somewhat.
Good program design
It shouldn't need much justification for me to state that designing computer programs is a complicated craft activity that cannot be reduced to a few sentences.
A computer is ultimately a physical machine. A programming language exists to control the activity of that machine.
Part of designing computer software is about creating a conceptualisation of what the software causes (or is supposed to cause) the machine to do, and creating a terminology for that conceptualisation so that it can be talked about.
A great deal of software is designed to be used not by the "developers" of the software but by another group of people, the "users".
The most high-level conceptualisations of software, then, consist of operations ultimately available to the user which allow him to exert enough control over the machine to drive it for his purposes.
Because of the typical internal complexity of the software and the need for the developer to cope with it, there are also often more detailed internal conceptualisations, not necessarily visible to users, but certainly understood by the developers and visible to varying degrees in the source code.
The distinction between reading data (or putting it more abstractly, viewing the state of the machine and the records it holds) and writing data (or more abstractly, causing a change in state) tends to be considered one of the fundamental conceptual distinctions around which a program is devised and its activity sequenced.
This distinction between read and write is a recurring motif that permeates software design, and it doesn't have a single overwhelming cause or justification but instead seems to be found generally convenient for all sorts of reasons.
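In code, this distinction often surfaces as the informal convention of keeping "reads" (queries) and "writes" (commands) in separate methods. A minimal sketch, with a hypothetical SalesOrder class:

```python
class SalesOrder:
    def __init__(self):
        self._lines = []

    def total(self) -> float:
        # A "read": reports on the current state, changes nothing.
        return sum(qty * price for qty, price in self._lines)

    def add_line(self, qty: int, price: float) -> None:
        # A "write": changes the state, reports nothing back.
        if qty <= 0:
            raise ValueError("quantity must be positive")
        self._lines.append((qty, price))
```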
It is common, especially (but not solely) amongst inexperienced programmers, to do this design work badly.
There are more ways to do the design badly than to do it well, but one of the crucial failures is that concepts and terms end up muddled and conflated. It then becomes unclear to the relevant person (whether developer or user), for the purpose of exercising their kind of control, whether a particular operation is reading or writing, what it is reading and/or writing, or how each step of reading and writing is sequenced.
The design can be unclear because the terminology in use doesn't match the activity that actually occurs. It can be unclear because the terminology is vague and doesn't mean anything specific. It can be because the source code is structured badly and it is difficult for the developer to see the details of what it does.
And even sometimes when the facts of what is going on are clear, it may seem that two operations are in an unnatural and unnecessary union relative to their conceptualisation.
Like not being able to ask what is for dinner without simultaneously defecating on the toilet, software might be written so that data cannot be inspected on-screen without triggering an unnecessary printout, or so that a sales order cannot merely be viewed without changing the audit stamp/amendment date and prompting for a reason for the change.
In both cases, it should be possible to see the existing state without causing the changes. Even if eating dinner is usually followed by going to the toilet, it shouldn't be an iron law but should be up to the person to control according to the circumstances.
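The sales-order case might look something like the following sketch (the class and field names are invented), contrasting the conflated operation with a separated pair:

```python
import datetime

class OrderRecord:
    def __init__(self, data: dict):
        self.data = data
        self.amended_at = None

    def view_and_stamp(self):
        # Conflated: merely looking at the order also changes it,
        # by updating the amendment date.
        self.amended_at = datetime.datetime.now()
        return self.data

    def view(self):
        # Separated: viewing the existing state causes no change.
        return self.data

    def amend(self, new_data: dict):
        # Changing the state is a distinct operation under the caller's control.
        self.data = new_data
        self.amended_at = datetime.datetime.now()
```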
It is from these kinds of situations that the dictum not to read and write in the same operation arises.
But it's not a dictum that can be interpreted slavishly - there could be a perfectly proper reason, for example, why merely reading and viewing data should create an audit record logging access to the data (but not amendment of it). What you end up with, often, is distinguishing between a primary category of record for which the read/write distinction remains crystal clear, and an ancillary category of record like debug/audit logs which can be written even when nominally in a read-only mode in relation to the primary record.
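For instance, a read of the primary record that writes only to an ancillary audit log might look like this sketch (the function and logger names are invented):

```python
import logging

audit_log = logging.getLogger("audit")

def view_order(orders: dict, order_id: str, user: str):
    # Read-only with respect to the primary record (the order itself),
    # but it still appends to an ancillary record: the access audit log.
    audit_log.info("order %s viewed by %s", order_id, user)
    return orders[order_id]
```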
Similarly, when a write operation also involves reads - for example, validating the current state of a record before amending it - there can be various legitimate excuses.
Conclusion
Keeping reads and writes separate is very much a guideline - a default practice from which deviations are then justified for specific cases.
And methods generally have four ways in and out: the formal arguments and results, and the implicit inputs from and outputs to storage. Organising the usage of these ways in and out is the name of the game, not slavishly avoiding any of them.
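Annotated on a trivial sketch (the in-memory store stands in for real persistent storage):

```python
_STORE = {"order-42": {"balance": 100.0}}   # stands in for real persistent storage

def apply_payment(order_id: str, amount: float) -> float:   # formal arguments in
    order = _STORE[order_id]       # implicit input: read from storage
    order["balance"] -= amount     # implicit output: write to storage
    return order["balance"]        # formal result out
```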