Postgres uses a multiversion model (Multiversion Concurrency Control, MVCC).
In the default READ COMMITTED isolation level, each separate query effectively sees a snapshot of the database as of the instant the query begins to run. Subsequent queries - even within the same transaction - can see a different snapshot if concurrent transactions commit in between. (Plus whatever has been done in the same transaction so far.)
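For illustration, a minimal sketch of that behavior, assuming a hypothetical table accounts (names made up):

```sql
-- Session A (default READ COMMITTED):
BEGIN;
SELECT sum(balance) FROM accounts;  -- snapshot as of the start of this query

-- Session B commits an UPDATE on accounts in the meantime ...

SELECT sum(balance) FROM accounts;  -- new snapshot: sees B's committed change
COMMIT;
```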
However, as far as CTEs are concerned, all sub-statements in WITH are executed concurrently with the outer statement, so they all effectively see the same snapshot of the database. The whole thing is considered a single query for this purpose.
So, no, you don't need an explicit lock to stay consistent.
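To illustrate, a data-modifying CTE along these lines (table and column names are just for the example) runs as a single query, so the selecting and the updating sub-statements operate on the same snapshot:

```sql
WITH sel AS (
   SELECT account_id
   FROM   accounts
   WHERE  balance < 0
   )
, upd AS (
   UPDATE accounts a
   SET    flagged = true          -- hypothetical column
   FROM   sel
   WHERE  a.account_id = sel.account_id
   RETURNING a.account_id
   )
SELECT * FROM upd;
```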
Encapsulating the logic in a function may be convenient for a number of reasons, but that has no effect whatsoever on concurrency. Aside: a CTE calling a volatile function is never inlined.
A SELECT does not lock queried rows, so Postgres allows concurrent UPDATEs. But an UPDATE locks its target rows: concurrent transactions that also try to write to those rows have to wait until the locking transaction has finished.
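A rough two-session sketch of that locking behavior, again with the made-up accounts table:

```sql
-- Session A:
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE account_id = 1;  -- locks the row
-- transaction still open ...

-- Session B:
SELECT balance FROM accounts WHERE account_id = 1;                 -- not blocked, reads the old row version
UPDATE accounts SET balance = balance + 50  WHERE account_id = 1;  -- blocks, waiting on session A

-- Session A:
COMMIT;  -- session B's UPDATE now proceeds against the updated row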
If you want to forbid writes to rows (columns) that have only been selected from while your UPDATE is in progress, you may want to take locks anyway (or use a stricter isolation level). Maybe FOR UPDATE locks, or maybe a weaker lock. That depends on details and requirements not given in your question.
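One possible shape, assuming you want to lock rows you only read before writing to related rows. FOR NO KEY UPDATE is the weaker sibling of FOR UPDATE that still allows concurrent key-share locks (like foreign-key checks); whether it is enough depends on those missing details:

```sql
BEGIN;
SELECT account_id
FROM   accounts
WHERE  owner_id = 42
FOR    NO KEY UPDATE;   -- or FOR UPDATE / FOR SHARE / FOR KEY SHARE

UPDATE accounts
SET    flagged = true
WHERE  owner_id = 42;
COMMIT;
```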
Also (though you did not ask for that), if multiple concurrent transactions may write to overlapping rows (more than one at a time), be sure all of them process rows in the same, consistent order to avoid deadlocks.
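A common way to enforce that order is to lock the target rows sorted by primary key before writing, in every competing transaction, e.g.:

```sql
BEGIN;
SELECT account_id
FROM   accounts
WHERE  account_id IN (1, 2, 3)
ORDER  BY account_id    -- same order in every transaction
FOR    UPDATE;

UPDATE accounts
SET    balance = balance - 10
WHERE  account_id IN (1, 2, 3);
COMMIT;
```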