
I have never used atomic operations, and remain largely ignorant of how to utilize them. And yet, I often come across objects like this when peering into Qt's backend: https://doc.qt.io/qt-6/qatomicinteger.html

This leads me to question if there is a feature here that I have not been properly taking advantage of.

My questions, therefore, are:

  • When should one utilize atomic operations?
  • Is it supposed to improve performance?
  • Are there other considerations besides performance?
  • When would one look at someone's code, and chastise them for having not used atomic operations?

Looking back, I am wondering if there are places that I should have been using them, but just went with a mutex instead.

  • "When would one look at someone's code, and chastise them for having not used atomic operations?" – If you see code you think could be improved or written differently, just discuss it with the author by suggesting a change and explaining the reasoning as needed. There's no reason to make it personal, and remember there may be perfectly good reasons for the way the code was originally written; if there are potential issues or problems, the author may simply have been unaware of them or made an honest mistake. Commented Jul 29, 2022 at 10:56

3 Answers

9

Atomic operations are useful for writing safe concurrent/multithreaded code with shared mutable state without having to resort to (expensive) locks or mutexes. For small data types such as integers or pointers, atomics allow us to replace locks like

{ QMutexLocker locker(&mutex); x += 1; } 

with atomics like

atomicX.fetchAndAddAcquire(1); 

Or with the C++ standard library, we can replace

{ std::lock_guard<std::mutex> guard(mutex); x += 1; }

with

atomicX.fetch_add(1, std::memory_order_acq_rel);

This has potential performance benefits, since CPUs provide dedicated instructions for atomic operations. There are also different "memory orderings" with different guarantees about when a value written on one CPU core becomes visible to another. Atomics allow us to select the appropriate degree of control here, with more relaxed orderings leading to potentially better performance than a stricter ordering or a mutex/lock. Memory orders are defined in the C++ standard; for example, see the summary on cppreference.com.

Specifically on x86 architectures, you will likely not see a difference between ordinary volatile variables and atomics with relaxed or acquire/release orderings, because the CPU architecture already provides strong ordering guarantees for all memory accesses. However, correct use of memory orderings is quite relevant on ARM architectures: using too relaxed a memory order (or no atomics at all) can corrupt data.
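As a minimal sketch of what acquire/release ordering buys you (my own illustration, not code from the answer; the names payload and ready are made up), here is the classic publication pattern with the C++ standard library: the writer fills in ordinary data and then sets a flag with release semantics, and the reader only touches the data after seeing the flag with acquire semantics.

#include <atomic>
#include <cassert>
#include <thread>

int payload = 0;                 // ordinary data, written by one thread
std::atomic<bool> ready{false};  // flag that "publishes" the payload

void producer() {
    payload = 42;                                  // 1. write the data
    ready.store(true, std::memory_order_release);  // 2. publish: writes above become visible...
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) // ...to any thread that sees the flag with acquire
        ;                                          // spin until published
    assert(payload == 42);                         // guaranteed to observe the payload
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}

With sequentially consistent ordering the same code would still be correct, just potentially slower; with relaxed ordering on the flag, the assert could fail on a weakly ordered CPU such as ARM.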

Specific answers to your questions:

  • When should one utilize atomic operations?

    When all of the following hold:

    • you have multiple threads that share mutable state
    • your modifications to this state only affect single words (integers, pointers, …)
    • you want to avoid locks/mutexes

    Counter-indications:

    • the data is only used by a single thread → use ordinary variables
    • the shared data remains constant → use ordinary variables
    • changes to the shared data affect more than one word at a time → use locks/mutexes
    • you do not want to learn about memory orderings → use atomics with sequentially consistent ordering or locks/mutexes
  • Is it supposed to improve performance?

    Primarily, it is intended to improve correctness.

    But different memory orders allow us to select the most relaxed (and therefore fastest) memory order that still meets our needs. Of course we could always impose a sequentially consistent order (memory_order_seq_cst) but that is generally slowest and will involve CPU-level locks.

  • Are there other considerations besides performance?

    Correctness.

  • When would one look at someone's code, and chastise them for having not used atomic operations?

    Not at all.

    • Chastising people is usually not very didactic – it builds resentment.
    • If someone shares mutable state across threads and doesn't protect accesses via mutexes, locks, or atomics, there are potential race conditions. It could make sense to raise this issue.
    • If someone uses mutexes or locks to protect single-word data changes, switching to atomics could lead to simplifications and performance improvements (see the sketch below).
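To make that last point concrete, here is a small sketch (my own illustration, not from the question; LockedCounter and AtomicCounter are made-up names) of a mutex-guarded counter and the equivalent atomic version:

#include <atomic>
#include <mutex>

// Mutex-protected counter: correct, but every increment takes and releases a lock.
struct LockedCounter {
    std::mutex m;
    int value = 0;
    void increment() { std::lock_guard<std::mutex> guard(m); ++value; }
    int get()        { std::lock_guard<std::mutex> guard(m); return value; }
};

// Atomic counter: same single-word guarantees, no lock involved.
// Relaxed ordering is enough for a pure counter that doesn't publish other data.
struct AtomicCounter {
    std::atomic<int> value{0};
    void increment()  { value.fetch_add(1, std::memory_order_relaxed); }
    int get() const   { return value.load(std::memory_order_relaxed); }
};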
  • What is auto's type supposed to be? Commented Jul 29, 2022 at 11:44
  • @Anon The guard variable is supposed to represent the held lock on the mutex. Once the guard goes out of scope, its destructor will run and the mutex will be unlocked. The specific type of the guard is not relevant here, so I used auto to get C++11 type inference. Commented Jul 29, 2022 at 11:47
  • doc.qt.io/qt-5/qmutex.html#lock << returns void. It's kind of throwing me off. Can you just be explicit here? My heuristic when seeing auto inside code is to assume that the type necessarily has to be inferred by the compiler. If that is the case here, I would like to know. Otherwise it's a bit of a red herring for me. Commented Jul 29, 2022 at 11:53
  • Another advantage of atomic variables over mutexes is that they reduce programmer error. When using mutexes, you can sometimes forget to lock (or unlock) the mutex when accessing the variable. With atomic variables you don't have to worry about that. Commented Jul 29, 2022 at 12:44
  • @amon: well, replacing this "pseudo code" with a piece that follows the standard mutex syntax a little more closely would astonish readers less, without making the answer much longer. Commented Jul 29, 2022 at 19:01
-2

If you have to ask the question, then you probably shouldn't use them yet. First check whether you have multi-threaded code in which multiple threads can read or write the same variable (with at least one writer). Then read up on the rules of your language, and possibly of your processor, for this situation to find out whether it will cause you problems (it probably will cause hard-to-find problems). Then solve those problems by using mutexes, atomic access, or serial queues, or better yet by asking someone who knows how to do this for help.

But the principle is this: If all threads using a variable always use atomic operations, and some thread modifies the variable using an atomic operation, then any other thread either sees that the modification hasn't started yet, or that it has finished. It never sees the variable in a state in between.
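As a hedged illustration of that principle (my own sketch, not the answerer's code; the variable name shared is made up): with a plain 64-bit variable, a reader on a platform that writes it in two halves could observe a half-written ("torn") value, whereas std::atomic rules out any in-between state.

#include <atomic>
#include <cstdint>

std::atomic<uint64_t> shared{0};

void writer() {
    // The store happens as one indivisible operation: readers see either the
    // old value or the new one, never a mixture of the two halves.
    shared.store(0x1111222233334444ull);
}

uint64_t reader() {
    return shared.load();  // always a value that was actually stored
}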

-2

Atomic operations are used when you, as a software engineer, need to write thread-safe code. Generally, this means a single section of code is accessed by multiple threads and that section touches shared data or shared memory; this is especially critical in firmware or code that affects hardware operations. The thread-safe nature of atomic operations ensures that such a section produces accurate and correct data about the system. Accuracy and correctness are the primary goals of atomic operations.

  • When should one utilize atomic operations? When you must write multi-threaded code that accesses shared memory and needs to behave accurately and re-entrantly. You can also run into issues with atomic operations when dealing with the different word sizes offered by different CPUs. If the shared access touches only a single word, then library support for atomics is fine. However, if you have to modify data stored across multiple words of memory, then it's best to use mutexes and locks to get correct behavior (see the sketch after this list).

  • Is it supposed to improve performance? I think atomic operations are typically used to provide accurate behavior by avoiding race conditions.

  • Are there other considerations besides performance? As I said, I think the primary consideration is accurate functionality.

  • When would one look at someone's code, and chastise them for having not used atomic operations? This is what code reviews are for. If a colleague has written code that does not use atomic operations and the code falls into the category requiring them, then it can be discussed in your next code review. E.g. if they have a critical section that multiple threads can enter and that accesses shared data, then either atomic functions from a library or mutexes and locks need to be used. The lack of thread-safe code in a multi-threaded environment will eventually lead to race conditions and in turn unexpected behavior. I have seen this happen in my previous position at Thermo Fisher when integrating vendor software and hardware systems.
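As a sketch of the single-word versus multi-word distinction mentioned above (an assumed example of mine, not from the answer; the names shutdownRequested and Session are hypothetical): a lone flag can be an atomic, but a set of fields that must stay consistent with one another needs a lock.

#include <atomic>
#include <mutex>
#include <string>
#include <utility>

// Single word of shared state: an atomic is enough.
std::atomic<bool> shutdownRequested{false};

// Several fields that must change together: protect them with one mutex so
// no thread ever sees a name that does not match its id.
struct Session {
    std::mutex m;
    int id = 0;
    std::string name;

    void update(int newId, const std::string& newName) {
        std::lock_guard<std::mutex> guard(m);
        id = newId;       // both fields change under the same lock,
        name = newName;   // so readers holding the lock see a consistent pair
    }

    std::pair<int, std::string> snapshot() {
        std::lock_guard<std::mutex> guard(m);
        return {id, name};
    }
};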

As a final note, the atomic operations in language libraries for C++ or C#, such as Qt for C++, provide a more automated way of handling concurrent code, so the programmer isn't as responsible for locking and unlocking in the correct sequence. This can be a big deal: you sometimes see code that is accessed by multiple threads in an unmaintainable manner, and it can be easier to leave the job to the library.

  • As it's currently written, your answer is unclear. Please edit to add additional details that will help others understand how this addresses the question asked. You can find more information on how to write good answers in the help center. Commented Jun 29 at 7:20
