
Just wanted to check to make sure that I understand this. A synchronized method doesn't create a thread, right? It only makes sure that no other thread is invoking this method while one thread within the same process (i.e. the JVM) is using it, right?

  • Synchronized threads are like bathroom stalls - one occupant at a time, please.

3 Answers


A synchronized method doesn't create a thread, right?

Right.

It only makes sure that no other thread is invoking this method while one thread within the same process (i.e. the JVM) is using it, right?

Right.

For more information, read Synchronized Methods. I also recommend reading Java Concurrency in Practice.
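
For concreteness, here is a minimal sketch (the Counter class and its methods are invented for illustration): calling a synchronized method never creates a thread; it only prevents two threads from executing the synchronized methods of the same instance at the same time.

    // Hypothetical example: increment() is synchronized, so the
    // read-modify-write on count is performed by one thread at a time
    // per Counter instance. No thread is created by the keyword itself.
    public class Counter {
        private int count = 0;

        public synchronized void increment() {
            count++;
        }

        public synchronized int get() {
            return count;
        }

        public static void main(String[] args) throws InterruptedException {
            Counter counter = new Counter();
            Runnable task = () -> {
                for (int i = 0; i < 10_000; i++) {
                    counter.increment();
                }
            };
            Thread t1 = new Thread(task);
            Thread t2 = new Thread(task);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            System.out.println(counter.get()); // always prints 20000
        }
    }

Without the synchronized keyword the final count could be less than 20000, because the two explicitly created threads (the new Thread(...) calls, not the keyword) could interleave their updates.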


3 Comments

With regard to your second question, it doesn't literally prevent other threads from invoking that particular method. Instead, it blocks them while they wait for the lock. Once the lock is released, one of the waiting threads acquires it and gains access to the method (the order is not guaranteed to be first-come, first-served).
Yeah, that's what I meant to say. It works like a Semaphore, only it's not one. The synchronized keyword in Java allows a thread to enter what is called a Monitor, which is a lot like a Semaphore. I don't recall from school what the differences were, except that Semaphores are used in C, and Monitors require object-oriented languages like C++ and Java.
But, to be honest, the synchronized keyword really reminds me of a Mutex lock, rather than a Semaphore because it seems that only one thread can enter the "Critical Region" at a single time, while the other threads have to wait for it. You can't specify the number of threads allowed to enter a monitor in Java?
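
As an aside to the last question: the synchronized keyword always admits exactly one thread into the monitor, but java.util.concurrent.Semaphore (available since Java 5) does let you choose the number of permits. A small sketch, with the class and method names invented:

    import java.util.concurrent.Semaphore;

    // Sketch: unlike synchronized, a Semaphore can be created with N permits,
    // so up to N threads may be inside the guarded section at once.
    public class LimitedAccess {
        private final Semaphore permits = new Semaphore(3); // at most 3 threads

        public void useResource() throws InterruptedException {
            permits.acquire();     // blocks if 3 threads are already inside
            try {
                // ... work with the shared resource ...
            } finally {
                permits.release(); // always return the permit
            }
        }
    }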

That's mostly correct. Calling a synchronized method does not spawn a new thread. It just makes other threads block when they try to call any synchronized method on the same instance.

The key thing to remember is that all synchronized instance methods of an object share a single lock: the object itself. (Synchronized static methods share a different lock, the Class object.)
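
A short sketch of that point (the class and method names are invented): a thread inside methodA() blocks another thread from entering methodB() on the same instance, because both synchronized instance methods lock on that instance.

    // Illustrative sketch: both instance methods lock on 'this', so a thread
    // inside methodA() blocks other threads from entering methodB() on the
    // SAME instance. Threads using different instances do not block each other.
    public class SharedLockDemo {

        public synchronized void methodA() {
            // equivalent to: synchronized (this) { ... }
        }

        public synchronized void methodB() {
            // also locks 'this', so it is unavailable while a thread is in methodA()
        }

        // A static synchronized method locks on SharedLockDemo.class,
        // which is a different lock from any instance's lock.
        public static synchronized void staticMethod() {
        }
    }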


Yes.

There is another important role for synchronized blocks too: when a thread enters a synchronized block it sees the changes to the values made by the previous thread that accessed that block (or another block synchronized with the same lock).

Basically, on a multicore CPU, each thread has its own memory cache on its core: each core can hold copies of the same variables, and their values might be different from core to core. When there is a synchronized operation, the JVM makes sure that the variable values are copied from one memory cache to the other. A thread entering a synchronized block then sees the values "updated" by a previous thread.
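
A small illustration of the visibility guarantee (field and method names invented): because both methods synchronize on the same lock, a value written in setReady() by one thread is guaranteed to be seen by another thread that later calls isReady(). Without any synchronization (or volatile), the reader might keep seeing a stale cached value.

    // Sketch of the visibility guarantee: both methods use the instance's lock,
    // so the write made inside setReady() is published to any thread that
    // subsequently acquires the same lock in isReady().
    public class ReadyFlag {
        private boolean ready = false;

        public synchronized void setReady() {
            ready = true;
        }

        public synchronized boolean isReady() {
            return ready;
        }
    }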

As suggested by mre, Java Concurrency in Practice is a good reference if you really want to understand multithreading and learn best practices.

4 Comments

Yeah, so it's the THREAD and not the PROCESS that has its own memory cache on each core, right? I've heard people tell me that it was one process per core, but I think that you're right. I also recall my professor in college telling me that only one process can access a single CPU at any point in time.
Well, it's complicated. A thread and a process are nearly the same thing. The difference is that a thread shares some common memory with its sister threads, while a process has only its own memory. The common memory is useful because threads can communicate through it. I think on most OSes threads have to be defined as children of a process. If you have a dual-core CPU, at any point in time either 2 processes, 2 threads, or 1 thread + 1 process are running. What your professor said was true in the era of single-core CPUs.
If that's the case, then how is having multiple cores any different from having multiple CPUs? CPUs have their own bit of memory, right? It seems that the memory used on the CPU is meant for only one process. But I'm no computer engineer; I really am speculating here.
Indeed, multiple CPUs are pretty much the same as multiple cores. Each core has some very fast local cache (usually a few hundred kB). The whole CPU has a slower shared cache used by all cores (usually a few MB). Having many layers of caches like this makes things complicated for synchronization, but it really improves performance. As for threads, different threads might run on different cores, but they can also run on the same core.
